CN114359519A - Meta universe system - Google Patents


Info

Publication number: CN114359519A
Application number: CN202111513293.XA
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 牛艳青
Original and current assignee: Beijing Hansen Future Technology Co., Ltd.
Prior art keywords: model, user, virtual, equipment, sensor
Legal status: Pending (as listed; an assumption, not a legal conclusion)
Application filed by Beijing Hansen Future Technology Co., Ltd.
Priority to CN202111513293.XA
Publication of CN114359519A

Abstract

A metaverse system comprises one or more service devices and one or more user-side devices. The service devices contain object 3D model data, human body 3D model data, and map data; the object 3D model data include building, furniture, household appliance, commodity, and text sign 3D models. Through a user-side device, a user can customize the virtual scene held on the service device: the geographic position of an object 3D model; the types of the human body and object 3D models; the text content, QR-code content, and spatial position of a text or QR-code sign 3D model; and the spatial positions of the object and human body 3D models. The user can thus build the virtual scene he or she needs, virtual and real scenes can be combined, and remote interaction between people is enhanced.

Description

Metaverse system
Technical Field
The invention relates to a metaverse system and belongs to the technical field of augmented reality.
Background
Conventional virtual reality technology mainly has a service device simulate a three-dimensional virtual scene so that a user can watch a virtual scene picture, or a simple combination of a virtual and a real scene, through a user-side device, generally an AR, VR, or MR head display device. The interaction mode is limited: different users cannot communicate, or can do so only with difficulty; interaction is possible only within the same physical place, so the venue is restricted; and the scenes are fixed and cannot be arranged according to the user's needs.
Disclosure of Invention
The first object of the invention is to enrich interactive content and make communication more realistic and varied. The second object is to make interaction between users independent of space and time. The third object is to enable convenient interaction among multiple people. The scheme can be widely applied to scenarios such as social networking, tourism, shopping, cinemas, KTV, entertainment, meetings, exhibitions, live broadcasts, and concerts.
To achieve these objects, the technical scheme of the invention is as follows:
The metaverse system comprises one or more service devices and one or more user-side devices. The service devices contain object 3D model data, human body 3D model data, and map data; the object 3D model data include building, furniture, household appliance, commodity, and text sign 3D models. Through a user-side device the user can customize the geographic position of an object 3D model; the types of the human body and object 3D models; the text content and spatial position of a text or QR-code sign 3D model; and the spatial positions of the object and human body 3D models, thereby building the virtual scene he or she needs. Preferably, the number of service devices is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100, 1000, 10 thousand, 50 thousand, 100 thousand, 1 million, 10 million, 100 million, 1 billion, 10 billion, or 100 billion; preferably, the number of user-side devices takes the same range of values.
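The claimed data layout can be sketched as plain data structures. All class and field names below are illustrative assumptions; the patent names no concrete data format:

```python
from dataclasses import dataclass, field

@dataclass
class Model3D:
    # One object 3D model: building, furniture, appliance, commodity, or sign
    kind: str                                   # e.g. "building", "sign"
    geo_position: tuple                         # location on the virtual map
    spatial_position: tuple = (0.0, 0.0, 0.0)   # (x, y, z) within the scene
    text_content: str = ""                      # text/QR content for signs

@dataclass
class VirtualScene:
    # A user-defined scene assembled from the service device's model library
    models: list = field(default_factory=list)

    def place(self, model):
        self.models.append(model)

# A user-side device customizes a scene by placing models from the library
scene = VirtualScene()
scene.place(Model3D(kind="building", geo_position=(39.9, 116.4)))
scene.place(Model3D(kind="sign", geo_position=(39.9, 116.4),
                    spatial_position=(1.0, 0.0, 2.5), text_content="SALE"))
print(len(scene.models))  # 2
```

The same structures would cover the human body 3D models, with the avatar's limb and expression state added as fields.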
Further, the data imported into the service device through the user-side device include object 3D model data files and human body 3D model data files.
Further, the service device also contains weather data, such as sun, sky, clouds, rain, snow, wind, temperature, humidity, fog, overcast, night, and moon, which can be displayed in a virtual scene; the weather data at different geographic positions in the virtual map are the real-time weather information of the corresponding real-world locations.
Further, the map data include roads, streets, rivers, lakes, seas, mountains, villages, towns, fields, scenic spots, and buildings. The service device also contains 3D model color data, animal 3D model data, and plant 3D model data. The building 3D model includes single-storey and multi-storey building models, a building exterior wall model, interior layout and furnishing models, an interior decoration model, and a lighting model. The character 3D model includes hairstyle, ornament, mirror, camera, clothes, shoe, glove, and face 3D models, and the face 3D model supports face replacement from a photo. Each 3D model comes in more than 10 different variants, and each supports replacing its color data and replacing its appearance with a picture; preferably, the number of variants per 3D model is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100, 1000, 10 thousand, 50 thousand, 100 thousand, 1 million, 10 million, 100 million, 1 billion, 10 billion, 100 billion, 1 trillion, or 10 trillion.
Further, the user-side equipment also includes a cylindrical or columnar display chamber. The display chamber comprises a shell and a main controller, together with a power module connected to the main controller, a communication module that can connect to a network, a sound sensor, a loudspeaker, a limb motion sensor, a facial expression recognition sensor, a camera, a surrounding display screen, a top display screen, and a bottom display screen; it can be used for scene display or for capturing the live image of a real person.
Further, the metaverse system also comprises a cloud computer, which is remotely controlled through the MR head display device.
Further, the user-side device is an MR head display device that can display a virtual scene, a real scene, or a combined virtual-and-real scene. The MR head display device comprises a shell, a main controller, and, connected to the main controller, a display screen, a sound sensor, a loudspeaker, a limb motion sensor, a positioning module, and a communication module that can connect to a network. It can remotely connect to and control a cloud computer over the network, displaying the cloud computer's desktop image together with a virtual keyboard, a virtual mouse, and a virtual human hand; the virtual keyboard is displayed below the desktop image, and the virtual mouse to the right of the virtual keyboard. The MR head display device also includes a hand scanning and tracking system that scans and tracks the position of the hand below and the motion track of each finger; when the hand or a finger moves, the virtual hand moves correspondingly, guiding the person to operate the virtual keyboard or virtual mouse and thereby the computer. Preferably, the limb motion sensor is a gyroscope, electronic compass, acceleration sensor, angle sensor, displacement sensor, or camera, and the sensor of the hand scanning and tracking system is an ALS light sensor, PS distance sensor, infrared sensor, camera, gyroscope, acceleration sensor, angle sensor, or displacement sensor.
Further, when the MR head display device is not connected to a cloud computer, it can still scan and track the motion of the hands and fingers through the hand scanning and tracking system; when a hand or finger moves, the virtual hand moves correspondingly, guiding the person's operation and menu selection. User A and user B can carry out voice communication and interaction through MR head display devices A and B respectively. The hand scanning and tracking system includes an automatic stabilizer: when the head moves the MR head display device within a small range while the hand stays still, the stabilizer keeps the tracking sensor steady, so the virtual hand in the displayed picture remains still. Preferably, the sensor of the automatic stabilizer is a camera, gyroscope, acceleration sensor, angle sensor, or displacement sensor.
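The automatic stabilizer described above can be approximated as canceling the head-pose delta out of the hand-sensor reading, so a small head movement does not shift the rendered virtual hand. A minimal one-dimensional sketch; the function name and the "small range" threshold are assumptions:

```python
def stabilized_hand_position(raw_hand_pos, head_delta, threshold=0.05):
    """Keep the virtual hand still under small head movements.

    raw_hand_pos: hand position as seen by the head-mounted sensor
    head_delta:   how far the head (and thus the sensor) has moved
    threshold:    motions below this count as the 'small range'
    """
    if abs(head_delta) < threshold:
        # Small head motion: add the delta back so the virtual hand stays put
        return raw_hand_pos + head_delta
    return raw_hand_pos  # large motion: trust the raw reading

# The head nods by 0.03125 while the hand is physically still, so the
# sensor sees the hand shifted to 0.46875; compensation restores 0.5.
print(stabilized_hand_position(0.46875, 0.03125))  # 0.5
```

A real device would apply the same compensation per axis using the gyroscope or camera pose.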
Further, the MR head display device also comprises a mouth motion sensor and a facial expression sensor. User A can set his or her virtual role as human body 3D model A through MR head display device A; device A uploads user A's limb, expression, and mouth motion information to the service device, which forwards it to MR head display device B in the same virtual space, so user B sees human body 3D model A performing the limb, expression, and mouth motions corresponding to user A's. The MR head display device can connect to a smartphone through its WIFI communication module, display the phone's screen interface, play its sound, display a virtual hand, and control the phone interface through click and slide gestures; remote video calls can be made through the device's display screen, sound sensor, and loudspeaker. Preferably, the mouth motion sensor and the facial expression sensor are infrared sensors, cameras, ALS light sensors, or PS distance sensors.
Further, the MR head display device has a video recording function and also comprises a camera mounted on a moving mechanism with a horizontal rotation angle of at least 180 degrees and a vertical rotation angle of at least 45 degrees. MR head display device A can connect to MR head display device B over a network: device A displays in real time the video captured by device B's camera and plays through its loudspeaker the sound captured by device B's sound sensor, and device B likewise transmits its camera and sound-sensor data to device A in real time for playback through device A's display screen and loudspeaker. MR head display device A can connect to n MR head display devices B simultaneously, where n is at least 10; preferably, n is 10, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100, 1000, 10 thousand, 100 thousand, 1 million, 10 million, 100 million, 1 billion, 10 billion, or 100 billion.
Further, MR head display device B can connect to MR head display device A by scanning a QR code or entering a code. The MR head display device also includes instant messaging software with an address book, supporting video calls, voice calls, and text communication with friends or with unknown users.
Further, the MR head display device has the functions of a smartphone, including voice calls, video calls, short messages, and installing and running apps. The shell also includes a fingerprint identification module through which fingerprint passwords, fingerprint payment, and launching the QR-code scanner can be realized. The MR head display device can enter a given virtual space by scanning a QR code or entering a code instruction; by clicking a position on a virtual map or searching by geographic information; or by entering map longitude and latitude. The MR head display device has three working modes: AR, VR, and MR.
Further, the metaverse system also comprises a cloud projector containing a communication module that can connect to a network, so that the desktop image of a cloud computer can be remotely projected to it.
Further, the metaverse system also comprises a cloud printer containing a communication module that can connect to a network, so that files can be printed on it through a cloud computer.
Further, the MR head display device has computer functions and can process files, connect to the cloud projector for projection, or connect to the cloud printer for printing.
Further, the metaverse system also comprises a 3D scanner that can capture real objects, people, or animals as 3D model data files and transmit them to a service device for the user to select and set.
Further, the metaverse system also comprises a smart car with a communication module; the car connects to the network through the module, which is a 5G communication module, and a user can drive it remotely through a user-side device.
Further, the user-side equipment also includes an avatar robot, which captures video and voice of a real scene in real time for the MR head display device. The avatar robot comprises a shell, a main controller, and, connected to the main controller, a power module, a communication module that can connect to a network, a camera, a sound sensor, a loudspeaker, a distance sensor, and a driving mechanism for moving the body. The user's MR head display device communicates with the avatar robot over the network and displays in real time the video and sound the robot captures; the user can also transmit sound to the avatar robot in real time through the MR head display device, and the robot broadcasts it through its loudspeaker.
Further, the avatar robot comprises a storage battery and can automatically move to a charger when its charge runs low.
Further, the service device can replace the real image of avatar robot B, as captured by avatar robot A's camera, with the user-specified virtual human body 3D model B; when avatar robot B moves, virtual model B moves correspondingly. The avatar robot also has movable limbs and touch sensors on its body surface, and the MR head display device has a limb motion sensor and touch action actuators: when user A, wearing the MR head display device, moves his or her limbs, the avatar robot's limbs move correspondingly, and when user B touches a touch sensor on the avatar robot, the corresponding touch actuator on user A's side acts so that user A feels the touch. The body-surface touch sensors are resistive or capacitive, two or more in number; the touch actuators are motor-type or electromagnet-type vibrators, two or more in number, mounted on the clothes. Preferably, the number of sensors or touch actuators is 2, 3, 4, 5, 6, 7, 8, 9, 10, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100, 1000, 10 thousand, 50 thousand, 100 thousand, 1 million, 10 million, 100 million, 1 billion, 10 billion, 100 billion, 1 trillion, or 10 trillion.
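The touch-sensor-to-actuator coupling reads as an index-for-index mapping: sensor i on the avatar robot drives vibrator i on the wearer's clothes. A minimal sketch under that assumption (class name and channel count are illustrative):

```python
class HapticLink:
    """Index-for-index link from robot touch sensors to garment vibrators."""

    def __init__(self, n_channels):
        self.active = [False] * n_channels  # vibrator on/off states

    def on_touch(self, sensor_index, pressed):
        # Sensor i on the avatar robot drives actuator i on the clothes
        if 0 <= sensor_index < len(self.active):
            self.active[sensor_index] = pressed
        return self.active

link = HapticLink(n_channels=4)
print(link.on_touch(2, True))   # [False, False, True, False]
print(link.on_touch(2, False))  # [False, False, False, False]
```

A real system would also carry touch intensity and transmit the state over the network link between robot and head display device.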
Further preferably, the communication module of the user-side device is a WIFI wireless communication module or a 5G communication module.
Further, the service device or the user-side device includes voice translation software supporting real-time voice or text translation between different languages.
Further, the user-side device also includes payment software, enabling payment transactions between users.
Further, the currency for payment software transactions is digital currency.
Further, a user interaction method in the metaverse system: a user connects to the service device through a user-side device, selects on the service device's virtual map the place where a virtual scene is to be built, selects building 3D model data from the service device and places the building model on that site, then selects furniture, commodity, and text or QR-code sign 3D models from the service device to build the scene inside the house, placing each model at a fixed position in the space inside, or outside, the house.
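The scene-building steps above can be sketched as a short client-side flow. The service API names below are hypothetical; the patent does not specify an interface:

```python
class ServiceDevice:
    """Hypothetical stand-in for the patent's service device."""

    def __init__(self):
        # Library of 3D models the service device offers for selection
        self.library = {"building": "shop-3d", "furniture": "shelf-3d",
                        "commodity": "coat-3d", "sign": "qr-3d"}
        self.scenes = {}

    def new_scene(self, site):
        # Reserve a place on the virtual map for a new scene
        self.scenes[site] = []
        return self.scenes[site]

def build_shop_scene(service, site):
    scene = service.new_scene(site)             # 1. pick the map location
    scene.append(service.library["building"])   # 2. place the building model
    scene.append(service.library["furniture"])  # 3. furnish the interior
    scene.append(service.library["sign"])       # 4. add the QR-code sign
    return scene

svc = ServiceDevice()
print(build_shop_scene(svc, (39.9, 116.4)))  # ['shop-3d', 'shelf-3d', 'qr-3d']
```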
Further, a user interaction method in the metaverse system: the user selects a human body 3D model from the service device as his or her virtual avatar, which is displayed in the virtual scene; when the user's limbs, mouth, or expression change or move, the avatar changes or moves correspondingly.
Further, a user interaction method in the metaverse system: users A and B enter the same virtual space through user-side devices A and B respectively; user B sees user A's virtual avatar A through device B, user A sees avatar B through device A, and the two can communicate through body motion and voice. The number of user-side devices is 2 or more, preferably 2, 3, 4, 5, 6, 7, 8, 9, 10, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100, 1000, 10 thousand, 50 thousand, 100 thousand, 1 million, 10 million, 100 million, 1 billion, 10 billion, or 100 billion.
Further, a user interaction method in the metaverse system: user A sees his or her own virtual avatar A through user-side device A; user B arranges clothing commodity 3D models, price tags, and a payment QR code in the virtual scene through user-side device B to display the goods; user A selects clothing commodity 3D model A to try on through avatar A, and the model is then worn on avatar A.
Further, a user interaction method in the metaverse system: user A starts the QR-code scanner by touching the fingerprint identification module of user-side device A, scans the payment QR code to start the payment function, and completes payment by touching the fingerprint identification module again; when human body 3D model A stands in front of the mirror 3D model in the virtual scene, its mirror image is seen.
Further, the user-side equipment in the metaverse system includes the avatar robot, and the user interaction method includes: avatar robot A captures images and sound of a real scene through its camera and sound sensor and uploads them to the service device; the service device stores appearance data for avatar robot B, and when it recognizes robot B's appearance in the images, it replaces that appearance with virtual human body 3D model B and combines the model with a virtual or real scene, realizing the appearance of a virtual character in a real scene or in a virtual scene.
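The replacement step can be sketched as a per-frame lookup: frames whose recognized appearance matches a registered avatar robot are re-rendered with the user-chosen human body 3D model. The `detect` function below is a stand-in for real appearance recognition; all names are illustrative:

```python
REGISTERED = {"robot-B": "human-3d-model-B"}  # appearance id -> chosen avatar

def detect(frame):
    # Stand-in recognizer; the real system matches stored appearance data
    return frame.get("appearance")

def render(frame):
    appearance = detect(frame)
    if appearance in REGISTERED:
        # Replace the robot's real image with the user's virtual body model
        return {"scene": frame["scene"], "figure": REGISTERED[appearance]}
    return frame  # no registered robot in view: pass the frame through

print(render({"scene": "office", "appearance": "robot-B"}))
# {'scene': 'office', 'figure': 'human-3d-model-B'}
```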
Further, the user-side equipment in the metaverse system includes the avatar robot and the MR head display device: when user B's body moves, avatar robot B moves correspondingly, and so does virtual human body 3D model B; when user A touches a touch sensor of avatar robot B, the corresponding touch actuator of MR head display device B acts, and user B, wearing device B, feels the touch.
Further, a user interaction method in the metaverse system: user-side device A captures images and sound of a real scene and uploads them to the service device; the service device separates out the person's image to produce real character image A, combines it with a virtual scene, and displays the result on user-side device B in real time, so that a real person appears in the virtual scene.
Further, a user interaction method in the metaverse system: user-side device A captures images and sound of a real scene and uploads them to the service device; the service device separates out the person's image to produce real character image A, combines it with a real scene, and displays the result on user-side device B's display in real time, so that a remote real person appears in the local real scene. The real scene is obtained either because device B's display is semi-transparent, letting user B see the surroundings while viewing image A, or through device B's camera.
Further, a user voice interaction method in the metaverse system: user A speaks language A into the sound sensor of user-side device A; the translation software translates it into language B, which is played through the loudspeaker of user-side device B; user B speaks language B into device B's sound sensor, the software translates it into language A, and it is played through device A's loudspeaker. The language types can be set through the user-side device.
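The two-way translation flow can be sketched as a relay between the two devices. The phrasebook dictionary below stands in for real speech recognition and machine translation, which the patent leaves to the translation software:

```python
PHRASEBOOK = {("zh", "en"): {"你好": "hello"},
              ("en", "zh"): {"hello": "你好"}}

def relay(text, src_lang, dst_lang):
    """Return what the destination user's loudspeaker should play."""
    # Unknown phrases pass through untranslated in this toy version
    return PHRASEBOOK[(src_lang, dst_lang)].get(text, text)

print(relay("你好", "zh", "en"))   # user A speaks Chinese; B hears: hello
print(relay("hello", "en", "zh"))  # user B replies; A hears: 你好
```

Each direction is an independent call, matching the symmetric A-to-B and B-to-A paths in the claim.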
Further, a user interaction method in the metaverse system: each service device contains different map data, whose unit is a village, town, city, province, country, or continent; the user enters a service device's map data by clicking its icon through the MR head display device, then clicks a specific position on the map to display the virtual scene at that position.
Further, the building, furniture, commodity, text or QR-code sign, and human body 3D models in the metaverse system are selected as follows: the user uploads a 3D model data file to the service device through the user-side device and then selects it from the service device.
Further, the appearance of the building, furniture, commodity, text or QR-code sign, and human body 3D models in the metaverse system is changed as follows: the user uploads a photo or picture file to the service device through the user-side device, selects it from the service device, selects the part of the model whose appearance is to be changed, and that part then takes on the appearance of the photo or picture.
Further, the user-side device in the metaverse system can display a computer's or phone's desktop image together with a virtual keyboard, virtual mouse, and virtual human hand, the keyboard below the desktop image and the mouse to the right of the keyboard. The device scans and tracks the position of the hand and the motion track of each finger; when the hand or a finger moves, the virtual hand moves correspondingly, guiding the user to operate the virtual keyboard or mouse and thereby the phone or computer. The hand scanning and tracking system includes an automatic stabilizer: when the head moves the user-side device within a small range while the hands stay still, the stabilizer keeps the tracking sensor steady, so the virtual hands in the displayed picture remain still.
Further, in the virtual scene, the closer human body 3D model A is to human body 3D model B, the louder B's voice sounds.
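This distance-dependent loudness can be sketched with a simple attenuation function. The inverse-distance falloff law and the reference distance are assumptions; the patent only states that volume rises as the avatars get closer:

```python
def voice_volume(distance, max_volume=1.0, reference=1.0):
    """Volume of avatar B's voice as heard by avatar A."""
    if distance <= reference:
        return max_volume                     # very close: full volume
    return max_volume * reference / distance  # farther away: quieter

print(voice_volume(0.5))  # 1.0
print(voice_volume(2.0))  # 0.5
print(voice_volume(4.0))  # 0.25
```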
The beneficial effects of the invention are:
First, people can interact through voice, video, motion, and body sensation over a network of user-side devices, making communication more realistic and rich.
Second, people in different places can connect to the network through user-side devices for real-time interaction as needed, unconstrained by space or time.
Third, multiple users can interact in the same virtual space through multiple networked user-side devices, making multi-person interaction more convenient.
Drawings
Fig. 1 is a schematic diagram of the metaverse system architecture according to the first, second, and third embodiments of the present invention.
Fig. 2 is a schematic appearance diagram of a user-side device according to the first, second, and third embodiments of the present invention.
Fig. 3 is a schematic appearance diagram of avatar robot A according to the third embodiment of the present invention.
Fig. 4 is a schematic appearance diagram of avatar robot B according to the third embodiment of the present invention.
Detailed Description
To enhance understanding of the present invention, it is further described below with reference to the following embodiments and the accompanying drawings, which are provided for illustration only and are not intended to limit the scope of the invention.
The first embodiment.
The metaverse system includes 2 service devices and 100 user-side devices. The service devices contain object 3D model data, human body 3D model data, and map data; the object 3D model data include building, furniture, household appliance, commodity, and text sign 3D models. Through a user-side device the user can customize the geographic position of an object 3D model; the types of the human body and object 3D models; the text content, QR-code content, and spatial position of a text or QR-code sign 3D model; and the spatial positions of the object and human body 3D models, and can thereby build the shop virtual scene he or she needs.
The data imported into the service device through the user-side device include object 3D model data files and human body 3D model data files.
The map data include roads, streets, rivers, lakes, seas, mountains, villages, towns, fields, scenic spots, and buildings. The service device also contains 3D model color data, animal 3D model data, and plant 3D model data. The building 3D model includes single-storey and multi-storey building models, a building exterior wall model, interior layout and furnishing models, an interior decoration model, and a lighting model. The character 3D model includes hairstyle, ornament, mirror, camera, clothes, shoe, glove, and face 3D models, and the face 3D model supports face replacement from a photo. Each 3D model comes in more than 10 different variants, and each supports replacing its color data and replacing its appearance with a picture; preferably, the number of variants per 3D model is 1000.
The MR head display device comprises a shell, a main controller, and, connected to the main controller, a display screen, a sound sensor, a loudspeaker, a limb motion sensor, a positioning module, and a communication module that can connect to a network; the limb motion sensor is an electronic compass and a gyroscope.
The MR head display device further comprises a mouth-motion sensor and a facial-expression sensor, both implemented as cameras. User A can set his or her virtual avatar to human-body 3D model A through MR head display device A; device A uploads user A's limb-motion, expression, and mouth-motion information to the service device, which forwards it to MR head display device B in the same virtual space, so that user B, using device B, sees human-body 3D model A performing the limb, expression, and mouth motions corresponding to user A.
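The upload-and-forward flow just described is essentially a publish/subscribe relay keyed by virtual space. The minimal sketch below, with illustrative names only, shows the service device forwarding one user's motion data to every other headset in the same space.

```python
# Minimal relay sketch: device A uploads motion data to the service
# device, which forwards it to every other headset in the same space.
class ServiceDevice:
    def __init__(self):
        self.spaces = {}  # space_id -> {user_id: device}

    def join(self, space_id, user_id, device):
        self.spaces.setdefault(space_id, {})[user_id] = device

    def upload_motion(self, space_id, sender_id, motion):
        # forward to every other device in the same virtual space
        for uid, dev in self.spaces.get(space_id, {}).items():
            if uid != sender_id:
                dev.receive(sender_id, motion)

class Headset:
    def __init__(self):
        self.seen = []   # motion updates received from other users

    def receive(self, sender_id, motion):
        self.seen.append((sender_id, motion))

svc = ServiceDevice()
a, b = Headset(), Headset()
svc.join("space1", "A", a)
svc.join("space1", "B", b)
svc.upload_motion("space1", "A", {"limb": "wave", "mouth": "open"})
print(b.seen)  # B receives A's motion; A does not receive its own
```

A real system would stream these updates continuously over the network; the sketch only fixes the routing rule (same space, excluding the sender).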
MR head display device A can be connected to n MR head display devices B simultaneously; in this embodiment, n = 80.
The housing of the MR head display device also carries a fingerprint-recognition module, which supports fingerprint passwords, fingerprint payment, and launching QR-code scanning. The MR head display device can enter a given virtual space by scanning a QR code or entering a code; it can also enter a virtual space by clicking a position on the virtual map, searching by geographic information, or entering map longitude and latitude. The MR head display device supports three working modes: AR, VR, and MR.
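The several entry paths above (QR code, numeric code, map coordinates) can all be seen as lookups into the same space registry. The resolver below is a hypothetical sketch; the registries and payload format are assumptions, not part of the patent.

```python
# Illustrative space registries; in practice these would live on the
# service device.
spaces_by_code = {"SHOP-001": "space1"}
spaces_by_geo = {(39.9, 116.4): "space1"}

def resolve_space(qr=None, code=None, lon_lat=None):
    """Resolve a virtual-space id from any of the described entry paths."""
    if qr:                       # assume the QR payload is the space code
        return spaces_by_code.get(qr)
    if code:                     # manually entered code
        return spaces_by_code.get(code)
    if lon_lat:                  # map longitude/latitude entry
        return spaces_by_geo.get(lon_lat)
    return None

print(resolve_space(code="SHOP-001"))        # space1
print(resolve_space(lon_lat=(39.9, 116.4)))  # space1
```

Whichever input the user provides, the device ends up with one space identifier to join.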
The metaverse system further comprises a 3D scanner, which can turn real objects, people, and animals into 3D model data files and transmit them to the service device for users to select and configure.
The communication module of the user-side device is a WIFI wireless communication module.
The service device or the user-side device includes voice-translation software.
The user-side device also includes payment software, which implements a payment function and supports payment transactions between users.
The currency used by the payment software is digital currency.
In the user interaction method of the metaverse system, a user connects to the service device through a user-side device, selects a site on the service device's virtual map where a virtual scene is to be built, selects building 3D model data from the service device and places the building 3D model on the site, then selects furniture, commodity, and text or QR-code sign 3D models from the service device to furnish the house virtual scene, placing each 3D model at a fixed position in the space inside or outside the house.
In the user interaction method of the metaverse system, a user can select a human-body 3D model from the service device as his or her virtual avatar, which is displayed in the virtual scene; when the user's limbs, mouth, or expression change or move, the virtual avatar makes the corresponding change or movement.
In the user interaction method of the metaverse system, user A and user B enter the same virtual space through user-side devices A and B respectively; user B sees user A's virtual avatar A through device B, user A sees user B's virtual avatar B through device A, and the two can communicate through body motion and voice.
In the user interaction method of the metaverse system, user A sees his or her own virtual avatar A through user-side device A. User B, through user-side device B, arranges clothing-commodity 3D models, price tags, and a payment QR code in the virtual scene to display clothing commodities. User A selects clothing-commodity 3D model A through virtual avatar A to try it on, and the clothing-commodity 3D model A is then worn on virtual avatar A.
In the user interaction method of the metaverse system, user A starts the QR-code scanning function by touching the fingerprint-recognition module of user-side device A, starts the payment function after scanning the payment QR code, and completes the payment by touching the fingerprint-recognition module again. When virtual human-body 3D model A stands in front of the mirror 3D model in the virtual scene, its mirror image is shown, so the fitting effect of the clothing can be seen. In first-person (simulated real) view mode, the user's viewpoint coincides with the eyes of the avatar 3D model, so the avatar's full appearance cannot be seen directly but can be seen through the mirror 3D model; in game (third-person) view mode, the user can see the avatar's full appearance.
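The fingerprint-driven payment sequence above (touch to start scanning, scan the payment QR code, touch again to confirm) is a small state machine. The sketch below models only that sequence; class and state names are illustrative assumptions.

```python
# State machine for the described flow:
# idle --touch--> scanning --scan "pay:..."--> awaiting_confirm --touch--> paid
class PaymentFlow:
    def __init__(self):
        self.state = "idle"
        self.payee = None

    def touch_fingerprint(self):
        if self.state == "idle":
            self.state = "scanning"       # first touch starts QR scanning
        elif self.state == "awaiting_confirm":
            self.state = "paid"           # second touch confirms payment

    def scan_qr(self, payload):
        # only a payment QR code advances the flow
        if self.state == "scanning" and payload.startswith("pay:"):
            self.payee = payload[4:]
            self.state = "awaiting_confirm"

flow = PaymentFlow()
flow.touch_fingerprint()
flow.scan_qr("pay:merchant-123")
flow.touch_fingerprint()
print(flow.state, flow.payee)  # paid merchant-123
```

Modeling it this way makes the two distinct roles of the fingerprint touch (start scanning vs. confirm payment) explicit and order-dependent.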
User B can also arrange other commodity 3D models in the virtual scene for user A to purchase; a commodity 3D model can be a work created with 3D software, or can be generated by scanning a real article with a 3D scanner and uploaded to the service device by the user through a user-side device or a computer.
In the user interaction method of the metaverse system, user-side device A captures the image and sound of a real scene and uploads them to the service device; the service device separates out the person's image to generate real-person image A, combines it with a virtual scene, and displays the result on user-side device B in real time, so that a real person appears in the virtual scene.
In the user interaction method of the metaverse system, user-side device A captures the image and sound of a real scene and uploads them to the service device; the service device separates out the person's image to generate real-person image A, combines it with a real scene, and displays the result on the display of user-side device B in real time, so that a real person from one place appears in the real scene of another. The real scene can be obtained in two ways: the display of user-side device B is semi-transparent, so user B sees the surrounding real scene while seeing real-person image A on the display; or user-side device B captures the real scene through its camera.
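Both variants above reduce to the same compositing step: keep the pixels the segmentation marks as "person" and fill the rest from the chosen background (virtual or real). The toy sketch below assumes the segmentation mask is already computed; real systems would produce it with a person-segmentation model.

```python
# Toy compositing step: images are 2D lists of pixel values, and the
# mask marks person pixels with 1 and background pixels with 0.
def composite(frame, mask, background):
    # keep frame pixels where mask is 1 (person), background elsewhere
    return [[f if m else b
             for f, m, b in zip(frow, mrow, brow)]
            for frow, mrow, brow in zip(frame, mask, background)]

frame      = [[1, 2], [3, 4]]   # captured camera image (person + room)
mask       = [[1, 0], [0, 1]]   # 1 = person pixel, 0 = background pixel
background = [[9, 9], [9, 9]]   # virtual scene or remote real scene

print(composite(frame, mask, background))  # [[1, 9], [9, 4]]
```

Swapping in a virtual-scene background gives the "real person in a virtual scene" case; swapping in a camera feed from device B gives the "real person in another real scene" case.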
In the user voice-interaction method of the metaverse system, user A speaks language A into the sound sensor of user-side device A; the translation software translates it into language B, which is played through the speaker of user-side device B. User B speaks language B into the sound sensor of user-side device B; the translation software translates it into language A, which is played through the speaker of user-side device A. The language types can be set on the user-side devices.
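The bidirectional translation path above can be sketched as a single relay function applied in both directions. The lookup-table "translator" below is a stand-in for real speech-recognition and translation software; the table contents and function names are assumptions for illustration.

```python
# Stand-in translation table; a real system would call translation software.
TRANSLATIONS = {
    ("zh", "en"): {"你好": "hello"},
    ("en", "zh"): {"hello": "你好"},
}

def translate(text, src, dst):
    # fall back to the original text if no translation is known
    return TRANSLATIONS.get((src, dst), {}).get(text, text)

def relay(text, speaker_lang, listener_lang):
    # runs on the service device or user-side device per the embodiment:
    # speech captured in the speaker's language is delivered to the
    # listener's speaker in the listener's language
    return translate(text, speaker_lang, listener_lang)

print(relay("你好", "zh", "en"))   # hello (played on device B's speaker)
print(relay("hello", "en", "zh"))  # 你好 (played on device A's speaker)
```

The same `relay` handles both directions because each user's configured language determines the source and target per utterance.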
In the user interaction method of the metaverse system, each service device contains different map data, organized by village, town, city, province, country, and continent. Through an MR head display device, a user clicks the service device's map data to enter it, clicks a specific position on the map, and the virtual scene of that position is displayed. The user can also enter the corresponding scene by scanning a QR code, entering the number corresponding to the scene, or searching for an address.
The selection mode of building, furniture, commodity, text or QR-code sign, and human-body 3D models in the metaverse system includes uploading a 3D model data file to the service device through a user-side device and then selecting it from the service device.
The appearance-change method for building, furniture, commodity, text or QR-code sign, and human-body 3D models in the metaverse system is as follows: the user uploads a photo or picture file to the service device through a user-side device, selects the photo or picture on the service device, selects the part of the 3D model whose appearance is to be changed, and the appearance of that part then changes to match the photo or picture.
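The upload-select-apply sequence just described amounts to maintaining a texture assignment per model part on the service device. The sketch below is a hypothetical minimal implementation; storage layout and names are assumptions.

```python
# Texture-assignment sketch: pictures are uploaded once, then bound to
# (model_id, part) pairs to change that part's appearance.
class ServiceDevice:
    def __init__(self):
        self.pictures = {}   # picture_id -> image data
        self.textures = {}   # (model_id, part) -> picture_id

    def upload_picture(self, picture_id, data):
        self.pictures[picture_id] = data

    def apply_appearance(self, model_id, part, picture_id):
        # only pictures already uploaded can be applied
        if picture_id in self.pictures:
            self.textures[(model_id, part)] = picture_id

svc = ServiceDevice()
svc.upload_picture("logo.png", b"\x89PNG...")
svc.apply_appearance("building_01", "outer_wall", "logo.png")
print(svc.textures[("building_01", "outer_wall")])  # logo.png
```

A renderer would then look up `textures[(model_id, part)]` when drawing each part, which is why the change applies uniformly wherever the model appears.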
This embodiment can be used in scenes such as shopping and live streaming. A merchant or streamer sells commodities in the virtual scene, and many customers can enter the scene at the same time to browse. Customers can inspect 3D models of the commodities, which aids purchasing decisions, and can interact with the merchant in real time through voice, video, and motion, making the shopping scene close to a real one. Shopping is not limited by location: people in different places communicate over the network, can enter virtual scenes of other regions, buy local specialties, and experience local culture.
When the merchant and the customer do not speak the same language, the translation software translates their speech into a language each can understand in real time, making communication very convenient.
In the virtual scene, the closer human-body 3D model A and human-body 3D model B are to each other, the louder each hears the other's voice.
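The proximity rule above can be realized with a distance-dependent gain. The inverse-distance formula with a cap below is one common choice and is an assumption for illustration; the patent only states the qualitative rule.

```python
# Distance-based voice gain: louder when avatars are closer, clamped so
# gain never exceeds max_gain at very short range.
def voice_gain(pos_a, pos_b, max_gain=1.0):
    dist = sum((a - b) ** 2 for a, b in zip(pos_a, pos_b)) ** 0.5
    return min(max_gain, 1.0 / max(dist, 1.0))

near = voice_gain((0, 0, 0), (1, 0, 0))    # 1 unit apart
far  = voice_gain((0, 0, 0), (10, 0, 0))   # 10 units apart
print(near > far)  # True: closer avatars hear each other louder
```

Each client would multiply the incoming voice stream's amplitude by this gain before playback.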
Embodiment 2.
The metaverse system includes 10 service devices and 10,000 user-side devices. Each service device contains object 3D model data, human-body 3D model data, and map data; the object 3D model data includes building, furniture, household-appliance, commodity, and text-sign 3D models. Through a user-side device, the user can customize the geographic position of object 3D models; the types of human-body and object 3D models; the text content, QR-code content, and spatial position of text or QR-code sign 3D models; and the spatial positions of object and human-body 3D models, and through such customization can build the exhibition virtual scene the user requires.
The data imported to the service device through the user-side device includes object 3D model data files and human-body 3D model data files.
The map data includes roads, streets, rivers, lakes, seas, mountains, villages, towns, fields, scenic spots, and buildings. The service device further contains 3D model color data, animal 3D model data, and plant 3D model data. The building 3D models include single-storey building, multi-storey building, building outer-wall, building interior layout and decoration, building interior furnishing, and lighting 3D models. The person 3D models include hairstyle, ornament, mirror, camera, clothing, shoe, glove, and face 3D models, and the face 3D model supports face replacement from a photo. Each 3D model category offers 10 variants, and every 3D model supports replacing its color data and replacing its appearance with a photo; preferably, each category offers 10,000 variants.
The metaverse system further comprises a cloud computer, which is remotely controlled through the MR head display device.
The metaverse system further comprises a cloud projector, which includes a communication module that can connect to a network; the desktop image of the cloud computer can be remotely projected through the cloud projector.
The metaverse system further comprises a cloud printer, which includes a communication module that can connect to a network; files can be printed by connecting to the cloud printer through the cloud computer.
The MR head display device has computer functions: it can process files, connect to the cloud projector for projection, or connect to the cloud printer for printing.
The MR head display device comprises a housing, a main controller, a display screen, a sound sensor, a speaker, a limb-motion sensor, a positioning module, and a communication module that can connect to a network. It can remotely connect to and control the cloud computer over the network, displaying the cloud computer's desktop image together with a virtual keyboard, a virtual mouse, and a virtual human hand; the virtual keyboard is shown below the desktop image and the virtual mouse to the right of the virtual keyboard. The MR head display device also includes a hand scanning and tracking system that scans and tracks the position of the hand below and the motion track of each finger; when the hand or a finger moves, the virtual hand makes the corresponding movement, guiding the person to operate the virtual keyboard or virtual mouse and thereby operate the computer. The limb-motion sensor comprises a gyroscope, an electronic compass, an acceleration sensor, an angle sensor, a displacement sensor, and a camera; the sensors of the hand scanning and tracking system comprise an ALS light sensor, a PS distance sensor, an infrared sensor, a camera, a gyroscope, an acceleration sensor, an angle sensor, and a displacement sensor.
When the MR head display device is not connected to a cloud computer, it can still scan and track the motion tracks of the hand and fingers through the hand scanning and tracking system; when the hand or a finger moves, the virtual hand makes the corresponding movement, guiding the person's operation and selection of system menus. User A and user B can carry out voice communication and interaction through MR head display devices A and B respectively. The hand scanning and tracking system includes an automatic stabilizer: when the person's head moves the MR head display device over a small range while the hand stays still, the automatic stabilizer keeps the sensors of the hand scanning and tracking system steady, so the virtual hand in the displayed picture remains still during small movements of the device. The sensors of the automatic stabilizer are a camera and a gyroscope.
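One way to realize the automatic stabilizer described above is in software: add the headset's measured displacement back to the hand position reported by the head-mounted sensor, so a stationary hand stays stationary on screen. The vector arithmetic below is an illustrative assumption based on the description, not the patent's stated mechanism.

```python
# Software stabilizer sketch: the head-mounted tracking sensor moves with
# the head, so a stationary hand appears to shift in the sensor frame.
# Adding the head's displacement (from gyroscope/camera) back recovers a
# stable hand position.
def stabilized_hand(raw_hand, head_offset):
    # raw_hand: hand position relative to the moving head-mounted sensor
    # head_offset: headset displacement in world space
    return tuple(h + o for h, o in zip(raw_hand, head_offset))

# Head dips 2 units; the stationary hand reads 2 units higher in the
# sensor frame, and compensation restores the original position.
print(stabilized_hand((10, 7, 30), (0, -2, 0)))  # (10, 5, 30)
```

The embodiment's mention of a camera and gyroscope as stabilizer sensors fits this scheme: they supply the `head_offset` estimate.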
The MR head display device comprises a housing, a main controller, a display screen, a sound sensor, a speaker, a limb-motion sensor, a positioning module, and a communication module that can connect to a network.
The MR head display device further comprises a mouth-motion sensor and a facial-expression sensor. User A can set his or her virtual avatar to human-body 3D model A through MR head display device A; device A uploads user A's limb-motion, expression, and mouth-motion information to the service device, which forwards it to MR head display device B in the same virtual space, so that user B, using device B, sees human-body 3D model A performing the limb, expression, and mouth motions corresponding to user A. The mouth-motion sensor and facial-expression sensor are infrared sensors and cameras.
MR head display device A can be connected to n MR head display devices B simultaneously; in this embodiment, n = 10,000.
The housing of the MR head display device also carries a fingerprint-recognition module, which supports fingerprint passwords, fingerprint payment, and launching QR-code scanning. The MR head display device can enter a given virtual space by scanning a QR code or entering a code; it can also enter a virtual space by clicking a position on the virtual map, searching by geographic information, or entering map longitude and latitude. The MR head display device supports three working modes: AR, VR, and MR.
The metaverse system further comprises a 3D scanner, which can turn real objects, people, and animals into 3D model data files and transmit them to the service device for users to select and configure.
The communication module of the user-side device is a 5G wireless communication module.
The service device or the user-side device includes voice-translation software that supports real-time voice and text translation between Chinese and English and between Chinese and Russian, in both directions.
The user-side device also includes payment software, which implements a payment function and supports payment transactions between users.
The currency used by the payment software is digital currency.
In the user interaction method of the metaverse system, a user connects to the service device through a user-side device, selects a site on the service device's virtual map where a virtual scene is to be built, selects building 3D model data from the service device and places the building 3D model on the site, then selects furniture, commodity, and text or QR-code sign 3D models from the service device to furnish the house virtual scene, placing each 3D model at a fixed position in the space inside or outside the house.
In the user interaction method of the metaverse system, a user can select a human-body 3D model from the service device as his or her virtual avatar, which is displayed in the virtual scene; when the user's limbs, mouth, or expression change or move, the virtual avatar makes the corresponding change or movement.
In the user interaction method of the metaverse system, user A and user B enter the same virtual space through user-side devices A and B respectively; user B sees user A's virtual avatar A through device B, user A sees user B's virtual avatar B through device A, and the two can communicate through body motion and voice.
In the user interaction method of the metaverse system, user A sees his or her own virtual avatar A through user-side device A. User B, through user-side device B, arranges clothing-commodity 3D models, price tags, and a payment QR code in the virtual scene to display clothing commodities. User A selects clothing-commodity 3D model A through virtual avatar A to try it on, and the clothing-commodity 3D model A is then worn on virtual avatar A.
In the user interaction method of the metaverse system, user A starts the QR-code scanning function by touching the fingerprint-recognition module of user-side device A, starts the payment function after scanning the payment QR code, and completes the payment by touching the fingerprint-recognition module again. When virtual human-body 3D model A stands in front of the mirror 3D model in the virtual scene, its mirror image is shown, so the fitting effect of the clothing can be seen. In first-person (simulated real) view mode, the user's viewpoint coincides with the eyes of the avatar 3D model, so the avatar's full appearance cannot be seen directly but can be seen through the mirror 3D model; in game (third-person) view mode, the user can see the avatar's full appearance.
User B can also arrange other commodity 3D models in the virtual scene for user A to purchase; a commodity 3D model can be a work created with 3D software, or can be generated by scanning a real article with a 3D scanner and uploaded to the service device by the user through a user-side device or a computer.
In the user interaction method of the metaverse system, user-side device A captures the image and sound of a real scene and uploads them to the service device; the service device separates out the person's image to generate real-person image A, combines it with a virtual scene, and displays the result on user-side device B in real time, so that a real person appears in the virtual scene.
In the user interaction method of the metaverse system, user-side device A captures the image and sound of a real scene and uploads them to the service device; the service device separates out the person's image to generate real-person image A, combines it with a real scene, and displays the result on the display of user-side device B in real time, so that a real person from one place appears in the real scene of another. The real scene can be obtained in two ways: the display of user-side device B is semi-transparent, so user B sees the surrounding real scene while seeing real-person image A on the display; or user-side device B captures the real scene through its camera.
In the user voice-interaction method of the metaverse system, user A speaks language A into the sound sensor of user-side device A; the translation software translates it into language B, which is played through the speaker of user-side device B. User B speaks language B into the sound sensor of user-side device B; the translation software translates it into language A, which is played through the speaker of user-side device A. The language types can be set on the user-side devices.
In the user interaction method of the metaverse system, each service device contains different map data, organized by village, town, city, province, country, and continent. Through an MR head display device, a user clicks the service device's map data to enter it, clicks a specific position on the map, and the virtual scene of that position is displayed. The user can also enter the corresponding scene by scanning a QR code, entering the number corresponding to the scene, or searching for an address.
The selection mode of building, furniture, commodity, text or QR-code sign, and human-body 3D models in the metaverse system includes uploading a 3D model data file to the service device through a user-side device and then selecting it from the service device.
The appearance-change method for building, furniture, commodity, text or QR-code sign, and human-body 3D models in the metaverse system is as follows: the user uploads a photo or picture file to the service device through a user-side device, selects the photo or picture on the service device, selects the part of the 3D model whose appearance is to be changed, and the appearance of that part then changes to match the photo or picture.
In the virtual scene, the closer human-body 3D model A and human-body 3D model B are to each other, the louder each hears the other's voice.
This embodiment can be used in scenes such as exhibitions and meetings. Users from different manufacturers build their own exhibition stands in the virtual scene and arrange the commodities they want to display; many customers can enter the scene at the same time to visit, communicate, and negotiate. Customers can see the other party's human-body 3D model or real-person image for interactive communication, and can inspect realistic commodity 3D models, making visits convenient. They can also interact with the merchant in real time in multiple ways, so the interactive scene is close to a real one and is not limited by location: people in different places enter the same virtual space through the network for interactive communication.
In a meeting scene, the cloud computer can be connected to the cloud projector to project slides and other files for viewing.
When the manufacturer and the client do not speak the same language, the translation software translates their speech into a language each can understand in real time, making communication very convenient.
In the virtual scene, the closer human-body 3D model A or real-person image A is to human-body 3D model B or real-person image B, the louder each hears the other's voice.
Embodiment 3.
The metaverse system includes 100 service devices and 100,000 user-side devices. Each service device contains object 3D model data, human-body 3D model data, and map data; the object 3D model data includes building, furniture, household-appliance, commodity, and text-sign 3D models. Through a user-side device, the user can customize the geographic position of object 3D models; the types of human-body and object 3D models; the text content, QR-code content, and spatial position of text or QR-code sign 3D models; and the spatial positions of object and human-body 3D models, and through such customization can build the virtual scene the user requires.
The data imported to the service device through the user-side device includes object 3D model data files and human-body 3D model data files.
The service device also contains weather data, such as sun, sky, clouds, rain, snow, wind, temperature, humidity, fog, overcast, night, and moon, which can be displayed in the virtual scene; the weather data at each geographic position in the virtual scene's map corresponds to real-time weather information at the same geographic position in reality.
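The real-time weather mapping above can be sketched as a lookup-and-translate step: query the real-world weather for the scene's coordinates, then map it onto renderable scene effects. `get_real_weather` below is a stand-in for an external weather-service query; all names and the effect table are assumptions for illustration.

```python
def get_real_weather(lat, lon):
    # placeholder for a real weather-service lookup at (lat, lon)
    return {"condition": "rain", "temp_c": 12, "humidity": 0.8}

def scene_weather(lat, lon):
    """Map real-world weather at a location onto virtual-scene effects."""
    w = get_real_weather(lat, lon)
    effects = {"rain": "rain_particles",
               "snow": "snow_particles",
               "sun": "sunlight"}
    return effects.get(w["condition"], "clear_sky"), w["temp_c"]

print(scene_weather(39.9, 116.4))  # ('rain_particles', 12)
```

Each virtual location polls this mapping periodically so the scene's sky, precipitation, and temperature display track reality.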
The map data includes roads, streets, rivers, lakes, seas, mountains, villages, towns, fields, scenic spots, and buildings. The service device further contains 3D model color data, animal 3D model data, and plant 3D model data. The building 3D models include single-storey building, multi-storey building, building outer-wall, building interior layout and decoration, building interior furnishing, and lighting 3D models. The person 3D models include hairstyle, ornament, mirror, camera, clothing, shoe, glove, and face 3D models, and the face 3D model supports face replacement from a photo. Each 3D model category offers 100 million variants, and every 3D model supports replacing its color data and replacing its appearance with a photo.
The user-side device further includes a cylindrical or tubular display cabin, which comprises a housing and a main controller, together with a power module connected to the main controller, a communication module that can connect to a network, a sound sensor, a speaker, a limb-motion sensor, a facial-expression recognition sensor, a camera, a display screen, a top display screen, and a bottom display screen.
Furthermore, the metaverse system comprises a cloud computer, which is remotely controlled through the MR head display device.
Furthermore, the user-side device is an MR head display device that can display a virtual scene, a real scene, or a combined virtual-and-real scene. The MR head display device comprises a housing, a main controller, a display screen, a sound sensor, a speaker, a limb-motion sensor, a positioning module, and a communication module that can connect to a network, with the display screen, sound sensor, speaker, limb-motion sensor, positioning module, and communication module all connected to the main controller. It can remotely connect to and control the cloud computer over the network, displaying the cloud computer's desktop image together with a virtual keyboard, a virtual mouse, and a virtual human hand; the virtual keyboard is shown below the desktop image and the virtual mouse to the right of the virtual keyboard. The MR head display device also includes a hand scanning and tracking system that scans and tracks the position of the hand below and the motion track of each finger; when the hand or a finger moves, the virtual hand makes the corresponding movement, guiding the person to operate the virtual keyboard or virtual mouse and thereby operate the computer. The limb-motion sensor comprises a gyroscope, an electronic compass, an acceleration sensor, an angle sensor, a displacement sensor, and a camera; the sensors of the hand scanning and tracking system comprise an ALS light sensor, a PS distance sensor, an infrared sensor, a camera, a gyroscope, an acceleration sensor, an angle sensor, and a displacement sensor.
Furthermore, when the MR head display device is not connected to a cloud computer, it can still scan and track the motion tracks of the hand and fingers through the hand scanning and tracking system; when the hand or a finger moves, the virtual hand makes the corresponding movement, guiding the person's operation and selection of system menus. User A and user B can carry out voice communication and interaction through MR head display devices A and B respectively. The hand scanning and tracking system includes an automatic stabilizer: when the person's head moves the MR head display device over a small range while the hand stays still, the automatic stabilizer keeps the sensors of the hand scanning and tracking system steady, so the virtual hand in the displayed picture remains still during small movements of the device. The sensors of the automatic stabilizer are a camera, a gyroscope, an acceleration sensor, an angle sensor, and a displacement sensor.
Furthermore, the MR head display device further comprises a mouth motion sensor and a facial expression sensor. User A can set, through MR head display device A, a human body 3D model A as user A's virtual avatar; MR head display device A uploads the limb motion, facial expression, and mouth motion information of user A to the service device, which forwards it to MR head display device B in the same virtual space, so that user B, using MR head display device B, sees human body 3D model A performing the limb, expression, and mouth motions of user A. The MR head display device can also connect to a smartphone through the WIFI communication module, display the smartphone's screen interface, and play the smartphone's sound; it likewise displays a virtual hand, so that tap and swipe gestures control the smartphone interface, and it can carry out remote video calls through its display screen, sound sensor, and loudspeaker. The mouth motion sensor and the facial expression sensor are composed of infrared sensors, cameras, ALS light sensors, and PS distance sensors.
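The motion-forwarding path above (device A uploads motion state to the service device, which relays it to every other device in the same virtual space) can be sketched as follows. All class and method names are hypothetical; the patent specifies no protocol or data format.

```python
# Hypothetical relay sketch: the service device forwards each user's limb,
# expression and mouth motion state to the other devices in the same space.

class ServiceDevice:
    def __init__(self):
        self.spaces = {}  # virtual-space id -> {device id: delivery callback}

    def join(self, space, device_id, on_state):
        """Register a device in a virtual space with a delivery callback."""
        self.spaces.setdefault(space, {})[device_id] = on_state

    def upload(self, space, sender_id, state):
        """Forward a motion-state dict to all *other* devices in the space."""
        for device_id, on_state in self.spaces.get(space, {}).items():
            if device_id != sender_id:
                on_state(sender_id, state)
```

A receiving device would use the delivered state to drive the sender's human body 3D model, e.g. posing its limbs and mouth each frame.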
Furthermore, the MR head display device has a video recording function and comprises a camera mounted on a moving mechanism with a horizontal rotation angle of at least 180 degrees and a vertical rotation angle of at least 45 degrees. MR head display device A can connect to MR head display device B through the network: device A can display in real time the video information acquired by the camera of device B and play in real time, through its loudspeaker, the sound information acquired by device B's sound sensor; device B can likewise transmit the video and sound information acquired by its camera and sound sensor to device A in real time, and device A presents them in real time through its display screen and loudspeaker. MR head display device A can be connected to n MR head display devices B simultaneously, where n = 50000.
MR head display device B can connect to MR head display device A by scanning a two-dimensional code or entering a code. The MR head display device further comprises instant messaging software, which includes an address book and supports video calls, voice calls, and text communication with friends or with unfamiliar users.
The MR head display device is characterized by having the functions of a smartphone, including voice calls, video calls, short message communication, and installing and running apps. The housing further includes a fingerprint identification module, through which fingerprint password, fingerprint payment, and launching of two-dimensional-code scanning can be realized. The MR head display device can enter a given virtual space by scanning a two-dimensional code or entering a code instruction; it can also enter a virtual space by clicking a position on a virtual map, by searching with geographical position information, or by entering the longitude and latitude of a map location. The MR head display device supports three working modes: AR, VR, and MR.
The metaverse system further comprises a 3D scanner, which can capture real objects, people, and animals as 3D model data files and transmit them to the service device for selection and setup by users.
The client devices also include an avatar robot that stands in for the user and acquires, in real time, video images and sound information of a real scene for the MR head display device. The avatar robot comprises a housing, a main controller, and, connected to the main controller, a power module, a communication module capable of connecting to a network, a camera, a sound sensor, a loudspeaker, a distance sensor, and a transmission mechanism for body movement. Through the network, a user can connect an MR head display device to the avatar robot: the MR head display device displays in real time the video and sound information acquired by the avatar robot, and the user can also transmit sound information in real time through the MR head display device to the avatar robot, which plays it in real time through its loudspeaker.
The service device can replace the real appearance image of avatar robot B, as captured by the camera of avatar robot A, with a virtual human body 3D model B designated by the user; when avatar robot B moves, virtual human body 3D model B moves correspondingly. The avatar robot further comprises movable four limbs and touch sensors on its body surface, and the MR head display device further comprises a limb motion sensor and touch action actuators. When the limbs of user A, wearing the MR head display device, move, the limbs of the avatar robot move correspondingly; when user B touches a touch sensor of the avatar robot, the corresponding touch action actuator of the MR head display device operates, so that user A feels the touch. The touch sensors on the body surface are of the resistive or capacitive type and number two or more; the touch actuators are electric-motor or electromagnet vibrators, number two or more, and are arranged on the user's clothing. Preferably, the number of sensors or touch actuators is 100.
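The touch feedback path just described pairs each body-surface touch sensor with a garment actuator by index. As an illustration only (the pairing scheme and pressure scale are assumptions, not specified by the patent), the routing could look like:

```python
# Hypothetical touch-routing sketch: a touch detected at sensor index i on
# the avatar robot triggers actuator index i on the remote user's garment.

def route_touch(sensor_events, num_actuators=100):
    """Map raised sensor indices to actuator commands.

    sensor_events -- iterable of (index, pressure) with pressure in 0..1
                     (assumed scale); zero-pressure and out-of-range events
                     are dropped because no matching actuator exists.
    """
    commands = []
    for index, pressure in sensor_events:
        if 0 <= index < num_actuators and pressure > 0:
            # Vibration strength scaled with touch pressure, clamped to 1.
            commands.append({"actuator": index, "strength": min(pressure, 1.0)})
    return commands
```

With the preferred count of 100 actuators, a touch at sensor 3 produces a vibration command for actuator 3 while spurious or unmapped events are discarded.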
The communication module of the client device is a 5G communication module.
The service device or the client device includes voice translation software supporting real-time speech and text translation between Chinese and English and between Chinese and Russian, in both directions.
The client device further includes payment software that realizes a payment function and enables payment transactions between users.
The currency used by the payment software for transactions is digital currency.
A user interaction method in the metaverse system comprises: a user connects to the service device through a client device; selects, on the virtual map of the service device, a place where a virtual scene is to be built; selects building 3D model data from the service device and places the building 3D model at that place; and selects furniture 3D models, commodity 3D models, and character or two-dimensional-code identification 3D models from the service device to build a house virtual scene, placing each 3D model at a fixed spatial position inside or outside the house.
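Structurally, the scene-building step above amounts to anchoring models chosen from the service device's library to positions, either a geographic map location or a local spatial offset. The following is only a data-structure sketch; every class and field name is hypothetical, since the patent specifies no format.

```python
# Hypothetical scene data-structure sketch for the building step above.
from dataclasses import dataclass, field

@dataclass
class Placement:
    model: str        # library id, e.g. "building/villa-02" (made-up name)
    position: tuple   # (longitude, latitude) on the map, or local (x, y, z)

@dataclass
class VirtualScene:
    site: tuple                              # map location chosen by the user
    placements: list = field(default_factory=list)

    def place(self, model, position):
        """Fix a selected 3D model at a spatial position in the scene."""
        self.placements.append(Placement(model, position))
```

A user session would create one `VirtualScene` at the chosen map site, place the building model first, then place furniture and commodity models at fixed offsets inside or outside it.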
A user interaction method in the metaverse system comprises: a user selects a human body 3D model from the service device as the user's virtual avatar, which is displayed in the virtual scene; when the user's limbs, mouth, or facial expression change or move, the virtual avatar changes or moves correspondingly.
A user interaction method in the metaverse system comprises: user A and user B enter the same virtual space through client device A and client device B, respectively; user B sees user A's virtual avatar A through client device B, and user A sees user B's virtual avatar B through client device A; users A and B can communicate through limb motion and voice. The number of client devices can reach 50000.
A user interaction method in the metaverse system comprises: user A sees his or her own virtual avatar A through client device A; user B, through client device B, arranges clothing commodity 3D models, price tags, and a payment-collection two-dimensional code in the virtual scene to display clothing commodities; user A selects clothing commodity 3D model A to try on through virtual avatar A, and clothing commodity 3D model A is then worn on virtual avatar A.
A user interaction method in the metaverse system comprises: user A starts the two-dimensional-code scanning function by touching the fingerprint identification module of client device A, starts the payment function after scanning the payment-collection two-dimensional code, and completes the payment by touching the fingerprint identification module again. When virtual human body 3D model A stands in front of a mirror 3D model in the virtual scene, its mirror image is seen.
The client devices in the metaverse system include avatar robots. Avatar robot A acquires images and sound of the real scene through its camera and sound sensor and uploads them to the service device, where appearance data of avatar robot B are stored. When the service device recognizes that an image contains the appearance of avatar robot B, it replaces that appearance in the image with virtual human body 3D model B and combines the model with the virtual or real scene, so that a virtual character appears in the real scene, or a virtual character appears in the virtual scene.
The client devices in the metaverse system include an avatar robot and an MR head display device. When the body of user B moves, avatar robot B moves correspondingly, and virtual human body 3D model B moves correspondingly as well. When user A touches a touch sensor of avatar robot B, the corresponding touch action actuator of MR head display device B operates, and user B, wearing MR head display device B, feels the touch.
A user interaction method in the metaverse system comprises: client device A acquires images and sound of a real scene and uploads them to the service device; the service device separates out the person's image to generate real character image A; real character image A is combined with a virtual scene and displayed on client device B in real time, so that a real person appears in the virtual scene.
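The compositing step above can be sketched at the pixel level: given a person mask produced by the (unspecified) separation step, the service device keeps the camera pixels where the person was detected and the virtual-scene pixels everywhere else. The mask-based formulation here is an assumption for illustration.

```python
# Hypothetical compositing sketch: overlay the separated real-person pixels
# onto a virtual-scene frame of the same size.

def composite(camera_frame, person_mask, virtual_frame):
    """All arguments are equal-sized 2D grids (lists of rows) of pixels;
    person_mask holds 1 where the person was detected, 0 elsewhere."""
    return [
        [cam if m else virt
         for cam, m, virt in zip(cam_row, mask_row, virt_row)]
        for cam_row, mask_row, virt_row
        in zip(camera_frame, person_mask, virtual_frame)
    ]
```

The same routine serves the next paragraph's variant as well: substituting a second camera frame for `virtual_frame` places the real person into another real scene.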
A user interaction method in the metaverse system comprises: client device A acquires images and sound of a real scene and uploads them to the service device; the service device separates out the person's image to generate real character image A; real character image A is combined with a real scene and displayed on the display of client device B in real time, so that a real person from one place appears in the real scene of another. The real scene may be obtained in either of two ways: the display of client device B is semitransparent, so that user B sees the surrounding real scene while seeing real character image A in the display; or client device B acquires the real scene through its camera.
A user voice interaction method in the metaverse system comprises: user A speaks language A into the sound sensor of client device A; the translation software translates it into language B, which is played through the loudspeaker of client device B; user B speaks language B into the sound sensor of client device B; the translation software translates it into language A, which is played through the loudspeaker of client device A. The language types can be set through the client devices.
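The two-way path above can be sketched with a stub translation table standing in for the real speech-translation software (the recognizer and synthesizer are outside the sketch, and the phrasebook entries are illustrative only):

```python
# Hypothetical relay sketch for the two-way voice-translation path: what one
# user's loudspeaker plays when the other speaks. A tiny phrasebook stands
# in for the actual translation engine.

PHRASEBOOK = {("zh", "en"): {"你好": "hello"},
              ("en", "zh"): {"hello": "你好"}}

def translate(text, src, dst):
    if src == dst:
        return text
    # Fall back to the untranslated input for unknown pairs or phrases.
    return PHRASEBOOK.get((src, dst), {}).get(text, text)

def relay(utterance, speaker_lang, listener_lang):
    """Return what the listener's loudspeaker plays for an utterance."""
    return translate(utterance, speaker_lang, listener_lang)
```

Each direction of the conversation is one `relay` call with the speaker's and listener's configured language types.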
A user interaction method in the metaverse system comprises: each service device contains different map data, organized in units of village, town, city, province, country, and continent; a user enters the map data of a service device by clicking its icon on the MR head display device, selects a specific position in the map data, and the virtual scene of that position is displayed.
The selection of the building 3D model, furniture 3D model, commodity 3D model, character or two-dimensional-code identification 3D model, and human body 3D model in the metaverse system includes a user uploading a 3D model data file to the service device through a client device and then selecting from the service device.
The appearance-changing method for the building 3D model, furniture 3D model, commodity 3D model, character or two-dimensional-code identification 3D model, and human body 3D model in the metaverse system comprises: a user uploads a picture or image file to the service device through a client device, selects it from the service device, and selects the part of the 3D model whose appearance is to be changed; the appearance of that part then changes to the appearance of the picture or image.
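In effect, the appearance-changing step assigns the uploaded picture as the texture of the selected model part. The following sketch is purely illustrative (class, part, and picture names are all assumptions; the patent defines no API):

```python
# Hypothetical texture-assignment sketch for the appearance-changing step.

class Model3D:
    def __init__(self, parts):
        # Each named part starts with no custom appearance assigned.
        self.textures = {part: None for part in parts}

    def apply_picture(self, part, picture_id):
        """Assign an uploaded picture as the appearance of one part."""
        if part not in self.textures:
            raise KeyError(f"model has no part named {part!r}")
        self.textures[part] = picture_id
```

Selecting, say, the outer wall of a building model and an uploaded picture would leave every other part's appearance unchanged.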
The client device in the metaverse system can display a computer or mobile phone desktop image, a virtual keyboard, a virtual mouse, and a virtual human hand; the virtual keyboard is displayed below the desktop image, and the virtual mouse to the right of the virtual keyboard. The client device scans and tracks the position of the hand and the motion trajectory of each finger; when the hand or a finger moves, the virtual hand moves correspondingly, guiding the user to perform the corresponding action to operate the virtual keyboard or virtual mouse and thereby operate the mobile phone or computer. The hand scanning and tracking system includes an automatic stabilizer: when the user's head moves the client device through a small range while the hands remain still, the automatic stabilizer keeps the sensors of the hand scanning and tracking system steady, so that the virtual hand in the display picture remains still during small movements of the client device.
This embodiment can be used in many application scenarios, such as social interaction, tourism, shopping, movie theaters, KTV, entertainment, meetings, exhibitions, live broadcasts, and concerts. It is particularly suitable for social, tourism, and entertainment scenarios: multiple users can enter a virtual scene at the same time, see each other's human body 3D model or real character image, and interact; a user can also enter a real scene through an avatar robot and see that real scene. Voice, video, motion, and other multi-directional interaction can take place between users in real time, making the experience more realistic and free from the limitations of place, so that people in different locations can communicate with one another and enjoy scenery over the network; users can even enter a foreign virtual or real scene and communicate and interact with local people.
For example, 10000 avatar robots may be arranged in a scenic spot. User A applies through MR head display device A to connect to and control avatar robot A, can control its movement, and sees the real scenic-spot scene through its camera, realizing remote tourism.
Alternatively, one avatar robot B is placed in user A's home. User B applies through MR head display device B to connect to and control avatar robot B, can control its movement, and sees the real scene through its camera, so that user B interacts with user A from a different place.
When users do not share a common language, the translation software can translate speech in real time into a language each can understand, making communication very convenient.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (21)

1. A metaverse system comprising one or more service devices and one or more client devices, characterized in that the service devices contain object 3D model data, human body 3D model data, and map data; the object 3D model data include a building 3D model, a furniture 3D model, a household appliance 3D model, a commodity 3D model, and a character identification 3D model; through a client device, a user can customize the geographic position of an object 3D model, the types of the human body 3D model and object 3D model, the text content and two-dimensional-code content of a character or two-dimensional-code identification 3D model, and the spatial positions of the object 3D model and the human body 3D model, and can thereby build the virtual scene the user requires.
2. The metaverse system according to claim 1, wherein the data imported into the service device by the client device include object 3D model data files and human body 3D model data files.
3. The metaverse system according to claim 2, wherein the service device further comprises weather data, including sun, sky, clouds, rain, snow, wind, temperature, humidity, fog, overcast sky, night, and moon, which can be displayed in a virtual scene; the weather data at different geographic positions in the virtual scene's map data are real-time weather information data of the corresponding geographic positions in reality.
4. The metaverse system according to claim 2, wherein the map data include roads, streets, rivers, lakes, seas, mountains, villages, towns, fields, attractions, and buildings; the service device further comprises 3D model color data, animal 3D model data, and plant 3D model data; the building 3D model includes a one-story building 3D model, a multi-story building 3D model, a building outer-wall 3D model, a building interior layout and decoration 3D model, a building interior furnishing 3D model, and a lighting 3D model; the character 3D model includes a hairstyle 3D model, an ornament 3D model, a mirror 3D model, a camera 3D model, a clothes 3D model, a shoe 3D model, a glove 3D model, and a face 3D model, the face 3D model supporting face replacement from a photo; more than 10 variants of each 3D model are provided, and each 3D model supports changing its color data and changing its appearance with a photo.
5. The metaverse system according to any one of claims 1 to 4, wherein the client device further comprises a cylindrical display bin, the display bin comprising a housing and a main controller, and further comprising a power supply module connected to the main controller, a communication module capable of connecting to a network, a sound sensor, a loudspeaker, a limb motion sensor, a facial expression recognition sensor, a camera, a peripheral display screen, a top display screen, and a bottom display screen.
6. The metaverse system according to any one of claims 1 to 4, further including a cloud computer, the cloud computer being remotely controlled by the MR head display device.
7. The metaverse system according to any one of claims 1 to 4, wherein the client device is an MR head display device capable of displaying a virtual scene, a real scene, or a scene combining the two; the MR head display device comprises a housing, a main controller, and, connected to the main controller, a display screen, a sound sensor, a loudspeaker, a limb motion sensor, a positioning module, and a communication module capable of connecting to a network; it can display a virtual keyboard, a virtual mouse, and a virtual human hand, the virtual keyboard being displayed below the desktop image and the virtual mouse to the right of the virtual keyboard; the MR head display device further comprises a hand scanning and tracking system that scans and tracks the position of the hand below the device and the motion trajectory of each finger, and when the hand or a finger moves, the virtual hand moves correspondingly.
8. The metaverse system according to claim 7, wherein the hand scanning and tracking system comprises an automatic stabilizer that keeps the virtual human hand still in the display when the MR head display device moves through a small range.
9. The metaverse system according to claim 8, wherein the MR head display device further comprises a limb motion sensor, a mouth motion sensor, and a facial expression sensor.
10. The metaverse system according to claim 9, wherein the MR head display device has a video recording function and further comprises a camera with a moving mechanism, the horizontal rotation angle being 180 degrees or more and the vertical rotation angle being 45 degrees or more; MR head display device A can be simultaneously connected to n MR head display devices B, where n is 10 or more.
11. The metaverse system according to claim 10, wherein smartphone functions are provided, including voice calls, video calls, SMS, and installing and running apps; the housing further includes a fingerprint identification module, through which fingerprint password, fingerprint payment, and launching of two-dimensional-code scanning can be realized; the MR head display device supports three working modes: AR, VR, and MR.
12. The metaverse system according to any one of claims 1 to 4 and 8 to 11, further comprising a cloud projector, the cloud projector comprising a communication module capable of connecting to a network, wherein desktop images of the cloud computer can be projected remotely through the cloud projector.
13. The metaverse system according to claim 12, further comprising a cloud printer, the cloud printer including a communication module capable of connecting to a network, wherein the cloud printer can be connected through the cloud computer to print a file.
14. The metaverse system according to claim 13, wherein the MR head display device has computer functions and can process a file, connect to the cloud projector for projection, or connect to the cloud printer for printing.
15. The metaverse system according to any one of claims 1 to 4, 8 to 11, 13, and 14, further comprising a 3D scanner for creating 3D model data files of real objects, people, or animals and transmitting them to a service device for selection and setup by a user.
16. The metaverse system according to any one of claims 1 to 4, 8 to 11, 13, and 14, further comprising a smart car having a communication module, wherein the smart car connects to the network through the communication module, a user can remotely drive the smart car through a client device, and the communication module is a 5G communication module.
17. The metaverse system according to any one of claims 1 to 4, 8 to 11, 13, and 14, wherein the client devices further include an avatar robot for acquiring, in real time, video images and sound information of a real scene for the MR head display device; the avatar robot comprises a housing, a main controller, and, connected to the main controller, a power module, a communication module capable of connecting to a network, a camera, a sound sensor, a loudspeaker, a distance sensor, and a transmission mechanism for body movement.
18. The metaverse system according to claim 17, wherein the avatar robot further comprises movable four limbs and touch sensors on its body surface, and the MR head display device further comprises a limb motion sensor and touch action actuators; the touch sensors on the body surface are of the resistive or capacitive type and number two or more; the touch actuators are electric-motor or electromagnet vibrators, number two or more, and are arranged on the user's clothing.
19. The metaverse system according to any one of claims 1 to 4, 8 to 11, 13, 14, and 18, wherein the service device or the client device includes voice translation software.
20. The metaverse system according to any one of claims 1 to 4, 8 to 11, 13, 14, and 18, wherein the client device further comprises payment software.
21. The metaverse system according to claim 20, wherein the currency of the payment software transactions includes digital currency.
CN202111513293.XA 2021-12-12 2021-12-12 Meta universe system Pending CN114359519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111513293.XA CN114359519A (en) 2021-12-12 2021-12-12 Meta universe system


Publications (1)

Publication Number Publication Date
CN114359519A true CN114359519A (en) 2022-04-15

Family

ID=81099866



Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115239435A (en) * 2022-08-08 2022-10-25 深圳市柔灵科技有限公司 Supply chain management system and method for flexible wearable equipment based on metauniverse
CN115239435B (en) * 2022-08-08 2024-02-02 深圳市柔灵科技有限公司 Flexible wearable equipment supply chain management system and method based on meta universe
CN116185206A (en) * 2023-04-27 2023-05-30 碳丝路文化传播(成都)有限公司 Method and system for synchronizing meta-cosmic weather and real weather


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination