CN115686183A - Mixed reality device and equipment, information processing method and storage medium


Info

Publication number
CN115686183A
Authority
CN
China
Prior art keywords
information
user
diving
display
mode
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110836392.5A
Other languages
Chinese (zh)
Inventor
范清文
郝帅
郑超
苗京花
陈丽莉
张浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202110836392.5A
Priority to PCT/CN2022/105084 (WO2023001019A1)
Publication of CN115686183A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present disclosure provide a mixed reality apparatus and device, an information processing method, and a storage medium. The mixed reality apparatus includes: a diving information acquisition module configured to acquire diving information; a processing module configured to implement the function of any one of an AR diving mode and a VR diving mode based on the diving information, where the AR diving mode includes one or more of a first function mode for monitoring whether the underwater environment the user is in enters an abnormal state, a second function mode for monitoring whether the user enters an abnormal state, a third function mode for displaying introduction information of a gazed object, and a fourth function mode for initiating a diving interaction, and the VR diving mode includes one or more of a fifth function mode for responding to a diving interaction and a sixth function mode for displaying a diving trajectory; and a display module configured to display either an augmented reality picture or a virtual reality picture.

Description

Mixed reality device and equipment, information processing method and storage medium
Technical Field
The present disclosure relates to, but is not limited to, the field of information processing technologies, and in particular to a mixed reality apparatus and device, an information processing method, and a storage medium.
Background
Divers typically carry diving equipment when performing underwater activities such as sightseeing, surveying, salvage, repair, and underwater engineering. However, because diving equipment designed for underwater environments tends to offer only a single function and insufficiently intelligent operation, most members of the public cannot experience the enjoyment of diving, which hinders the development and progress of diving activities. Therefore, there is a need for an intelligent diving apparatus with rich functionality.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
In a first aspect, an embodiment of the present disclosure provides a mixed reality apparatus, including: a processing module, a display module and a diving information acquisition module, wherein,
the diving information acquisition module is configured to acquire diving information and send the diving information to the processing module; wherein the diving information comprises: one or more of head information of the user, eye information of the user and environmental information of an underwater environment in which the user is located;
the processing module is configured to implement a function of any one of an Augmented Reality (AR) diving mode and a Virtual Reality (VR) diving mode based on the diving information; wherein the AR diving mode comprises: one or more of a first function mode for monitoring whether an abnormal state occurs in an underwater environment where a user is located, a second function mode for monitoring whether an abnormal state occurs in the user, a third function mode for displaying introduction information of a gazing object and a fourth function mode for initiating diving interaction, wherein the VR diving mode comprises: one or more of a fifth functional mode for responding to a diving interaction and a sixth functional mode for displaying a diving trajectory;
the display module is configured to display any one of an augmented reality screen and a virtual reality screen.
In a second aspect, an embodiment of the present disclosure provides an information processing method applied to the mixed reality apparatus described in the foregoing embodiments. The method includes: acquiring diving information through the diving information acquisition module, where the diving information includes one or more of head information of the user, eye information of the user, and environmental information of the underwater environment the user is in; implementing, based on the diving information, the function of any one of an augmented reality (AR) diving mode and a virtual reality (VR) diving mode, where the AR diving mode includes one or more of a first function mode for monitoring whether an abnormal state occurs in the underwater environment the user is in, a second function mode for monitoring whether an abnormal state occurs in the user, a third function mode for displaying introduction information of a gazed object, and a fourth function mode for initiating a diving interaction, and the VR diving mode includes one or more of a fifth function mode for responding to a diving interaction and a sixth function mode for displaying a diving trajectory; and controlling the display module to display either an augmented reality picture or a virtual reality picture.
In a third aspect, an embodiment of the present disclosure provides a mixed reality device, including: a processor and a memory storing a computer program operable on the processor, wherein the processor implements the steps of the information processing method described in the above embodiments when executing the program.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium, which includes a stored program, where, when the program runs, a device where the storage medium is located is controlled to execute the steps of the information processing method in the foregoing embodiment.
With the mixed reality apparatus and device, information processing method, and storage medium provided by the embodiments of the present disclosure, when a user wears the mixed reality apparatus and dives in an underwater environment, diving information is collected through the diving information acquisition module, and based on the collected diving information the processing module can implement the function of any one of the AR diving mode and the VR diving mode. The AR diving mode includes one or more of a first function mode for monitoring whether an abnormal state occurs in the underwater environment the user is in, a second function mode for monitoring whether an abnormal state occurs in the user, a third function mode for displaying introduction information of a gazed object, and a fourth function mode for initiating a diving interaction; the VR diving mode includes one or more of a fifth function mode for responding to a diving interaction and a sixth function mode for displaying a diving trajectory. An intelligent diving apparatus with rich functions can thus be realized, which helps promote the development and progress of diving activities.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the disclosure. Other advantages of the disclosure may be realized and attained by the instrumentalities and combinations particularly pointed out in the specification and the drawings.
Other aspects will be apparent upon reading and understanding the attached drawings and detailed description.
Drawings
The accompanying drawings are included to provide an understanding of the disclosed embodiments and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the examples serve to explain the principles of the disclosure and not to limit the disclosure. The shapes and sizes of the various elements in the drawings are not to be considered as true proportions, but are merely intended to illustrate the present disclosure.
Fig. 1 is a schematic structural diagram of a mixed reality diving system in an exemplary embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a mixed reality device in an exemplary embodiment of the present disclosure;
FIG. 3 is a flow diagram of an information processing method in an exemplary embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a mixed reality apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
Various embodiments are described herein, but the description is intended to be exemplary rather than limiting, and many more embodiments and implementations are possible within the scope of the embodiments described herein. Although many possible combinations of features are shown in the drawings and discussed in the exemplary embodiments, many other combinations of the disclosed features are possible. Any feature or element of any embodiment may be used in combination with, or instead of, any other feature or element in any other embodiment, unless expressly limited otherwise.
In describing representative embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps herein, the method or process should not be limited to the particular sequence of steps. Other orders of steps are possible as will be understood by those of ordinary skill in the art. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. Further, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present disclosure.
In the drawings of the present disclosure, the size of each component, the thickness of a layer, or a region may be exaggerated for clarity. Therefore, one aspect of the present disclosure is not necessarily limited to the dimensions, and the shapes and sizes of the respective components in the drawings do not reflect a true scale. Further, the drawings schematically show ideal examples, and one embodiment of the present disclosure is not limited to the shapes, numerical values, and the like shown in the drawings.
In the exemplary embodiments of the present disclosure, ordinal numbers such as "first", "second", "third", and the like are provided to avoid confusion of constituent elements, and are not limited in number.
In the exemplary embodiments of the present disclosure, words indicating orientation or positional relationships, such as "middle", "upper", "lower", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner", and "outer", are used with reference to the drawings only for convenience in describing the positional relationships of constituent elements and simplifying the description; they do not indicate or imply that the device or element referred to must have a specific orientation or be configured and operated in a specific orientation, and thus should not be construed as limiting the present disclosure. The positional relationships of the components change as appropriate according to the direction in which each component is described, so these words are not limiting and may be replaced as appropriate depending on the case.
In the exemplary embodiments of the present disclosure, the terms "mounted", "connected", and "coupled" are to be construed broadly unless otherwise explicitly specified or limited. For example, a connection may be fixed, removable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present disclosure can be understood by those of ordinary skill in the art on a case-by-case basis.
The term "module" as used in exemplary embodiments of the present disclosure can refer to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
In exemplary embodiments of the present disclosure, the terms "interface" and "user interface" refer to a medium interface for interaction and information exchange between an application or operating system and a user, which enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is the Graphical User Interface (GUI), a user interface that is displayed graphically and related to operations. The user interface may include visual interface elements such as icons, windows, buttons, and dialog boxes.
Mixed Reality (MR) technology is actually a combination of Augmented Reality (AR) technology and Virtual Reality (VR) technology. With MR technology, the user can see the real world (characteristic of AR technology) and also see virtual objects (characteristic of VR technology). Therefore, the MR technology is a further development of the virtual reality technology, the MR technology can build an interactive feedback information loop among the virtual world, the real world and the user by introducing the real scene information into the virtual environment, can enhance the reality sense of user experience, and has the characteristics of reality, real-time interactivity, imagination and the like.
The disclosed embodiments provide a mixed reality diving system. In practical applications, the mixed reality diving system can be applied to diving activities such as sightseeing, exploration, salvaging, repair and underwater engineering.
Fig. 1 is a schematic structural diagram of a mixed reality diving system in an exemplary embodiment of the present disclosure. As shown in Fig. 1, the mixed reality diving system may include a terminal 11 and N mixed reality devices communicatively connected to the terminal 11, where N is a positive integer greater than or equal to 1. For example, as shown in Fig. 1, the N mixed reality devices may include mixed reality device 121, mixed reality device 122, …, mixed reality device 12N, and the like.
In an exemplary embodiment, the terminal may be an electronic device such as a server, a smart phone, a tablet computer, a notebook computer, or a desktop computer. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, taking the terminal being a server as an example, the server is configured to process and respond to one or more types of information sent by the processing module of the mixed reality device, and to feed back the content to be displayed to the processing module of the mixed reality device. For example, the server is configured to process the environmental information to determine whether the underwater environment the user is in enters an abnormal state, and to generate and issue warning information when an abnormal state occurs; or, when a diver enters an abnormal state, to send the user's position information to rescuers so that they can rescue the diver in the abnormal state, and so on. The information processed by the server differs according to the functional modes of the diving modes supported by the mixed reality device. Here, the embodiment of the present disclosure does not limit this.
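As a concrete illustration of this server-side check, the following is a minimal sketch assuming the environment information carries a measured flow rate and labels from an upstream recognizer; the names EnvironmentInfo, FLOW_RATE_LIMIT, and check_environment are invented for the sketch, since the patent does not specify the server's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

FLOW_RATE_LIMIT = 1.5  # m/s; assumed preset threshold for a "dangerous environment"

@dataclass
class EnvironmentInfo:
    flow_rate: float                                   # measured water flow rate, m/s
    hazards: List[str] = field(default_factory=list)   # labels from an upstream recognizer

def check_environment(info: EnvironmentInfo) -> Optional[str]:
    """Return warning text if the underwater environment is abnormal, else None."""
    if info.flow_rate > FLOW_RATE_LIMIT:
        return f"Warning: water flow rate {info.flow_rate:.1f} m/s exceeds the preset threshold."
    if info.hazards:
        return "Warning: dangerous object nearby: " + ", ".join(info.hazards)
    return None

print(check_environment(EnvironmentInfo(flow_rate=2.0)))
```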
In an exemplary embodiment, the terminal may have a plurality of graphics card ports, each mixed reality device may be communicatively connected to the terminal through one graphics card port, and each graphics card port has a port identifier. For example, the graphics card port may be a High Definition Multimedia Interface (HDMI) port or a DisplayPort (DP) interface. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, the mixed reality diving system may further include a plurality of wireless signal transmitters arranged in one-to-one correspondence with at least two graphics card ports of the terminal; the plurality of mixed reality devices are connected in one-to-one wireless communication with the plurality of wireless signal transmitters, so that the mixed reality devices are wirelessly connected to the terminal.
For example, the port identifier corresponding to each mixed reality device may be the port identifier of the graphics card port to which the mixed reality device is connected, or the port identifier of the graphics card port at which the wireless signal transmitter connected to the mixed reality device is located.
In one exemplary embodiment, the mixed reality apparatus may be a wearable display device. For example, the wearable display device may include a head mounted display device or an in-ear display device, or the like. For example, the wearable display device may be MR diving glasses or MR diving helmets, etc. Here, the embodiment of the present disclosure does not limit this.
The embodiment of the disclosure provides a mixed reality device. In practical applications, the mixed reality apparatus may be used in diving activities such as touring, surveying, salvaging, repairing and underwater engineering.
Fig. 2 is a schematic structural diagram of a mixed reality device in an exemplary embodiment of the present disclosure, and as shown in fig. 2, the mixed reality device 12 may include: the system comprises a processing module 21, a display module 22 and a diving information acquisition module 23; wherein, the processing module 21 is respectively connected with the display module 22 and the diving information acquisition module 23;
the diving information acquisition module 23 is configured to acquire diving information and send the diving information to the processing module 21; wherein the diving information may include: one or more of head information of the user, eye information of the user and environmental information of an underwater environment in which the user is located;
a processing module 21 configured to implement the function of any one of an augmented reality (AR) diving mode and a virtual reality (VR) diving mode based on the diving information; where the AR diving mode may include one or more of a first function mode for monitoring whether an abnormal state occurs in the underwater environment the user is in, a second function mode for monitoring whether an abnormal state occurs in the user, a third function mode for displaying introduction information of a gazed object, and a fourth function mode for initiating a diving interaction, and the VR diving mode may include one or more of a fifth function mode for responding to a diving interaction and a sixth function mode for displaying a diving trajectory;
a display module 22 configured to display any one of an augmented reality screen corresponding to the AR diving mode and a virtual reality screen corresponding to the VR diving mode.
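The module structure of Fig. 2 can be pictured with a minimal sketch; the classes and the callback wiring below are illustrative assumptions, since the patent only specifies that the processing module is connected to the display module and the diving information acquisition module.

```python
class DivingInfoCollector:
    """Stands in for module 23; forwards collected info to the processing module."""
    def __init__(self, on_data):
        self.on_data = on_data                      # processing-module callback

    def collect(self):
        info = {"head": None, "eyes": None, "environment": None}  # sensor reads (stubbed)
        self.on_data(info)

class Display:
    """Stands in for module 22."""
    def show(self, frame):
        print("displaying:", frame)

class ProcessingModule:
    """Stands in for module 21; renders an AR or VR picture from diving info."""
    def __init__(self, display):
        self.display = display
        self.mode = "AR"                            # initial working mode

    def handle(self, info):
        self.display.show(f"{self.mode} picture rendered from {sorted(info)}")

display = Display()
processor = ProcessingModule(display)
collector = DivingInfoCollector(processor.handle)
collector.collect()                                 # -> displaying: AR picture rendered from [...]
```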
Here, the user may refer to a diver who wears the mixed reality device to perform diving activities in an underwater environment.
Thus, with the mixed reality apparatus provided by the embodiments of the present disclosure, when a user wears the mixed reality apparatus and dives in an underwater environment, diving information is collected by the diving information acquisition module, and based on the collected diving information the processing module can implement the function of any one of the AR diving mode and the VR diving mode, realizing an intelligent diving apparatus with rich functions that helps promote the development and progress of diving activities.
In an exemplary embodiment, the abnormal state of the underwater environment the user is in may include: a dangerous object or a dangerous environment that may threaten the user's life safety appearing in the vicinity of the underwater environment the user is in (for example, an area within a preset distance centered on the user's position). For example, the dangerous object may include dangerous animals and plants, obstacles, or the like. For example, the dangerous environment may include the water flow rate exceeding a preset threshold, or the like. Here, the exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the user being in an abnormal state may include: the user experiencing physical discomfort, for example, being in a fatigued state or a fainting state, or the like. Here, the exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the gazing object may include: at least one underwater object in the underwater environment and the underwater environment itself. For example, the introduction information of the gazing object may include: one or more of text, image and video.
In an exemplary embodiment, a diving interaction may refer to a diver sharing what he or she sees (e.g., the underwater environment, an object in the underwater environment, etc.) with one or more other divers, or to a diver inviting one or more other divers to dive together in the same area of water, and so on. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, the diving information collection module may include a sensor for collecting the diving information. For example, the diving information collection module may collect the diving information in real time at preset time intervals, e.g., 1 s (second), 2 s, or 3 s. Here, the embodiment of the present disclosure does not limit this.
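The preset-interval collection could be sketched as a simple polling loop; read_sensors below is a stub standing in for the real attitude sensor, eye camera, and environment camera, which the sketch does not model.

```python
import time

PRESET_INTERVAL_S = 1.0   # e.g. 1 s; the text also allows 2 s or 3 s

def read_sensors():
    """Stub for the real sensor reads of the collection module."""
    return {"head": None, "eyes": None, "environment": None}

def collection_loop(send_to_processing, cycles=3):
    for _ in range(cycles):
        send_to_processing(read_sensors())
        time.sleep(PRESET_INTERVAL_S)

collection_loop(print)
```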
In an exemplary embodiment, taking as an example the case where the diving information includes the user's head information, the user's eye information, and environmental information of the underwater environment the user is in, as shown in Fig. 2 the diving information collection module 23 may include: a head information collection module 231 configured to collect head information of the user and send it to the processing module 21; an eye information collection module 232 configured to collect eye information of the user and send it to the processing module 21; and an environment information collection module 233 configured to collect environmental information of the underwater environment the user is in and send it to the processing module 21.
In one exemplary embodiment, the head information of the user may include head pose information of the user.
In an exemplary embodiment, the head information acquisition module may include, but is not limited to, a posture sensor. For example, the attitude sensor is a high-performance three-dimensional motion attitude measurer based on Micro-Electro-Mechanical systems (MEMS) technology, and may generally include motion sensors such as a three-axis gyroscope, a three-axis accelerometer, and a three-axis electronic compass, and the attitude sensor may use these motion sensors to collect head attitude information of a user. Of course, the header information collection module may also be implemented by other sensors, which are not limited herein by the embodiments of the present disclosure.
In an exemplary embodiment, the eye information of the user may include: eye image information of the user.
In an exemplary embodiment, the eye information collection module may include, but is not limited to, a camera employing an image sensor. For example, the camera may be a camera using a Complementary Metal Oxide Semiconductor (CMOS) image sensor. Of course, the eye information collection module may also be implemented by other sensors, and this is not limited in this disclosure.
In an exemplary embodiment, the environmental information of the underwater environment in which the user is located may include: one or more of environmental image information, environmental depth information and environmental position information of an underwater environment in which the user is located. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, the environment information collection module may include, but is not limited to, a camera employing an image sensor. For example, the environment information collection module may be a wide-angle camera, a fisheye camera, a depth camera, or the like; for example, a CMOS camera. For example, the environment information collection module may include a first camera for collecting environment image information of the underwater environment the user is in and a second camera for collecting environment depth information of that underwater environment, so that when the environment information collection module scans the underwater environment the user is in, the environment image information and the environment depth information can be collected and Simultaneous Localization and Mapping (SLAM) can be performed. Of course, the environment information collection module may further include other sensors, for example, a positioning sensor for collecting environment position information of the underwater environment the user is in; this is not limited in this disclosure.
In one exemplary embodiment, taking as an example the case where the environment information collection module includes a first camera for collecting environment image information of the underwater environment the user is in, the eye information collection module includes a third camera for collecting eye image information of the user, and the mixed reality apparatus is a head-mounted display device: the third camera may be arranged on the inner side of the head-mounted display device body and the first camera on the outer side, so that when the user wears the head-mounted display device body, the third camera faces the user's eyes and the first camera faces the underwater environment the user is in, allowing the eye image information of the user and the environment image information of the underwater environment to be collected.
In an exemplary embodiment, the processing module may include, but is not limited to, a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, an application-specific integrated circuit, or the like. The general-purpose processor may be a microprocessor or any conventional processor. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, the display module may include at least one display module or a device containing a display module, for example, a mixed reality display, a head mounted display, and the like. For example, the display module may be an Organic Light Emitting Diode (OLED) display, a Quantum-dot Light Emitting Diode (QLED) display, or the like. Here, the embodiment of the present disclosure does not limit this.
In the following, a mixed reality diving system including a server and a plurality of mixed reality devices, each mixed reality device including a head information collection module, an eye information collection module, and an environment information collection module, is taken as an example to describe in detail the different operating modes of the mixed reality device provided in the exemplary embodiments of the present disclosure.
In an exemplary embodiment, the operation mode of the mixed reality apparatus may include: AR diving mode and VR diving mode.
The following describes how the operation mode of the mixed reality device is switched between the AR diving mode and the VR diving mode.
In an exemplary embodiment, the processing module is configured to obtain first head information; when the first head information meets a first preset condition, control the display module to display a first confirmation interface (for example, an interface for confirming whether to switch the working mode) and acquire first eye information; determine the gazing area of the user on the first confirmation interface based on the first eye information; when the gazing area of the user on the first confirmation interface is determined to be a preset first display area, switch the working mode from one of the AR diving mode and the VR diving mode to the other; or, when the gazing area of the user on the first confirmation interface is determined to be a preset second display area, keep the working mode unchanged. In this way, the collected head information of the user triggers display of the first confirmation interface, and the collected eye information confirms whether to switch the working mode, so that the user can conveniently operate the mixed reality apparatus with the head and eyes, which facilitates the use and operation of the diving apparatus and improves its convenience.
In an exemplary embodiment, the first preset condition may be that the head of the user is in a head-lowering state, the head of the user is in a head-raising state, the head of the user performs one or more head-shaking actions within a preset time (for example, the shaking amplitude of the user's head in the first direction is greater than a preset first threshold), the head of the user performs one or more nodding actions within a preset time (for example, the shaking amplitude of the user's head in the second direction is greater than a preset second threshold), or the head of the user performs one circling action within a preset time (for example, the shaking amplitude of the user's head in both the first and second directions is greater than a preset third threshold), and the like. For example, when the first head information is preset head information representing that the user performs a head-shaking action within a preset time, that is, the first head information collected within the preset time indicates that the shaking amplitude of the user's head in the first direction is greater than the preset threshold, it may be determined that the first head information meets the first preset condition, and a first confirmation interface may be popped up so that the user can choose, through an eye action, whether to switch the working mode.
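To make the amplitude test concrete, here is a minimal sketch of the head-gesture check described above; the threshold values, the (yaw, pitch) representation of the pose samples, and the function name detect_gesture are assumptions made for illustration, not details from the patent.

```python
SHAKE_THRESHOLD_DEG = 30.0   # preset first threshold (first direction, yaw)
NOD_THRESHOLD_DEG = 20.0     # preset second threshold (second direction, pitch)

def detect_gesture(samples):
    """samples: list of (yaw_deg, pitch_deg) head poses collected in the preset window."""
    yaws = [s[0] for s in samples]
    pitches = [s[1] for s in samples]
    if max(yaws) - min(yaws) > SHAKE_THRESHOLD_DEG:
        return "shake"            # may trigger the first confirmation interface
    if max(pitches) - min(pitches) > NOD_THRESHOLD_DEG:
        return "nod"
    return None

print(detect_gesture([(0, 0), (35, 2), (-5, 1)]))  # -> "shake"
```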
In an exemplary embodiment, the processing module is configured to initialize the mixed reality device and set an operating mode of the mixed reality device to an AR diving mode. Here, the embodiment of the present disclosure does not limit this.
In one exemplary embodiment, the first eye information may be eye image information of the user. For example, the processing module is configured to determine the eyeball orientation of the user according to the eye image information of the user; determine the sight line direction of the user according to the eyeball orientation; and determine the area of the first confirmation interface located in the sight line direction as the gazing area of the user on the first confirmation interface.
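The gaze-area determination above can be sketched as a simple hit test, assuming the sight line has already been reduced to a 2D point on the interface plane; the region coordinates, names, and layout are invented for the sketch.

```python
FIRST_AREA = (0.0, 0.0, 0.5, 1.0)    # "switch mode" half of the interface (assumed)
SECOND_AREA = (0.5, 0.0, 1.0, 1.0)   # "keep mode" half (assumed)

def region_of(gaze_point):
    """Map a 2D gaze point on the interface plane to a named display area."""
    x, y = gaze_point
    for name, (x0, y0, x1, y1) in (("first", FIRST_AREA), ("second", SECOND_AREA)):
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def on_confirmation_gaze(gaze_point, mode):
    if region_of(gaze_point) == "first":
        return "VR" if mode == "AR" else "AR"   # switch working mode
    return mode                                  # keep working mode unchanged

print(on_confirmation_gaze((0.25, 0.5), "AR"))  # -> "VR"
```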
In an exemplary embodiment, when the operating mode is switched from the AR diving mode to the VR diving mode, the first confirmation interface may be a user interface implemented based on AR technology. Alternatively, when the operating mode is switched from the VR diving mode to the AR diving mode, the first confirmation interface may be a user interface implemented based on VR technology.
The functional modes available when the operating mode of the mixed reality device is the AR diving mode are described below.
In one exemplary embodiment, the processing module is configured to, in the first function mode for monitoring whether an abnormal state occurs in the underwater environment the user is in, obtain first environment information; send the first environment information to the server so that the server can determine, based on the first environment information, whether the underwater environment the user is in enters an abnormal state; receive an augmented reality picture sent by the server containing warning information indicating that the underwater environment the user is in is in an abnormal state; and control the display module to display the augmented reality picture containing the warning information. In this way, whether the underwater environment the user is in enters an abnormal state can be monitored, and the user can be warned when it does, which helps safeguard divers' personal safety, improves diving safety, and adds to the enjoyment of diving activities.
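A minimal client-side sketch of one step of this first function mode follows; the send_environment_info round trip and the frame format are placeholders, since the patent does not specify the transport or the server API.

```python
def send_environment_info(env_info):
    """Stand-in for the real server round trip; returns an AR frame or None."""
    if env_info.get("shark_detected"):
        return {"warning": "Dangerous animal nearby", "route": "first target route"}
    return None

def first_function_mode_step(env_info, display):
    frame = send_environment_info(env_info)
    if frame is not None:
        display(f"AR warning: {frame['warning']} / follow {frame['route']}")

first_function_mode_step({"shark_detected": True}, print)
```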
In an exemplary embodiment, the abnormal state of the underwater environment the user is in may include: a dangerous object or a dangerous environment that may threaten the user's life safety appearing in the vicinity of the underwater environment the user is in (for example, an area within a preset distance centered on the user's position). For example, the dangerous object may include dangerous animals and plants, obstacles, or the like. For example, the dangerous environment may include the water flow rate exceeding a preset threshold, or the like. Here, the exemplary embodiments of the present disclosure do not limit this.
In one exemplary embodiment, the warning information may include one or more of: information about a dangerous object threatening the user's life safety in the underwater environment the user is in, and navigation information for instructing the user to travel along a first target route, where the first target route is a route capable of avoiding the dangerous object.
In an exemplary embodiment, taking the case where the warning information includes both information about a dangerous object threatening the user's life safety in the underwater environment and navigation information for instructing the user to travel along the first target route, the processing module is further configured to control the display module to display an augmented reality picture containing the information about the dangerous object and to acquire second eye information; determine, based on the second eye information, whether the user gazes at the information about the dangerous object; and, when it is determined that the user gazes at the information about the dangerous object, control the display module to display an augmented reality picture containing the navigation information. Thus, during the user's dive, when a dangerous object appears in the underwater environment, the mixed reality apparatus can display the information about the dangerous object to the user, and, after confirming that the user has seen that information, display navigation information so that the user avoids the dangerous object while traveling. Ensuring that the user has gazed at the information about the dangerous object before showing the avoidance navigation effectively safeguards divers' personal safety in the underwater environment, improves diving safety, and adds to the enjoyment of diving activities.
In one exemplary embodiment, the second eye information may be eye image information of the user. For example, the processing module is configured to determine the eyeball orientation of the user according to the eye image information of the user; determine the sight line direction of the user according to the eyeball orientation; and determine whether the displayed information about the dangerous object is located in the sight line direction.
In an exemplary embodiment, the processing module is configured to, in the second function mode for monitoring whether an abnormal state occurs in the user, acquire third eye information; when the third eye information meets a second preset condition indicating that the user is in an abnormal state, acquire second environment information; determine position information of the user based on the second environment information; and send the position information of the user to the server so that the server sends it to other mixed reality devices to request that the divers using those devices perform a rescue. In this way, when it is detected from the user's eye information that a diver's gaze remains abnormally still for some time, the apparatus can automatically switch to an SOS mode, promptly call for rescuers, and send the user's position information to other mixed reality devices, issuing an SOS signal to surrounding divers so that a diver who has had an accident can be rescued in time. This reduces the accident risk for divers, lowers the diving accident rate, and improves the safety of the diving apparatus.
In an exemplary embodiment, the user being in an abnormal state may include: the user experiencing physical discomfort, for example, being in a fatigued state, a fainting state, or the like. The exemplary embodiments of the present disclosure are not limited thereto.
In an exemplary embodiment, the second preset condition may be that the user performs eye actions including, but not limited to: the user's sight line staying in the same area for more than a preset time, the number of blinking actions by the user within a preset time being smaller than a preset threshold, or the user keeping the eyes closed throughout a preset time, and the like. For example, taking the case where the collected eye information of the user includes eye image information: when it is detected that the eye image information does not contain the iris region of the user's eyeballs, it may be determined that the user's eyes are closed; or, when a plurality of eye images acquired within a preset time show the iris area of the user's eyeballs gradually decreasing and then gradually increasing, it may be determined that the user has performed a blinking action.
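A sketch of this second preset condition follows, assuming the per-frame iris area has already been measured from the eye images; the thresholds, window handling, and function names are illustrative assumptions.

```python
CLOSED_IRIS_AREA = 5.0        # px^2 below which the eye is considered closed (assumed)
MIN_BLINKS_PER_WINDOW = 2     # preset blink-count threshold (assumed)

def count_blinks(iris_areas):
    """Count closed->open transitions in a window of per-frame iris areas."""
    blinks, closed = 0, False
    for area in iris_areas:
        if area < CLOSED_IRIS_AREA:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks

def user_abnormal(iris_areas):
    all_closed = all(a < CLOSED_IRIS_AREA for a in iris_areas)
    return all_closed or count_blinks(iris_areas) < MIN_BLINKS_PER_WINDOW

if user_abnormal([120, 2, 1, 2, 2]):       # eyes close and never reopen
    print("switch to SOS mode: send position to nearby devices")
```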
In an exemplary embodiment, the processing module is further configured to receive information of the rescuers sent by the server, and to control the display module to display the information of the rescuers.
In an exemplary embodiment, the processing module is configured to, in the third function mode for presenting introduction information of the gazed object, acquire third environment information and fourth eye information; acquire, from the third environment information and based on the fourth eye information, identification information of the object the user is gazing at in the underwater environment; send the identification information of the gazed object to the server so that the server returns introduction information of the gazed object; and receive an augmented reality picture sent by the server containing the introduction information of the gazed object and control the display module to display it. In this way, a diver can call up the introduction information of a gazed object simply by gazing at it, helping divers gain a deeper understanding of underwater objects and environments.
In one exemplary embodiment, the gazed object may include: at least one underwater object in the underwater environment, or the underwater environment itself. For example, underwater objects may include animals, plants, rocks, or the like. For example, the underwater environment may include an ocean trench, an undersea volcano, or the like. Here, the exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the fourth eye information may be eye image information of the user. For example, the processing module is configured to determine the eyeball orientation of the user according to the eye image information of the user; determine the sight line direction of the user according to the eyeball orientation; and determine the information in the third environment information located in the sight line direction as the identification information of the object the user is gazing at in the underwater environment.
In an exemplary embodiment, taking the case where the collected environment information includes environment image information as an example, the identification information of the gazed object may include image information of the gazed object. For example, based on an image recognition algorithm, the image information of the gazed object is extracted from the environment image information and determined as the identification information of the gazed object.
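As an illustration, the extraction step might look like the following sketch, which crops the environment image around the gaze point and uses the patch as the identification information sent to the server; the crop size and array layout are assumptions, not details from the patent.

```python
import numpy as np

CROP = 64  # half-width of the crop around the gaze point, in pixels (assumed)

def gaze_crop(env_image: np.ndarray, gaze_px: tuple) -> np.ndarray:
    """Return the image patch around the gaze point as the gazed object's identifier."""
    h, w = env_image.shape[:2]
    x, y = gaze_px
    x0, x1 = max(0, x - CROP), min(w, x + CROP)
    y0, y1 = max(0, y - CROP), min(h, y + CROP)
    return env_image[y0:y1, x0:x1]

patch = gaze_crop(np.zeros((480, 640, 3), dtype=np.uint8), (320, 240))
print(patch.shape)  # (128, 128, 3): payload for the server-side lookup
```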
In an exemplary embodiment, the introduction information of the gazed object may include one or more of text, images, and video. For example, taking the gazed object being a marine animal as an example, the introduction information may include: name, category, classification, morphological characteristics, living habits, protection level, or the like. The exemplary embodiments of the present disclosure are not limited thereto.
In an exemplary embodiment, the processing module is configured to, in the fourth function mode for initiating a diving interaction, obtain second head information; when the second head information meets a third preset condition, control the display module to display a second confirmation interface; acquire fifth eye information; determine the gazing area of the user on the second confirmation interface based on the fifth eye information; and, when the gazing area of the user on the second confirmation interface is determined to be a preset third display area, send a request message for initiating a diving interaction to the server. In this way, the user can conveniently operate the mixed reality apparatus with the head and eyes to launch a diving interaction. By initiating a diving interaction, a plurality of divers can act as a whole and share what they see (such as the underwater environment they are touring or the animals and plants they observe) in real time.
In an exemplary embodiment, the third preset condition may be that the head of the user is in a head-lowering state, the head of the user is in a head-raising state, the head of the user performs one or more head-shaking actions within a preset time (for example, the shaking amplitude of the user's head in the first direction is greater than a preset first threshold), the head of the user performs one or more nodding actions within a preset time (for example, the shaking amplitude of the user's head in the second direction is greater than a preset second threshold), or the head of the user performs one circling action within a preset time (for example, the shaking amplitude of the user's head in both the first and second directions is greater than a preset third threshold), and the like. For example, when the second head information is preset head information representing that the user performs a nodding action within a preset time, that is, the second head information acquired within the preset time indicates that the shaking amplitude of the user's head in the second direction is greater than the preset threshold, it may be determined that the second head information meets the third preset condition, and a second confirmation interface may be popped up so that the user can choose, through an eye action, to initiate a diving interaction. For example, when the user's gazing area on the second confirmation interface is the preset third display area, this indicates that the user has chosen to initiate the diving interaction.
In an exemplary embodiment, the second confirmation interface may be a user interface implemented based on AR technology.
In an exemplary embodiment, the fifth eye information may be eye image information of the user. For example, the processing module is configured to determine the eyeball orientation of the user according to the eye image information of the user; determine the sight line direction of the user according to the eyeball orientation; and determine the area of the second confirmation interface located in the sight line direction as the gazing area of the user on the second confirmation interface.
In an exemplary embodiment, the processing module is further configured to receive an augmented reality picture containing a diver information list sent by the server and control the display module to display it; acquire sixth eye information; obtain, based on the sixth eye information, the information of the target diver the user is gazing at from the diver information list; and send the information of the target diver to the server so that the server sends a request message inviting a diving interaction to the mixed reality device corresponding to the target diver. In this way, the target divers to be invited to a diving interaction can be selected specifically through the user's gaze.
In an exemplary embodiment, the diver information list may include: information of one or more divers in an underwater environment within a preset distance centered on the position of the interaction initiator. For example, the information of the diver in the diver information list may include: positional information of the diver or introduction information (e.g., name, image information, etc.) of the diver, and the like. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the number of target divers may be one or more, e.g., two, three, four, etc. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, the request message for inviting a diving interaction may include introduction information of an object to be shared, where the object to be shared includes one or more of the underwater environment the interaction initiator is in and an underwater object the interaction initiator is gazing at, and the introduction information of the object to be shared includes one or more of text, images, and video. In this way, a plurality of divers can, as a whole, share what they see (such as the underwater environment they are touring or the animals and plants they observe) in real time, which enriches diving activities and improves the diving experience.
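One possible shape for such a request message is sketched below; all field names are invented for illustration, since the patent only lists the kinds of information the message carries.

```python
# Hypothetical invitation request; the patent does not define a wire format.
invite_request = {
    "type": "diving_interaction_invite",
    "initiator_id": "diver-01",
    "target_divers": ["diver-07", "diver-12"],   # chosen by gaze from the diver list
    "shared_object": {
        "kind": "underwater_object",             # or "underwater_environment"
        "introduction": {                        # one or more of text/image/video
            "text": "Coral colony at 12 m",
            "image": "frame_000123.jpg",
        },
    },
}
```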
In an exemplary embodiment, the processing module is further configured to receive an augmented reality picture sent by the server containing position information of at least one diver of the target divers; and controlling the display module to display an augmented reality picture containing the position information of at least one diver. Therefore, the diving interaction initiator can know the position information of the diving interaction receiver conveniently, and the diving pleasure can be improved.
In an exemplary embodiment, the processing module is further configured to receive an augmented reality picture containing updated position information of the at least one diver; and controlling a display module to display an augmented reality picture containing the updated position information of the at least one diver. Therefore, the position of the diving interactive receiver can be updated in real time in the process of waiting for the diving interactive receiver.
In an exemplary embodiment, the sixth eye information may be eye image information of the user. For example, the processing module is configured to determine the eyeball orientation of the user according to the eye image information of the user; determine the sight line direction of the user according to the eyeball orientation; and determine the information of the diver in the diver information list located in the sight line direction as the information of the target diver the user is gazing at.
In an exemplary embodiment, the processing module is further configured to display a third confirmation interface when the gaze area of the user at the second confirmation interface is determined to be a preset fourth display area; acquiring seventh eye information; determining a gazing area of the user on the third confirmation interface based on the seventh eye information; and when the watching area of the user on the third confirmation interface is determined to be the preset fifth display area, ending the current working mode. In this way, the user can conveniently operate the mixed reality device through the head and eyes in order to end the current mode of operation, for example, end a diving interaction activity.
In an exemplary embodiment, the third confirmation interface may be a user interface implemented based on AR technology.
In an exemplary embodiment, the seventh eye information may be eye image information of the user. For example, the processing module is configured to determine the eyeball orientation of the user according to the eye image information of the user; determine the sight line direction of the user according to the eyeball orientation; and determine the area of the third confirmation interface located in the sight line direction as the gazing area of the user on the third confirmation interface.
The functional modes available when the operating mode of the mixed reality device is the VR diving mode are described below.
In an exemplary embodiment, the processing module is configured to, in the fifth function mode for responding to a diving interaction, control the display module, in response to a request message sent by the server for confirming whether to perform the diving interaction, to display a fourth confirmation interface; acquire eighth eye information; determine the gazing area of the user on the fourth confirmation interface based on the eighth eye information; when the gazing area of the user on the fourth confirmation interface is determined to be a preset sixth display area, receive a virtual reality picture sent by the server containing introduction information of the object to be shared; and control the display module to display that virtual reality picture. In this way, when a diving interaction invitation is received, the fourth confirmation interface can be displayed so that the user can conveniently operate it with the eyes to decide whether to accept the diving interaction. By accepting the diving interaction, a plurality of divers can share what they see (such as the underwater environment they are touring or the animals and plants they observe) in real time, further enhancing the fun of diving.
In one exemplary embodiment, the object to be shared includes one or more of the underwater environment the diving interaction initiator is in and an underwater object the interaction initiator is gazing at, and the introduction information of the object to be shared includes one or more of text, images, and video. In this way, a plurality of divers can, as a whole, share what they see (such as the underwater environment they are touring or the animals and plants they observe) in real time, which enriches diving activities and improves the diving experience.
In an exemplary embodiment, the fourth confirmation interface may be a user interface implemented based on VR technology.
In an exemplary embodiment, the processing module is further configured to send a response message accepting the diving interaction to the server so that the server issues second navigation information for instructing the user to travel along a second target route; and receive a virtual reality picture containing the second navigation information sent by the server and control the display module to display it, where the second target route is a route from the position of the diving interaction receiver to the position of the diving interaction initiator. In this way, during a diving interaction, displaying the virtual reality picture containing the second navigation information helps the diving interaction receiver travel to the underwater environment where the diving interaction initiator is located and interact there, enhancing the fun of diving. Moreover, it helps the receiver quickly reach the initiator's underwater environment, saving a great deal of time and cost.
In an exemplary embodiment, the processing module is further configured to acquire third head information; when the third head information meets a fourth preset condition, control the display module to display a fifth confirmation interface; acquire ninth eye information; determine the gazing area of the user on the fifth confirmation interface based on the ninth eye information; and, when the gazing area of the user on the fifth confirmation interface is determined to be a preset seventh display area, send a response message accepting the diving interaction to the server, so that the server issues the second navigation information for instructing the user to travel along the second target route. In this way, the user can control, through head and eye actions, whether a response message accepting the diving interaction is sent to the server.
In an exemplary embodiment, the fourth preset condition may be any of the following: the head of the user is in a lowered state; the head of the user is in a raised state; the head of the user performs one or more head-shaking actions within a preset time (for example, the swing amplitude of the head in a first direction is greater than a preset first threshold); the head of the user performs one or more nodding actions within a preset time (for example, the swing amplitude of the head in a second direction is greater than a preset second threshold); or the head of the user performs one circling action within a preset time (for example, the swing amplitudes of the head in both the first and second directions are greater than a preset third threshold). For example, when the third head information is preset head information representing that the user performs a nodding action within the preset time, it may be determined that the third head information meets the fourth preset condition, and the fifth confirmation interface may then be popped up so that the user can confirm, through an eye action, whether to send a response message accepting the diving interaction to the server.
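As an illustration of how such a threshold check might look, here is a minimal sketch of detecting a nodding action from head pitch samples. The sampling scheme, window length, and threshold values are hypothetical, not taken from the disclosure.

```python
def detect_nod(pitch_samples, window_s, sample_rate_hz, amp_threshold_rad=0.35):
    """Return True if the pitch swing amplitude within the most recent window
    exceeds the threshold, i.e. a candidate nodding action was observed."""
    n = int(window_s * sample_rate_hz)
    window = pitch_samples[-n:]
    return (max(window) - min(window)) >= amp_threshold_rad

# Usage: 50 Hz head-pitch readings (radians) covering the last second.
samples = [0.0, 0.1, 0.3, 0.45, 0.2, 0.05] * 10
print(detect_nod(samples, window_s=1.0, sample_rate_hz=50))  # True: amplitude 0.45
```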
In an exemplary embodiment, the ninth eye information may be eye image information of the user. For example, the processing module is configured to determine the eyeball orientation of the user according to the eye image information of the user; determine the sight direction of the user according to the eyeball orientation; and determine the area of the fifth confirmation interface located in the sight direction as the gazing area of the user on the fifth confirmation interface.
In an exemplary embodiment, the fifth confirmation interface (i.e. the interface for confirming whether to travel to the area where the diving interaction initiator is located for diving interaction) may be a user interface implemented based on VR technology.
In an exemplary embodiment, the processing module is configured to, in a sixth functional mode for displaying a diving trajectory, acquire fifth environmental information; determine current position information of the user based on the fifth environmental information; generate a virtual reality picture containing the diving trajectory of the user based on the current position information and the historical position information of the user; and control the display module to display the virtual reality picture containing the diving trajectory of the user. In this way, the diver's current position can be obtained and the routes and landmarks the diver has already passed can be marked, and the VR diving mode can present the dive as a whole. This helps the diver form an overall awareness of the area, gives the dive a clearer sense of direction, and allows the diver to reach the next dive destination quickly, saving considerable energy and time.
In an exemplary embodiment, the fifth environmental information may be environmental position information. In this way, the environmental position information of the underwater environment where the user is currently located can be used directly as the current position information of the user.
The embodiment of the disclosure further provides an information processing method, which may be applied to the mixed reality device in one or more of the exemplary embodiments described above.
Fig. 3 is a flowchart illustrating an information processing method in an exemplary embodiment of the present disclosure. As shown in fig. 3, the information processing method may include:
step 31: acquiring diving information through the diving information acquisition module, wherein the diving information includes: one or more of head information of the user, eye information of the user and environmental information of an underwater environment in which the user is located;
step 32: based on the diving information, realizing the function of any one of an augmented reality AR diving mode and a virtual reality VR diving mode;
wherein the AR diving mode includes one or more of: a first functional mode for monitoring whether the underwater environment where the user is located is in an abnormal state, a second functional mode for monitoring whether the user is in an abnormal state, a third functional mode for displaying introduction information of a gazed object, and a fourth functional mode for initiating a diving interaction; and the VR diving mode includes one or more of: a fifth functional mode for responding to a diving interaction and a sixth functional mode for displaying a diving trajectory.
Step 33: and controlling the display module to display any one of the augmented reality picture and the virtual reality picture.
In an exemplary embodiment, step 32 may include:
step 3211: acquiring first head information;
step 3212: when the first head information meets a first preset condition, controlling a display module to display a first confirmation interface and acquiring first eye information;
step 3213: determining a gazing area of the user on the first confirmation interface based on the first eye information;
step 3214: when the watching area of the user on the first confirmation interface is determined to be a preset first display area, switching the working mode from one of the AR diving mode and the VR diving mode to the other one of the AR diving mode and the VR diving mode; or when the watching area of the user on the first confirmation interface is determined to be the preset second display area, the working mode is kept unchanged.
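A compact sketch of the switching logic in steps 3211 to 3214 follows. The event API and the area names ("area1", "area2") are assumptions made for illustration only.

```python
AR, VR = "AR_DIVING", "VR_DIVING"

class ModeController:
    def __init__(self, mode=AR):
        self.mode = mode

    def on_head_info(self, head_meets_condition: bool, gaze_area: str):
        """gaze_area names the region of the first confirmation interface the
        user looks at: 'area1' switches modes, 'area2' keeps the current mode."""
        if not head_meets_condition:
            return self.mode                 # no confirmation interface shown
        if gaze_area == "area1":             # preset first display area
            self.mode = VR if self.mode == AR else AR
        # gaze_area == "area2": preset second display area, mode unchanged
        return self.mode

ctrl = ModeController()
print(ctrl.on_head_info(True, "area1"))  # VR_DIVING
print(ctrl.on_head_info(True, "area2"))  # VR_DIVING (unchanged)
```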
In an exemplary embodiment, step 32 may comprise:
step 3221: in a first functional mode, acquiring first environment information;
step 3222: sending the first environment information to a server so that the server can determine whether the underwater environment where the user is located is in an abnormal state or not based on the first environment information;
step 3223: receiving an augmented reality picture which is sent by a server and contains warning information used for indicating that an underwater environment where a user is located is in an abnormal state;
step 3224: and controlling the display module to display an augmented reality picture containing the warning information.
In one exemplary embodiment, the warning information includes one or more of: information of a dangerous object threatening the life safety of the user in the underwater environment where the user is located, and navigation information for instructing the user to travel along a first target route, wherein the first target route is a route capable of avoiding the dangerous object.
In an exemplary embodiment, step 3224 may include: controlling a display module to display an augmented reality picture containing information of the dangerous object and acquiring second eye information; determining whether the user gazes at the information of the dangerous object based on the second eye information; and when the information that the user gazes at the dangerous object is determined, controlling the display module to display an augmented reality picture containing navigation information.
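The first functional mode, steps 3221 to 3224, can be pictured as the following loop. The server interface (analyze), the display, and the gaze helper are hypothetical stand-ins, not APIs from the disclosure.

```python
def monitor_environment(get_frame, server, display, user_gazed_at):
    """One pass of abnormal-environment monitoring: upload an environment
    frame; if the server flags a danger, show its info first and show the
    avoidance route only after the user has gazed at the warning."""
    frame = get_frame()                      # first environment information
    result = server.analyze(frame)           # server-side danger detection
    if result is None:                       # no abnormal state
        return
    display.show(result["danger_info"])      # AR picture with danger info
    if user_gazed_at("danger_info"):         # second eye information check
        display.show(result["navigation"])   # first target route avoiding it

class FakeServer:
    def analyze(self, frame):
        return {"danger_info": "shark, 5 m, bearing 40 degrees",
                "navigation": "turn left 90 degrees, ascend 2 m"}

class FakeDisplay:
    def show(self, content): print("DISPLAY:", content)

monitor_environment(lambda: b"frame", FakeServer(), FakeDisplay(),
                    lambda what: True)
```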
In an exemplary embodiment, step 32 may comprise:
step 3231: in the second function mode, acquiring third eye information;
step 3232: when the third eye information meets a second preset condition that the user is in an abnormal state, acquiring second environment information;
step 3233: determining location information of the user based on the second environment information;
step 3234: and sending the position information of the user to the server so that the server sends the position information of the user to other mixed reality devices to request divers using the other mixed reality devices to rescue.
In an exemplary embodiment, after step 3234, step 32 may further include:
step 3235: receiving information of rescuers sent by a server;
step 3236: the control display module displays the information of the rescuers.
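One plausible reading of the second preset condition (spelled out in step 9 of the scenario below) is a blink-timeout check. The following sketch assumes that reading; the names and the timeout value are hypothetical.

```python
class BlinkMonitor:
    """Flag the user as abnormal when no blink (or no open-eye frame) has been
    observed for longer than a timeout."""
    def __init__(self, timeout_s: float = 30.0):
        self.timeout_s = timeout_s
        self.last_blink_t = 0.0

    def on_blink(self, now: float):
        self.last_blink_t = now

    def user_abnormal(self, now: float) -> bool:
        return (now - self.last_blink_t) > self.timeout_s

monitor = BlinkMonitor(timeout_s=30.0)
monitor.on_blink(now=0.0)
print(monitor.user_abnormal(now=10.0))   # False: blinked recently
print(monitor.user_abnormal(now=45.0))   # True: trigger position upload + rescue
```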
In an exemplary embodiment, step 32 may comprise:
step 3241: in a third function mode, acquiring third environment information and fourth eye information;
step 3242: acquiring identification information of a gazing object of the user in the underwater environment from the third environment information based on the fourth eye information;
step 3243: sending the identification information of the gazing object to a server so that the server sends introduction information of the gazing object;
step 3244: and receiving an augmented reality picture which is sent by the server and contains introduction information of the gazing object, and controlling the display module to display the augmented reality picture containing the introduction information of the gazing object.
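The third functional mode, steps 3241 to 3244, might be sketched as follows. The recognizer, the server lookup, and the 64-pixel crop around the gaze point are all illustrative assumptions.

```python
import numpy as np

def lookup_gazed_object(env_image, gaze_xy, recognize, get_introduction, show):
    """Crop a patch around the gaze point, identify the object in it, fetch
    its introduction information from the server, and display it."""
    x, y = gaze_xy
    patch = env_image[max(0, y - 32):y + 32, max(0, x - 32):x + 32]
    object_id = recognize(patch)             # identification info of gazed object
    if object_id is not None:
        show(get_introduction(object_id))    # AR picture with introduction info

# Usage with dummy components standing in for the recognizer and the server:
img = np.zeros((480, 640, 3), dtype=np.uint8)
lookup_gazed_object(img, (320, 240),
                    recognize=lambda patch: "clownfish",
                    get_introduction=lambda oid: f"{oid}: a reef-dwelling fish",
                    show=print)
```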
In an exemplary embodiment, step 32 may comprise:
step 3251: in the fourth functional mode, acquiring second head information;
step 3252: when the second head information meets a third preset condition, controlling the display module to display a second confirmation interface;
step 3253: acquiring fifth eye information; determining a gazing area of the user on the second confirmation interface based on the fifth eye information;
step 3254a: and when the watching area of the user on the second confirmation interface is determined to be a preset third display area, sending a request message for requesting to initiate the diving interaction to the server.
In an exemplary embodiment, after step 3254a, step 32 may further include:
step 3255a: receiving an augmented reality picture which is sent by a server and contains a diver information list, and controlling a display module to display the augmented reality picture containing the diver information list;
step 3256a: acquiring sixth eye information;
step 3257a: based on the sixth eye information, obtaining the information of the target diver watched by the user from the diver information list;
step 3258a: and sending the information of the target diver to a server so that the server sends a request message for inviting diving interaction to a mixed reality device corresponding to the target diver.
In one exemplary embodiment, the request message for inviting diving interaction includes introduction information of the object to be shared, wherein the object to be shared includes one or more of the underwater environment and the underwater object, and the introduction information of the object to be shared includes one or more of text, images, and video.
In an exemplary embodiment, after step 3258a, step 32 may further include:
step 3259a: and receiving an augmented reality picture which is sent by the server and contains the position information of at least one diver in the target divers, and controlling the display module to display the augmented reality picture containing the position information of at least one diver.
In an exemplary embodiment, after step 3253, step 32 may include:
step 3254b: when the watching area of the user on the second confirmation interface is determined to be a preset fourth display area, displaying a third confirmation interface;
step 3255b: acquiring seventh eye information;
step 3256b: determining a gazing area of the user on the third confirmation interface based on the seventh eye information;
step 3257b: and when the watching area of the user on the third confirmation interface is determined to be a preset fifth display area, ending the current working mode.
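Steps 3251 to 3258a can be condensed into the following sketch. The server API, the display-area names, and the message shapes are hypothetical simplifications.

```python
def initiate_interaction(server, show, gaze_area_on, pick_target):
    """Confirm via gaze, fetch the diver list, pick a target diver with the
    eyes, send the invitation, and show the invited diver's position."""
    if gaze_area_on("second_confirmation") != "area3":   # preset third display area
        return                                           # user declined to initiate
    divers = server.request_interaction()                # diver information list
    show(divers)
    target = pick_target(divers)                         # from sixth eye information
    server.invite(target)                                # invitation request message
    show(server.position_of(target))                     # position of invited diver

class FakeServer:
    def request_interaction(self): return ["diver_A", "diver_B"]
    def invite(self, diver): print("invitation sent to", diver)
    def position_of(self, diver): return {diver: (12.0, -3.5, -8.0)}

initiate_interaction(FakeServer(), show=print,
                     gaze_area_on=lambda interface: "area3",
                     pick_target=lambda divers: divers[0])
```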
In an exemplary embodiment, step 32 may comprise:
step 3261: in a fifth function mode, responding to a request message sent by the server for inviting diving interaction, and controlling the display module to display a fourth confirmation interface;
step 3262: acquiring eighth eye information;
step 3263: determining a gazing area of the user on the fourth confirmation interface based on the eighth eye information;
step 3264: when the watching area of the user on the fourth confirmation interface is determined to be a preset sixth display area, receiving a virtual reality picture which is sent by a server and contains introduction information of the object to be shared;
step 3265: and controlling the display module to display a virtual reality picture containing introduction information of the object to be shared.
Wherein the object to be shared includes one or more of the underwater environment and the underwater object, and the introduction information of the object to be shared includes one or more of text, images, and video.
In an exemplary embodiment, after step 3265, step 32 may further include:
step 3266: sending a response message accepting the diving interaction to the server, so that the server issues second navigation information for instructing the user to travel along a second target route;
step 3267: and receiving the virtual reality picture containing the second navigation information sent by the server, and controlling the display module to display the virtual reality picture containing the second navigation information.
Wherein the second target route is a route from the position of the diving interaction receiver to the position of the diving interaction initiator.
In an exemplary embodiment, step 3266 may include: acquiring third head information; when the third head information meets a fourth preset condition, controlling the display module to display a fifth confirmation interface; acquiring ninth eye information; determining the gazing area of the user on the fifth confirmation interface based on the ninth eye information; and, when the gazing area of the user on the fifth confirmation interface is determined to be the preset seventh display area, sending a response message accepting the diving interaction to the server, so that the server issues the second navigation information for instructing the user to travel along the second target route. In this way, the user can control, through head and eye actions, whether a response message accepting the diving interaction is sent to the server.
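The disclosure leaves the computation of the second navigation information to the server. As one hypothetical example, a server could return a straight-line heading, depth change, and distance from the receiver's position to the initiator's:

```python
import math

def second_navigation(receiver_xyz, initiator_xyz):
    """Compute a simple straight-line leg of the second target route, from
    the diving interaction receiver toward the initiator."""
    dx, dy, dz = (b - a for a, b in zip(receiver_xyz, initiator_xyz))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    heading = math.degrees(math.atan2(dy, dx)) % 360      # horizontal bearing
    return {"heading_deg": round(heading, 1),
            "depth_change_m": round(dz, 1),
            "distance_m": round(distance, 1)}

print(second_navigation((0.0, 0.0, -10.0), (30.0, 40.0, -12.0)))
# {'heading_deg': 53.1, 'depth_change_m': -2.0, 'distance_m': 50.0}
```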
In an exemplary embodiment, step 32 may include:
step 3271: in a sixth functional mode, acquiring fifth environmental information;
step 3272: determining current location information of the user based on the fifth environmental information;
step 3273: generating a virtual reality picture containing a diving trajectory of the user based on the current position information and the historical position information of the user;
step 3274: and controlling a display module to display a virtual reality picture containing the diving trajectory of the user.
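A minimal sketch of the position bookkeeping behind steps 3271 to 3274 follows. How the resulting polyline is rendered into a virtual reality picture is device-specific and omitted; all names are hypothetical.

```python
class TrajectoryTracker:
    def __init__(self):
        self.history = []                     # historical position information

    def update(self, position_xyz):
        """Append the current position derived from the fifth environmental info."""
        self.history.append(position_xyz)

    def trajectory(self):
        """Return the ordered positions to render as the diving trajectory."""
        return list(self.history)

tracker = TrajectoryTracker()
for fix in [(0, 0, -5), (3, 1, -6), (7, 2, -6)]:
    tracker.update(fix)
print(tracker.trajectory())   # points for the diving-trajectory picture
```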
An application scenario of the information processing method is described below with an exemplary embodiment, taking as an example the case where the AR diving mode includes the first functional mode for monitoring whether the underwater environment where the user is located is in an abnormal state, the second functional mode for monitoring whether the user is in an abnormal state, the third functional mode for displaying introduction information of a gazed object, and the fourth functional mode for initiating a diving interaction, and the VR diving mode includes the fifth functional mode for responding to a diving interaction and the sixth functional mode for displaying a diving trajectory.
In one exemplary embodiment, the information processing method may include the processes of:
step 1: the processing module controls the display module to display an initialization setting interface, and a user can operate the initialization setting interface through eyes to select an initialization working mode. For example, the initial operating mode may be selected as the AR diving mode.
Step 2: the head information acquisition module acquires first head information and sends it to the processing module. When the first head information meets a first preset condition (for example, the swing amplitude of the user's head in the first direction is greater than a preset first threshold), the processing module controls the display module to display a first confirmation interface (i.e., an interface for confirming the mode switch); or, when the first head information does not meet the first preset condition, the current working mode is kept.
Step 3: the eye information acquisition module acquires the first eye information of the diver. When the processing module determines that the gazing area (i.e., the eye gazing information) of the user on the first confirmation interface is the preset first display area for controlling the switching of the working mode, it switches the working mode from the AR diving mode to the VR diving mode, and step 20 may be executed next; or, when it determines that the gazing area of the user on the first confirmation interface is the preset second display area, the working mode is kept unchanged and step 4 may be executed next.
Step 4: the environment information acquisition module (for example, including a plurality of cameras) acquires environment image information (for example, the first environment information) of the underwater environment where the user is located in real time and transmits it to the processing module.
Step 5: the processing module acquires the first environment information in real time and sends it to the server, and the server returns a processing result for the user's current environment to the processing module. When the server determines, based on the first environment information, that a dangerous object (such as a dangerous organism or an obstacle) threatening the life safety of the user has appeared in the underwater environment where the user is located, i.e., the underwater environment is in an abnormal state, step 6 is executed; or, when the processing result indicates that the underwater environment where the user is located is not in an abnormal state, step 8 is executed.
Step 6: the processing module obtains an augmented reality picture, sent by the server, containing warning information for indicating that a dangerous object threatening the life safety of the user has appeared in the underwater environment where the user is located; the warning information may include the position and size of the dangerous object. The processing module controls the display module to display the augmented reality picture containing the warning information, so as to warn and remind the user.
Step 7: after the processing module determines, based on the second eye information, that the diver has gazed at the information of the dangerous object, it controls the display module to display the augmented reality picture containing the navigation information. Steps 4 to 7 are executed continuously to monitor whether the underwater environment where the user is located is in an abnormal state.
Step 8: the eye information acquisition module acquires the third eye information of the diver and sends it to the processing module.
Step 9: the processing module records eyeball activity duration data in real time. When the third eye information meets the second preset condition indicating that the user is in an abnormal state (for example, the duration for which the user does not blink, or does not open the eyes, exceeds a preset duration), step 10 is executed; otherwise, step 13 is executed.
step 10: the environment information acquisition module acquires second environment information (including environment position information) and sends the second environment information to the processing module, so that the processing module calculates the position information of the user in the abnormal state based on the second environment information and sends the position information of the user to the server;
Step 11: the server obtains a list of divers near the user in the abnormal state and sends distress information to the processing modules corresponding to the divers in the list, so as to request the other divers to perform a rescue;
step 12: the processing module receives the information of the rescuers sent by the server, controls the display module to display the information and the distance of the rescuers, and executes step 28;
Step 13: based on the fourth eye information acquired by the eye information acquisition module and the third environment information acquired by the environment information acquisition module, the processing module calculates the identification information of the object the diver is gazing at, acquires the introduction information of the gazed object from the server, and controls the display module to display it, so that the diver can gain a deeper knowledge of underwater creatures and the environment;
Step 14: when the second head information acquired by the processing module in the AR diving mode meets the third preset condition (for example, representing that the diver performs a nodding action), the processing module controls the display module to display a second confirmation interface (i.e., an interface for confirming whether to share the view), and through eye interaction the user can confirm whether to share the current field of view. When the gazing area of the user on the second confirmation interface is determined to be the preset third display area, it is confirmed that the user chooses to initiate a diving interaction to share the view, and step 15 is executed; or, when the gazing area of the user on the second confirmation interface is determined to be the preset fourth display area, a third confirmation interface is displayed and step 19 is executed to confirm whether to end the current working mode;
Step 15: the processing module sends a request message for initiating a diving interaction to the server; in response, the server sends a list of divers in the diving area whose devices are in the VR diving mode to the processing module corresponding to the diving interaction initiator;
Step 16: the processing module controls the display module to display an augmented reality picture containing the diver information list; the diver can select a diving interaction object with the eyes, and the information of the target diver is sent to the server; the server sends one-to-one or one-to-many request messages inviting the diving interaction and receives the fed-back results on whether the invitations are accepted;
Step 17: after receiving the feedback information from the server, the processing module corresponding to the diving interaction initiator lets the initiator choose whether to wait for the diving partner to arrive before starting the diving interaction; when the initiator chooses to wait, step 18 is executed, or, when the initiator chooses not to wait, step 2 is executed;
step 18: the server updates the position information of the diver receiving the invitation in real time, calculates the distance and feeds the distance back to a processing module of the diving interaction initiator so as to update a display interface;
Step 19: when the processing module determines, based on the seventh eye information, that the gazing area of the user on the third confirmation interface is the preset fifth display area, step 28 is executed; or, when it determines that the gazing area is not the preset fifth display area, step 2 continues to be executed;
step 20: the processing module monitors whether a request message sent by the server for inviting diving interaction is received. When the request message is received, the processing module controls the display module to display a fourth confirmation interface (for example, an interface for confirming whether to display introduction information of the object to be shared by the diving interaction initiator or not).
Step 21: based on the obtained eighth eye information, the processing module determines whether the gazing area of the user on the fourth confirmation interface is the preset sixth display area. If it is, indicating that the user chooses to accept the diving interaction, the processing module receives the virtual reality picture, sent by the server, containing the introduction information of the object to be shared, controls the display module to display it, and then executes step 22; or, if the user chooses not to accept, step 24 is executed;
Step 22: when the third head information acquired by the processing module in the VR diving mode meets the fourth preset condition (for example, representing that the diver performs a nodding action), the processing module controls the display module to display a fifth confirmation interface (for example, an interface for confirming whether to travel to the area where the diving interaction initiator is located for the diving interaction). Through eye interaction, the user then decides whether to join the diving interaction invitation; if the user chooses to join, step 23 is executed so that the user can travel to the area where the initiator is located, or, if the user chooses not to join, step 24 is executed;
step 23: the processing module sends a response message to the server accepting the dive interaction to cause the server to issue second navigation information (e.g., including shortest path planning between the initiator and the recipient) instructing the user to travel along a second target route. The processing module of the diving interaction receiver receives the virtual reality picture containing the second navigation information sent by the server, and controls the display module of the diving interaction receiver to display the virtual reality picture containing the second navigation information so as to prompt a route;
and step 24: and the processing module acquires the fifth environmental information acquired by the environmental information acquisition module and sends the fifth environmental information to the server.
Step 25: based on the fifth environmental information and the pre-stored historical environmental information, the server records all the points the diver has toured, marks them as a diving trajectory, and generates a virtual reality picture containing the diving trajectory of the user;
step 26: the server sends a virtual reality picture containing the diving trajectory of the user to the processing module;
Step 27: the processing module controls the display module to display the virtual reality picture, sent by the server, containing the diving trajectory of the user; step 2 may then be executed;
Step 28: the dive ends.
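The scenario above amounts to a small state machine over the two diving modes. The sketch below compresses it into a transition table; the state and event names are hypothetical simplifications of steps 1 to 28.

```python
TRANSITIONS = {
    ("AR", "head_gesture+gaze_switch"): "VR",
    ("VR", "head_gesture+gaze_switch"): "AR",
    ("AR", "danger_detected"):          "AR",   # stay; show warning (steps 5-7)
    ("AR", "user_abnormal"):            "END",  # rescue, then end (steps 9-12, 28)
    ("VR", "interaction_accepted"):     "VR",   # show shared view (steps 20-23)
    ("VR", "end_confirmed"):            "END",  # step 28
}

def step(state, event):
    """Apply one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "AR"
for event in ["danger_detected", "head_gesture+gaze_switch", "end_confirmed"]:
    state = step(state, event)
    print(event, "->", state)     # AR -> VR -> END
```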
The above description of the method embodiments is similar to the above description of the device embodiments and has similar beneficial effects. For technical details not disclosed in the method embodiments of the present disclosure, refer to the description of the device embodiments of the present disclosure; they are therefore not repeated here.
The embodiment of the present disclosure further provides a mixed reality device, including: a processor and a memory storing a computer program operable on the processor, wherein the processor implements the steps of the information processing method in one or more of the above embodiments when executing the program.
In an exemplary embodiment, as shown in fig. 4, the mixed reality device 40 may include: at least one processor 401, at least one memory 402, and a bus 403 connected to the processor 401; the processor 401 and the memory 402 communicate with each other through the bus 403; and the processor 401 is configured to call program instructions in the memory 402 to perform the steps of the information processing method in one or more embodiments described above.
In an exemplary embodiment, the processor may be a CPU, another general-purpose processor, a DSP, an FPGA or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, an application-specific integrated circuit, or the like. The general-purpose processor may be an MPU, or the processor may be any conventional processor. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the memory may include volatile memory in a computer-readable storage medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (Flash RAM), and the memory includes at least one memory chip. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the bus may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus; for clarity of illustration, however, the various buses are all labeled as bus 403 in fig. 4. The embodiments of the present disclosure do not limit this.
In implementation, the processing performed by the mixed reality device may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. That is, the method steps of the embodiments of the present disclosure may be implemented by a hardware processor, or by a combination of hardware and software modules in the processor. The software module may be located in a storage medium such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here.
The embodiment of the present disclosure also provides a computer-readable storage medium, which includes a stored program, wherein, when the program runs, the device where the storage medium is located is controlled to execute the steps of the information processing method in one or more embodiments described above.
In an exemplary embodiment, the computer readable storage medium may be: ROM/RAM, magnetic disk, optical disk, etc. The embodiments of the present disclosure do not limit this.
The above description of the embodiments of the mixed reality device or the computer-readable storage medium is similar to the description of the method embodiments above and has similar beneficial effects. For technical details not disclosed in the embodiments of the mixed reality device or the computer-readable storage medium of the present disclosure, refer to the description of the method embodiments of the present disclosure; they will not be described in detail here.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media, as is well known to those skilled in the art.
Although the embodiments of the present disclosure have been described above, the above description is only for the purpose of understanding the present disclosure, and is not intended to limit the present disclosure. It will be understood by those skilled in the art of the present disclosure that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure, and that the scope of the disclosure is to be limited only by the terms of the appended claims.

Claims (20)

1. A mixed reality apparatus, comprising: a processing module, a display module and a diving information acquisition module, wherein,
the diving information acquisition module is configured to acquire diving information and send the diving information to the processing module; wherein the diving information comprises: one or more of head information of the user, eye information of the user and environmental information of an underwater environment in which the user is located;
the processing module is configured to implement a function of any one of an Augmented Reality (AR) diving mode and a Virtual Reality (VR) diving mode based on the diving information; wherein the AR diving mode comprises: one or more of a first function mode for monitoring whether an abnormal state occurs in an underwater environment where a user is located, a second function mode for monitoring whether an abnormal state occurs in the user, a third function mode for displaying introduction information of a gazing object and a fourth function mode for initiating diving interaction, wherein the VR diving mode comprises: one or more of a fifth functional mode for responding to a diving interaction and a sixth functional mode for displaying a diving trajectory;
the display module is configured to display any one of an augmented reality screen and a virtual reality screen.
2. The apparatus of claim 1, wherein the diving information acquisition module comprises: a head information acquisition module, an eye information acquisition module and an environment information acquisition module, wherein,
the head information acquisition module is configured to acquire head information of a user;
the eye information acquisition module is configured to acquire eye information of a user;
the environmental information acquisition module is configured to acquire environmental information of an underwater environment where a user is located.
3. The apparatus according to claim 1 or 2, wherein the processing module is configured to obtain first head information; when the first head information meets a first preset condition, controlling the display module to display a first confirmation interface and acquiring first eye information; determining a gaze area of a user at the first confirmation interface based on the first eye information; when the watching area of the user on the first confirmation interface is determined to be a preset first display area, switching the working mode from one of the AR diving mode and the VR diving mode to the other one of the AR diving mode and the VR diving mode; or when the watching area of the user on the first confirmation interface is determined to be the preset second display area, keeping the working mode unchanged.
4. The apparatus according to claim 1 or 2, wherein the processing module is configured to, in the first functional mode, obtain first environment information; sending the first environment information to a server so that the server can determine whether the underwater environment where the user is located is in an abnormal state or not based on the first environment information; receiving an augmented reality picture which is sent by the server and contains warning information used for indicating that the underwater environment where the user is located is in an abnormal state; and controlling the display module to display an augmented reality picture containing the warning information.
5. The apparatus of claim 4, wherein the warning information comprises: one or more of information of a dangerous object threatening the life safety of the user in the underwater environment where the user is located and navigation information for instructing the user to travel along a first target route, wherein the first target route is a route capable of avoiding the dangerous object.
6. The device according to claim 5, wherein the processing module is further configured to control the display module to display an augmented reality screen containing information of the dangerous object and acquire second eye information; determining whether the user gazes at the information of the dangerous object based on the second eye information; and when the information that the user gazes at the dangerous object is determined, controlling the display module to display an augmented reality picture containing the navigation information.
7. The apparatus according to claim 1 or 2, wherein the processing module is configured to, in the second functional mode, obtain third eye information; when the third eye information meets a second preset condition that the user is in an abnormal state, acquiring second environment information; determining location information of a user based on the second environment information; and sending the position information of the user to a server so that the server sends the position information of the user to other mixed reality devices to request divers using other mixed reality devices to rescue.
8. The apparatus of claim 7, wherein the processing module is further configured to receive information of a rescuer sent by the server; and controlling the display module to display the information of the rescuers.
9. The apparatus according to claim 1 or 2, wherein the processing module is configured to, in the third functional mode, obtain third environment information and fourth eye information; acquiring identification information of a gazing object of a user in the underwater environment from the third environment information based on the fourth eye information; sending the identification information of the gazing object to a server so that the server sends introduction information of the gazing object; and receiving an augmented reality picture which is sent by the server and contains introduction information of the gazing object, and controlling the display module to display the augmented reality picture containing the introduction information of the gazing object.
10. The apparatus according to claim 1 or 2, wherein the processing module is configured to, in the fourth functional mode, obtain second head information; when the second head information meets a third preset condition, controlling the display module to display a second confirmation interface; acquiring fifth eye information; determining a gazing area of the user on the second confirmation interface based on the fifth eye information; and when the watching area of the user on the second confirmation interface is determined to be a preset third display area, sending a request message for requesting to initiate diving interaction to a server.
11. The apparatus according to claim 10, wherein the processing module is further configured to receive an augmented reality screen containing a diver information list sent by the server, and control the display module to display the augmented reality screen containing the diver information list; acquiring sixth eye information; acquiring information of a target diver gazed by the user from the diver information list based on the sixth eye information; and sending the information of the target diver to the server so that the server sends a request message for inviting diving interaction to a mixed reality device corresponding to the target diver.
12. The apparatus of claim 11, wherein the request message for inviting diving interaction comprises introduction information of an object to be shared, wherein the object to be shared comprises: one or more of an underwater environment and an underwater object; and the introduction information of the object to be shared comprises: one or more of text, images, and video.
13. The apparatus of claim 11, wherein the processing module is further configured to receive an augmented reality picture sent by the server containing location information of at least one of the target divers; and controlling the display module to display an augmented reality picture containing the position information of the at least one diver.
14. The apparatus of claim 10, wherein the processing module is further configured to display a third confirmation interface when the user's gaze area at the second confirmation interface is determined to be a preset fourth display area; acquiring seventh eye information; determining a gaze area of a user at the third confirmation interface based on the seventh eye information; and when the watching area of the user on the third confirmation interface is determined to be a preset fifth display area, ending the current working mode.
15. The apparatus according to claim 1 or 2, wherein the processing module is configured to, in the fifth functional mode, control the display module to display a fourth confirmation interface in response to a request message sent by a server for inviting a diving interaction; acquire eighth eye information; determine a gaze area of a user at the fourth confirmation interface based on the eighth eye information; when the gaze area of the user at the fourth confirmation interface is determined to be a preset sixth display area, receive a virtual reality picture which is sent by the server and contains introduction information of an object to be shared; and control the display module to display the virtual reality picture containing the introduction information of the object to be shared, wherein the object to be shared comprises: one or more of an underwater environment and an underwater object, and the introduction information of the object to be shared comprises: one or more of text, images, and video.
16. The apparatus of claim 15, wherein the processing module is further configured to send a response message to the server accepting the diving interaction, so that the server issues second navigation information for instructing the user to travel along a second target route; and receive a virtual reality picture containing the second navigation information sent by the server, and control the display module to display the virtual reality picture containing the second navigation information, wherein the second target route is a route from the position of the diving interaction receiver to the position of the diving interaction initiator.
17. The apparatus according to claim 1 or 2, wherein the processing module is configured to, in the sixth functional mode, obtain fifth environment information; sending the fifth environment information to a server so that the server generates a virtual reality picture containing the diving trajectory of the user based on the fifth environment information and pre-stored historical environment information; and receiving a virtual reality picture containing the diving trajectory of the user sent by the server, and controlling the display module to display the virtual reality picture containing the diving trajectory of the user.
18. An information processing method applied to the mixed reality apparatus according to any one of claims 1 to 17, the method comprising:
acquiring diving information through the diving information acquisition module, wherein the diving information comprises: one or more of head information of the user, eye information of the user and environmental information of an underwater environment in which the user is located;
based on the diving information, realizing the function of any one of an augmented reality AR diving mode and a virtual reality VR diving mode; wherein the AR diving mode comprises: one or more of a first function mode for monitoring whether an abnormal state occurs in an underwater environment where a user is located, a second function mode for monitoring whether an abnormal state occurs in the user, a third function mode for displaying introduction information of a gazing object and a fourth function mode for initiating diving interaction, wherein the VR diving mode comprises: one or more of a fifth functional mode for responding to a diving interaction and a sixth functional mode for displaying a diving trajectory;
and controlling the display module to display any one of the augmented reality picture and the virtual reality picture.
19. A mixed reality device, comprising: a processor and a memory storing a computer program operable on the processor, wherein the processor implements the steps of the information processing method according to claim 18 when executing the program.
20. A computer-readable storage medium comprising a stored program, wherein a device on which the storage medium is located is controlled to perform the steps of the information processing method according to claim 18 when the program is run.