CN116954362A - Virtual reality scene display method and related device - Google Patents

Virtual reality scene display method and related device

Info

Publication number
CN116954362A
CN116954362A (application CN202310647510.7A)
Authority
CN
China
Prior art keywords
boundary
distance
virtual reality
safety zone
reality scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310647510.7A
Other languages
Chinese (zh)
Inventor
陈千举
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310647510.7A
Publication of CN116954362A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/4401 Bootstrapping
    • G06F 9/442 Shutdown

Abstract

The embodiment of the application discloses a display method and a related device for a virtual reality scene, which can be applied to scenes such as digital humans, virtual humans, games, virtual reality, and extended reality. The method comprises the following steps: when the object is in a safety zone included in the virtual reality scene and the distance between the object and the safety zone boundary is greater than a first preset distance, the virtual reality scene is displayed. When the object is in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, first prompt information is displayed in the virtual reality scene to prompt that the object may be at risk of exceeding the safety zone. When the distance between the object and the safety zone boundary is less than or equal to a second preset distance, the object is very close to the safety zone boundary and its real intention is to go beyond the safety zone; the real scene is then displayed in a target area of the virtual reality scene, so that the object can see the real scene outside the virtual reality scene, and the safety risk to the object is reduced.

Description

Virtual reality scene display method and related device
Technical Field
The application relates to the technical field of virtual reality, in particular to a display method and a related device of a virtual reality scene.
Background
A virtual reality scene refers to a virtual environment, constructed by computer technology, that simulates a real scene in the real world. Unlike conventional user interfaces, a virtual reality scene gives users the feeling of being inside the scene, which they can observe and interact with from any position at any time, without limitation.
When a user is in a virtual reality scene, the user can generally see only the virtual reality scene and not the real scene, which can lead to safety problems. For example, the virtual reality scene may show a passable road ahead while, in the real scene, an obstacle stands in the user's path; if the user continues forward as the virtual reality scene suggests, the user may collide with the obstacle and a safety problem occurs. For this reason, a safety zone is generally set in the virtual reality scene to ensure the user's safety.
In the related art, if the user is detected to exceed the safety zone, the user is prompted by a red warning effect, or the virtual reality scene is exited directly and the real scene is displayed, among other measures, to ensure the user's safety. These related-art approaches guarantee safety but reduce practicality and user experience.
Disclosure of Invention
In order to solve the technical problems, the application provides a display method and a related device of a virtual reality scene, which are used for improving practicality and user experience while reducing safety risks.
The embodiment of the application discloses the following technical scheme:
in one aspect, an embodiment of the present application provides a method for displaying a virtual reality scene, where the method includes:
if the object is located in a safety zone included in the virtual reality scene and the distance between the object and the safety zone boundary is greater than a first preset distance, displaying the virtual reality scene, wherein the safety zone boundary is the boundary of the safety zone;
if the object is located in the safety zone and the distance between the object and the boundary of the safety zone is smaller than or equal to the first preset distance, displaying first prompt information in the virtual reality scene, wherein the first prompt information is used for prompting the distance between the object and the boundary of the safety zone;
and if the distance between the object and the boundary of the safety zone is smaller than or equal to a second preset distance, displaying a real scene in a target area of the virtual reality scene, wherein the second preset distance is smaller than the first preset distance, and the target area is a partial area of the virtual reality scene.
In another aspect, an embodiment of the present application provides a display apparatus for a virtual reality scene, including: a first display unit, a second display unit, and a third display unit;
The first display unit is configured to display the virtual reality scene if an object is located in a safety zone included in the virtual reality scene and the distance between the object and the safety zone boundary is greater than a first preset distance, where the safety zone boundary is the boundary of the safety zone;
the second display unit is configured to display, if the object is located in the safety zone and the distance between the object and the boundary of the safety zone is less than or equal to the first preset distance, first prompt information in the virtual reality scene, where the first prompt information is used to prompt the distance between the object and the boundary of the safety zone;
and the third display unit is used for displaying a real scene in a target area of the virtual reality scene if the distance between the object and the boundary of the safety zone is smaller than or equal to a second preset distance, wherein the second preset distance is smaller than the first preset distance, and the target area is a partial area of the virtual reality scene.
In another aspect, an embodiment of the present application provides a computer device including a processor and a memory:
the memory is used for storing a computer program and transmitting the computer program to the processor;
The processor is configured to perform the method of the above aspect according to instructions in the computer program.
In another aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for executing the method described in the above aspect.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method described in the above aspect.
According to the technical scheme, when the object is located in the safety zone included in the virtual reality scene and the distance between the object and the safety zone boundary is greater than the first preset distance, the object is not only within the safety zone but also far from its boundary, and the virtual reality scene is displayed. When the object is in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, the object is still within the safety zone but close to its boundary; first prompt information is then displayed in the virtual reality scene to indicate that the object is near the safety zone boundary and may be at risk of exceeding it, so that the user can move away from the boundary in time and the safety risk to the object is reduced. When the distance between the object and the safety zone boundary is less than or equal to the second preset distance (the second preset distance being smaller than the first preset distance), the object is very close to the boundary: it may be inside the safety zone and about to exceed it, or already outside it, and thus faces a safety risk. The real scene is then displayed in the target area of the virtual reality scene, so that the object can see the real scene outside the virtual reality scene, and the safety risk to the object is reduced.
Therefore, different content is displayed based on the distance between the object and the safety zone boundary. First prompt information is displayed when the object starts to approach the boundary; if the distance continues to decrease after the object has seen the first prompt information, the object's real intention is evidently to go beyond the safety zone. At that point the virtual reality scene is not immediately exited; instead, the real scene is displayed in a partial area of the virtual reality scene. The object's need to see the real scene is thus met without interrupting its experience of the virtual reality scene, improving practicality and user experience.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario schematic diagram of a virtual reality scenario display method provided by an embodiment of the present application;
Fig. 2 is a flow chart of a method for displaying a virtual reality scene according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an object in a safety zone according to an embodiment of the present application;
FIG. 4 is a schematic diagram, provided by an embodiment of the present application, of an object in a safety zone whose distance from the safety zone boundary is greater than a first preset distance;
FIG. 5 is a schematic diagram, provided by an embodiment of the present application, of an object in a safety zone whose distance from the safety zone boundary is less than or equal to a first preset distance;
fig. 6 is a schematic diagram showing first prompt information and a real scene according to an embodiment of the present application;
fig. 7 is a schematic diagram of displaying a real scene according to an embodiment of the present application;
FIG. 8 is a schematic diagram, provided by an embodiment of the present application, of an object whose distance from the safety zone boundary is less than or equal to a second preset distance;
FIG. 9 is a schematic diagram of a safety zone fence according to an embodiment of the present application;
FIG. 10 is a schematic view of a safety zone fence according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a detection ball according to an embodiment of the present application;
FIG. 12 is a schematic diagram of the distance between an object and a safety zone according to an embodiment of the present application;
FIG. 13 is a schematic diagram, provided by an embodiment of the present application, of an object outside a safety zone whose distance from the safety zone boundary is greater than a second preset distance;
FIG. 14 is a schematic diagram of the distance between an object and a safety zone according to an embodiment of the present application;
fig. 15 is a schematic cross-sectional view of a virtual reality scene according to an embodiment of the present application;
FIG. 16 is a schematic view of a regular-shaped safety zone according to an embodiment of the present application;
FIG. 17 is a schematic diagram of two safety zones provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of the distance between an object and a safety zone boundary according to an embodiment of the present application;
fig. 19 is an application scenario diagram of a virtual reality scenario display method according to an embodiment of the present application;
fig. 20 is a schematic structural diagram of a display device for a virtual reality scene according to an embodiment of the present application;
fig. 21 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 22 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
In the following, Virtual Reality (VR) technology is taken as an example to describe how the related-art approach ensures safety at the cost of practicality and user experience.
A VR-based interactive device generally includes handles, which the user holds, and a head-mounted display device, which the user wears on the head. Whether the user's hands exceed the safety zone is therefore detected based on the handles, and whether the user's head exceeds the safety zone is detected based on the display device. The two cases are described separately below.
(1) When the user's hands exceed the safety zone, the safety zone boundary is displayed in the virtual reality scene, and a red warning effect is displayed on the part of the safety zone boundary touched by the user's hands. However, because the virtual reality scene is still displayed, the surroundings of the hand in the real scene cannot be shown to the user. For example, when the user needs to drink water, the user must manually switch to see-through (passthrough) mode, that is, manually switch from the virtual reality scene to the real scene, so the practicality is low.
(2) When the user's body exceeds the safety zone, the VR device immediately switches to see-through mode and displays the current safety zone range. However, this mechanism may also be triggered when part of the user's body only slightly exceeds the safety zone, or when the user makes an unconscious adjustment, so the user experience is easily interrupted.
Based on the above, the embodiment of the application provides a display method and a related device for a virtual reality scene, which display different content based on the distance between the object and the safety zone boundary, can meet the object's need to see the real scene without interrupting its experience of the virtual reality scene, and improve practicality and user experience.
In addition, the interaction device may be a device based on Extended Reality (XR), Augmented Reality (AR), Mixed Reality (MR), and the like, which is not particularly limited in the embodiments of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "includes" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
The display method of the virtual reality scene provided by the application can be applied to computer equipment with the capability of displaying virtual reality scenes, such as terminal devices and servers. The terminal device may be a desktop computer, a notebook computer, a mobile phone, a tablet computer, an Internet of Things device, or a portable wearable device; the Internet of Things device may be an intelligent sound box, an intelligent television, an intelligent air conditioner, an intelligent vehicle-mounted device, etc.; the intelligent vehicle-mounted device may be a vehicle-mounted navigation terminal, a vehicle-mounted computer, etc.; and the portable wearable device may be an intelligent watch, an intelligent bracelet, a head-mounted device, etc., but is not limited thereto. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server or server cluster providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
In order to facilitate understanding of the method for displaying a virtual reality scenario provided by the embodiment of the present application, an application scenario of the method for displaying a virtual reality scenario is described in an exemplary manner by taking an execution body of the method for displaying a virtual reality scenario as a terminal device.
Referring to fig. 1, the figure is an application scenario schematic diagram of a virtual reality scene display method provided by an embodiment of the present application. As shown in fig. 1, the terminal device included in the application scenario is a VR device, and the VR device includes a head-mounted display device 111 and two handles. The head-mounted display device 111 is worn on the user's head, the user's left hand holds the handle 112, and the right hand holds the handle 113.
The head-mounted display device 111, the handle 112, and the handle 113 are used to determine the position of the object; if, for example, the position determined based on the handle 112 is closest to the safety zone boundary, the position determined by the handle 112 is taken as the position of the object. The VR device contains a computing module for executing the method for displaying a virtual reality scene provided in the embodiment of the present application; as shown in fig. 1, the computing module is installed in the head-mounted display device 111.
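The device-selection rule described above, taking whichever tracked position is closest to the safety zone boundary as the object's position, can be sketched as follows. This is an illustrative sketch only; the function names, the three-tuple positions, and the distance callback are assumptions, not part of the patent:

```python
from typing import Callable, Sequence, Tuple

Point = Tuple[float, float, float]

def object_position(devices: Sequence[Tuple[str, Point]],
                    distance_to_boundary: Callable[[Point], float]) -> Point:
    """Return the tracked position (head-mounted display or a handle) that is
    closest to the safety zone boundary, i.e. the most at-risk point, and use
    it as the position of the object."""
    name, pos = min(devices, key=lambda d: distance_to_boundary(d[1]))
    return pos
```

With the head far from the boundary and one handle near it, the handle's position is selected, matching the behaviour described for handle 112.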
After the object puts on the VR device, the VR device can construct a virtual reality scene, so that the object is immersed in the virtual reality scene and does not notice the real scene in the real world. To reduce the safety risk the object may face while using the VR device, three possible cases are described below. For ease of description, the position of the object's right foot shown in fig. 1 is taken as the object's position, and the safety zone boundary in front of the right foot is used as the object of comparison.
Case one: as in the case-one subfigure shown in fig. 1, the object is located within the safety zone and the distance between the object and the safety zone boundary is 40 cm, i.e., greater than 35 cm (the first preset distance), so the virtual reality scene is displayed.
And a second case: the object continues to travel forward on the basis of the first case, as in the second case shown in fig. 1, the object is located in the safety zone, and the distance between the object and the boundary of the safety zone is 20 cm, that is, the distance between the object and the boundary of the safety zone is less than 35 cm (the first preset distance), and then the first prompt information is displayed in the virtual reality scene. As shown in fig. 1, text information with warning is displayed in the form of a prompt box in the virtual field so as to prompt that the object is approaching the safe area soon.
And a third case: the object continues to travel forward on the basis of the above case two, as in the case three subgraph shown in fig. 1, the object is located within the safe zone, and the distance between the object and the safe zone boundary is 10 cm, i.e., the distance between the object and the safe zone boundary is less than 15 cm (the second preset distance), then the real scene is displayed in the target area of the virtual reality scene. As shown in fig. 1, an external real environment, that is, a cup of water on a table is displayed in a perspective frame form in a virtual field, so that an object can accurately and safely take the cup in the real world while being in the virtual environment, thereby improving practicability.
Therefore, different content is displayed based on the distance between the object and the safety zone boundary. First prompt information is displayed when the object starts to approach the boundary; if the distance continues to decrease after the object has seen the first prompt information, the object's real intention is evidently to go beyond the safety zone. At that point the virtual reality scene is not immediately exited; instead, the real scene is displayed in a partial area of the virtual reality scene. The object's need to see the real scene is thus met without interrupting its experience of the virtual reality scene, improving practicality and user experience.
The display method of the virtual reality scene provided by the embodiment of the present application can be executed by the terminal device. In other embodiments of the present application, the server may have a function similar to that of the terminal device and execute the display method of the virtual reality scene provided by the embodiment of the present application, or the terminal device and the server may execute it together; this is not limited in the embodiments of the present application.
The method for displaying the virtual reality scene provided by the application is described in detail through the method embodiment.
Referring to fig. 2, the figure is a flow chart of a method for displaying a virtual reality scene according to an embodiment of the application. For convenience of description, the following embodiments take the terminal device as the execution body of the virtual reality scene display method. As shown in fig. 2, the method for displaying a virtual reality scene includes the following steps:
s201: and if the object is positioned in the safety zone included in the virtual reality scene and the distance between the object and the boundary of the safety zone is greater than the first preset distance, displaying the virtual reality scene.
The object refers to an entity, such as a person or other living being, capable of controlling the terminal device.
A virtual reality scene refers to a digital environment, created by computer technology, that simulates the real world. It may be presented as a three-dimensional, interactive picture. Virtual reality scenes typically include virtual objects such as buildings, landscapes, people, animals, and vehicles, as well as virtual environments such as sky, ground, water, and weather. The object may be given an immersive experience through a virtual reality head-mounted display device or the like.
In practical application, after the object starts the terminal device, the position of the safety zone boundary can be determined, so that when the object is in the virtual reality scene, it can be reminded to stay within the safety zone included in the virtual reality scene as far as possible, reducing the object's safety risk. As a possible implementation, the safety zone boundary may be provided by the terminal device or may be a range customized by the object, for example, the boundary of the space in which the object can move in the real world while using the terminal device.
As a possible implementation, the terminal device may include a head-mounted display device, a position determining device, a control device held or carried by the object, and a computing module for executing the display method of the virtual reality scene provided by the embodiment of the application. The head-mounted display device is used to display the content experienced by the object, such as the virtual reality scene; the position determining device is used to determine the position of the safety zone boundary, the position of the object, and the like; the control device is used to receive instructions issued by the object; and the computing module determines the content displayed to the object based on the position of the object and the position of the safety zone boundary, such as displaying only the virtual reality scene, or displaying the virtual reality scene together with the first prompt information.
It will be appreciated that the specific embodiments of the present application involve data such as the position of the object. When the above embodiments are applied to specific products or technologies, individual permission or consent from users must be obtained, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
In the following, several possible cases are specifically described, taking as an example a safety zone provided by the terminal device. After the object enters the safety zone based on the terminal device, the terminal device establishes a circular safety zone centered on the object's head with a fixed value as the radius, as shown in fig. 3.
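A circular safety zone like the one just described, centered on the object's head position with a fixed radius, might be represented as follows. This is a sketch under stated assumptions; the class name and the 1.5 m default radius are illustrative, not values from the patent:

```python
import math
from typing import Tuple

Point2D = Tuple[float, float]

class CircularSafetyZone:
    """Circular safety zone centered on the object's head position (projected
    onto the floor plane) with a fixed radius, as in fig. 3."""

    def __init__(self, center: Point2D, radius: float = 1.5):
        self.center = center
        self.radius = radius

    def contains(self, p: Point2D) -> bool:
        """True if position p lies within the safety zone (boundary included)."""
        return math.dist(p, self.center) <= self.radius

    def distance_to_boundary(self, p: Point2D) -> float:
        """Distance from position p to the safety zone boundary, whether p is
        inside or outside the circle."""
        return abs(self.radius - math.dist(p, self.center))
```

The `contains` and `distance_to_boundary` values are exactly the two quantities the method's case analysis needs.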
Case one: if the object is located in the safety zone and the distance between the object and the safety zone boundary is greater than the first preset distance, the virtual reality scene is displayed.
If the object is located in the safety zone and the distance between the object and the safety zone boundary is greater than the first preset distance, the object is not only within the safety zone but also far from its boundary. The possibility that the object exceeds the safety zone boundary is small, i.e., the safety risk is small; there is no need to prompt the user about a safety risk, and the virtual reality scene required by the object can be displayed normally.
Referring to fig. 4, the figure is a schematic diagram, provided by an embodiment of the present application, of an object in the safety zone whose distance from the safety zone boundary is greater than the first preset distance.
In fig. 4, the solid circle represents the safety zone boundary, point A represents the position of the object, point B represents the point on the safety zone boundary closest to point A, and the distance AB represents the distance between the object and the safety zone boundary. For ease of understanding, fig. 4 also indicates, with a dotted circle, the boundary formed at the first preset distance from the safety zone boundary; it should be understood that in the actual rendering, the dotted circle is generally not rendered. In fig. 4, it can be determined based on point A that the object is located within the safety zone, and the length of AB is greater than the first preset distance, i.e., the distance between the object and the safety zone boundary is greater than the first preset distance. It should be understood that the virtual reality scene is not shown in fig. 4; the virtual reality scene may cover the object in a hemispherical shape.
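For the circular boundary of fig. 4, the closest boundary point B follows from elementary geometry: B lies on the ray from the circle's center through A, at the radius. This sketch assumes a circle with known center and radius; the patent does not prescribe this particular computation:

```python
import math
from typing import Tuple

Point2D = Tuple[float, float]

def closest_boundary_point(a: Point2D, center: Point2D,
                           radius: float) -> Point2D:
    """Point B of fig. 4: the point on the circular safety zone boundary
    closest to object position A. The distance AB equals
    abs(radius - |A - center|). Undefined if A coincides with the center."""
    dx, dy = a[0] - center[0], a[1] - center[1]
    d = math.hypot(dx, dy)  # distance from the center to A
    return (center[0] + radius * dx / d, center[1] + radius * dy / d)
```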
As a possible implementation, in order to save computing resources and improve the performance of the terminal device, when the object is located in the safety zone and the distance between the object and the safety zone boundary is greater than the first preset distance, the safety zone boundary is not displayed, and only the virtual reality scene is displayed.
S202: if the object is located in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, display first prompt information in the virtual reality scene.
A second case: if the object is located in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, the object, while still inside the safety zone, is close to its boundary, so the possibility that the object will cross the safety zone boundary is larger, i.e., the safety risk is higher. At this time, the user may be prompted that a safety risk currently exists: first prompt information is displayed in the virtual reality scene, so that the object can move away from the safety zone boundary after seeing the first prompt information, thereby reducing the safety risk.
The first prompt information is used for prompting the distance between the object and the safety zone boundary, or for prompting that the current position of the object is close to the safety zone boundary. The embodiment of the present application does not specifically limit the expression form of the first prompt information. For example, the first prompt information may be a safety zone fence, or may be text information. Detailed descriptions will be given below with respect to the embodiments shown in fig. 9 and fig. 10, and are omitted here.
Referring to fig. 5, which is a schematic diagram, provided by an embodiment of the present application, of an object that is located in the safety zone and whose distance from the safety zone boundary is less than or equal to the first preset distance.
In fig. 5, the solid circle represents the safety zone boundary, point A represents the position of the object, point B represents the point on the safety zone boundary closest to point A, and the distance AB represents the distance between the object and the safety zone boundary. For ease of understanding, fig. 5 also indicates, by a dotted circle, the boundary formed at the first preset distance from the safety zone boundary; it is understood that in the actual rendering effect the dotted circle is generally not rendered. In fig. 5, it may be determined based on point A that the object is located within the safety zone, and the length of AB is smaller than the first preset distance, i.e., the distance between the object and the safety zone boundary is smaller than the first preset distance, so the first prompt information is displayed. In fig. 5, the first prompt information is a light-curtain special effect that rises from the safety zone boundary, shaped approximately like the surface of a cylinder, so that the object clearly perceives the extent of the safety zone.
As a possible implementation manner, if the object is located in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, not only the first prompt information but also the safety zone boundary may be displayed in the virtual reality scene, as shown by the solid circle in fig. 5. In this way, after clearly seeing the safety zone boundary, the object knows how to move farther away from it, reducing the safety risk.
As a possible implementation manner, if the object is located in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, not only the first prompt information but also the real scene may be displayed in the virtual reality scene. Referring to fig. 6, which is a schematic diagram of displaying the first prompt information and the real scene, provided by an embodiment of the present application. Compared with fig. 5, fig. 6 adds a real scene display area, such as a perspective (pass-through) window, through which the object can see the real scene outside the virtual reality scene. In this way, while being aware that its current position is close to the safety zone boundary, the object can also see the nearby real world, and thus learns more gently that it is about to cross the safety zone, allowing it to decide whether it really intends to do so. With continued reference to fig. 6, there is a table in front of the object with a cup on it; based on the perspective window, the object can pick up the cup in the real scene without exiting the virtual reality scene.
The embodiment of the present application does not specifically limit the content displayed in the real scene. For example, it may be the real scene in front of the object's real-world position, as shown in fig. 6. As another example, it may be a real scene determined based on the movement direction of the object. For instance, from fig. 5 to fig. 6 the object is most likely moving backward, i.e., it will probably continue to move backward next; the real scene behind the object's real-world position can therefore be displayed in the real scene display area shown in fig. 6, to better help the user avoid obstacles and the like and reduce the safety risk.
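Choosing the view direction from the movement direction can be sketched as follows; the dot-product test and the two-way front/behind choice are illustrative assumptions, not limitations of the application.

```python
def passthrough_direction(velocity, facing):
    """Pick which real-world view to show in the real scene display area:
    the scene behind the object if it is moving backward relative to the
    direction it is facing, otherwise the scene in front.
    velocity and facing are 2D (x, y) tuples in the horizontal plane."""
    dot = velocity[0] * facing[0] + velocity[1] * facing[1]
    return "behind" if dot < 0 else "front"
```

For example, an object facing (0, 1) while moving along (0, -1) is backing up, so the view behind it would be displayed.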
As one possible implementation, the transparency of the real scene may be determined based on the distance between the object and the safety zone boundary, wherein the closer the object is to the safety zone boundary, the lower the transparency of the real scene. The transparency can be understood as a mask layer: the lower the transparency, the clearer the corresponding displayed content, so that the displayed real scene becomes clearer.
Referring to fig. 7, which is a display schematic diagram of a real scene provided by an embodiment of the present application. The length AB in the right sub-graph of fig. 7 is shorter than in the left sub-graph, i.e., the object in the right sub-graph is closer to the safety zone boundary. Correspondingly, the transparency of the real scene display area in the right sub-graph is lower than in the left sub-graph, so that the table and cup displayed in the left sub-graph are blurred, while the table and cup displayed in the right sub-graph are clear.
In this way, if the object is located in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, the first prompt information and the real scene are displayed in the virtual reality scene. Moreover, the smaller the distance between the object and the safety zone boundary, the lower the transparency of the real scene, so the real scene displayed in the virtual reality scene becomes clearer. The object thus sees the real scene gradually and more clearly while being aware that it is approaching the safety zone boundary, which does not disrupt the object's experience, helps the object better understand the real world, reduces the safety risk, and improves practicability.
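A minimal sketch of this distance-to-transparency mapping; the linear interpolation and the parameter names are assumptions for illustration, as the application does not prescribe a specific formula.

```python
def real_scene_transparency(distance, first_preset_distance):
    """Map the object's distance AB to the safety zone boundary to a
    transparency value in [0, 1].

    At distance >= first_preset_distance the real scene is fully transparent
    (invisible); as the distance shrinks toward 0 the transparency falls to 0,
    i.e. the real scene is rendered fully opaque and at its clearest."""
    ratio = distance / first_preset_distance
    return max(0.0, min(1.0, ratio))  # lower value -> clearer real scene
```

Halfway to the boundary the pass-through window would be half-faded; right at the boundary it would be fully clear.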
S203: if the distance between the object and the safety zone boundary is less than or equal to the second preset distance, displaying the real scene in the target area of the virtual reality scene.
A third case: if the distance between the object and the safety zone boundary is less than or equal to the second preset distance, where the second preset distance is smaller than the first preset distance, the object is closer to the safety zone boundary than in the second case. At this time, the object may be located inside the safety zone, on the safety zone boundary, or outside the safety zone. The farther the object moves beyond the safety zone, the higher the safety risk it faces. Therefore, if the distance between the object and the safety zone boundary is less than or equal to the second preset distance, the real scene is displayed in the target area of the virtual reality scene, so that the object can determine the situation near its position in the real world, such as whether obstacles exist, to reduce the safety risk.
It should be noted that, although in the third case the object may have left the safety zone, because the second preset distance is small, the object cannot have left the safety zone by far: perhaps only a hand has left the safety zone, or the head has left while the body has not. At this time, the object may merely be slightly out of bounds, or may have moved unintentionally; if the virtual reality scene were exited directly, as in the related art, the object's experience would be interrupted. For this reason, the embodiment of the present application instead displays the real scene in a target area of the virtual reality scene, where the target area is a partial area of the virtual reality scene, i.e., the size of the virtual reality scene is larger than the size of the target area. In this way, the object's experience of the virtual reality scene is not interrupted, while the object's safety can still be ensured based on the real scene.
Referring to fig. 8, which is a schematic diagram, provided by an embodiment of the present application, of an object whose distance from the safety zone boundary is less than or equal to the second preset distance.
In fig. 8, the solid circle represents the safety zone boundary, point A represents the position of the object, point B represents the point on the safety zone boundary closest to point A, and the distance AB represents the distance between the object and the safety zone boundary. For ease of understanding, fig. 8 also indicates, by dotted circles, the boundaries formed at the second preset distance from the safety zone boundary; since the object is not limited to being inside or outside the safety zone boundary, there are two dotted circles. It is understood that in the actual rendering effect the dotted circles are generally not rendered. In fig. 8, it may be determined based on point A that the object is located within the safety zone, and the length of AB is smaller than the second preset distance, i.e., the distance between the object and the safety zone boundary is smaller than the second preset distance, so the real scene is displayed.
In addition, if the object is located in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the second preset distance, the virtual reality scene and the first prompt information may be displayed at the same time as the real scene. The display may also be switched by condition: for example, if the object is located in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance and greater than the second preset distance, the virtual reality scene and the first prompt information are displayed; and if the distance between the object and the safety zone boundary is less than or equal to the second preset distance, the real scene is displayed in the target area.
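The selection among the three cases amounts to comparing the distance AB against the two thresholds. A minimal sketch of this decision, assuming a first preset distance d1 and a second preset distance d2 with d2 < d1; the state labels are illustrative, not terms used by the application.

```python
def display_state(in_safe_zone, distance, d1, d2):
    """Select what to display from the object's position and its distance AB
    to the safety zone boundary. Requires d2 < d1 (second preset distance
    smaller than the first)."""
    if distance <= d2:
        # Case three: applies whether the object is inside, on, or outside
        # the boundary; show the real scene in the target area.
        return "virtual_scene + real_scene_in_target_area"
    if in_safe_zone and distance <= d1:
        # Case two: inside the safety zone but close to its boundary.
        return "virtual_scene + first_prompt"
    if in_safe_zone:
        # Case one: inside the safety zone and far from its boundary.
        return "virtual_scene_only"
    # Outside the zone and farther than d2 from it: not covered by the three
    # cases above; a fallback (e.g. exiting the scene) is assumed here.
    return "exit_or_fallback"
```

With d1 = 2 m and d2 = 0.5 m, an object 1 m inside the boundary would see the first prompt information, while one 0.3 m from the boundary, inside or outside, would see the pass-through target area.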
It is emphasized that the movement of the object is generally continuous: the object does not jump instantly from the center of the safety zone to outside it, but passes through cases one, two, and three above in sequence. Thus, an object moving from inside the safety zone to outside it generally goes through the following continuous process:
(1) The object is generally located inside the safety zone first, far from the safety zone boundary; at this time the safety risk is small, and only the virtual reality scene may be displayed. (2) As the object continues along its previous movement trajectory, it remains inside the safety zone but comes closer to the boundary, and its safety risk begins to rise; first prompt information is therefore displayed in the virtual reality scene so that the object can move away from the boundary, reducing the risk. (3) If the object still continues along its previous trajectory after seeing the first prompt information, it may intend to go beyond the safety zone, or it may merely be drifting close to the boundary. At this time the object is even closer to the boundary and may cross it imminently, and the safety risk continues to rise, so the real scene is displayed in the target area of the virtual reality scene so that the object notices the real scene, reducing the risk. (4) If the object still continues along its previous trajectory after seeing the real scene, its real intention is to go beyond the safety zone; the object then crosses the boundary and exceeds it by a certain distance (smaller than the second preset distance). Because the object crosses the boundary while watching the real scene, its need for the real scene is satisfied while the safety risk is reduced; for example, the object may want to pick up a cup located in the real scene to drink water, which improves practicability.
The embodiment of the present application does not specifically limit attributes of the target area such as its size, position, and number, which can be set by a person skilled in the art according to actual needs.
As a possible implementation, the size of the target area may be set according to the distance between the object and the safety zone boundary; for example, the closer the object is to the boundary, the larger the target area, so that the object knows it is getting closer. As another possible implementation, the size of the target area may be determined according to how much content the real scene includes; for example, the more content the real scene includes, the larger the target area, so that the real scene can be displayed better. As yet another possible implementation, the size of the target area may be fixed, such as a vertical strip on the cylinder surface shown in fig. 5 (similar to a floor-standing rectangular dressing mirror).
As a possible implementation manner, the position of the target area may be determined according to the center of the object's line of sight, so that the object can view the real scene more conveniently, improving the experience. As another possible implementation, the position of the target area may be determined according to a body part of the object, e.g., a target area near the feet, a target area near the head, and so on.
According to the above technical scheme, when the object is located in the safety zone included in the virtual reality scene and the distance between the object and the safety zone boundary is greater than the first preset distance, the object is both inside the safety zone and far from its boundary, and the virtual reality scene is displayed. When the object is in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, the object, although inside the safety zone, is close to its boundary; first prompt information is then displayed in the virtual reality scene to indicate that the object is near the boundary and may risk crossing it, so that the user can move away from the boundary in time, reducing the safety risk. When the distance between the object and the safety zone boundary is less than or equal to the second preset distance, where the second preset distance is smaller than the first preset distance, the object is even closer to the boundary: it may be inside the safety zone and about to cross, or already outside it, and in either case a safety risk exists. The real scene is then displayed in the target area of the virtual reality scene, so that the object can see the real scene beyond the virtual reality scene, reducing its safety risk.
In this way, different content is displayed based on the distance between the object and the safety zone boundary: the first prompt information is displayed when the object begins to approach the boundary, and if the distance keeps shrinking after the object has seen the prompt, its real intention is evidently to go beyond the safety zone. At this time, instead of exiting the virtual reality scene immediately, the real scene is displayed in a partial area of the virtual reality scene, which satisfies the object's need for the real scene without interrupting its experience of the virtual reality scene, improving practicability and user experience.
As one possible implementation, the embodiment of the present application further provides two specific implementations for the second case, that is, two specific implementations of S202, described below with reference to fig. 9 and fig. 10, respectively.
Specific implementation one: the first prompt information is a safety zone fence.
If the object is located in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, the transparency of the safety zone fence is determined based on that distance, and the safety zone fence is displayed in the virtual reality scene based on its transparency. The safety zone fence is used for indicating the safety zone boundary to the object, and is formed as a mesh of lines or the like, resembling a fence.
Referring to fig. 9, which is a schematic diagram of a safety zone fence provided by an embodiment of the present application. In fig. 9, the object is located in the safety zone and the distance between the object and the safety zone boundary is smaller than the first preset distance, so the safety zone fence is displayed, making the object aware, based on the fence, that it is approaching the safety zone boundary and that the safety risk is high.
As can be seen from the foregoing, since the movement of the object is continuous, the safety zone fence is displayed when the distance between the object and the safety zone boundary equals the first preset distance; as the object keeps moving, the distance becomes smaller than the first preset distance. To highlight the change in distance, the closer the object is to the safety zone boundary, the lower the transparency of the safety zone fence, so that the fence lines are displayed more and more clearly, for example gradually darkening from transparent to black. The object can thus judge its distance from the boundary based on the color of the fence: for example, the darker the fence, the closer the boundary, and the more attention should be paid to safety.
In addition, other special effects may be added to the first prompt information, such as making the safety zone fence flash continuously. Alternatively, the safety zone fence may be displayed only in the direction in which the object is about to cross the safety zone, which reduces the rendered content, increases rendering speed, avoids the situation where the fence is rendered only after the object has already crossed the boundary, and preserves the object's experience.
Specific implementation two: the first prompt information is text information.
If the object is located in the safety zone and the distance between the object and the safety zone boundary is less than or equal to the first preset distance, text information is displayed in the virtual reality scene, the text information including the distance between the object and the safety zone boundary, so as to indicate, based on that distance, how close the object currently is to the boundary.
Referring to fig. 10, which is a schematic diagram of text information provided by an embodiment of the present application. In fig. 10, the object is located within the safety zone and the distance between the object and the safety zone boundary is smaller than the first preset distance, so the text message "Alert! You are 5 cm from the safety zone boundary. Please mind your safety!" is displayed, so that the object knows, based on the text information, that it is currently only 5 cm from the safety zone boundary and the safety risk is high.
As a possible implementation manner, the distance between the object and the safety zone boundary included in the text information is determined in real time, so that during the object's movement the distance is displayed in real time in the first prompt information, allowing the object to change its movement direction in time.
As can be seen from the foregoing, since the movement of the object is continuous, when the distance between the object and the safety zone boundary equals the first preset distance, that distance begins to be displayed as text information; as the distance becomes smaller, the content of the text information is continuously updated based on the distance, so that the object can grasp in time how far it is from the safety zone boundary. The object thus judges its proximity to the boundary based on the distance value: the smaller the value, the closer the object is to the boundary, and the more attention should be paid to safety.
In addition, other forms of content may be added to the first prompt information, such as predicting, based on the object's movement speed, in how many seconds the object will reach the safety zone boundary, so that the first prompt information can be displayed as a countdown.
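A minimal sketch of such a countdown estimate; the constant-velocity assumption and the function name are illustrative only, not prescribed by the application.

```python
def seconds_to_boundary(distance, speed_toward_boundary):
    """Estimate the countdown in seconds until the object reaches the safety
    zone boundary, assuming it keeps moving toward the boundary at a constant
    speed (distance in meters, speed in meters per second)."""
    if speed_toward_boundary <= 0:
        return float("inf")  # not approaching the boundary: no countdown
    return distance / speed_toward_boundary
```

An object 1 m from the boundary and approaching at 0.5 m/s would see a 2-second countdown.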
For convenience of explanation, the following takes an interaction device such as the aforementioned VR device as an example, where the interaction device is used to construct the virtual reality scene. The interaction device may include a plurality of sub-devices; the description below takes an interaction device including a first sub-device and a second sub-device as an example.
The first sub-device and the second sub-device are located at different body parts of the object, so that the position of the object can be determined based on either the first sub-device or the second sub-device. In this way, the positions of different body parts of the object can be determined based on the different sub-devices of the interaction device, and the position of the object can be determined further, e.g., by taking the body-part position closest to the safety zone boundary as the position of the object, improving the accuracy of the object's position.
As a possible implementation manner, the interaction device may not have multiple sub-devices; instead, the interaction device may both display the virtual reality scene and collect an object image and the real position of the object. The object image is an image that includes the object, and the real position of the object is the real position of the interaction device as acquired by the interaction device: the real position of the body part on which the interaction device is worn, such as the head, represents the real position of the object. Based on the body part at which the interaction device is worn, the position of that body part in the object image, and the real position, the positions of other body parts shown in the object image are determined, so as to determine whether a certain body part of the object exceeds the safety zone.
For example, taking the interaction device as a head-mounted display device worn at the head position of the object, the real position acquired by the head-mounted display device is the real position of the object's head. Based on the real position of the object's head and the position of the head in the object image, the hand position of the object can be estimated.
In this way, the real position of each body part of the object can be determined based on the interaction device without increasing the number of sub-devices, improving the accuracy of the distance between the object and the safety zone boundary (for example, detecting that the object's head is inside the boundary while a hand is outside it) without increasing the hardware cost of the interaction device.
As a possible implementation, the hands are more flexible than other body parts, so the second sub-device may be placed on a hand of the object, e.g., a hand-held handle or display. The head is highly stable and can represent the body parts other than the hands, so the first sub-device is placed on the head of the object. With the first sub-device at the head and the second sub-device at the hand, not only can the movement of the object be accommodated, but interaction devices existing in the related art require little modification; for example, VR devices generally comprise a head-mounted display (usable as the first sub-device) and a handle (usable as the second sub-device).
The following describes determining the position of an object based on a first sub-device located at the head and a second sub-device located at the hand.
It will be appreciated that since the safety zone boundary generally surrounds the object, when calculating the distance between the object and the safety zone boundary, the distance from the object's position to the boundary over the surrounding 360 degrees can be calculated, and the shortest such distance is taken as the distance between the object and the boundary. Based on this, the object together with a preset distance can be represented as a detection ball: for example, a head detection ball is obtained with the first sub-device as the center and the first preset distance as the radius, and a hand detection ball is obtained with the second sub-device as the center and the first preset distance as the radius. Whether the surroundings of a sub-device are close to the safety zone boundary can thus be detected based on its detection ball. Referring to fig. 11, which is a schematic diagram of detection balls provided by an embodiment of the present application; fig. 11 shows a head detection ball and a hand detection ball. Taking the hand detection ball as an example, it is centered on the hand and its radius is the same over the full 360 degrees around the hand; therefore, once the edge of the detection ball touches the safety zone boundary, the distance between the hand position and the boundary equals the preset distance.
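Under this representation, the threshold test reduces to comparing the sub-device's distance to the boundary with the ball radius. A minimal sketch for a circular safety zone of radius R centered at the origin (the circular boundary and function name are assumptions for illustration; the application also covers other zone shapes):

```python
import math

def detection_ball_touches_boundary(center, radius, safe_zone_radius):
    """True if a detection ball (2D center position, radius equal to a preset
    distance) touches or crosses a circular safety zone boundary centered at
    the origin. Equivalent to: distance from the sub-device to the boundary
    is less than or equal to the preset distance."""
    dist_to_center = math.hypot(center[0], center[1])
    dist_to_boundary = abs(safe_zone_radius - dist_to_center)
    return dist_to_boundary <= radius
```

The absolute value makes the same test work whether the sub-device is inside or outside the boundary, matching the third case above, which does not limit which side the object is on.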
As a possible implementation, the sizes of the head detection ball and the hand detection ball are not the same; in other words, the preset distance (e.g., the second preset distance) used when determining the distance between the object and the safety zone boundary with the first sub-device differs from the preset distance (e.g., the second preset distance) used with the second sub-device.
For example, the preset distance used when determining the position of the object based on the first sub-device is greater than that used based on the second sub-device. Since the head detection ball stands in for the body parts other than the hands, it may need to cover parts such as the hips and feet, whereas the hand detection ball only needs to cover the hand; the head detection ball may therefore be larger than the hand detection ball. In addition, considering that the hands are flexible and may move at high speed, the size of the hand detection ball may be enlarged to some extent. For example, with continued reference to fig. 11, the head detection ball has a diameter of 40 cm and the hand detection ball a diameter of 30 cm.
The embodiment of the present application does not limit the size of the detection balls; they may be set according to the preset distances, such as the first and second preset distances, or according to the size of a buffer zone. The buffer zone accounts for the time difference between acquiring the object's position and finishing the calculation of its distance to the safety zone boundary, combined with the continuity of the object's movement; setting a buffer zone avoids the situation where the calculation concludes that the object is inside the safety zone while the object has actually already moved outside it.
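The buffer zone compensates for tracking-to-display latency. A minimal sketch under the assumed (not stated in the application) model that the buffer must cover the farthest the object can move during that delay:

```python
def buffer_length(max_speed, latency):
    """Worst-case distance (m) the object can cover between the moment its
    position is sampled and the moment the boundary check takes effect,
    given a maximum movement speed (m/s) and a pipeline latency (s).
    Sizing the buffer zone to at least this length prevents the check from
    reporting 'inside the safety zone' after the object has already left."""
    return max_speed * latency
```

For instance, at 2 m/s of hand movement and 50 ms of latency, the buffer would need to be about 10 cm.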
The following description takes determining the distance between the object and the safety zone boundary based on the head detection ball as an example.
Referring to fig. 12, which is a schematic diagram of the distance between the object and the safety zone provided by an embodiment of the present application. In fig. 12, the small circle is a top view of the head detection ball (with point A as its center), the large circle is a top view of the safety zone (with point O as its center), the length AB is the radius of the head detection ball, the length CD is the length of the first buffer zone, and DE is the length of the second buffer zone. Thus, AB+CD is the first preset distance and AB+DE is the second preset distance; as can be seen from fig. 12, the first buffer zone is longer than the second buffer zone, so the first preset distance is greater than the second preset distance.
Referring to sub-graph 12a of fig. 12, point A coincides with point O, i.e., the object is located at the center of the safety zone; AB+CD is the first preset distance and AD > AB+CD, i.e., the distance between the object and the safety zone boundary is greater than the first preset distance. At this time the head detection ball is located within the safety zone, matching case one above, and the virtual reality scene is displayed for the object.
Referring to sub-graph 12b, compared with sub-graph 12a, the object has moved toward the safety zone boundary: AB+DE < AD < AB+CD, i.e., the distance between the object and the safety zone boundary is greater than the second preset distance but less than the first preset distance. The head detection ball is still located inside the safety zone, so case two described above is satisfied, and the first prompt information is displayed for the object in the virtual reality scene.
Compared with sub-graph 12b, the object continues to move toward the safety zone boundary. Since case three described above does not restrict whether the object is inside or outside the safety zone, its three sub-cases correspond to sub-graphs 12c1, 12c2, and 12c3 respectively, which are described below. For ease of illustration, point C is not shown in these sub-graphs.
Referring to sub-graph 12c1 in fig. 12, compared with sub-graph 12b, the object continues to move toward the safety zone boundary, but the head detection ball is still located inside the safety zone. Here AD < AB+DE, i.e., the distance between the object and the safety zone boundary is less than the second preset distance. This matches the first sub-case of case three: the object is inside the safety zone and its distance to the safety zone boundary is less than or equal to the second preset distance. At this time, the real scene is displayed in the target area of the virtual reality scene.
Referring to sub-graph 12c2 in fig. 12, compared with sub-graph 12c1, the object continues to move toward the safety zone boundary until the head detection ball touches it, so AD = AB, i.e., the distance between the object and the safety zone boundary is less than the second preset distance. This matches the second sub-case of case three: the object is located at the safety zone boundary and its distance to the boundary is less than or equal to the second preset distance. At this time, the real scene is displayed in the target area of the virtual reality scene.
Referring to sub-graph 12c3 in fig. 12, compared with sub-graph 12c2, the object continues to move outward from the safety zone boundary, and the distance between the object and the boundary remains less than the second preset distance. Although the edge of the head detection ball now exceeds the safety zone, the center of gravity of the object is still inside it; this typically occurs when the object steps slightly out of bounds or makes an unintentional adjustment. This matches the third sub-case of case three: the object is outside the safety zone and its distance to the boundary is less than or equal to the second preset distance. At this time, the real scene is displayed in the target area of the virtual reality scene.
Referring to sub-graph 12d in fig. 12, compared with sub-graph 12c3, the object and the head detection ball continue to move outward; the object is now outside the safety zone and its distance to the safety zone boundary is greater than the second preset distance, which corresponds to case four. If the current detection ball were a hand detection ball, case five would apply instead of case four; case five is described below.
Case four: if the position of the object is determined based on the position of the first sub-device, the position of the object is currently determined based on the head detection ball. If the object is located outside the safety zone and its distance to the safety zone boundary is greater than the second preset distance, the head detection ball has left the safety zone. Since the head detection ball covers a wide range of the body (including, for example, the hips and feet), the whole body of the object has effectively exceeded the safety zone. The safety risk is now too high, so the second prompt information needs to be displayed, prompting the object that it has left the safety zone.
Referring to fig. 13, a schematic diagram is shown of an object that is located outside the safety zone with a distance to the safety zone boundary greater than the second preset distance, according to an embodiment of the present application. In fig. 13, the object is outside the safety zone and is shown the second prompt information: "Warning! You have left the safe area. Please return to the safe area or re-create the safe area." At this time, not only the second prompt information but also the safety zone boundary can be displayed, so that the object can correctly return to the safety zone, reducing the safety risk.
Case five: if the position of the object is determined based on the position of the second sub-device, the position of the object is currently determined based on the hand detection ball. If the object is located outside the safety zone and its distance to the safety zone boundary is greater than the second preset distance, the hand detection ball has left the safety zone. Because the hand is highly flexible, the hand detection ball may exceed the safety zone while the rest of the body does not, for example when the object merely reaches outside the virtual reality scene to pick up a cup of water. Based on this, to improve practicability, the real scene can continue to be displayed in the target area, so that the object can retrieve the required item from the real scene. The virtual reality scene remains displayed while the real scene is also shown, meeting the actual needs of the object.
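The five cases walked through above can be summarized as a small dispatch function. This is a hedged sketch: the threshold values, the `device` flag, and the returned labels are illustrative assumptions, not terms defined by the application.

```python
def display_state(distance, inside_safe_zone, device,
                  first_preset=0.6, second_preset=0.3):
    """Choose what to display from the object's distance to the safety
    zone boundary.  `device` says which detection ball fixed the
    position: 'head' (first sub-device) or 'hand' (second sub-device)."""
    if inside_safe_zone and distance > first_preset:
        return "virtual scene only"                 # case one
    if inside_safe_zone and distance > second_preset:
        return "virtual scene + first prompt"       # case two
    if distance <= second_preset:
        return "real scene in target area"          # case three (in/at/out)
    if device == "head":
        return "second prompt (left safe zone)"     # case four
    return "real scene in target area"              # case five: hand only
```

For example, `display_state(0.5, False, "hand")` keeps the real scene visible in the target area (case five), while the same position reported by the head detection ball triggers the second prompt instead (case four).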
Referring to fig. 14, a schematic diagram of the distance between an object and the safety zone according to an embodiment of the present application is shown. In fig. 14, a plurality of hollow rectangles form the virtual reality scene, a hand detection ball surrounds the hand, and the thick black line indicates the safety zone boundary. As shown in sub-graph 14d of fig. 14, if the hand detection ball of the object is outside the safety zone and the distance between the object and the safety zone boundary is greater than the second preset distance, not only the virtual reality scene but also the real scene is displayed for the object in the target area around the hand detection ball.
In addition, as shown in sub-graph 14a of fig. 14, the hand detection ball of the object is located inside the safety zone and the distance between the object and the safety zone boundary is greater than the first preset distance; at this time, only the virtual reality scene is displayed for the object. As shown in sub-graph 14b of fig. 14, the hand detection ball is inside the safety zone and the distance to the boundary is less than the first preset distance but greater than the second preset distance; at this time, the first prompt information is displayed together with the virtual reality scene (sub-graph 14b only shows the virtual reality scene near the hand region, so in practice the first prompt information may be located in front of the eyes rather than near the hand region). As shown in sub-graph 14c, the hand detection ball is inside the safety zone and the distance to the boundary is less than the second preset distance; at this time, the real scene is displayed for the object in the target area of the virtual reality scene.
The size and the position of the target area are not particularly limited, and can be set according to actual needs by a person skilled in the art.
As a possible implementation manner, the size of the target area may be dynamically adjusted according to the distance between the object and the safety zone boundary. Specifically, if the distance between the object and the safety zone boundary is less than or equal to the second preset distance, the size of the target area is determined based on that distance, and the real scene is displayed in the target area.
If the object is inside the safety zone, the closer the object is to the safety zone boundary, the larger the size of the target area; for example, the cross section between the detection ball and the safety zone boundary can be taken as the target area for displaying the real scene. In this way, the object can clearly perceive, from the size of the target area, that it is approaching the safety zone boundary, which helps the object transition from the virtual reality scene to the real scene and improves the experience. Handling positions determined by different sub-devices differently also makes the processing more targeted and improves practicability.
If the object is at the safety zone boundary, third prompt information can be displayed to prompt the object. Compared with conveying the position only through changes in the size of the target area, explicitly prompting that the object is at the safety zone boundary is more intuitive.
If the object is outside the safety zone and continues to move outward, but the distance between the object and the safety zone boundary is still less than or equal to the second preset distance, the object has only slightly exceeded the boundary. At this time the size of the target area can be kept unchanged, or it can continue to increase (i.e., if the object is outside the safety zone, the farther it is from the boundary, the larger the target area), and so on; the application is not particularly limited in this respect. With continued reference to fig. 14, the hand detection ball in sub-graph 14d has moved a greater distance from the center of the safety zone than in sub-graph 14c, so the target area in sub-graph 14d is larger than in sub-graph 14c.
If the object continues to move outward until the distance between the object and the safety zone boundary is greater than the second preset distance, the real intention of the object is to leave the safety zone. At this time, the display of the virtual reality scene can be canceled and only the real world displayed, or the second prompt information and the safety zone boundary can be displayed alongside the real world, so that the object can return to the safety zone and continue to experience the virtual reality scene.
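One way to realize the dynamic sizing described above is a piecewise function of the signed distance to the boundary (positive inside the safety zone, negative outside). The linear growth and the choice to hold the size once slightly out of bounds are assumptions; the description above leaves both open.

```python
def target_area_size(signed_distance, second_preset=0.3, max_size=1.0):
    """Size of the real-scene window; signed_distance > 0 means the
    object is inside the safety zone, < 0 means outside it."""
    if signed_distance > second_preset:
        return 0.0  # well inside the safety zone: no window needed
    if signed_distance >= 0:
        # Inside: the window grows linearly as the boundary approaches.
        return max_size * (1.0 - signed_distance / second_preset)
    # Slightly out of bounds: keep the size (continuing to grow it is
    # equally permissible per the description above).
    return max_size
```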
As a possible implementation manner, the position of the target area may be dynamically adjusted according to the gaze point of the object. Specifically, the gaze center point of the object is determined, and the position of the target area in the virtual reality scene is determined based on it. The gaze center point lies on the line-of-sight centerline of the object and may be estimated based on the head position of the object. By taking the gaze center point as the center point of the target area, the real scene follows actions such as the object raising its head, so that the object can see the real scene better and faster, reducing the safety risk.
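A top-view (2-D) simplification of centring the target area on the estimated gaze point; the yaw-only head model and the 2 m projection distance are assumptions made for the sketch.

```python
import math

def target_area_center(head_pos, yaw_rad, view_distance=2.0):
    """Project the line-of-sight centreline forward from the head
    position to place the centre of the target area."""
    x, y = head_pos
    return (x + view_distance * math.cos(yaw_rad),
            y + view_distance * math.sin(yaw_rad))

# Looking along +x from the origin places the window 2 m straight ahead.
print(target_area_center((0.0, 0.0), 0.0))  # -> (2.0, 0.0)
```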
As a possible implementation manner, the number of target areas may be determined according to the number of sub-devices, e.g., a first target area in the vicinity of (for example, directly in front of) the first sub-device and a second target area in the vicinity of the second sub-device. Because the size of a single target area is limited, the real scene may not be displayed in its entirety, so the real scene may be displayed across a plurality of target areas: a first real scene in the first target area and a second real scene in the second target area, where the first real scene is the real scene behind the first target display area and the second real scene is the real scene behind the second target display area.
That is, a first real scene is displayed in a first target area determined based on the position of the first sub-device, and a second real scene is displayed in a second target area determined based on the second sub-device. By displaying different real scenes based on the positions of different sub-devices, the object can watch the real scene outside the virtual reality scene from multiple angles while remaining in the virtual reality scene, which enlarges the display range of the real scene and improves the experience of the object.
The embodiment of the application is not particularly limited to the display content of the real scene, and can be set by a person skilled in the art according to actual needs. Two examples will be described below.
Mode one: determine the position of the target area in the virtual reality scene and stop rendering the virtual reality scene at that position, so that the real scene shows through in the target area, similar to cutting a hole in the virtual reality scene through which the real scene can be seen.
Mode two: acquire an image of the real scene; render the image to obtain a rendered picture; create a rendering texture, and attach it into the virtual reality scene based on a perspective shader so as to display the real scene in the target area. The rendering texture is used for storing the rendered picture, and its size is consistent with the size of the target area.
For example, a VR-based camera may acquire images of real scenes outside of the virtual reality scene and synchronize the acquired images of the real scenes in real time into an interface of the VR so that the interface of the VR renders the received images into the virtual reality scene. The rendering process is described below in conjunction with fig. 15.
Referring to fig. 15, a schematic cross-sectional view of a virtual reality scene according to an embodiment of the present application is shown. In fig. 15, the virtual reality scene resembles a hemisphere covering the object, so its cross section is a semicircle.
First, after an image of the real scene is acquired by a perspective camera (Passthrough Camera), the image is rendered to obtain a rendered picture. A new rendering texture (Render Texture) is then created, and the rendered picture is stored in it. The size, position, and content of the rendering texture are adjusted so that its size matches the size of the target area, its center coincides with the gaze center point of the object, and it stores only the content the object needs to see; for example, if the object is backing up and about to cross the safety zone boundary, the boundary behind the object is displayed in front of the object. Finally, a perspective shader (Passthrough Shader) is created to attach the rendered picture stored in the rendering texture into the virtual reality scene. With continued reference to fig. 15, the rendering texture occupies a portion of the virtual reality scene.
From the foregoing, the safety zone may be regular (generally for easy calculation, circular by default) or irregular (generally manually set for the subject to avoid obstacles existing in the real world). Two cases based on the safe zone are explained below.
Case one: the safety area is a circular area taking the initial position of the object as a circle center and taking the preset length as a radius.
In practical application, after the object wears the interactive device, the current position of the object is taken as an initial position, a circular area with the initial position as a circle center and the preset length as a radius is taken as a default safety area of the interactive device, and the safety area is in a regular shape.
Acquire the current position of the object; determine the distance difference between the current position and the initial position. If the difference between the preset length and this distance difference is less than or equal to the first preset distance, the distance between the object and the safety zone boundary is less than or equal to the first preset distance. If the difference between the preset length and the distance difference is less than or equal to the second preset distance, the distance between the object and the safety zone boundary is less than or equal to the second preset distance.
Referring to fig. 16, a schematic diagram of a regular-shaped safety zone according to an embodiment of the present application is shown. In fig. 16, point A is the initial position of the object, the length from point G to point A is g (i.e., the preset length), and the circular area is the safety zone. Assume the object moves from point A to point B; the position of point B is the current position, and the length of AB is the distance difference between the current position and the initial position. If the coordinates of point A are (x1, y1) and the coordinates of point B are (x2, y2), the length of AB can be expressed as:

|AB| = √((x2 − x1)² + (y2 − y1)²)
The difference between the preset length and the distance difference can be expressed as g − |AB|. If g − |AB| is less than or equal to the first preset distance, the distance between the object and the safety zone boundary is less than or equal to the first preset distance. If the absolute value of g − |AB| is less than or equal to the second preset distance (the absolute value covering the case where the object is already outside the circle), the distance between the object and the safety zone boundary is less than or equal to the second preset distance.
It should be noted that the embodiment of the present application does not particularly limit the manner of determining whether the object is inside or outside the safety zone based on the current position. For example, if g − |AB| is positive, the current position of the object is inside the safety zone; if g − |AB| is negative, the current position is outside the safety zone. Alternatively, a plane rectangular coordinate system can be established with the circle center as its origin: if the distance of point B (the current position) from the origin is less than the preset length g, point B is inside the safety zone.
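The circular-zone check reduces to the point distance formula above; a minimal sketch with illustrative coordinates:

```python
import math

def circular_boundary_distance(initial, current, preset_radius_g):
    """g - |AB|: positive means the current position is inside the
    circular safety zone, negative means outside it."""
    ab = math.dist(initial, current)  # |AB|, offset from the start point
    return preset_radius_g - ab

d = circular_boundary_distance((0.0, 0.0), (1.0, 0.0), 2.0)
print(d)      # 1.0 -> one unit inside the boundary
print(d > 0)  # True -> the current position is within the safety zone
```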
Case two: the safety zone is an irregular shape.
From the foregoing, the irregular shape is generally a region customized by the object. Taking a VR device as an example, the object may draw an irregular shape on the ground using handle rays or bare-hand rays; this irregular shape is the object's self-defined safety zone.
Referring to fig. 17, a schematic diagram of two security zones according to an embodiment of the present application is shown. After the object has set the safe zone, the interactive device determines each boundary point, or point representing each edge, based on the set safe zone, so as to determine the distance between the object and the safe zone boundary based on the points at the boundary.
Specifically: obtain the boundary point positions corresponding to the plurality of boundary points of the irregular shape; acquire the current position of the object; determine the distance differences between the current position and the plurality of boundary point positions, and take the smallest of these distance differences as the distance between the object and the safety zone boundary. Taking safety zone (1) in fig. 17 as an example, the current position of the object is point B, and the distance difference between point B and each boundary point included in safety zone (1) is calculated from the positions of the two points; the distance difference between point B and point P is the smallest, so |BP| is taken as the distance between the object and the safety zone boundary. In addition, the plane rectangular coordinate system described above may still be used to determine whether the current position of the object is inside the safety zone, which is not repeated here.
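For the irregular zone, the same idea applies over sampled boundary points; the rectangle below is a hypothetical stand-in for a user-drawn boundary.

```python
import math

def irregular_boundary_distance(current, boundary_points):
    """Smallest distance difference between the current position and
    the sampled boundary points (|BP| in the example above)."""
    return min(math.dist(current, p) for p in boundary_points)

boundary = [(0, 0), (4, 0), (4, 3), (0, 3)]  # hypothetical sampled points
print(irregular_boundary_distance((1.0, 1.0), boundary))  # ~1.414, nearest point (0, 0)
```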
From the foregoing, an interactive device may be provided with a plurality of sub-devices; in that case, the distance between the current position determined by each sub-device and each of the plurality of boundary point positions can be computed in a traversal manner. The following is a description with reference to fig. 18.
Referring to fig. 18, a schematic diagram of the distance between an object and the safety zone boundary according to an embodiment of the present application is shown. The boundary points in fig. 18 are denser than those in fig. 17, so the resulting distance between the object and the safety zone boundary is more accurate. In fig. 18, the interactive device includes a first sub-device located at the head, a second sub-device located at the left hand, and a third sub-device located at the right hand. The first sub-device corresponds to the head detection ball, and the second and third sub-devices correspond to hand detection balls. For each boundary point, the distance differences between the three detection balls and that boundary point are calculated. As shown in fig. 18, three distance differences can be calculated for the current boundary point: the distance difference S3 between the current boundary point and the head detection ball, and the distance differences S1 and S2 between the current boundary point and the two hand detection balls. Comparing S1, S2, and S3, S3 is the smallest in fig. 18, so S3 is taken as the distance between the object and the current boundary point. Each boundary point is processed in the same way as the current boundary point to determine its distance to the object, and the minimum over all boundary points then gives the distance between the object and the safety zone boundary.
Because the distance between the object and each boundary point is determined first, with the boundary point as the reference, and the boundary points are then traversed to obtain the distance between the object and the safety zone boundary, the number of traversals can be reduced, excessive occupation of the interactive device's performance avoided, the computing speed improved, and the experience of the object improved.
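The traversal of fig. 18 — nearest detection ball per boundary point, then the minimum over all boundary points — can be sketched as follows; the positions are invented for the example.

```python
import math

def object_boundary_distance(detection_balls, boundary_points):
    """For each boundary point take the nearest detection ball (S1..S3
    in fig. 18), then take the overall minimum as the object's distance
    to the safety zone boundary."""
    return min(
        min(math.dist(point, ball) for ball in detection_balls)
        for point in boundary_points
    )

balls = [(2.0, 2.0), (1.5, 2.0), (2.5, 2.0)]      # head, left, right hand
boundary = [(0.0, 2.0), (2.0, 0.0), (4.0, 2.0), (2.0, 4.0)]
print(object_boundary_distance(balls, boundary))   # 1.5 -> left-hand ball
```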
To facilitate further understanding of the technical solution provided by the embodiments of the present application, the method for displaying a virtual reality scene is described below as a whole, taking the interactive device as the execution body. The interactive device comprises a head-mounted display and two handles, and the safety zone is an irregular shape as shown in fig. 18. The object experiences a game based on virtual reality technology through the interactive device, i.e., the displayed virtual reality scene is a game picture constructed based on virtual reality technology, so the immersion of the object is higher.
Referring to fig. 19, a flow diagram of the virtual reality scene display method provided by an embodiment of the present application is shown.
S1901: starting.
S1902: and acquiring boundary point positions corresponding to the boundary points included in the safety zone and the current position of the object.
The current position of the subject may be determined based on the head mounted display and the two handles.
S1903: and determining the distance difference value between the current position and the positions of the plurality of boundary points, and taking the smallest distance difference value in the plurality of distance difference values as the distance between the object and the boundary of the safety zone.
S1904: and if the object is positioned in the safety zone included in the virtual reality scene and the distance between the object and the boundary of the safety zone is greater than the first preset distance, displaying the virtual reality scene.
S1905: and if the object is positioned in the safety zone and the distance between the object and the boundary of the safety zone is smaller than or equal to a first preset distance, displaying first prompt information in the virtual reality scene.
S1906: and if the distance between the object and the boundary of the safety zone is smaller than or equal to the second preset distance, displaying the real scene in the target area of the virtual reality scene.
S1907: if the position of the object is determined based on the position of the first sub-device, if the object is located outside the safety zone and the distance between the object and the boundary of the safety zone is greater than a second preset distance, displaying second prompt information.
S1908: if the position of the object is determined based on the position of the second sub-device, if the object is located outside the safe zone and the distance between the object and the boundary of the safe zone is greater than the second preset distance, displaying the real scene in the target area.
S1909: and (5) ending.
In this way, when the object plays the game through the interactive device, the game picture, i.e., the virtual reality scene, is constructed based on virtual reality technology. Different contents are displayed based on the distance between the object and the safety zone boundary, so that the first prompt information appears when the object starts to approach the boundary. If the distance continues to decrease after the object has seen the first prompt information, the real intention of the object is to go beyond the safety zone; at this time the game is not exited immediately, and instead the real scene is displayed in part of the virtual reality scene. This meets the object's need for the real scene without interrupting the game experience, improving practicability and user experience.
Aiming at the display method of the virtual reality scene, the application also provides a corresponding display device of the virtual reality scene, so that the display method of the virtual reality scene is applied and realized in practice.
Referring to fig. 20, the schematic structural diagram of a display device for a virtual reality scene according to an embodiment of the present application is shown. As shown in fig. 20, the display device 2000 of the virtual reality scene includes: a first display unit 2001, a second display unit 2002, and a third display unit 2003;
the first display unit 2001 is configured to display a virtual reality scene if an object is located in a safe area included in the virtual reality scene and a distance between the object and a safe area boundary is greater than a first preset distance, where the safe area boundary is a boundary of the safe area;
the second display unit 2002 is configured to display, if the object is located in the safety area and the distance between the object and the boundary of the safety area is less than or equal to the first preset distance, first prompt information in the virtual reality scene, where the first prompt information is used to prompt the distance between the object and the boundary of the safety area;
the third display unit 2003 is configured to display a real scene in a target area of the virtual reality scene if a distance between the object and the boundary of the safety area is less than or equal to a second preset distance, where the second preset distance is less than the first preset distance, and the target area is a partial area of the virtual reality scene.
According to the technical scheme, when the object is located in the safety zone included in the virtual reality scene and the distance between the object and the safety zone boundary is greater than the first preset distance, the object is not only inside the safety zone but also far from its boundary, so the virtual reality scene is displayed. When the object is inside the safety zone but the distance to the boundary is less than or equal to the first preset distance, the object is close to the boundary, so the first prompt information is displayed in the virtual reality scene to warn that the object may be about to exceed the safety zone; the user can then move away from the boundary in time, reducing the safety risk. When the distance between the object and the safety zone boundary is less than or equal to the second preset distance (which is less than the first preset distance), the object is very close to the boundary: it may be inside the safety zone and about to exceed it, or already outside it, and in either case a safety risk exists. The real scene is therefore displayed in the target area of the virtual reality scene, so that the object can see the real scene outside the virtual reality scene, reducing the safety risk.
Therefore, different contents are displayed based on the distance between the object and the boundary of the safety zone, so that first prompt information is displayed when the object starts to approach the boundary of the safety zone, if the distance between the object and the boundary of the safety zone is continuously reduced after the object sees the first prompt information, the real intention of the object is proved to be beyond the safety zone, at the moment, the virtual reality scene is not immediately exited, and the real scene is displayed in a partial area of the virtual reality scene, so that the requirement of the object on the real scene is met while the experience of the object on the virtual reality scene is not interrupted, and the practicability and the user experience are improved.
As a possible implementation manner, the first prompt information is a safe area fence, and the second display unit 2002 is specifically configured to:
if the object is located in the safe area and the distance between the object and the safe area boundary is smaller than or equal to the first preset distance, determining the transparency of the safe area fence based on the distance between the object and the safe area boundary, wherein the closer the distance between the object and the safe area boundary is, the lower the transparency of the safe area fence is;
in the virtual reality scene, the safe-zone fence is displayed based on transparency of the safe-zone fence.
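One plausible way to realize the distance-to-transparency mapping for the safe-zone fence is a linear ramp over the first preset distance. The patent only requires that transparency decrease as the object approaches the boundary, so the linear curve and the 0-to-1 scale below are assumptions:

```python
def fence_transparency(distance, first_preset_distance):
    """Map the distance to the safety-zone boundary to a fence transparency.

    Returns a value in [0, 1]: 1.0 means fully transparent (fence invisible),
    0.0 means fully opaque. The closer the object is to the boundary,
    the lower the transparency, as the embodiment requires.
    """
    # Clamp so distances beyond the threshold keep the fence fully transparent
    # and negative distances (already at/over the boundary) keep it opaque.
    d = max(0.0, min(distance, first_preset_distance))
    return d / first_preset_distance
```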
As a possible implementation manner, the first prompt information is text information, and the second display unit 2002 is specifically configured to:
and if the object is positioned in the safety zone and the distance between the object and the boundary of the safety zone is smaller than or equal to the first preset distance, displaying the text information in the virtual reality scene, wherein the text information comprises the distance between the object and the boundary of the safety zone.
As a possible implementation manner, the second display unit 2002 is specifically configured to:
and displaying the first prompt information and the safe zone boundary in real time in the virtual reality scene.
As a possible implementation manner, the second display unit 2002 is specifically configured to:
determining the transparency of the real scene, wherein the transparency of the real scene is determined based on the distance between the object and the safety zone boundary, and the closer the distance between the object and the safety zone boundary is, the lower the transparency of the real scene is;
and displaying the first prompt information in the virtual reality scene, and displaying the real scene based on the transparency of the real scene.
As a possible implementation manner, the virtual reality scene is constructed based on an interactive device, the interactive device includes a first sub-device and a second sub-device, the first sub-device and the second sub-device are located at different body parts of the object, and the display device 2000 of the virtual reality scene further includes a determining unit configured to:
Determining a location of the object based on the location of the first sub-device; or
based on the location of the second sub-device, a location of the object is determined.
As a possible implementation, the first sub-device is located at the head of the subject, and the second sub-device is located at the hand of the subject.
As a possible implementation manner, the display device 2000 of the virtual reality scene further includes a fourth display unit, configured to:
if the position of the object is determined based on the position of the first sub-device, and the object is located outside the safety zone with the distance between the object and the safety zone boundary greater than the second preset distance, display second prompt information, wherein the second prompt information is used to prompt that the object has left the safety zone.
As a possible implementation manner, the display device 2000 of the virtual reality scene further includes a fifth display unit, configured to:
if the position of the object is determined based on the position of the second sub-device, and the object is located outside the safety zone with the distance between the object and the safety zone boundary greater than the second preset distance, display the real scene in the target area.
As a possible implementation, the preset distance used in determining the position of the object based on the position of the first sub-device is greater than the preset distance used in determining the position of the object based on the position of the second sub-device.
As a possible implementation manner, the third display unit 2003 is specifically configured to:
if the distance between the object and the safety zone boundary is smaller than or equal to the second preset distance, determining the size of the target zone based on the distance between the object and the safety zone boundary; if the object is in the safety zone, the closer the object is to the boundary of the safety zone, the larger the size of the target area is;
and displaying the real scene in the target area.
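The relationship between the boundary distance and the target-area size can be sketched as a linear ramp between a minimum and a maximum radius. Both radii and the linear interpolation are illustrative assumptions; the patent only requires that the target area grow as the object approaches the boundary:

```python
def target_area_radius(distance, second_preset_distance,
                       min_radius=0.1, max_radius=1.0):
    """Size of the passthrough target area, modeled here as a circle radius.

    distance is the object's distance to the safety-zone boundary; the size
    ramps from min_radius (at the second preset distance) up to max_radius
    (at the boundary itself). All values are in scene units.
    """
    d = max(0.0, min(distance, second_preset_distance))
    t = 1.0 - d / second_preset_distance  # 1.0 at the boundary, 0.0 at the threshold
    return min_radius + t * (max_radius - min_radius)
```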
As a possible implementation manner, the third display unit 2003 is specifically configured to:
acquiring an image of a real scene;
rendering is carried out based on the image, and a rendering picture is obtained;
creating a rendering texture, wherein the rendering texture is used for storing the rendering picture, and the size of the rendering texture is consistent with the size of the target area;
the rendering texture is attached into the virtual reality scene based on a perspective shader to display the real scene in the target area.
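The render-texture and see-through-shader pipeline above runs on the GPU, but its effect can be modeled on the CPU as copying camera pixels into the virtual frame inside the target rectangle. This is a deliberately simplified sketch: a real implementation would store the camera image in a render texture sized to the target area and composite it with a perspective (see-through) shader:

```python
def composite_passthrough(virtual_frame, camera_frame, target_rect):
    """Show the real scene inside the target area of the virtual frame.

    virtual_frame and camera_frame are lists of rows of pixel values with the
    same dimensions; target_rect is (x, y, width, height) in pixel coordinates.
    Returns a new frame; the inputs are not modified.
    """
    x, y, w, h = target_rect
    out = [row[:] for row in virtual_frame]  # copy the rendered virtual scene
    for j in range(h):
        for i in range(w):
            # Inside the target area, the camera image replaces the
            # virtual scene, standing in for the attached render texture.
            out[y + j][x + i] = camera_frame[y + j][x + i]
    return out
```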
As a possible implementation manner, the third display unit 2003 is specifically configured to:
determining the position of the target area in the virtual reality scene;
the virtual reality scene is not displayed at the location of the target area so that the real scene is displayed in the target area.
As a possible implementation manner, the display device 2000 of the virtual reality scene further includes a determining unit, configured to:
determining a gaze center point of the object;
and determining the position of the target area in the virtual reality scene based on the gazing center point of the object.
As a possible implementation manner, if the safety zone is a circular area centered on the initial position of the object with a preset length as its radius, the display device 2000 of the virtual reality scene further includes a determining unit, configured to:
acquiring the current position of the object;
determining a distance difference between the current position and the initial position;
if the difference value between the preset length and the distance difference value is smaller than or equal to the first preset distance, the distance between the object and the safety zone boundary is smaller than or equal to the first preset distance;
And if the difference value between the preset length and the distance difference value is smaller than or equal to the second preset distance, the distance between the object and the boundary of the safety zone is smaller than or equal to the second preset distance.
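For the circular safety zone, the boundary distance computed in the steps above is simply the radius minus the distance from the current position to the center; a minimal sketch (2-D coordinates assumed for illustration):

```python
import math

def distance_to_circular_boundary(current_pos, initial_pos, preset_length):
    """Distance from the object to the boundary of a circular safety zone.

    The zone is centered at initial_pos with radius preset_length, as in the
    embodiment: boundary distance = preset length - distance to the center.
    A negative result means the object is already outside the zone.
    """
    dx = current_pos[0] - initial_pos[0]
    dy = current_pos[1] - initial_pos[1]
    return preset_length - math.hypot(dx, dy)
```

The result can then be compared against the first and second preset distances exactly as the steps describe.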
As a possible implementation manner, if the safety zone is an irregular shape, the display device 2000 of the virtual reality scene further includes a determining unit configured to:
obtaining boundary point positions respectively corresponding to a plurality of boundary points of the irregular shape;
acquiring the current position of the object;
and determining the distance differences between the current position and the positions of the plurality of boundary points, and taking the smallest of the plurality of distance differences as the distance between the object and the safety zone boundary.
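The irregular-boundary case can be sketched as a minimum over the sampled boundary points, exactly as described. Note that accuracy depends on how densely the boundary is sampled; point-to-segment distance would be more precise but is not what the embodiment specifies:

```python
import math

def distance_to_irregular_boundary(current_pos, boundary_points):
    """Approximate distance from the object to an irregular safety-zone boundary.

    boundary_points is a sequence of (x, y) positions sampled along the
    boundary; the smallest point-to-point distance is taken as the distance
    between the object and the boundary.
    """
    return min(math.hypot(p[0] - current_pos[0], p[1] - current_pos[1])
               for p in boundary_points)
```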
The embodiment of the application further provides a computer device, which is the computer device introduced above. The computer device may be a server or a terminal device, and the display device of the virtual reality scene may be built into either. The computer device provided by the embodiment of the application is introduced below from the perspective of hardware. Fig. 21 is a schematic structural diagram of a server, and fig. 22 is a schematic structural diagram of a terminal device.
Referring to fig. 21, which is a schematic diagram of a server structure according to an embodiment of the present application, the server 1400 may vary considerably in configuration or performance and may include one or more processors 1422 (for example, central processing units, CPUs), a memory 1432, one or more application programs 1442, and a storage medium 1430 (for example, one or more mass storage devices) for data 1444. The memory 1432 and the storage medium 1430 may be transitory or persistent storage. The program stored in the storage medium 1430 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the processor 1422 may be configured to communicate with the storage medium 1430 and execute, on the server 1400, the series of instruction operations stored in the storage medium 1430.
The server 1400 may also include one or more power supplies 1426, one or more wired or wireless network interfaces 1450, one or more input/output interfaces 1458, and/or one or more operating systems 1441, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 21.
Wherein, the CPU 1422 is configured to perform the following steps:
if the object is located in a safety zone included in the virtual reality scene and the distance between the object and the safety zone boundary is greater than a first preset distance, displaying the virtual reality scene, wherein the safety zone boundary is the boundary of the safety zone;
if the object is located in the safety zone and the distance between the object and the boundary of the safety zone is smaller than or equal to the first preset distance, displaying first prompt information in the virtual reality scene, wherein the first prompt information is used for prompting the distance between the object and the boundary of the safety zone;
and if the distance between the object and the boundary of the safety zone is smaller than or equal to a second preset distance, displaying a real scene in a target area of the virtual reality scene, wherein the second preset distance is smaller than the first preset distance, and the target area is a partial area of the virtual reality scene.
Optionally, the CPU 1422 may further execute the method steps of any specific implementation manner of the virtual reality scene display method in the embodiments of the present application.
Referring to fig. 22, which shows a block diagram of part of the structure of a smart phone serving as the terminal device provided by an embodiment of the present application, the smart phone includes: radio frequency (RF) circuitry 1510, memory 1520, input unit 1530, display unit 1540, sensor 1550, audio circuitry 1560, wireless fidelity (WiFi) module 1570, processor 1580, power supply 1590, and other components. Those skilled in the art will appreciate that the smart phone structure shown in fig. 22 is not limiting: the smart phone may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes each component of the smart phone in detail with reference to fig. 22:
The RF circuit 1510 may be used to receive and transmit signals during messaging or a call. In particular, after downlink information from a base station is received, it is passed to the processor 1580 for processing; in addition, uplink data is sent to the base station.
The memory 1520 may be used to store software programs and modules, and the processor 1580 implements various functional applications and data processing of the smartphone by running the software programs and modules stored in the memory 1520.
The input unit 1530 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the smart phone. In particular, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, can collect touch operations performed by the user on or near it and drive the corresponding connection device according to a predetermined program. In addition to the touch panel 1531, the input unit 1530 may include other input devices 1532, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 1540 may be used to display information input by a user or information provided to the user and various menus of the smart phone. The display unit 1540 may include a display panel 1541, and optionally, the display panel 1541 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The smart phone may also include at least one sensor 1550, such as a light sensor, a motion sensor, and other sensors. Other sensors that may also be configured in the smart phone, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described in detail here.
The audio circuitry 1560, speaker 1561, and microphone 1562 may provide an audio interface between the user and the smart phone. The audio circuit 1560 may transmit the electrical signal converted from received audio data to the speaker 1561, which converts it into a sound signal for output; conversely, the microphone 1562 converts collected sound signals into electrical signals, which are received by the audio circuit 1560 and converted into audio data. The audio data is output to the processor 1580 for processing and then sent, for example, to another smart phone via the RF circuit 1510, or output to the memory 1520 for further processing.
Processor 1580 is a control center of the smartphone, connects various parts of the entire smartphone with various interfaces and lines, performs various functions of the smartphone and processes data by running or executing software programs and/or modules stored in memory 1520, and invoking data stored in memory 1520. In the alternative, processor 1580 may include one or more processing units.
The smart phone also includes a power source 1590 (such as a battery) that powers the various components. Preferably, the power source is logically connected to the processor 1580 through a power management system, so that charging, discharging, and power consumption can be managed through the power management system.
Although not shown, the smart phone may further include a camera, a bluetooth module, etc., which will not be described herein.
In an embodiment of the present application, the memory 1520 included in the smart phone may store program codes and transmit the program codes to the processor.
The processor 1580 included in the smart phone may execute the method for displaying a virtual reality scene provided in the foregoing embodiment according to an instruction in the program code.
The embodiment of the application also provides a computer readable storage medium for storing a computer program for executing the method for displaying the virtual reality scene provided by the above embodiment.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the display method of the virtual reality scene provided in various optional implementations of the above aspect.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, where the above program may be stored in a computer readable storage medium, and when the program is executed, the program performs steps including the above method embodiments; and the aforementioned storage medium may be at least one of the following media: read-Only Memory (ROM), RAM, magnetic disk or optical disk, etc.
It should be noted that the embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments refer to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are described relatively briefly because they are substantially similar to the method embodiments, and reference may be made to the description of the method embodiments for the relevant parts. The apparatus and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment, which those of ordinary skill in the art can understand and implement without undue effort.
The foregoing is only one specific embodiment of the present application, but the protection scope of the present application is not limited thereto; any change or substitution easily conceived by those skilled in the art within the technical scope disclosed by the present application shall fall within the protection scope of the present application. The implementations provided in the above aspects may be further combined to provide further implementations. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A method for displaying a virtual reality scene, the method comprising:
if the object is located in a safety zone included in the virtual reality scene and the distance between the object and the safety zone boundary is greater than a first preset distance, displaying the virtual reality scene, wherein the safety zone boundary is the boundary of the safety zone;
if the object is located in the safety zone and the distance between the object and the boundary of the safety zone is smaller than or equal to the first preset distance, displaying first prompt information in the virtual reality scene, wherein the first prompt information is used for prompting the distance between the object and the boundary of the safety zone;
And if the distance between the object and the boundary of the safety zone is smaller than or equal to a second preset distance, displaying a real scene in a target area of the virtual reality scene, wherein the second preset distance is smaller than the first preset distance, and the target area is a partial area of the virtual reality scene.
2. The method of claim 1, wherein the first hint information is a safe area fence, and displaying the first hint information in the virtual reality scene if the object is located in the safe area and a distance between the object and the safe area boundary is less than or equal to the first preset distance comprises:
if the object is located in the safe area and the distance between the object and the safe area boundary is smaller than or equal to the first preset distance, determining the transparency of the safe area fence based on the distance between the object and the safe area boundary, wherein the closer the distance between the object and the safe area boundary is, the lower the transparency of the safe area fence is;
in the virtual reality scene, the safe-zone fence is displayed based on transparency of the safe-zone fence.
3. The method of claim 1, wherein the first prompt message is text message, and displaying the first prompt message in the virtual reality scene if the object is located in the safe zone and the distance between the object and the safe zone boundary is less than or equal to the first preset distance comprises:
And if the object is positioned in the safety zone and the distance between the object and the boundary of the safety zone is smaller than or equal to the first preset distance, displaying the text information in the virtual reality scene, wherein the text information comprises the distance between the object and the boundary of the safety zone.
4. The method of claim 1, wherein displaying the first hint information in the virtual reality scene comprises:
and displaying the first prompt information and the safe zone boundary in real time in the virtual reality scene.
5. The method of claim 1, wherein displaying the first hint information in the virtual reality scene comprises:
determining the transparency of the real scene, wherein the transparency of the real scene is determined based on the distance between the object and the safety zone boundary, and the closer the distance between the object and the safety zone boundary is, the lower the transparency of the real scene is;
and displaying the first prompt information in the virtual reality scene, and displaying the real scene based on the transparency of the real scene.
6. The method of claim 1, wherein the virtual reality scene is constructed based on an interactive device, the interactive device comprising a first sub-device and a second sub-device, the first sub-device and the second sub-device being located at different body parts of the subject, the method further comprising:
Determining a location of the object based on the location of the first sub-device; or
based on the location of the second sub-device, a location of the object is determined.
7. The method of claim 6, wherein the first sub-device is located at a head of the subject and the second sub-device is located at a hand of the subject.
8. The method of claim 7, wherein if the location of the object is determined based on the location of the first sub-device, the method further comprises:
and if the object is positioned outside the safety zone and the distance between the object and the boundary of the safety zone is greater than the second preset distance, displaying second prompt information, wherein the second prompt information is used for prompting the object to leave the safety zone.
9. The method of claim 7, wherein if the location of the object is determined based on the location of the second sub-device, the method further comprises:
and if the object is positioned outside the safety zone and the distance between the object and the boundary of the safety zone is greater than the second preset distance, displaying the real scene in the target zone.
10. The method of claim 6, wherein a preset distance used in determining the location of the object based on the location of the first sub-device is greater than a preset distance used in determining the location of the object based on the location of the second sub-device.
11. The method of claim 1, wherein displaying the real scene in the target area of the virtual reality scene if the distance between the object and the safe zone boundary is less than or equal to a second preset distance comprises:
if the distance between the object and the safety zone boundary is smaller than or equal to the second preset distance, determining the size of the target zone based on the distance between the object and the safety zone boundary; if the object is in the safety zone, the closer the object is to the boundary of the safety zone, the larger the size of the target area is;
and displaying the real scene in the target area.
12. The method of claim 1, wherein the displaying the real scene in the target area of the virtual reality scene comprises:
Acquiring an image of a real scene;
rendering is carried out based on the image, and a rendering picture is obtained;
creating a rendering texture, wherein the rendering texture is used for storing the rendering picture, and the size of the rendering texture is consistent with the size of the target area;
the rendering texture is attached into the virtual reality scene based on a perspective shader to display the real scene in the target area.
13. The method of claim 1, wherein the displaying the real scene in the target area of the virtual reality scene comprises:
determining the position of the target area in the virtual reality scene;
the virtual reality scene is not displayed at the location of the target area so that the real scene is displayed in the target area.
14. The method according to claim 1, wherein the method further comprises:
determining a gaze center point of the object;
and determining the position of the target area in the virtual reality scene based on the gazing center point of the object.
15. The method of claim 1, wherein if the safety zone is a circular area centered around the initial position of the object and having a predetermined length as a radius, the method further comprises:
Acquiring the current position of the object;
determining a distance difference between the current position and the initial position;
if the difference value between the preset length and the distance difference value is smaller than or equal to the first preset distance, the distance between the object and the safety zone boundary is smaller than or equal to the first preset distance;
and if the difference value between the preset length and the distance difference value is smaller than or equal to the second preset distance, the distance between the object and the boundary of the safety zone is smaller than or equal to the second preset distance.
16. The method of claim 1, wherein if the safety zone is an irregular shape, the method further comprises:
obtaining boundary point positions respectively corresponding to a plurality of boundary points of the irregular shape;
acquiring the current position of the object;
and determining the distance differences between the current position and the positions of the plurality of boundary points, and taking the smallest of the plurality of distance differences as the distance between the object and the safety zone boundary.
17. A display device for a virtual reality scene, the device comprising: a first display unit, a second display unit, and a third display unit;
the first display unit is configured to display the virtual reality scene if an object is located in a safety zone included in the virtual reality scene and the distance between the object and a safety zone boundary is greater than a first preset distance, wherein the safety zone boundary is the boundary of the safety zone;
the second display unit is configured to display, if the object is located in the safety zone and the distance between the object and the boundary of the safety zone is less than or equal to the first preset distance, first prompt information in the virtual reality scene, where the first prompt information is used to prompt the distance between the object and the boundary of the safety zone;
and the third display unit is used for displaying a real scene in a target area of the virtual reality scene if the distance between the object and the boundary of the safety zone is smaller than or equal to a second preset distance, wherein the second preset distance is smaller than the first preset distance, and the target area is a partial area of the virtual reality scene.
18. A computer device, the computer device comprising a processor and a memory:
the memory is used for storing a computer program and transmitting the computer program to the processor;
The processor is configured to perform the method of any of claims 1-16 according to the computer program.
19. A computer readable storage medium, characterized in that the computer readable storage medium is for storing a computer program for executing the method of any one of claims 1-16.
20. A computer program product comprising a computer program which, when run on a computer device, causes the computer device to perform the method of any of claims 1-16.
CN202310647510.7A 2023-06-01 2023-06-01 Virtual reality scene display method and related device Pending CN116954362A (en)

Publications (1)

Publication Number Publication Date
CN116954362A true CN116954362A (en) 2023-10-27


