CN111243103A - Method and device for setting safety area, VR equipment and storage medium

Method and device for setting safety area, VR equipment and storage medium

Info

Publication number
CN111243103A
Authority
CN
China
Prior art keywords
curve
virtual
scene image
real scene
closed curve
Prior art date
Legal status
Granted
Application number
CN202010014355.1A
Other languages
Chinese (zh)
Other versions
CN111243103B (en)
Inventor
舒玉龙
郑光璞
宋田
吴涛
Current Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Original Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Xiaoniao Kankan Technology Co Ltd
Priority to CN202010014355.1A
Publication of CN111243103A
Application granted
Publication of CN111243103B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082: Virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method and an apparatus for setting a safety area, a VR device and a storage medium. The method for setting the safety area is applied to a virtual reality (VR) device and comprises the following steps: acquiring a real scene image captured by a camera of the VR device, and respectively acquiring the pixel coordinates, in the real scene image, corresponding to a virtual ray of the VR device and to the intersection point of the virtual ray with a plane; performing curve fitting according to the pixel coordinates of the intersection points to generate a curve, and superimposing the curve and the virtual ray on the real scene image for display and output, so as to guide the user to move and avoid obstacles; and when the curve fitting based on the pixel coordinates of the intersection points produces a closed curve, saving the information of the closed curve and marking the region inside the closed curve as the safety area. The embodiments of the application improve the sense of immersion of the virtual reality device and optimize the user experience, without requiring additional hardware and at low cost.

Description

Method and device for setting safety area, VR equipment and storage medium
Technical Field
The application relates to the technical field of virtual reality, in particular to a method and a device for setting a safe region, VR equipment and a storage medium.
Background
VR (Virtual Reality) technology uses a computer to fuse and reconstruct multiple sources of information into a three-dimensional interactive virtual environment that gives the user a sense of immersion. When a user wearing a VR helmet walks in a space containing obstacles such as tables and chairs, the user cannot see the surrounding environment and is therefore likely to collide with the obstacles, which affects the user's safety.
Disclosure of Invention
The application aims to provide a method and an apparatus for setting a safety region, a VR device and a storage medium, which set a safety region, provide safety guidance and protection for the user, and improve the safety of the virtual reality device in use.
According to an aspect of an embodiment of the present application, there is provided a method for setting a safety region, which is applied to a virtual reality VR device, and the method for setting the safety region includes:
acquiring a real scene image captured by a camera of the VR device, and respectively acquiring the pixel coordinates, in the real scene image, corresponding to a virtual ray of the VR device and to the intersection point of the virtual ray with a plane;
performing curve fitting according to the pixel coordinates of the intersection points to generate a curve, and superimposing the curve and the virtual ray on the real scene image for display and output, so as to guide the user to move and avoid obstacles;
and when the curve fitting based on the pixel coordinates of the intersection points produces a closed curve, saving the information of the closed curve and marking the region inside the closed curve as the safety region.
According to another aspect of the embodiments of the present application, there is provided an apparatus for setting a safety region, which is applied to a virtual reality VR device, including:
a position determining module, configured to acquire a real scene image captured by a camera of the VR device, and to respectively acquire the pixel coordinates, in the real scene image, corresponding to a virtual ray of the VR device and to the intersection point of the virtual ray with a plane;
a curve generating module, configured to perform curve fitting according to the pixel coordinates of the intersection points to generate a curve, and to superimpose the curve and the virtual ray on the real scene image for display and output, so as to guide the user to move and avoid obstacles;
and a safety region setting module, configured to, when the curve fitting based on the pixel coordinates of the intersection points produces a closed curve, save the information of the closed curve and mark the region inside the closed curve as the safety region.
According to yet another aspect of an embodiment of the present application, there is provided a virtual reality VR device, comprising a processor and a memory;
the memory storing computer-executable instructions;
the computer-executable instructions, when executed by the processor, causing the processor to perform the method of setting a safety region as described above.
According to a further aspect of the embodiments of the application, there is provided a computer-readable storage medium having stored thereon one or more computer programs which, when executed, implement the method of setting a safety region as described above.
According to the technical solution of the embodiments of the application, the real scene image is acquired, and the virtual ray information and the information of the intersection point of the virtual ray with the ground obtained during safety region setting are rendered at the corresponding positions of the real scene image for the user to view, thereby assisting the user in setting the safety region, improving the sense of immersion of the virtual reality device and optimizing the user experience. In addition, the safety region setting method of the embodiments places low requirements on the scene, which does not need to be arranged in advance, and it avoids the high cost of installing additional hardware such as ultrasonic sensors.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below. It is appreciated that the following drawings depict only certain embodiments of the application and are therefore not to be considered limiting of its scope. For a person skilled in the art, it is possible to derive other relevant figures from these figures without inventive effort.
Fig. 1 is a flowchart illustrating a method for setting a security area according to an embodiment of the present application;
fig. 2 is a flowchart illustrating a method for setting a security area according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a generated closed curve shown in an embodiment of the present application;
fig. 4 is a block diagram illustrating an apparatus for setting a security area according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a virtual reality device shown in an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In actual use, when a user wearing a virtual reality helmet walks in a scene containing obstacles such as tables and chairs, the user cannot see the surrounding environment and is therefore likely to collide with the obstacles, which affects the user's safety. One existing scheme clears obstacles from the use scene and erects a soft mesh fence around it; obviously, this scheme places high requirements on the scene and involves a complex scene arrangement process. Another scheme uses ultrasonic or TOF (Time Of Flight) ranging sensors to detect obstacles and set a safety area; this requires additional hardware, is relatively costly, and is not suitable for large-scale popularization.
This embodiment provides a safety region setting scheme. The technical concept of the application is as follows: first, a mapping between the world coordinate system of the virtual scene and the actual use environment is established through an image processing algorithm, and the user can see the real scene in front through a camera mounted on the front of the VR helmet and displayed on the helmet screen. On this basis, the user judges whether obstacles exist and marks the safety region through an additional interaction means, and the coordinate information of these marks is stored and processed into a closed region. When the helmet screen is switched to a virtual scene, the closed region is rendered into the scene, which provides effective guidance and improves safety during use.
Fig. 1 is a schematic flowchart of a method for setting a safety region according to an embodiment of the present application. Referring to fig. 1, the method of this embodiment is applied to a virtual reality VR device and includes:
step S101, acquiring a real scene image captured by a camera of the VR device, and respectively acquiring the pixel coordinates, in the real scene image, corresponding to a virtual ray of the VR device and to the intersection point of the virtual ray with a plane;
step S102, performing curve fitting according to the pixel coordinates of the intersection points to generate a curve, and superimposing the curve and the virtual ray on the real scene image for display and output, so as to guide the user to move and avoid obstacles;
and step S103, when the curve fitting based on the pixel coordinates of the intersection points produces a closed curve, saving the information of the closed curve and marking the region inside the closed curve as the safety region.
As shown in fig. 1, the method of this embodiment implements safety region setting based on user interaction, solves the problem of how objects in a real scene interact with the virtual reality scene during a virtual reality experience, improves the sense of immersion of the virtual reality device, and optimizes the user experience. Moreover, the scene does not need to be arranged in advance, the cost is low, and the method is suitable for large-scale popularization and application.
Fig. 2 is a flowchart of a method for setting a safety region according to an embodiment of the present application, and a complete flow of the method is described below with reference to fig. 2. The method is applied to a virtual reality (VR) device.
Referring to fig. 2, the process starts by executing step S201, establishing a world coordinate system according to the real scene.
The eyes are the main source through which human beings acquire external information. Similarly, a visual SLAM (Simultaneous Localization And Mapping) algorithm acquires massive, redundant texture information from the scene through a visual sensor (such as a binocular camera) to perform localization and tracking and to predict dynamic targets in the scene. In this embodiment, a binocular-camera-based SLAM algorithm fused with an inertial sensor is adopted to localize the helmet of the VR device. Specifically, the binocular camera acquires environment image information of the real scene, the inertial sensor acquires motion information of the user wearing the helmet, and a world coordinate system corresponding to the real scene is constructed based on the environment image information, the motion information and the SLAM algorithm. The direction opposite to gravity is taken as the y-axis of the world coordinate system, the direction perpendicular to the helmet and pointing toward the user when the helmet is started is taken as the z-axis, and the first position coordinate of the helmet obtained by the system is taken as the origin of the world coordinate system.
Step S202, calculating the pose of the handle in the world coordinate system.
In this embodiment, image prompts assist the user in completing the safety region setting. To improve the user's sense of immersion, the pixel coordinates, in the real scene image, corresponding to the virtual ray of the VR device and to the intersection point of the virtual ray with a plane are respectively acquired, curve fitting is performed according to the pixel coordinates of the intersection points to generate a curve, and the curve is displayed in real time. The virtual ray is emitted by a handle of the VR device. Correspondingly, respectively acquiring the pixel coordinates corresponding to the virtual ray and to the intersection point comprises two parts. Acquiring the pixel coordinates corresponding to the virtual ray in the real scene image specifically includes: obtaining the coordinates of the handle in the world coordinate system according to the coordinates of the handle in the camera coordinate system and the pose of the helmet of the VR device; and obtaining the pixel coordinates corresponding to the handle in the real scene image according to the coordinates of the handle in the world coordinate system, the camera intrinsic parameters and the pose of the helmet of the VR device. Acquiring the pixel coordinates corresponding to the intersection point in the real scene image specifically includes: obtaining the coordinates of the intersection point in the world coordinate system based on the virtual ray equation and the plane equation; and obtaining the pixel coordinates corresponding to the intersection point in the real scene image according to the coordinates of the intersection point in the world coordinate system, the camera intrinsic parameters and the pose of the helmet of the VR device.
That is, in this embodiment, not only is the curve fitted from the intersection points of the virtual ray with the plane drawn at the corresponding position of the real scene image, but the virtual ray emitted by the handle operated by the user is also drawn at its corresponding position, so that the user can see in real time how the drawn curve gradually closes into a closed curve, which enhances interactivity.
Based on this, the method of this embodiment includes two processes: obtaining the pixel coordinates corresponding to the virtual ray in the real scene image, and obtaining the pixel coordinates corresponding to the intersection point in the real scene image. This step obtains the pixel coordinates corresponding to the virtual ray in the real scene image. Because the virtual ray is emitted by the handle, the start point of the direction vector of the virtual ray is the handle, so the pixel coordinates corresponding to the handle in the real scene image must first be calculated. The position of the handle in the world coordinate system can be calculated through the following process:
(1) Calculating the distance z from the handle to the camera plane (i.e., the depth of the handle);
The handle of this embodiment is fitted with special optical markers (for example, light beads with a specific structural feature). The coordinates u_L and u_R of the optical markers in the left and right images of the binocular camera are obtained through a feature matching algorithm. From the similar-triangle relationship of the binocular camera model, the distance z from the handle to the camera plane can then be obtained by the following formula:
z = f * b / (u_L - u_R)
where f is the camera focal length and b is the baseline of the binocular camera.
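As an illustration of the disparity relation above, the following sketch (the function and parameter names are illustrative assumptions, not part of the patent) computes the handle depth from the matched marker coordinates in a rectified stereo pair:

```python
def handle_depth(u_left: float, u_right: float, focal_px: float, baseline_m: float) -> float:
    """Depth of the handle from stereo disparity: z = f * b / (u_L - u_R).

    u_left, u_right: horizontal pixel coordinates of the optical marker in the
    rectified left and right images; focal_px: focal length in pixels;
    baseline_m: stereo baseline in meters.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("marker must have positive disparity in a rectified stereo pair")
    return focal_px * baseline_m / disparity
```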
(2) Calculating the coordinates P_c of the handle in the camera coordinate system;
Knowing the distance z from the handle to the camera plane and the coordinates u_L and u_R of the handle's optical markers in the left and right images, and combining them with the camera intrinsic parameters, the coordinates P_c of the handle in the camera coordinate system can be obtained.
(3) Calculating the coordinates P_w of the handle in the world coordinate system;
After the coordinates P_c of the handle in the camera coordinate system are obtained, the coordinates of the handle in the world coordinate system can be obtained from the six-degree-of-freedom pose of the VR helmet (the pose comprises position information in three degrees of freedom and angle information in three degrees of freedom):
P_w = R * P_c + T
where the rotation matrix R and the translation vector T are respectively the angle information and the position information in the six-degree-of-freedom pose of the helmet.
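Steps (2) and (3) can be sketched as follows, assuming a rectified pinhole model with left-camera intrinsics fx, fy, cx, cy; the helper names and the exact intrinsic parameterization are assumptions for illustration rather than the patent's implementation:

```python
import numpy as np

def handle_in_camera(u_left, v_left, z, fx, fy, cx, cy):
    """Back-project the marker pixel (u_left, v_left) at depth z into the
    left-camera coordinate system: P_c = z * K^-1 * (u, v, 1)."""
    x = (u_left - cx) * z / fx
    y = (v_left - cy) * z / fy
    return np.array([x, y, z])

def camera_to_world(P_c, R, T):
    """P_w = R * P_c + T, where R (3x3) and T (3,) come from the helmet's
    six-degree-of-freedom pose."""
    return R @ np.asarray(P_c, float) + np.asarray(T, float)
```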
Step S203, displaying the real scene image;
In this embodiment, the left camera of the binocular camera is taken as the center of the system's world coordinate system, and the image content acquired by the left camera is displayed on the screen of the VR helmet to guide the user in drawing the boundary line of the safety region.
And step S204, receiving the input ground height information.
As described above, the curve of this embodiment is obtained from the intersections of the virtual ray with a plane, which in principle may be any plane. Considering that if the curve can be drawn on the ground it means there are no other objects on the ground, and that this better matches the user's intuition, this embodiment sets the plane as the ground plane. To accommodate height differences between users, a step is provided in which the helmet receives the input vertical height h from the helmet to the ground.
Step S205, calculating the coordinates of the intersection point in the world coordinate system.
This includes receiving the ground height information input by the user according to his or her height, determining the ground plane equation according to the ground height information, and obtaining the coordinates of the intersection point in the world coordinate system based on the virtual ray equation and the ground plane equation.
Because the intersection point is the point where the virtual ray meets the ground, it is both a point on the virtual ray and a point on the ground. Based on this, a ray equation and a ground plane equation are established, and the coordinates of the intersection point in the world coordinate system can be determined by solving the two equations simultaneously.
For example, the direction vector of the ray emitted by the handle is determined from the handle coordinate system as:
u = R * (0, 1, 0) + T
The virtual ray equation is then:
p = p0 + u * t
where p0 is the coordinate of the handle in the world coordinate system, i.e. the aforementioned P_w; it is written as p0 here because the handle is the origin of the virtual ray and p0 represents the start point of the vector. The parameter t can be understood as the modulus of the vector, u is the vector direction, p0 is the start point, and p is the end point of the vector, which is also a point on the ray.
The ground plane equation is: (0, h, 0) · {(0, -h, 0) - P} = 0, where the symbol · denotes the dot product.
Here h is the ground height information received in the aforementioned step S204.
The coordinates P of the intersection point of the virtual ray and the ground plane can be obtained by solving the above two equations (the ground plane equation and the virtual ray equation) simultaneously.
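A minimal sketch of solving the ray and ground-plane equations simultaneously, assuming the ground normal is the world y-axis and the ground lies at y = -h as in the equations above (names and the degenerate-case handling are illustrative):

```python
import numpy as np

def ray_ground_intersection(p0, u, h, eps=1e-6):
    """Intersect the ray p = p0 + u*t with the ground plane y = -h.

    p0: ray origin (handle position P_w in world coordinates), u: ray direction,
    h: vertical height from the world origin down to the ground.
    Returns the intersection point P, or None if the ray is (nearly) parallel
    to the ground or points away from it.
    """
    p0, u = np.asarray(p0, float), np.asarray(u, float)
    if abs(u[1]) < eps:          # ray (nearly) parallel to the ground plane
        return None
    t = (-h - p0[1]) / u[1]      # solve p0.y + t * u.y = -h
    if t <= 0:                   # intersection behind the handle
        return None
    return p0 + t * u
```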
Step S206, calculating the pixel coordinates of the handle and of the intersection point in the real scene image, and rendering the intersection point and the ray emitted by the handle onto the real scene image for display.
In order to guide the user in drawing a line, i.e. in delimiting the extent of the safety area, in addition to displaying the actually captured scene on the screen, this embodiment simultaneously renders the handle ray, the intersection point of the ray with the ground, and the fitted curve of the virtual scene at the correct positions on the real scene image.
The pose of the handle in the world coordinate system was obtained in step S202, and the coordinates of the intersection point in the world coordinate system were obtained in step S205. In this step, the pixel coordinates of the ray emitted by the handle on the real scene image are calculated by the following formula:
P_uv = K * E * P_W
where K is the camera intrinsic matrix, P_W is the pose of the handle in the world coordinate system, and E is the extrinsic matrix obtained from the six-degree-of-freedom pose of the helmet, composed of the rotation matrix R and the translation vector T,
where R and T are respectively the angle information and the position information in the six-degree-of-freedom pose of the helmet.
Similarly, by substituting the coordinates of the intersection point P in the world coordinate system for the pose of the handle in the formula P_uv = K * E * P_W, the pixel coordinates of the intersection point P on the real scene image can be calculated.
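The projection P_uv = K * E * P_W can be sketched as below. This sketch assumes E is the world-to-camera extrinsic; since the R and T above map camera coordinates to world coordinates (P_w = R * P_c + T), their inverse is applied before the intrinsic matrix K:

```python
import numpy as np

def world_to_pixel(P_w, K, R, T):
    """Project a world point onto the image: P_uv = K * E * P_w.

    The camera-frame point is recovered as P_c = R^T * (P_w - T) and then
    projected with the intrinsic matrix K; points behind the camera (z <= 0)
    are rejected.
    """
    P_c = R.T @ (np.asarray(P_w, float) - np.asarray(T, float))
    if P_c[2] <= 0:
        return None
    uvw = np.asarray(K, float) @ P_c     # homogeneous image coordinates
    return uvw[:2] / uvw[2]              # pixel coordinates (u, v)
```

The same function can be applied to the handle position and to the intersection point P to obtain their pixel coordinates on the real scene image.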
After the pixel coordinates of the handle and of the intersection points in the real scene image are obtained, curve fitting is performed according to the pixel coordinates of the intersection points, the ray emitted by the handle is displayed on the real scene image in real time, and the fitted curve is shown for the user to view. This guides the user in drawing the boundary of the safety area, i.e. in drawing lines on the ground that determine the coverage of the safety area.
in this embodiment, the defined curve may form a closed curve, the inside of the closed curve is a safety region authorized by the user without any obstacle, when the closed curve is generated by curve fitting according to the pixel coordinates of the intersection point, the information of the closed curve is saved, and the area inside the closed curve is labeled as the safety region.
The user is guided to actively avoid obstacles when drawing the line, so that obstacles remain outside the resulting closed curve. As shown in fig. 3, the inside of the closed curve 302 is a safety area containing no obstacles, and the obstacles 301 lie outside the closed curve 302.
At this point, the safety area setting step is completed.
In this way, the curve indicating the boundary of the safety region is superimposed at the corresponding position of the actual scene image and shown on the VR screen for the user to view, so that the effect of drawing the line can be seen in real time. This is equivalent to an augmented reality effect and gives a better user experience.
In one embodiment, the method further comprises: when the VR device outputs a virtual scene, virtual scene safety region indication information generated according to the information of the closed curve is also output, so as to prompt the user to avoid obstacles. For example, when the screen of the VR device is switched to a virtual scene, the closed region is drawn in the virtual scene, which provides an effective guiding and warning effect. This embodiment does not limit how the virtual scene safety region indication information is generated from the information of the closed curve; it may be generated according to actual requirements.
It should be noted that the execution sequence of the foregoing steps in fig. 2 is not strictly limited; for example, step S202 and step S203 may be executed synchronously instead of executing step S202 first and then step S203.
In actual use, the line drawing result may not be very accurate. In order to improve the accuracy and precision of the safety zone, this embodiment provides a scheme for adjusting the closed curve. That is, the method further includes: after the closed curve is generated, continuing to acquire the pixel coordinates of new intersection points in the real scene image, performing curve fitting according to the pixel coordinates of the new intersection points to generate a new curve, and adjusting the closed curve according to the new curve.
Specifically, adjusting the closed curve according to the new curve includes: when the new curve and the closed curve form a new closed region whose area is larger than that of the safety region indicated by the closed curve, determining a target closed curve according to the new closed region and replacing the closed curve with the target closed curve.
For example, after the first closed curve is obtained, the drawn curves are connected end to end to form a safe closed region. When the user continues to draw a line that determines a new curve, and the new curve together with the existing closed curve forms a new closed region whose area is larger than that of the original safe closed region, the original region is replaced with the new safe closed region. When the new curve and the safe closed region form a new closed region whose area is smaller than that of the original, the safe closed region is not updated. When the new curve cannot form a new closed region with the closed curve, the closed curve is kept as it is.
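The replacement rule described above can be sketched with a polygon-area comparison; the shoelace formula used here is an assumption for illustration, since the patent does not prescribe a particular area computation:

```python
import numpy as np

def polygon_area(pts):
    """Area of a closed region approximated by the polygon through pts (shoelace formula)."""
    p = np.asarray(pts, float)
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def maybe_replace_region(current_region, new_region):
    """Keep the larger safety region: replace the stored closed curve only when
    the newly formed closed region has a larger area; otherwise keep the original."""
    if new_region is not None and polygon_area(new_region) > polygon_area(current_region):
        return new_region
    return current_region
```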
It should be noted that, in this embodiment, all curves and closed regions may be rendered on a real-time scene image captured by a camera, so as to assist a user in accurately drawing a line, improve user experience, and ensure safety of VR devices during use.
An embodiment of the present application provides an apparatus for setting a safety region. Fig. 4 is a block diagram of the apparatus for setting a safety region according to an embodiment of the present application. Referring to fig. 4, the apparatus 400 for setting a safety region is applied to a virtual reality VR device and comprises:
a position determining module 401, configured to acquire a real scene image captured by a camera of the VR device, and to respectively acquire the pixel coordinates, in the real scene image, corresponding to a virtual ray of the VR device and to the intersection point of the virtual ray with a plane;
a curve generating module 402, configured to perform curve fitting according to the pixel coordinates of the intersection points to generate a curve, and to superimpose the curve and the virtual ray on the real scene image for display and output, so as to guide the user to move and avoid obstacles;
and a safety region setting module 403, configured to, when the curve fitting based on the pixel coordinates of the intersection points produces a closed curve, save the information of the closed curve and mark the region inside the closed curve as the safety region.
In one embodiment of the present application, the apparatus 400 for setting a safety region further comprises: a prompting module, configured to, when the VR device outputs a virtual scene, also output virtual scene safety region indication information generated according to the information of the closed curve, so as to prompt the user to avoid obstacles.
In one embodiment of the application, the virtual ray is emitted by a handle of the VR device. The position determining module 401 is configured to obtain the coordinates of the handle in the world coordinate system according to the coordinates of the handle in the camera coordinate system and the pose of the helmet of the VR device; to obtain the pixel coordinates corresponding to the handle in the real scene image according to the coordinates of the handle in the world coordinate system, the camera intrinsic parameters and the pose of the helmet of the VR device; to obtain the coordinates of the intersection point in the world coordinate system based on the virtual ray equation and the plane equation; and to obtain the pixel coordinates corresponding to the intersection point in the real scene image according to the coordinates of the intersection point in the world coordinate system, the camera intrinsic parameters and the pose of the helmet of the VR device.
In one embodiment of the present application, the plane is a ground plane; the position determining module 401 is specifically configured to receive ground height information input by a user according to a height, and determine a ground plane equation according to the ground height information.
In an embodiment of the application, the curve generating module 402 is further configured to, after the closed curve is generated, continue to obtain pixel coordinates of a new intersection point in the real scene image, and perform curve fitting according to the pixel coordinates of the new intersection point to generate a new curve; and adjusting the closed curve according to the new curve.
In an embodiment of the application, the curve generating module 402 is specifically configured to, when the new curve and the closed curve form a new closed region whose area is larger than that of the safety region indicated by the closed curve, determine a target closed curve according to the new closed region and replace the closed curve with the target closed curve.
It should be noted that, for the specific implementation of each module in the foregoing apparatus embodiment, reference may be made to the specific implementation of the foregoing corresponding method embodiment, which is not described herein again.
Fig. 5 is a schematic structural diagram of a virtual reality device according to an embodiment of the present application. As shown in fig. 5, at the hardware level the virtual reality device comprises a processor and optionally an internal bus, a network interface and a memory. The memory may include volatile memory, such as Random-Access Memory (RAM), and may further include non-volatile memory, such as at least one disk memory. Of course, the virtual reality device also includes the hardware required for other functions, such as a handle.
The processor, the network interface and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 5, but this does not mean that there is only one bus or one type of bus.
The memory is used for storing programs. Specifically, a program may comprise program code including computer-executable instructions. The memory may include both volatile memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into memory and runs it, forming the apparatus for setting the safety area at the logical level. The processor executes the program stored in the memory to implement the method for setting the safety area as described above.
The method performed by the apparatus for setting a safety area disclosed in the embodiment of fig. 5 may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, the steps of the method for setting the safety area may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps and logical blocks disclosed in the embodiments of this specification may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of this specification may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method for setting the safety area in combination with its hardware.
The present application also provides a computer-readable storage medium.
The computer-readable storage medium stores one or more computer programs, the one or more computer programs comprising instructions which, when executed by a processor, implement the method of setting a safety area described above.
For the sake of clearly describing the technical solutions of the embodiments of the present application, the terms "first" and "second" are used in the embodiments to distinguish identical or similar items with substantially the same functions and effects. Those skilled in the art will understand that the terms "first" and "second" do not limit the quantity or the execution order.
While the foregoing is directed to embodiments of the present application, other modifications and variations of the present application may be devised by those skilled in the art in light of the above teachings. It should be understood by those skilled in the art that the foregoing detailed description is for the purpose of better explaining the present application, and the scope of protection of the present application shall be subject to the scope of protection of the claims.

Claims (10)

1. A method for setting a safety region, applied to a virtual reality (VR) device and comprising the following steps:
acquiring a real scene image captured by a camera of the VR device, and respectively acquiring the pixel coordinates, in the real scene image, corresponding to a virtual ray of the VR device and to the intersection point of the virtual ray with a plane;
performing curve fitting according to the pixel coordinates of the intersection points to generate a curve, and superimposing the curve and the virtual ray on the real scene image for display and output, so as to guide the user to move and avoid obstacles;
and when the curve fitting based on the pixel coordinates of the intersection points produces a closed curve, saving the information of the closed curve and marking the region inside the closed curve as the safety region.
2. The method of claim 1, further comprising:
when the VR device outputs a virtual scene, also outputting virtual scene safety region indication information generated according to the information of the closed curve, so as to prompt the user to avoid obstacles.
3. The method of claim 1, wherein the virtual ray is emitted by a handle of the VR device;
the obtaining of the virtual ray and the corresponding pixel coordinates of the intersection point of the virtual ray and a plane in the real scene image respectively includes:
acquiring pixel coordinates corresponding to the virtual rays in the real scene image, specifically including:
obtaining the coordinates of the handle in a world coordinate system according to the coordinates of the handle in a camera coordinate system and the pose of the helmet of the VR device;
according to the coordinates of the handle in a world coordinate system, camera internal parameters and the pose of a helmet of the VR equipment, obtaining corresponding pixel coordinates of the handle in the real scene image;
acquiring the pixel coordinates corresponding to the intersection point in the real scene image, specifically including:
obtaining the coordinates of the intersection point under a world coordinate system based on a virtual ray equation and a plane equation;
and obtaining the corresponding pixel coordinates of the intersection point in the real scene image according to the coordinates of the intersection point in a world coordinate system, the camera internal reference and the pose of the VR equipment helmet.
4. The method of claim 3, wherein the plane is a ground plane;
the obtaining the coordinates of the intersection point in the world coordinate system based on the virtual ray equation and the plane equation comprises:
receiving ground height information input by a user according to height, and determining a ground plane equation according to the ground height information.
5. The method of claim 1, further comprising:
after the closed curve is generated, continuing to acquire the pixel coordinates of new intersection points in the real scene image, and performing curve fitting according to the pixel coordinates of the new intersection points to generate a new curve;
and adjusting the closed curve according to the new curve.
6. The method of claim 5, wherein the adjusting the closed curve according to the new curve comprises:
when the new curve and the closed curve form a new closed region whose area is larger than that of the safety region indicated by the closed curve, determining a target closed curve according to the new closed region and replacing the closed curve with the target closed curve.
7. An apparatus for setting a safety region, applied to a Virtual Reality (VR) device, comprising:
a position determining module, configured to acquire a real scene image captured by a camera of the VR device, and to respectively acquire the pixel coordinates, in the real scene image, corresponding to a virtual ray of the VR device and to the intersection point of the virtual ray with a plane;
a curve generating module, configured to perform curve fitting according to the pixel coordinates of the intersection points to generate a curve, and to superimpose the curve and the virtual ray on the real scene image for display and output, so as to guide the user to move and avoid obstacles;
and a safety region setting module, configured to, when the curve fitting based on the pixel coordinates of the intersection points produces a closed curve, save the information of the closed curve and mark the region inside the closed curve as the safety region.
8. The apparatus of claim 7, further comprising:
a prompting module, configured to, when the VR device outputs a virtual scene, also output virtual scene safety region indication information generated according to the information of the closed curve, so as to prompt the user to avoid obstacles.
9. A virtual reality, VR, device comprising a processor and a memory;
the memory storing computer-executable instructions;
the computer-executable instructions, when executed by the processor, causing the processor to perform the method of setting a safety region of any one of claims 1-6.
10. A computer-readable storage medium having one or more computer programs stored thereon which, when executed, implement the method of setting a safety region of any one of claims 1 to 6.
CN202010014355.1A 2020-01-07 2020-01-07 Method and device for setting security area, VR equipment and storage medium Active CN111243103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010014355.1A CN111243103B (en) 2020-01-07 2020-01-07 Method and device for setting security area, VR equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010014355.1A CN111243103B (en) 2020-01-07 2020-01-07 Method and device for setting security area, VR equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111243103A true CN111243103A (en) 2020-06-05
CN111243103B CN111243103B (en) 2023-04-28

Family

ID=70874288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014355.1A Active CN111243103B (en) 2020-01-07 2020-01-07 Method and device for setting security area, VR equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111243103B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006018476A (en) * 2004-06-30 2006-01-19 Sega Corp Method for controlling display of image
US20160171771A1 (en) * 2014-12-10 2016-06-16 Sixense Entertainment, Inc. System and Method for Assisting a User in Remaining in a Selected Area While the User is in a Virtual Reality Environment
CN106873785A (en) * 2017-03-31 2017-06-20 网易(杭州)网络有限公司 For the safety custody method and device of virtual reality device
US20180373412A1 (en) * 2017-06-26 2018-12-27 Facebook, Inc. Virtual reality safety bounding box
CN109584148A (en) * 2018-11-27 2019-04-05 重庆爱奇艺智能科技有限公司 A kind of method and apparatus handling two-dimentional interface in VR equipment
US20190164343A1 (en) * 2017-11-30 2019-05-30 International Business Machines Corporation Modifying virtual reality boundaries based on usage
US10423241B1 (en) * 2017-07-31 2019-09-24 Amazon Technologies, Inc. Defining operating areas for virtual reality systems using sensor-equipped operating surfaces

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773724A (en) * 2020-07-31 2020-10-16 网易(杭州)网络有限公司 Method and device for crossing virtual obstacle
CN111773724B (en) * 2020-07-31 2024-04-26 网易(上海)网络有限公司 Method and device for crossing virtual obstacle
CN112037314A (en) * 2020-08-31 2020-12-04 北京市商汤科技开发有限公司 Image display method, image display device, display equipment and computer readable storage medium
CN114153307A (en) * 2020-09-04 2022-03-08 中移(成都)信息通信科技有限公司 Scene block processing method, device, electronic equipment and computer storage medium
CN111798573A (en) * 2020-09-08 2020-10-20 南京爱奇艺智能科技有限公司 Electronic fence boundary position determination method and device and VR equipment
CN111798573B (en) * 2020-09-08 2020-12-08 南京爱奇艺智能科技有限公司 Electronic fence boundary position determination method and device and VR equipment
CN113284258A (en) * 2021-07-13 2021-08-20 北京京东方技术开发有限公司 Method and device for setting safety zone and virtual reality equipment
US20230066524A1 (en) * 2021-09-02 2023-03-02 International Business Machines Corporation Management of devices in a smart environment
US11804018B2 (en) * 2021-09-02 2023-10-31 International Business Machines Corporation Management of devices in a smart environment
CN115022611A (en) * 2022-03-31 2022-09-06 青岛虚拟现实研究院有限公司 VR picture display method, electronic device and readable storage medium
CN115022611B (en) * 2022-03-31 2023-12-29 青岛虚拟现实研究院有限公司 VR picture display method, electronic device and readable storage medium
WO2024060890A1 (en) * 2022-09-21 2024-03-28 北京字跳网络技术有限公司 Information prompting method and apparatus for virtual terminal device, device, medium, and product

Also Published As

Publication number Publication date
CN111243103B (en) 2023-04-28

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant