CN111243103B - Method and device for setting security area, VR equipment and storage medium - Google Patents

Method and device for setting security area, VR equipment and storage medium

Info

Publication number
CN111243103B
CN111243103B (Application CN202010014355.1A)
Authority
CN
China
Prior art keywords
curve
virtual
scene image
real scene
closed curve
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010014355.1A
Other languages
Chinese (zh)
Other versions
CN111243103A (en)
Inventor
舒玉龙
郑光璞
宋田
吴涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Xiaoniao Kankan Technology Co Ltd
Original Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Xiaoniao Kankan Technology Co Ltd filed Critical Qingdao Xiaoniao Kankan Technology Co Ltd
Priority to CN202010014355.1A priority Critical patent/CN111243103B/en
Publication of CN111243103A publication Critical patent/CN111243103A/en
Application granted granted Critical
Publication of CN111243103B publication Critical patent/CN111243103B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082Virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method and apparatus for setting a safety area, a VR device, and a storage medium. The method for setting the safety area is applied to a virtual reality (VR) device and comprises the following steps: acquiring a real scene image shot by a camera of the VR device, and respectively acquiring a virtual ray of the VR device and the pixel coordinates corresponding to the intersection point of the virtual ray and a plane in the real scene image; performing curve fitting according to the pixel coordinates of the intersection points to generate a curve, superimposing the curve and the virtual ray on the real scene image, and displaying and outputting the result so as to guide the user to move and avoid obstacles; and when curve fitting according to the pixel coordinates of the intersection points generates a closed curve, storing the information of the closed curve and marking the area inside the closed curve as a safety area. The embodiments of the application improve the immersion of the virtual reality device, optimize the user experience, require no additional hardware, and are low in cost.

Description

Method and device for setting security area, VR equipment and storage medium
Technical Field
The application relates to the technical field of virtual reality, in particular to a method and device for setting a safe area, VR equipment and a storage medium.
Background
VR (Virtual Reality) technology uses a computer to fuse and reconstruct various kinds of information, generating a three-dimensional interactive virtual environment that provides the user with a sense of immersion. When a user wearing a VR headset walks in a space containing obstacles such as desks and chairs, the user cannot see the state of the surrounding environment and is therefore likely to collide with the obstacles, affecting the user's safety.
Disclosure of Invention
The purpose of the application is to provide a method, a device, VR equipment and a storage medium for setting a safety area, which are used for carrying out safety guidance protection on a user and improving the safety of virtual reality equipment during use.
According to an aspect of the embodiments of the present application, there is provided a method for setting a secure area, applied to a virtual reality VR device, the method including:
acquiring a real scene image shot by a camera of the VR equipment, and respectively acquiring a virtual ray of the VR equipment and a pixel coordinate corresponding to an intersection point of the virtual ray and a plane in the real scene image;
performing curve fitting according to the pixel coordinates of the intersection points to generate a curve, and superposing the curve and the virtual rays on the real scene image and then displaying and outputting the real scene image so as to guide a user to move and avoid obstacles;
and when curve fitting is carried out according to the pixel coordinates of the intersection points to generate a closed curve, storing information of the closed curve, and marking the area inside the closed curve as a safety area.
According to another aspect of an embodiment of the present application, there is provided an apparatus for setting a secure area, applied to a virtual reality VR device, including:
the position determining module is used for acquiring a real scene image shot by a camera of the VR equipment, and respectively acquiring a virtual ray of the VR equipment and a pixel coordinate corresponding to an intersection point of the virtual ray and a plane in the real scene image;
the curve generating module is used for generating a curve by curve fitting according to the pixel coordinates of the intersection points, and displaying and outputting the curve and the virtual rays after being overlapped to the real scene image so as to guide a user to move and avoid obstacles;
and the safety region setting module is used for storing the information of the closed curve when curve fitting is carried out according to the pixel coordinates of the intersection points to generate the closed curve, and marking the region inside the closed curve as a safety region.
According to yet another aspect of embodiments of the present application, there is provided a virtual reality VR device comprising a processor and a memory;
the memory stores computer executable instructions;
the processor, when executed, causes the processor to perform the method of setting a secure enclave as described above.
According to a further aspect of embodiments of the present application, there is provided a computer readable storage medium having stored thereon one or more computer programs which when executed implement a method of setting a secure area as described above.
According to the technical scheme of the embodiments of the application, a real scene image is obtained, and the virtual ray information and the information on the intersection points of the virtual ray and the ground obtained during the setting of the safety area are rendered at the corresponding positions of the real scene image for the user to view. This assists the user in setting the safety area, improves the immersion of the virtual reality device, and optimizes the user experience. In addition, the safety area setting method of the embodiments places few requirements on the scene, which does not need to be arranged in advance. Furthermore, the scheme of the embodiments avoids the high cost of installing additional hardware such as ultrasonic sensors.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be appreciated that the following drawings depict only certain embodiments of the application and are therefore not to be considered limiting of its scope. Other relevant drawings may be derived from these drawings by those of ordinary skill in the art without undue effort.
Fig. 1 is a flow chart of a method for setting a security area according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for setting a security zone according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a generated closed curve shown in an embodiment of the present application;
FIG. 4 is a block diagram of an apparatus for setting a safety area according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a virtual reality device according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise. The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the application, its applications, or its uses. Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but where appropriate they should be considered part of the specification. In all examples shown and discussed herein, any specific values should be construed as merely illustrative and not limiting; other examples of the exemplary embodiments may therefore have different values. It should also be noted that like reference numerals and letters denote like items in the following figures, so that once an item is defined in one figure, it need not be discussed further in subsequent figures.
In actual use, a user wearing a virtual reality helmet who walks in a scene containing obstacles such as tables and chairs cannot see the surrounding environment, is likely to collide with the obstacles, and the user's safety is therefore affected. One existing scheme clears the obstacles out of the usage scene and erects a fence similar to a soft net; obviously, this scheme places high demands on the scene and involves a complex arrangement process. Another scheme detects obstacles and sets a safety area using ultrasonic or TOF (Time of Flight) ranging sensors, which requires additional hardware, is costly, and is not suitable for large-scale adoption. Aiming at these technical problems, the present application enables the user to set a safety area with the virtual reality helmet while standing or sitting in place.
This embodiment provides a safety area setting scheme, whose technical conception is as follows: first, a mapping between the virtual scene world coordinate system and the actual usage environment is established through an image processing algorithm. Through a camera mounted on the front of the VR helmet, the user can see the real scene ahead on the helmet screen; on this basis, the user can judge whether obstacles exist and label the safety region through an additional interaction means, and the labeled coordinate information is stored and processed into a closed region. When the helmet screen switches to a virtual scene, the closed region is rendered into the scene, achieving an effective guiding effect and improving safety during use.
Fig. 1 is a flowchart of a method for setting a secure area according to an embodiment of the present application, referring to fig. 1, where the method for setting a secure area is applied to a virtual reality VR device, and includes:
step S101, acquiring a real scene image shot by a camera of the VR equipment, and respectively acquiring a virtual ray of the VR equipment and a pixel coordinate corresponding to an intersection point of the virtual ray and a plane in the real scene image;
step S102, curve fitting is carried out according to pixel coordinates of the intersection points to generate a curve, and the curve and the virtual rays are superimposed on the real scene image and then displayed and output so as to guide a user to move and avoid obstacles;
and step S103, when curve fitting is performed according to the pixel coordinates of the intersection points to generate a closed curve, storing information of the closed curve, and marking an area inside the closed curve as a safety area.
As can be seen from fig. 1, the method for setting a security area according to the present embodiment realizes security area setting based on user interaction, solves the problem of how objects in a real scene interact with a virtual reality scene in a virtual reality experience process, improves immersion of a virtual reality device, and optimizes user experience. And the scene is not required to be arranged in advance, the cost is low, and the method is suitable for large-scale popularization and application.
Fig. 2 is a flowchart of a method for setting a security area according to an embodiment of the present application; the complete flow of setting a security area is described below with reference to fig. 2. The method of this embodiment is applied to a virtual reality VR device.
Referring to fig. 2, the flow starts by first executing step S201, and establishing a world coordinate system according to a real scene;
eyes are a main source for human to acquire external information, and similarly, a visual SLAM (Simultaneous Localization And Mapping) algorithm is used for acquiring massive and redundant texture information from a scene through a visual sensor (such as a binocular camera) to position, track and predict a dynamic target in the scene. In the embodiment, a binocular camera-based SLAM algorithm is adopted and an inertial sensor is fused to position the helmet of the VR equipment. Specifically, the binocular camera acquires environmental image information in a real scene, the inertial sensor acquires motion information of a user wearing the head, and a world coordinate system corresponding to the real scene is constructed based on the environmental image information, the motion information and the SLAM algorithm. The opposite direction of the gravity direction is taken as the y-axis direction of the world coordinate system, the direction which is perpendicular to the helmet and points to the user when the helmet is started is determined as the z-axis direction, and the first position coordinate of the helmet, which is obtained through system processing, is determined as the origin of the world coordinate system.
Step S202, calculating the pose of the handle in the world coordinate system.
In this embodiment, the user is assisted in completing the safety area setting through image prompts. To improve the user's immersion, the virtual ray of the VR device and the pixel coordinates corresponding to the intersection of the virtual ray with a plane are respectively obtained in the real scene image, a curve is generated by curve fitting according to the pixel coordinates of the intersection points, and the curve is displayed in real time. The virtual ray here is emitted by the handle of the VR device. Correspondingly, respectively obtaining the pixel coordinates corresponding to the virtual ray and to its intersection with a plane in the real scene image includes two parts. Obtaining the pixel coordinates corresponding to the virtual ray in the real scene image specifically includes: obtaining the coordinates of the handle in the world coordinate system according to the coordinates of the handle in the camera coordinate system and the pose of the helmet of the VR device; and obtaining the pixel coordinates of the handle in the real scene image according to the coordinates of the handle in the world coordinate system, the camera intrinsic parameters, and the pose of the helmet of the VR device. Obtaining the pixel coordinates corresponding to the intersection point in the real scene image specifically includes: obtaining the coordinates of the intersection point in the world coordinate system based on the virtual ray equation and the plane equation; and obtaining the pixel coordinates of the intersection point in the real scene image according to the coordinates of the intersection point in the world coordinate system, the camera intrinsic parameters, and the pose of the helmet of the VR device.
That is, in this embodiment, not only the curve generated by fitting the virtual ray with a plane intersection point is drawn at the corresponding position of the real scene image, but also the virtual ray emitted by the user operating handle is drawn at the corresponding position of the real scene image, so that the user can see the process of forming a closed curve by closing the drawn curve step by step in real time, and the interactivity of the user is enhanced.
Accordingly, the method of this embodiment includes two processes: obtaining the pixel coordinates corresponding to the virtual ray in the real scene image, and obtaining the pixel coordinates corresponding to the intersection point in the real scene image. This step describes the former. Since the virtual ray is emitted by the handle, the starting point of the ray's direction vector is the handle; to obtain the pixel coordinates of the handle in the real scene image, the position of the handle in the world coordinate system must first be calculated, which can be done as follows:
(1) Calculating the distance z from the handle to the camera;

The handle of this embodiment carries dedicated optical markers (such as lamp beads with certain structural characteristics). The coordinates u_L and u_R of the optical markers in the left and right images of the binocular camera are obtained through a feature matching algorithm, and the distance z from the handle to the camera can then be obtained from the similar-triangle relationship in the binocular camera model via the following formula:

z = f*b/(u_L - u_R)

where f is the camera focal length and b is the baseline of the binocular camera.
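The depth formula above can be sketched as follows (the focal length, baseline, and marker coordinates are illustrative values, not taken from the patent):

```python
def stereo_depth(f_px, baseline, u_left, u_right):
    """Depth z = f*b/(u_L - u_R) for a rectified binocular camera."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("positive disparity required for a point in front of the rig")
    return f_px * baseline / disparity

# 30 px of disparity at f = 700 px and b = 6 cm puts the handle 1.4 m away
z = stereo_depth(f_px=700.0, baseline=0.06, u_left=420.0, u_right=390.0)
```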
(2) Calculating the coordinates P_c of the handle in the camera coordinate system;

Knowing the handle depth z and the coordinates u_L and u_R of the handle's optical markers in the left and right images of the binocular camera, the coordinates P_c of the handle in the camera coordinate system can be obtained in combination with the camera intrinsic parameters.
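A minimal sketch of this back-projection under a pinhole camera model (the intrinsic parameters fx, fy, cx, cy and all values below are illustrative assumptions, not taken from the patent):

```python
def backproject(u, v, z, fx, fy, cx, cy):
    """Recover camera-frame coordinates from a pixel (u, v) at depth z."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# pixel (420, 260) at 1.4 m with assumed intrinsics
p_c = backproject(u=420.0, v=260.0, z=1.4, fx=700.0, fy=700.0, cx=320.0, cy=240.0)
```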
(3) Calculating the coordinates P_w of the handle in the world coordinate system;

After the coordinates P_c of the handle in the camera coordinate system are obtained, the coordinates of the handle in the world coordinate system can be obtained according to the six-degree-of-freedom pose of the VR helmet (the pose comprises three degrees of freedom of position information and three degrees of freedom of angle information):

P_w = R*P_c + T

where the rotation matrix R and the translation vector T are, respectively, the angle information and the position information in the six-degree-of-freedom pose of the helmet.
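The transform P_w = R*P_c + T can be sketched directly (plain nested lists stand in for matrices; the pose values are illustrative):

```python
def cam_to_world(R, T, p_c):
    """Apply the rigid transform P_w = R * P_c + T (R is 3x3, T and p_c are 3-vectors)."""
    return [sum(R[i][j] * p_c[j] for j in range(3)) + T[i] for i in range(3)]

# with the identity rotation the handle is simply translated by T
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
p_w = cam_to_world(I3, [1.0, 2.0, 3.0], [0.2, 0.04, 1.4])
```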
Step S203, displaying a real scene image;
In this embodiment, the left camera of the binocular pair is taken as the center of the system's world coordinate system, and the image content collected by the left camera is displayed on the screen of the VR helmet to guide the user in drawing the boundary line of the safety area.
Step S204, receiving input ground height information.
As described above, the curve of this embodiment is obtained from the intersection of the virtual ray with a plane, which could in principle be any plane. However, if the curve can be drawn on the ground, this indicates that no other object occupies that ground, and a ground-level curve also matches the user's visual perception better. This embodiment therefore sets the plane to the ground plane and, to account for users of different heights, provides a step for receiving as input the vertical height h from the helmet to the ground.
In step S205, coordinates of the intersection point in the world coordinate system are calculated.
Ground height information entered by the user according to his or her height is received, and the ground plane equation is determined from it; the coordinates of the intersection point in the world coordinate system are then obtained from the virtual ray equation and the ground plane equation.

Because the intersection point lies both on the virtual ray and on the ground, a ray equation and a ground plane equation can be established, and solving the two equations simultaneously determines the coordinates of the intersection point in the world coordinate system.
For example, the direction vector of the ray emitted by the handle is determined from the handle coordinate system as:

u = R*(0, 1, 0) + T

and the virtual ray equation is:

P = P_0 + u*t

where P_0 is the coordinate of the handle in the world coordinate system, i.e. the P_w described above; the notation P_0 is used here because the handle is the origin of the virtual ray, so P_0 denotes the start of the vector. The parameter t can be understood as the modulus of the vector, u as the vector direction, P_0 as the vector start point, and P as the vector end point, which is also a point on the ray.
The ground plane equation is:

(0, h, 0) · ((0, -h, 0) - P) = 0

where "·" denotes the dot product and h is the ground height information received in the aforementioned step S204.
Solving the two equations above (the ground plane equation and the virtual ray equation) simultaneously yields the coordinate P of the intersection of the virtual ray with the ground plane.
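Since the ground plane equation reduces to P_y = -h in the world frame, the simultaneous solution can be sketched as follows (the function name and values are illustrative assumptions):

```python
def ray_ground_intersection(p0, u, h, eps=1e-12):
    """Intersect the ray P = P0 + u*t with the ground plane y = -h."""
    if abs(u[1]) < eps:
        return None            # ray is parallel to the ground
    t = (-h - p0[1]) / u[1]
    if t < 0:
        return None            # the ground is behind the ray origin
    return tuple(p0[i] + u[i] * t for i in range(3))

# handle 0.5 m above the ground plane, pointing forward and down
hit = ray_ground_intersection(p0=(0.0, 0.5, 0.0), u=(0.0, -1.0, 1.0), h=0.5)
```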
Step S206, calculating the pixel coordinates of the handle and the intersection point in the real scene image, and rendering the intersection point and the ray emitted by the handle onto the real scene image for display.
In order to guide the user in drawing lines, that is, in delimiting the size of the safety area, this embodiment not only displays the actually photographed scene on the screen but also renders the handle ray of the virtual scene, the intersection of the ray with the ground, and the fitted curve at the correct positions on the real scene image.
The pose of the handle in the world coordinate system was obtained in the aforementioned step S202, and the coordinates of the intersection point in the world coordinate system in the aforementioned step S205. In this step, the pixel coordinates of the handle on the real scene image are calculated by the following formula:

P_uv = K*E*P_W

where K is the camera intrinsic matrix, P_W is the position of the handle in the world coordinate system, and E is the homogeneous extrinsic matrix assembled from the helmet pose:

E = | R  T |
    | 0  1 |

where R and T are, respectively, the angle information and the position information in the six-degree-of-freedom pose of the helmet.

Similarly, by substituting the coordinates of the intersection point P in the world coordinate system for the handle position in the formula P_uv = K*E*P_W, the pixel coordinates of the intersection point P on the real scene image can be calculated.
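A sketch of the projection P_uv = K*E*P_W, under the assumption (not spelled out in the patent) that E maps a world point into the camera frame as p_c = R*p_w + T; all values are illustrative:

```python
def project(K, R, T, p_w):
    """Project a world point to pixel coordinates via intrinsics K and extrinsics (R, T)."""
    # extrinsics: camera-frame point p_c = R * p_w + T (assumed convention)
    p_c = [sum(R[i][j] * p_w[j] for j in range(3)) + T[i] for i in range(3)]
    # intrinsics: perspective divide by depth, then scale and shift
    u = K[0][0] * p_c[0] / p_c[2] + K[0][2]
    v = K[1][1] * p_c[1] / p_c[2] + K[1][2]
    return (u, v)

K = [[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]]
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# a point on the optical axis projects to the principal point
uv = project(K, I3, [0.0, 0.0, 0.0], [0.0, 0.0, 2.0])
```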
After the pixel coordinates of the handle and the intersection point in the real scene image are obtained, curve fitting is performed according to the pixel coordinates of the intersection points, and the fitted curve, together with the ray emitted by the handle, is displayed on the real scene image in real time for the user to view, guiding the user in drawing the safety area boundary; that is, guiding the user around the obstacles on the ground to determine the coverage of the safety area.

The drawn curve can close into a closed curve, the interior of which is a user-approved safe area containing no obstacles. When curve fitting according to the pixel coordinates of the intersection points generates a closed curve, the information of the closed curve is stored, and the area inside the closed curve is marked as the safety area.
When drawing the line, the user is guided to actively avoid obstacles so that each obstacle is left outside the resulting closed curve. The line drawing effect is shown in fig. 3: the inside of the closed curve 302 is a safe area containing no obstacle, while the obstacle 301 lies outside the closed curve 302.
Thus, the security area setting step is completed.
From the above, in this embodiment, the curve indicating the boundary of the safety area is superimposed on the corresponding position of the actual scene image for the user to view through VR screen display, so that the effect of drawing the line can be seen in real time, which is equivalent to the augmented reality effect, and the user experience is better.
In one embodiment, the method further comprises: and when the VR equipment outputs the virtual scene, outputting virtual scene safety area indication information generated according to the information of the closed curve together so as to prompt a user to avoid the obstacle. For example, when the VR device screen is switched to a virtual scene, the closed area is drawn therein, so that effective guiding and warning effects can be achieved. The embodiment does not limit how to generate the virtual scene safety area indication information according to the information of the closed curve, and the virtual scene safety area indication information should be performed according to actual requirements.
It should be noted that, the execution order of the steps in fig. 2 is not strictly limited, for example, the step S203 and the step S202 may be executed synchronously instead of executing the step S202 first and then executing the step S203.
In practical use, the result of a single line drawing may not be very accurate. In order to improve the precision of the safety area setting, this embodiment proposes a scheme for adjusting the closed curve. That is, the method further comprises: after the closed curve is generated, continuing to obtain the pixel coordinates corresponding to new intersection points in the real scene image, and performing curve fitting according to the pixel coordinates of the new intersection points to generate a new curve; and adjusting the closed curve according to the new curve.
Specifically, adjusting the closed curve according to the new curve includes: when the new curve and the closed curve form a new closed region whose area is larger than the safety region indicated by the closed curve, determining a target closed curve according to the new closed region and replacing the closed curve with the target closed curve.
For example, after the first closed curve is obtained, its ends are connected to form a safe closed region. The user may then continue the line drawing operation to produce a new curve: when the new curve and the closed curve form a new closed region whose area is larger than the original safe closed region, the new region replaces the original one; when the resulting region is smaller than the original, the safe closed region is not updated; and when the new curve cannot form a closed region with the existing one, the closed curve is left as it is.
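As an illustrative sketch of this replace-if-larger rule (using the shoelace formula for the area of a closed pixel-coordinate polygon, an assumed implementation detail not stated in the patent):

```python
def shoelace_area(poly):
    """Area of a simple closed polygon given as a list of (x, y) vertices."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def update_safe_region(current, candidate):
    # keep whichever closed region is larger, per the adjustment scheme
    return candidate if shoelace_area(candidate) > shoelace_area(current) else current

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]        # area 4
big_square = [(0.0, 0.0), (3.0, 0.0), (3.0, 3.0), (0.0, 3.0)]    # area 9
```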
It should be noted that, in this embodiment, all curves and closed areas may be rendered on a real-time scene image captured by a camera, so as to assist a user in accurately drawing lines, improve user experience and ensure safety of VR equipment during use.
An embodiment of the present application provides an apparatus for setting a security area, and fig. 4 is a block diagram of the apparatus for setting a security area shown in the embodiment of the present application, referring to fig. 4, an apparatus 400 for setting a security area is applied to a virtual reality VR device, and includes:
the position determining module 401 is configured to obtain a real scene image captured by a camera of the VR device, and respectively obtain a virtual ray of the VR device and a pixel coordinate corresponding to an intersection point where the virtual ray intersects a plane in the real scene image;
the curve generating module 402 is configured to perform curve fitting according to the pixel coordinates of the intersection points to generate a curve, and superimpose the curve and the virtual ray on the real scene image and then display and output the superimposed curve and the virtual ray, so as to guide a user to move and avoid an obstacle;
and the safety area setting module 403 is configured to store information of a closed curve when performing curve fitting according to pixel coordinates of the intersection points to generate the closed curve, and mark an area inside the closed curve as a safety area.
In one embodiment of the present application, the apparatus 400 for setting a security area further includes: and the prompting module is used for outputting virtual scene safety area indication information generated according to the information of the closed curve when the VR equipment outputs the virtual scene so as to prompt a user to avoid the obstacle.
In one embodiment of the present application, the virtual ray is issued by a handle of the VR device; the position determining module 401 is specifically configured to obtain a coordinate of the handle in a world coordinate system according to a coordinate of the handle in a camera coordinate system and a pose of a helmet of the VR device; obtaining corresponding pixel coordinates of the handle in the real scene image according to the coordinates of the handle in a world coordinate system, camera internal parameters and the pose of the helmet of the VR equipment; based on the virtual ray equation and the plane equation, obtaining coordinates of the intersection point under a world coordinate system; and obtaining the corresponding pixel coordinates of the intersection point in the real scene image according to the coordinates of the intersection point in the world coordinate system, the camera internal parameters and the pose of the helmet of the VR equipment.
In one embodiment of the present application, the plane is the ground plane; the position determining module 401 is specifically configured to receive ground height information input by the user according to his or her height, and to determine the ground plane equation according to the ground height information.
In one embodiment of the present application, the curve generating module 402 is further configured to, after the closed curve is generated, continue to acquire the pixel coordinates corresponding to new intersection points in the real scene image, perform curve fitting according to the pixel coordinates of the new intersection points to generate a new curve, and adjust the closed curve according to the new curve.
In one embodiment of the present application, the curve generating module 402 is specifically configured to, when the new curve and the closed curve form a new closed region whose area is larger than that of the safety area indicated by the closed curve, determine a target closed curve according to the new closed region and replace the closed curve with the target closed curve.
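The replacement rule of this embodiment — keep whichever closed curve bounds the larger safety area — can be illustrated with the shoelace formula. How the new curve and the old closed curve are merged into a single candidate curve is assumed to happen elsewhere; only the area comparison is sketched here:

```python
def polygon_area(points):
    """Shoelace formula for the area enclosed by a closed pixel curve."""
    a = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def maybe_replace(closed_curve, candidate_curve):
    """Replace the stored closed curve only when the candidate bounds a
    strictly larger area (a sketch of the patent's replacement rule)."""
    if polygon_area(candidate_curve) > polygon_area(closed_curve):
        return candidate_curve
    return closed_curve

old = [(0, 0), (100, 0), (100, 100), (0, 100)]   # area 10000 px^2
new = [(0, 0), (150, 0), (150, 120), (0, 120)]   # area 18000 px^2
assert maybe_replace(old, new) == new            # larger region wins
```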
It should be noted that the specific implementation of each module in the above apparatus embodiment may refer to the specific implementation of the corresponding method embodiment described above, and is not repeated here.
Fig. 5 is a schematic structural diagram of a virtual reality device according to an embodiment of the present application. As shown in Fig. 5, at the hardware level the virtual reality device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include internal memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk storage. Of course, the virtual reality device may also include hardware required for other services, such as handles.
The processor, the network interface, and the memory may be interconnected by the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bidirectional arrow is shown in Fig. 5, but this does not mean that there is only one bus or one type of bus.
The memory is used for storing a program. Specifically, the program may include program code, and the program code includes computer-executable instructions. The memory may include internal memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming the apparatus for setting a safety area at the logical level. The processor executes the program stored in the memory to implement the method of setting a safety area described above.
The method performed by the apparatus for setting a safety area disclosed in the embodiment shown in Fig. 5 of the present specification may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capability. In implementation, the steps of the method of setting a safety area described above may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, capable of implementing or executing the methods, steps, and logic blocks disclosed in the embodiments of this specification. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of this specification may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes, in combination with its hardware, the steps of the method of setting a safety area described above.
The present application also provides a computer-readable storage medium.
The computer-readable storage medium stores one or more computer programs, the one or more computer programs comprising instructions which, when executed by a processor, implement the method of setting a safety area described above.
To describe the technical solutions of the embodiments of the present application clearly, the terms "first", "second", and the like are used in the embodiments of the present application to distinguish between identical or similar items having substantially the same function and effect. Those skilled in the art will understand that the terms "first", "second", and the like do not limit the number or the execution order.
The foregoing is merely a specific implementation of the present application, and those skilled in the art can make other modifications and variations based on the above-described examples in light of the above teachings. Those skilled in the art should understand that the foregoing detailed description is intended only to illustrate the present application, and that the scope of protection of the present application is defined by the appended claims.

Claims (10)

1. A method of setting a safety area, applied to a virtual reality (VR) device, the method comprising:
acquiring a real scene image captured by a camera of the VR device, and respectively acquiring, in the real scene image, the pixel coordinates corresponding to a virtual ray of the VR device and to the intersection point of the virtual ray with a plane;
performing curve fitting according to the pixel coordinates of the intersection points to generate a curve, and superimposing the curve and the virtual ray on the real scene image for display and output, so as to guide a user to move while avoiding obstacles;
and, when the curve fitting according to the pixel coordinates of the intersection points generates a closed curve, storing information of the closed curve and marking the area inside the closed curve as a safety area.
2. The method according to claim 1, characterized in that the method further comprises:
when the VR device outputs a virtual scene, also outputting virtual scene safety area indication information generated according to the information of the closed curve, so as to prompt the user to avoid obstacles.
3. The method of claim 1, wherein the virtual ray is emitted by a handle of the VR device;
the respectively acquiring, in the real scene image, the pixel coordinates corresponding to the virtual ray and to the intersection point of the virtual ray with a plane comprises:
acquiring the pixel coordinates corresponding to the virtual ray in the real scene image, which specifically comprises:
obtaining the coordinates of the handle in the world coordinate system according to the coordinates of the handle in the camera coordinate system and the pose of the helmet of the VR device;
obtaining the pixel coordinates corresponding to the handle in the real scene image according to the coordinates of the handle in the world coordinate system, the camera intrinsic parameters, and the pose of the helmet of the VR device;
and acquiring the pixel coordinates corresponding to the intersection point in the real scene image, which specifically comprises:
obtaining the coordinates of the intersection point in the world coordinate system based on a virtual ray equation and a plane equation;
and obtaining the pixel coordinates corresponding to the intersection point in the real scene image according to the coordinates of the intersection point in the world coordinate system, the camera intrinsic parameters, and the pose of the helmet of the VR device.
4. The method according to claim 3, wherein the plane is the ground plane;
the obtaining the coordinates of the intersection point in the world coordinate system based on the virtual ray equation and the plane equation comprises:
receiving ground height information input by the user according to his or her height, and determining the ground plane equation according to the ground height information.
5. The method according to claim 1, characterized in that the method further comprises:
after the closed curve is generated, continuing to acquire the pixel coordinates corresponding to new intersection points in the real scene image, and performing curve fitting according to the pixel coordinates of the new intersection points to generate a new curve;
and adjusting the closed curve according to the new curve.
6. The method of claim 5, wherein the adjusting the closed curve according to the new curve comprises:
when the new curve and the closed curve form a new closed region and the area of the new closed region is larger than that of the safety area indicated by the closed curve, determining a target closed curve according to the new closed region, and replacing the closed curve with the target closed curve.
7. An apparatus for setting a safety area, applied to a virtual reality (VR) device, comprising:
a position determining module, configured to acquire a real scene image captured by a camera of the VR device, and to respectively acquire, in the real scene image, the pixel coordinates corresponding to a virtual ray of the VR device and to the intersection point of the virtual ray with a plane;
a curve generating module, configured to perform curve fitting according to the pixel coordinates of the intersection points to generate a curve, and to superimpose the curve and the virtual ray on the real scene image for display and output, so as to guide a user to move while avoiding obstacles;
and a safety area setting module, configured to, when the curve fitting according to the pixel coordinates of the intersection points generates a closed curve, store information of the closed curve and mark the area inside the closed curve as a safety area.
8. The apparatus of claim 7, wherein the apparatus further comprises:
and a prompting module, configured to, when the VR device outputs a virtual scene, also output virtual scene safety area indication information generated according to the information of the closed curve, so as to prompt the user to avoid obstacles.
9. A virtual reality (VR) device, comprising a processor and a memory, wherein
the memory stores computer-executable instructions; and
the processor executes the computer-executable instructions which, when executed, cause the processor to perform the method of setting a safety area according to any one of claims 1 to 6.
10. A computer-readable storage medium, wherein one or more computer programs are stored on the computer-readable storage medium, and the one or more computer programs, when executed, implement the method of setting a safety area according to any one of claims 1 to 6.
CN202010014355.1A 2020-01-07 2020-01-07 Method and device for setting security area, VR equipment and storage medium Active CN111243103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010014355.1A CN111243103B (en) 2020-01-07 2020-01-07 Method and device for setting security area, VR equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111243103A CN111243103A (en) 2020-06-05
CN111243103B (en) 2023-04-28

Family

ID=70874288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014355.1A Active CN111243103B (en) 2020-01-07 2020-01-07 Method and device for setting security area, VR equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111243103B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773724B (en) * 2020-07-31 2024-04-26 网易(上海)网络有限公司 Method and device for crossing virtual obstacle
CN112037314A (en) * 2020-08-31 2020-12-04 北京市商汤科技开发有限公司 Image display method, image display device, display equipment and computer readable storage medium
CN114153307A (en) * 2020-09-04 2022-03-08 中移(成都)信息通信科技有限公司 Scene block processing method, device, electronic equipment and computer storage medium
CN111798573B (en) * 2020-09-08 2020-12-08 南京爱奇艺智能科技有限公司 Electronic fence boundary position determination method and device and VR equipment
CN113284258B (en) * 2021-07-13 2021-11-16 北京京东方技术开发有限公司 Method and device for setting safety zone and virtual reality equipment
US11804018B2 (en) * 2021-09-02 2023-10-31 International Business Machines Corporation Management of devices in a smart environment
CN115022611B (en) * 2022-03-31 2023-12-29 青岛虚拟现实研究院有限公司 VR picture display method, electronic device and readable storage medium
CN117785085A (en) * 2022-09-21 2024-03-29 北京字跳网络技术有限公司 Information prompting method, device, equipment, medium and product of virtual terminal equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006018476A (en) * 2004-06-30 2006-01-19 Sega Corp Method for controlling display of image
CN106873785A (en) * 2017-03-31 2017-06-20 网易(杭州)网络有限公司 For the safety custody method and device of virtual reality device
CN109584148A (en) * 2018-11-27 2019-04-05 重庆爱奇艺智能科技有限公司 A kind of method and apparatus handling two-dimentional interface in VR equipment
US10423241B1 (en) * 2017-07-31 2019-09-24 Amazon Technologies, Inc. Defining operating areas for virtual reality systems using sensor-equipped operating surfaces

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9851786B2 (en) * 2014-12-10 2017-12-26 Sixense Entertainment, Inc. System and method for assisting a user in remaining in a selected area while the user is in a virtual reality environment
US10416837B2 (en) * 2017-06-26 2019-09-17 Facebook, Inc. Virtual reality safety bounding box
US10832477B2 (en) * 2017-11-30 2020-11-10 International Business Machines Corporation Modifying virtual reality boundaries based on usage


Similar Documents

Publication Publication Date Title
CN111243103B (en) Method and device for setting security area, VR equipment and storage medium
US10223834B2 (en) System and method for immersive and interactive multimedia generation
CN107223269B (en) Three-dimensional scene positioning method and device
US10489651B2 (en) Identifying a position of a marker in an environment
US7928977B2 (en) Image compositing method and apparatus for superimposing a computer graphics image on an actually-sensed image
CN113467600A (en) Information display method, system and device based on augmented reality and projection equipment
CN109801379B (en) Universal augmented reality glasses and calibration method thereof
CN110362193B (en) Target tracking method and system assisted by hand or eye tracking
CN112652016A (en) Point cloud prediction model generation method, pose estimation method and device
CN112651881B (en) Image synthesizing method, apparatus, device, storage medium, and program product
KR20110136012A (en) Augmented reality device to track eyesight direction and position
WO2022174594A1 (en) Multi-camera-based bare hand tracking and display method and system, and apparatus
KR101865173B1 (en) Method for generating movement of motion simulator using image analysis of virtual reality contents
CN111260789A (en) Obstacle avoidance method, virtual reality head-mounted device and storage medium
JP2015114905A (en) Information processor, information processing method, and program
CN113366491A (en) Eyeball tracking method, device and storage medium
CN112116631A (en) Industrial augmented reality combined positioning system
CN115525152A (en) Image processing method, system, device, electronic equipment and storage medium
JP6446465B2 (en) I / O device, I / O program, and I / O method
CN111089579B (en) Heterogeneous binocular SLAM method and device and electronic equipment
CN111354088A (en) Environment map establishing method and system
KR20190063601A (en) Augmentation Information Simulator for Providing Enhanced UI/UX of Realistic HUD
Wang et al. Im2fit: Fast 3d model fitting and anthropometrics using single consumer depth camera and synthetic data
US20180278902A1 (en) Projection device, content determination device and projection method
EP4256776A1 (en) Low motion to photon latency rapid target acquisition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant