CN114442804A - Boundary generation method and device, head-mounted display equipment and readable storage medium

Info

Publication number
CN114442804A
CN114442804A (application number CN202111676556.9A)
Authority
CN
China
Prior art keywords: wearer, image, head-mounted display, camera
Prior art date
Legal status
Pending
Application number
CN202111676556.9A
Other languages
Chinese (zh)
Inventor
周潇
Current Assignee
Goertek Technology Co Ltd
Original Assignee
Goertek Optical Technology Co Ltd
Application filed by Goertek Optical Technology Co Ltd
Priority to CN202111676556.9A
Publication of CN114442804A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the disclosure discloses a boundary generation method and device, a head-mounted display device and a readable storage medium. The method is applied to a head-mounted display device that is provided with a first camera and is in communication connection with an externally arranged second camera, and comprises the following steps: acquiring a first image acquired by the first camera and a second image acquired by the second camera, wherein the second image comprises a wearer of the head-mounted display device; generating and displaying a three-dimensional picture of the space where the wearer is located according to the first image and the second image; acquiring a target boundary in the case that a boundary demarcation instruction input by a user is received; and updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture.

Description

Boundary generation method and device, head-mounted display equipment and readable storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of virtual reality, and more particularly, to a boundary generation method and device, a head-mounted display device and a readable storage medium.
Background
At present, VR headsets support 6DoF (six Degrees of Freedom) scenes, that is, the user can wear the VR headset and move freely to experience the various virtual scenes provided by the device. However, while using the VR headset, the user cannot observe the surrounding real environment and may collide with obstacles when moving, which poses a certain safety hazard. For example, a user may hit a wall or furniture such as a table, a chair, or a cabinet when using a VR headset at home.
In the related art, in order to improve the safety of standalone VR headsets, the user can set a safe area, and when the user moves beyond the safe area while using the headset, a prompt is issued to prevent the user from colliding with other objects. However, in this method the safe area is a two-dimensional planar region, so the recognition accuracy is low. In addition, the size of the safe area is fixed and cannot adapt to the user's actual environment, resulting in a poor user experience.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a boundary generation method for a head-mounted display device, which can solve the problems of the prior art in which the safe area is a two-dimensional plane, the recognition accuracy is low, and the safety protection effect is poor.
According to a first aspect of the embodiments of the present disclosure, there is provided a boundary generating method applied to a head-mounted display device, the head-mounted display device being provided with a first camera, the head-mounted display device being in communication connection with an externally provided second camera, the method including:
acquiring a first image acquired by the first camera and a second image acquired by the second camera, wherein the second image comprises a wearer of the head-mounted display device;
generating and displaying a three-dimensional picture of the space where the wearer is located according to the first image and the second image;
under the condition that a boundary delimiting instruction input by a user is received, acquiring a target boundary;
and updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture.
Optionally, after the updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture, the method further includes:
and displaying the target boundary in the updated three-dimensional picture.
Optionally, after the updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture, the method further includes:
determining a first distance between the wearer and the target boundary according to the updated three-dimensional picture in the process of using the head-mounted display equipment by the wearer;
and outputting first prompt information when the first distance is smaller than or equal to a first threshold value.
Optionally, after determining the first distance between the wearer and the target boundary according to the updated three-dimensional picture during the use of the head-mounted display device by the wearer, the method further includes:
under the condition that the first distance is smaller than or equal to a second threshold value, acquiring and displaying a real world image acquired by the first camera;
wherein the second threshold is less than the first threshold.
Optionally, the determining, during the use of the head-mounted display device by the wearer, a first distance between the wearer and the target boundary according to the updated three-dimensional picture includes:
acquiring a third image acquired by the second camera in the process that the wearer uses the head-mounted display equipment, wherein the third image comprises the wearer;
generating contour information of the wearer according to the third image;
and determining a first distance between the wearer and the target boundary according to the contour information of the wearer and the updated three-dimensional picture.
Optionally, after the updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture, the method further includes:
determining whether a first object is included in the updated three-dimensional picture according to the first image and the second image, wherein the first object is an object other than the wearer;
determining contour information of a first object and position information of the first object relative to the wearer from the first image and the second image in a case where the first object is included within the updated three-dimensional picture;
adding a three-dimensional model of the first object within the updated three-dimensional picture according to the position information of the first object relative to the wearer and the contour information of the first object.
Optionally, after said adding a three-dimensional model of said first object within said updated three-dimensional picture according to position information of said first object relative to said wearer and contour information of said first object, said method further comprises:
determining a second distance between the wearer and the first object during use of the head mounted display device by the wearer;
and outputting second prompt information when the second distance is smaller than or equal to a third threshold value.
Optionally, after said determining a second distance between the wearer and the first object, the method further comprises:
under the condition that the second distance is smaller than or equal to a fourth threshold value, acquiring and displaying an image of the real world acquired by the first camera;
wherein the fourth threshold is less than the third threshold.
Optionally, said determining a second distance between the wearer and the first object during use of the head mounted display device by the wearer comprises:
acquiring a fourth image acquired by the second camera in the process that the wearer uses the head-mounted display equipment, wherein the fourth image comprises the wearer;
generating contour information of the wearer according to the fourth image;
determining a second distance between the wearer and the first object from the contour information of the wearer and the three-dimensional model of the first object.
Optionally, after the updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture, the method further includes:
in the process that a wearer uses the head-mounted display equipment, obtaining a first scene image according to an image collected by the first camera, and obtaining a second scene image according to an image collected by the second camera;
displaying the first scene image in a first area of a display screen of the head-mounted display device and displaying the second scene image in a second area of the display screen;
the first scene image is a scene image of a first person perspective, and the second scene image is a scene image of a third person perspective.
According to a second aspect of the embodiments of the present disclosure, there is provided a boundary generating apparatus applied to a head-mounted display device, the head-mounted display device being provided with a first camera, the head-mounted display device being in communication connection with an externally provided second camera, the apparatus including:
a first obtaining module, configured to obtain a first image collected by the first camera and a second image collected by the second camera, where the second image includes a wearer of the head-mounted display device;
the generating module is used for generating a three-dimensional picture of the space where the wearer is located according to the first image and the second image;
the display module is used for displaying a three-dimensional picture of the space where the wearer is located;
the second acquisition module is used for acquiring a target boundary under the condition of receiving a boundary demarcation instruction input by a user;
and the updating module is used for updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture.
According to a third aspect of the embodiments of the present disclosure, there is provided a head mounted display device including a first camera, the head mounted display device further including:
a memory for storing executable computer instructions;
a processor for performing the boundary generation method according to the first aspect of the embodiments of the present disclosure under the control of the executable computer instructions;
the processor is in communication connection with the first camera and a second camera arranged outside respectively to acquire images shot by the first camera and the second camera.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the boundary generation method according to the first aspect of the embodiments of the present disclosure.
According to the embodiment of the disclosure, when the head-mounted display device is used, a first image acquired by a first camera and a second image acquired by an externally arranged second camera are acquired, and a three-dimensional picture of the space where the wearer is located is generated and displayed according to the first image and the second image; then, in the case that a boundary demarcation instruction input by the user is received, a target boundary is acquired, and the three-dimensional picture is updated according to the target boundary to obtain an updated three-dimensional picture. Therefore, when the user sets the target boundary, the target boundary can be set according to the three-dimensional picture of the space where the wearer is located displayed by the head-mounted display device, which facilitates the user's operation and improves the accuracy of boundary demarcation. Furthermore, the three-dimensional picture of the space where the wearer is located is updated according to the target boundary, so that during the wearer's experience of the head-mounted display device, safety reminders are provided for the wearer based on the updated three-dimensional picture with higher accuracy, which improves the safety of using the head-mounted display device and solves the problem in the prior art that the protection effect is poor because only a two-dimensional boundary is defined on the ground.
Other features of, and advantages with, the disclosed embodiments will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the embodiments will be briefly described below. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope. For a person skilled in the art, it is possible to derive other relevant figures from these figures without inventive effort.
FIG. 1 is a schematic diagram of a hardware configuration of a control system that may be used to implement a boundary generation method of an embodiment;
FIG. 2 is a flow diagram of a boundary generation method according to one embodiment;
FIG. 3 is a functional block diagram of a boundary generating apparatus according to one embodiment;
fig. 4 is a hardware configuration diagram of a head-mounted display device according to an embodiment.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of parts and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the embodiments of the present disclosure unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Fig. 1 is a hardware configuration diagram of a control system that can be used to implement the boundary generation method of an embodiment.
As shown in fig. 1, the control system 100 includes a head mounted display apparatus 1000, and the head mounted display apparatus 1000 includes a first camera. The control system further comprises a second camera 2000, the second camera 2000 being in communication with the head mounted display device 1000, the second camera 2000 being positionable in an area outside a security boundary of the head mounted display device 1000. The second camera 2000 is used to capture an image including the wearer during the use of the head-mounted display device by the wearer.
In one embodiment, as shown in fig. 1, the head mounted display apparatus 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, an audio device 1700, and a first camera 1800. The processor 1100 may include, but is not limited to, a central processing unit CPU, a microprocessor MCU, or the like. The memory 1200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 1300 includes, for example, various bus interfaces such as a serial bus interface (including a USB interface), a parallel bus interface, and the like. Communication device 1400 is capable of wired or wireless communication, for example. The display device 1500 is, for example, a liquid crystal display, an LED display, a touch display, or the like. The input device 1600 includes, for example, a touch screen, a keyboard, a handle, and the like. Audio device 1700 may include a microphone that may be used to input voice information and a speaker that may be used to output voice information. The first camera 1800 may be used to acquire images.
In one embodiment, the input device is further provided with an indicator light. Taking the input device as a handle as an example, the handle is further provided with an indicator light, such as an infrared light, an LED light, and the like.
The head-mounted display device 1000 may be, for example, a VR (Virtual Reality) device, an AR (Augmented Reality) device, an MR (Mixed Reality) device, and the like, which is not limited in this disclosure.
In this embodiment, the memory 1200 of the head mounted display device 1000 is used to store instructions for controlling the processor 1100 to operate to implement or support the implementation of the boundary generation method according to any of the embodiments. The skilled person can design the instructions according to the solution disclosed in the present specification. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
It should be understood by those skilled in the art that although a plurality of components of the head-mounted display device 1000 are illustrated in fig. 1, the head-mounted display device 1000 of the embodiments of the present specification may include only some of these components and may also include other components, which is not limited herein.
The head mounted display device 1000 shown in FIG. 1 is illustrative only and is not intended to limit the present description, its applications, or uses in any way.
Various embodiments and examples according to the present disclosure are described below with reference to the drawings.
< method examples >
Fig. 2 illustrates a boundary generating method according to an embodiment of the present disclosure, which is applied to a head-mounted display device provided with a first camera and communicatively connected to an externally provided second camera. As shown in fig. 2, the boundary generating method provided by this embodiment may include the following steps S2100 to S2400.
Step S2100, acquiring a first image captured by the first camera and a second image captured by the second camera, where the second image includes a wearer of the head-mounted display device.
In this embodiment, the first camera may be a camera of the head-mounted display device itself. The first camera may be used to capture images of the surrounding environment. One or more first cameras can be arranged. For example, the head mounted display device is provided with five first cameras, wherein one first camera is a master camera and four first cameras are slave cameras.
The first image may be an image captured by a first camera. The first image may be an image of a surrounding scene of a wearer of the head mounted display device captured by the first camera. The first image is an image taken from the perspective of a first person, that is, the first image is an image taken from the perspective of a wearer of the head mounted display device.
The second camera may be an externally arranged camera that is in communication connection with the head-mounted display device. Illustratively, the second camera and the head-mounted display device may be connected via a wireless network, e.g., WIFI 6. The second camera may be used to photograph the wearer of the head-mounted display device. One or more second cameras may be provided. The second camera is disposed outside the safe-use area of the head-mounted display device. Generally, the distance from the second camera to the center of the safe-use area is a certain value, for example, 2 to 10 meters. In this way, when the user experiences the head-mounted display device, the user is prevented from touching the second camera, the normal operation of the head-mounted display device is guaranteed, and the safety of using the head-mounted display device is also improved.
The second image may be an image captured by the second camera that includes a wearer of the head mounted display device. That is, the second image may be an image taken at a third person perspective. According to the second image shot by the second camera, the position and the posture of the wearer of the head-mounted display equipment can be determined, and the contour information of the wearer of the head-mounted display equipment can also be determined.
In this embodiment, the acquiring of the first image acquired by the first camera may be acquiring one first image acquired by the first camera, or acquiring a plurality of first images acquired by the first camera. The second image acquired by the second camera may be one second image acquired by the second camera, or multiple second images acquired by the second camera. When the head-mounted display device is started, a user wears the head-mounted display device to look around, the first camera can collect a plurality of first images, and meanwhile the second camera continuously shoots a wearer to obtain a plurality of second images.
After step S2100, step S2200 is performed to generate and display a three-dimensional screen of the space where the wearer is located, based on the first image and the second image.
In the present embodiment, the three-dimensional picture of the space in which the wearer is located may be a picture reflecting the space in which the wearer of the head-mounted display device is located. The first images may comprise images of the scene around the wearer taken at different angles. The second image is an image captured by the externally provided second camera, and may be an image of the space in which the wearer is located captured from a third-person perspective. Therefore, the three-dimensional picture of the space where the wearer is located can be constructed by combining the first images and the second images, and is displayed on the display screen of the head-mounted display device, so that the user can conveniently demarcate a target boundary.
After step S2200, step S2300 is executed to acquire a target boundary in a case where a boundary delineation instruction input by the user is received.
In this embodiment, the boundary delineation instruction input by the user may be input through a controller (e.g., a handle) of the head-mounted display device. The target boundary may be a boundary generated from the boundary delineation instruction input by the user, and may be demarcated according to the actual situation when the user first uses the head-mounted display device. Illustratively, the target boundary may be a closed figure, e.g., a diamond, a rectangle, a circle, etc.
In specific implementation, after the head-mounted display device is started, a plurality of first images collected by the first camera and a plurality of second images collected by the second camera are acquired, and a three-dimensional picture of a space where a wearer is located is generated and displayed according to the plurality of first images and the plurality of second images. After that, the user can demarcate the target boundary according to the indication of the PC terminal. In particular, a center point of the target boundary may be determined, after which the user may demarcate the target boundary through a controller (e.g., a handle) of the head mounted display device. In this embodiment, when the user demarcates the target boundary, the user can demarcate the target boundary according to the three-dimensional picture of the space where the wearer is located displayed on the display screen, and the target boundary is displayed in real time in the three-dimensional picture.
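As an illustration only (the patent does not specify any particular data structure for the demarcation step), the following minimal Python sketch shows one way the floor points traced with the controller could be accumulated into a closed target boundary. The class name, the sampling interface and the 0.05 m closing tolerance are assumptions made for this sketch, not elements of the patent.

```python
# Illustrative sketch: accumulating a floor-level target boundary from
# controller samples. All names and the closing tolerance are assumptions.
from dataclasses import dataclass, field
from math import dist
from typing import List, Tuple

Point2D = Tuple[float, float]  # (x, z) on the floor plane, in metres

@dataclass
class BoundaryTracer:
    close_tolerance: float = 0.05          # snap-to-start distance (assumed)
    points: List[Point2D] = field(default_factory=list)

    def add_controller_sample(self, p: Point2D) -> bool:
        """Append a traced floor point; return True once the loop closes."""
        if self.points and len(self.points) > 2 and dist(p, self.points[0]) <= self.close_tolerance:
            return True                    # the polygon has closed back at the start point
        self.points.append(p)
        return False

    def target_boundary(self) -> List[Point2D]:
        """The demarcated target boundary as a closed polygon."""
        if len(self.points) < 3:
            raise ValueError("boundary needs at least three points")
        return self.points
```

The closed polygon returned by such a tracer would then serve as the target boundary used in step S2400.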
After step S2300, step S2400 is executed to update the three-dimensional picture according to the target boundary, so as to obtain an updated three-dimensional picture.
In specific implementation, in the process of demarcating a target boundary by a user, a plurality of fifth images acquired in real time through the first camera and a plurality of sixth images acquired in real time through the second camera are acquired, and a three-dimensional picture of a space where the wearer is located is updated according to the plurality of fifth images and the plurality of sixth images to obtain an updated three-dimensional picture. The updated three-dimensional picture is constructed by a first coordinate system, the first coordinate system can be a three-dimensional space coordinate system established by taking the head-mounted display device as a coordinate origin, the cross section of the updated three-dimensional picture is a target boundary, and the updated three-dimensional picture comprises a plurality of boundary surfaces which surround the target boundary.
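The sketch below illustrates, under the assumptions of this description, how a target boundary demarcated on the floor could be turned into the boundary surfaces of the updated three-dimensional picture in the first coordinate system (origin at the head-mounted display device) by vertical extrusion. The 2.5 m wall height and the data layout are illustrative assumptions, not values taken from the patent.

```python
# Illustrative sketch: extruding the target boundary polygon into vertical
# boundary surfaces in the first coordinate system. Height and layout assumed.
from typing import List, Tuple

Point2D = Tuple[float, float]                      # (x, z) on the floor, metres
Quad3D = Tuple[Tuple[float, float, float], ...]    # four (x, y, z) corners

def extrude_boundary(boundary: List[Point2D], height: float = 2.5) -> List[Quad3D]:
    """Build one vertical quad per boundary edge, enclosing the demarcated space."""
    walls: List[Quad3D] = []
    n = len(boundary)
    for i in range(n):
        (x0, z0), (x1, z1) = boundary[i], boundary[(i + 1) % n]
        walls.append((
            (x0, 0.0, z0), (x1, 0.0, z1),        # bottom edge on the floor
            (x1, height, z1), (x0, height, z0),  # top edge at the assumed height
        ))
    return walls
```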
According to the embodiment of the disclosure, when the head-mounted display device is used, a first image acquired by a first camera and a second image acquired by an externally arranged second camera are acquired, and a three-dimensional picture of the space where the wearer is located is generated and displayed according to the first image and the second image; then, in the case that a boundary demarcation instruction input by the user is received, a target boundary is acquired, and the three-dimensional picture is updated according to the target boundary to obtain an updated three-dimensional picture. Therefore, when the user sets the target boundary, the target boundary can be set according to the three-dimensional picture of the space where the wearer is located displayed by the head-mounted display device, which facilitates the user's operation and improves the accuracy of boundary demarcation. Furthermore, the three-dimensional picture of the space where the wearer is located is updated according to the target boundary, so that during the wearer's experience of the head-mounted display device, safety reminders are provided for the wearer based on the updated three-dimensional picture with higher accuracy, which improves the safety of using the head-mounted display device and solves the problem in the prior art that the protection effect is poor because only a two-dimensional boundary is defined on the ground.
In an embodiment, after the updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture, the method may further include: and displaying the target boundary in the updated three-dimensional picture.
In this embodiment, after the three-dimensional picture is updated according to the target boundary to obtain the updated three-dimensional picture, the position of the wearer wearing the head-mounted display device in the updated three-dimensional picture can be determined, and further, whether the wearer is close to the target boundary can be determined, so that when the wearer is close to the target boundary, a prompt is given, the wearer is prevented from colliding with an article outside the target boundary, and the use safety of the head-mounted display device is improved. The following examples are given by way of illustration.
In an embodiment, after the updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture, the method may further include: step S3100-step S3200.
Step S3100, determining a first distance between the wearer and the target boundary according to the updated three-dimensional picture while the wearer uses the head-mounted display device.
Illustratively, the updated three-dimensional picture may include boundary surfaces corresponding to the target boundary. The first distance may be the distance between the wearer's contour and the boundary surface corresponding to the target boundary.
In a more specific example, the step of determining a first distance between the wearer and the target boundary according to the updated three-dimensional picture during the process of using the head-mounted display device by the wearer may further include: acquiring a third image acquired by the second camera in the process that the wearer uses the head-mounted display device, wherein the third image comprises the wearer; generating contour information of the wearer according to the third image; and determining a first distance between the wearer and the boundary surface corresponding to the target boundary according to the contour information of the wearer and the updated three-dimensional picture. It should be noted that a plurality of consecutive third images acquired by the second camera may be acquired, and the contour information of the wearer may be constructed in the first coordinate system according to the plurality of third images. The first coordinate system may be a three-dimensional space coordinate system established with the head-mounted display device as the coordinate origin.
For example, the first distance may be a distance between the hand of the wearer and a boundary surface corresponding to the target boundary.
In a more specific example, the step of determining a first distance between the wearer and the target boundary according to the updated three-dimensional picture during the process of using the head-mounted display device by the wearer may further include: acquiring a third image acquired by the second camera in the process that the wearer uses the head-mounted display equipment, wherein the third image comprises an indicator lamp of a controller of the head-mounted display equipment; generating hand contour information of the wearer according to the third image; and determining a first distance between the hand of the wearer and a boundary surface corresponding to the target boundary according to the hand contour information of the wearer and the updated three-dimensional picture. It should be noted that, a plurality of consecutive third images acquired by the second camera may be acquired, and the hand contour information of the wearer may be constructed in the first coordinate system according to the plurality of third images. The first coordinate system may be a three-dimensional space coordinate system established with the head-mounted display device as a coordinate origin.
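For illustration only, the sketch below computes a first distance of the kind described above as the minimum horizontal distance from the wearer's contour points (or hand contour points) to the boundary surfaces. Because the boundary surfaces are assumed here to be vertical extrusions of the floor-level target boundary, the computation reduces to a 2D point-to-segment distance; the function names are assumptions for this sketch.

```python
# Illustrative sketch: first distance as the minimum horizontal distance from
# the wearer's contour points to the vertical boundary surfaces.
from math import hypot
from typing import Iterable, List, Tuple

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]

def point_to_segment(p: Point2D, a: Point2D, b: Point2D) -> float:
    """Distance from point p to the segment a-b on the floor plane."""
    (px, pz), (ax, az), (bx, bz) = p, a, b
    abx, abz = bx - ax, bz - az
    denom = abx * abx + abz * abz
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * abx + (pz - az) * abz) / denom))
    return hypot(px - (ax + t * abx), pz - (az + t * abz))

def first_distance(contour: Iterable[Point3D], boundary: List[Point2D]) -> float:
    """Minimum distance from any contour point to any edge of the target boundary."""
    n = len(boundary)
    return min(
        point_to_segment((x, z), boundary[i], boundary[(i + 1) % n])
        for (x, _y, z) in contour
        for i in range(n)
    )
```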
Step S3200, in a case that the first distance is less than or equal to a first threshold, outputting first prompt information.
In this embodiment, the first threshold is used to reflect whether the wearer is near the target boundary of the head-mounted display device. In the case where the first distance between the wearer and the target boundary is less than or equal to the first threshold, the wearer can easily collide with an object outside the target boundary, and the wearer is considered to be close to the target boundary. It should be noted that the first threshold may be set by a person skilled in the art according to practical experience, and the embodiment of the present disclosure does not limit this.
The first prompt information may be used to prompt the wearer that he or she is approaching the target boundary, and can be preset according to the actual needs of the user. Illustratively, the first prompt information may be displaying the target boundary in the updated three-dimensional picture in a preset manner, for example, displaying the target boundary in a highlighted manner. Illustratively, the first prompt information may also be a voice prompt or a vibration prompt. For example, the first prompt message is "You are about to cross the boundary ahead, please take care".
In one embodiment, after determining the first distance between the wearer and the target boundary according to the updated three-dimensional picture during the use of the head-mounted display device by the wearer, the method may further include: under the condition that the first distance is smaller than or equal to a second threshold value, acquiring and displaying a real world image acquired by the first camera; wherein the second threshold is less than the first threshold.
In this embodiment, the second threshold is used to reflect whether the wearer is about to cross a target boundary of the head mounted display device. In the event that the first distance between the wearer and the target boundary is less than or equal to the second threshold, the wearer is deemed to be about to cross the target boundary. It should be noted here that the second threshold is smaller than the first threshold. The second threshold may be set by a person skilled in the art according to practical experience, and the embodiment of the present disclosure is not particularly limited thereto.
In a specific implementation, in the process that the wearer uses the head-mounted display device, a plurality of third images acquired by the second camera are acquired, the contour information of the wearer is generated according to the plurality of third images, the first distance between the wearer and the target boundary is determined according to the contour information of the wearer and the updated three-dimensional picture, and the first prompt information is output to prompt the wearer to adjust his or her position in the case that the first distance is less than or equal to the first threshold. In the case that the first distance is less than or equal to the second threshold, the first camera of the head-mounted display device is turned on and the real-world image acquired by the first camera is displayed, so that the wearer can directly observe the real environment, the user is prevented from colliding with objects near the boundary surface corresponding to the target boundary, and the safety of use is improved.
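A minimal sketch of this staged response is given below. The threshold values and the callback names are illustrative assumptions (the patent leaves both thresholds to be set from practical experience), and letting the passthrough view take over once the smaller threshold is crossed is one possible reading of the staging.

```python
# Illustrative sketch of the two-threshold reaction: prompt near the boundary,
# real-world passthrough when the boundary is about to be crossed.
FIRST_THRESHOLD = 0.5   # metres: warn that the wearer is near the boundary (assumed)
SECOND_THRESHOLD = 0.2  # metres: show the real-world passthrough view (assumed)

def react_to_boundary(first_distance: float,
                      show_prompt,        # e.g. highlight the boundary or play a voice prompt
                      show_passthrough) -> None:
    """Apply the staged response described in the embodiment."""
    if first_distance <= SECOND_THRESHOLD:
        show_passthrough()                # display the first camera's real-world image
    elif first_distance <= FIRST_THRESHOLD:
        show_prompt()                     # output the first prompt information
```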
In this embodiment, in the process that the wearer uses the head-mounted display device, the first distance between the wearer and the target boundary is acquired in real time according to the updated three-dimensional picture, so as to determine whether the wearer is close to the target boundary according to the first distance, thereby improving the accuracy of recognition and solving the problem in the prior art that the protection effect is poor because only a two-dimensional boundary is defined on the ground. Further, when the first distance is less than or equal to the first threshold, the first prompt information is output, so that the wearer can be reminded that he or she is approaching the target boundary, which prevents the wearer from crossing the target boundary and colliding during the experience and improves the safety of using the head-mounted display device. In addition, in the case that the first distance is less than or equal to the second threshold, the real-world image collected by the first camera is acquired and displayed, so that the wearer can directly observe the surrounding real scene and avoid surrounding objects in time, further improving the safety of using the head-mounted display device.
In an embodiment, after the updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture, the method may further include: step S4100-step S4300.
Step S4100 of determining whether a first object is included in the updated three-dimensional picture according to the first image and the second image, where the first object is an object other than the wearer.
In this embodiment, the first object may be another object located within the updated three-dimensional picture or located near a boundary of the updated three-dimensional picture. Such as tables, sofas, chairs, etc.
In this embodiment, determining whether the updated three-dimensional picture includes the first object may be determining whether the first object is included within the target boundary defined by the user, or determining whether the first object is included near the target boundary defined by the user. Specifically, the first image may include a plurality of images of the surroundings taken while the user wearing the head-mounted display device looks around. The second image may be a sequence of video frames, i.e., a plurality of images acquired in succession while the user wearing the head-mounted display device looks around. Based on the above, according to the first images collected by the first camera and the second images collected by the second camera, it can be determined whether another object (the first object) exists in the updated three-dimensional picture, that is, within the target boundary defined by the user.
Step S4200, in a case where a first object is included in the updated three-dimensional picture, determining contour information of the first object and position information of the first object with respect to the wearer from the first image and the second image.
When the head-mounted display device is started, a user wears the head-mounted display device to look around, the first camera can collect a plurality of first images, and meanwhile the second camera continuously shoots a wearer to obtain a plurality of second images. And inputting the plurality of first images and the plurality of second images into a preset model, and identifying whether the updated three-dimensional picture comprises the first object. When the first object is included in the updated three-dimensional picture, the contour information of the first object can be determined from the plurality of first images and the plurality of second images. Further, an indicator lamp, for example, an LED lamp, an infrared lamp, or the like, is provided on the controller of the head-mounted display device, so that the positional information of the wearer with respect to the second camera can be determined from the plurality of second images, and thus the positional information of the first object with respect to the wearer can be determined from the plurality of second images. For example, the distance between the first object and the wearer.
And step S4300, adding a three-dimensional model of the first object in the updated three-dimensional picture according to the position information of the first object relative to the wearer and the contour information of the first object.
In this embodiment, after the three-dimensional picture is updated according to the target boundary and the updated three-dimensional picture is obtained, it may be further determined, according to the first image acquired by the first camera and the second image acquired by the second camera, whether the updated three-dimensional picture includes the first object; in the case that it does, the three-dimensional model of the first object is added to the updated three-dimensional picture, so that interference with the user by objects within the target boundary can be avoided, and the safety of using the head-mounted display device is further improved.
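For illustration, the sketch below represents the first object by a simple axis-aligned bounding box built from its contour and its position relative to the wearer, and adds that model to a list standing in for the updated three-dimensional picture. The box representation and all names are assumptions, since the patent does not fix how the three-dimensional model is stored.

```python
# Illustrative sketch: adding a box model of the detected first object to the
# updated three-dimensional picture in the first coordinate system.
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class ObjectModel:
    min_corner: Point3D
    max_corner: Point3D

def add_object_model(scene_objects: List[ObjectModel],
                     contour_relative_to_wearer: List[Point3D],
                     wearer_position: Point3D) -> ObjectModel:
    """Translate the object's contour into headset coordinates and store its bounding box."""
    wx, wy, wz = wearer_position
    pts = [(wx + x, wy + y, wz + z) for (x, y, z) in contour_relative_to_wearer]
    model = ObjectModel(
        min_corner=(min(p[0] for p in pts), min(p[1] for p in pts), min(p[2] for p in pts)),
        max_corner=(max(p[0] for p in pts), max(p[1] for p in pts), max(p[2] for p in pts)),
    )
    scene_objects.append(model)
    return model
```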
In this embodiment, after the three-dimensional model of the first object is added to the updated three-dimensional picture, the position of the wearer wearing the head-mounted display device in the updated three-dimensional picture can be determined, and it can be further determined whether the wearer is close to the first object, so that when the wearer is close to the first object, a prompt is given, collision between the wearer and the first object is avoided, and the use safety of the head-mounted display device is improved. The following examples are given by way of illustration.
In one embodiment, after said adding a three-dimensional model of said first object within said updated three-dimensional picture in dependence on position information of said first object relative to said wearer and contour information of said first object, the method may further comprise: step S5100-step S5200.
Step S5100, determine a second distance between the wearer and the first object during use of the head mounted display device by the wearer.
Illustratively, the second distance may be a distance between the outline of the wearer and the first object.
In a more specific example, the step of determining a second distance between the wearer and the first object during use of the head mounted display device by the wearer may further comprise: acquiring a fourth image acquired by the second camera in the process that the wearer uses the head-mounted display equipment, wherein the fourth image comprises the wearer; generating contour information of the wearer according to the fourth image; determining a second distance between the wearer and the first object from the contour information of the wearer and the three-dimensional model of the first object. It should be noted that, a plurality of consecutive fourth images acquired by the second camera may be acquired, and the contour information of the wearer may be constructed in the first coordinate system according to the plurality of fourth images. The first coordinate system may be a three-dimensional space coordinate system established with the head-mounted display device as a coordinate origin.
The second distance may also be, for example, the distance between the hand of the wearer and the first object.
In a more specific example, the step of determining a second distance between the wearer and the first object during use of the head mounted display device by the wearer may further comprise: acquiring a fourth image acquired by the second camera in the process that the wearer uses the head-mounted display equipment, wherein the fourth image comprises an indicator lamp of a controller of the head-mounted display equipment; generating hand contour information of the wearer according to the fourth image; determining a second distance between the wearer's hand and the first object from the hand contour information of the wearer and the three-dimensional model of the first object. It should be noted that, a plurality of consecutive fourth images acquired by the second camera may be acquired, and the hand contour information of the wearer may be constructed in the first coordinate system according to the plurality of fourth images. The first coordinate system may be a three-dimensional space coordinate system established with the head-mounted display device as a coordinate origin.
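As an illustrative sketch under the same assumptions as the earlier object-model example, the second distance could be computed as the minimum distance from the wearer's (hand) contour points to the first object's bounding-box model; the box representation and the function names are assumptions, not the patent's prescribed computation.

```python
# Illustrative sketch: second distance as the minimum distance from the
# wearer's contour points to an axis-aligned box model of the first object.
from math import sqrt
from typing import Iterable, Tuple

Point3D = Tuple[float, float, float]

def point_to_box(p: Point3D, min_corner: Point3D, max_corner: Point3D) -> float:
    """Euclidean distance from a point to an axis-aligned box (0 if inside)."""
    d = [max(lo - c, 0.0, c - hi) for c, lo, hi in zip(p, min_corner, max_corner)]
    return sqrt(sum(x * x for x in d))

def second_distance(contour: Iterable[Point3D],
                    min_corner: Point3D, max_corner: Point3D) -> float:
    """Minimum distance from any contour point to the first object's box model."""
    return min(point_to_box(p, min_corner, max_corner) for p in contour)
```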
In step S5200, when the second distance is smaller than or equal to the third threshold, second prompt information is output.
In this embodiment, the third threshold is used to reflect whether the wearer is near the first object within the target boundary, that is, whether the wearer is too close to another item within the target boundary. In the case where the second distance between the wearer and the first object is less than or equal to the third threshold, the wearer is likely to collide with the first object within the target boundary, and the wearer is considered to be too close to, i.e., approaching, the first object. It should be noted that the third threshold may be set by a person skilled in the art according to practical experience, and the embodiment of the present disclosure does not limit this.
The second prompt information may be used to prompt the wearer that he or she is approaching the first object, and can be preset according to the actual needs of the user. For example, the second prompt information may be displaying the three-dimensional model of the first object in the updated three-dimensional picture in a preset manner, for example, in a highlighted manner. The second prompt information may also be a voice prompt or a vibration prompt. For example, the second prompt message is "There is another object ahead, please take care".
In one embodiment, after said determining the second distance between the wearer and the first object, the method may further comprise: under the condition that the second distance is smaller than or equal to a fourth threshold value, acquiring and displaying an image of the real world acquired by the first camera; wherein the fourth threshold is less than the third threshold.
In this embodiment, the fourth threshold is used to reflect whether the wearer will collide with the first object within the target boundary. In the event that the second distance between the wearer and the first object within the target boundary is less than or equal to the fourth threshold, the wearer is deemed to be about to collide with the first object within the target boundary. It should be noted here that the fourth threshold is smaller than the third threshold. The fourth threshold may be set by a person skilled in the art according to practical experience, and the embodiment of the present disclosure is not particularly limited thereto.
In a specific implementation, in the process that the wearer uses the head-mounted display device, a plurality of fourth images acquired by the second camera are acquired, the contour information of the wearer is generated according to the plurality of fourth images, the second distance between the wearer and the first object is determined according to the contour information of the wearer and the three-dimensional model of the first object, and the second prompt information is output to prompt the wearer to adjust his or her position in the case that the second distance is less than or equal to the third threshold. In the case that the second distance is less than or equal to the fourth threshold, the first camera of the head-mounted display device is turned on and the real-world image acquired by the first camera is displayed, so that the wearer can directly observe the real environment, the user is prevented from colliding with the first object within the target boundary, and the safety of use is improved.
In this embodiment, in the process that the wearer uses the head-mounted display device, the second distance between the wearer and the first object in the target boundary is obtained in real time, so as to judge whether the wearer is close to the first object in the target boundary according to the second distance, and the accuracy of identification can be improved. Further, when the second distance is smaller than or equal to the third threshold value, second prompt information is output, so that the wearer can be reminded of approaching the first object in the target boundary, the collision between the wearer and the first object is avoided, and the use safety of the head-mounted display equipment can be improved. In addition, under the condition that the second distance is smaller than or equal to the fourth threshold value, the image of the real world acquired by the first camera is acquired and displayed, so that a wearer can directly observe the surrounding real scene and avoid the first object in time, and the use safety of the head-mounted display equipment is further improved.
In an embodiment, after the updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture, the method may further include: step S6100 to step S6200.
Step S6100, in the process of using the head-mounted display device by the wearer, obtaining a first scene image according to the image collected by the first camera, and obtaining a second scene image according to the image collected by the second camera.
In this embodiment, the first scene image may be a scene image from a first-person perspective. The first-person perspective may be a viewing perspective of a wearer of the head mounted display device. The first scene image can be obtained by processing the image collected by the first camera. For example, according to a game scene experienced by a wearer, an image acquired by the first camera is processed to obtain a first scene image.
The second scene image may be an image of a scene from a third person perspective. The third person perspective view may be a view of the second camera viewing a wearer of the head mounted display device. The second scene image may be obtained by processing an image acquired by the second camera. For example, according to a game scene experienced by the wearer, an image acquired by the second camera is processed to obtain a second scene image, wherein the second scene image includes a game character played by the wearer.
Step S6200, displaying the first scene image in a first area of a display screen of the head-mounted display device, and displaying the second scene image in a second area of the display screen; the first scene image is a scene image of a first person perspective, and the second scene image is a scene image of a third person perspective.
In this embodiment, the first region and the second region may be different display regions of a display screen of the head-mounted display device. The first region and the second region may be two non-overlapping regions, or the first region and the second region may partially overlap. It should be noted here that the first area and the second area may be the same in size or different in size, for example, the first area for displaying the first person perspective (main perspective) is larger, and the second area for displaying the third person perspective is smaller.
In this embodiment, in the process that the wearer uses the head-mounted display device, the image acquired by the first camera is processed to obtain and display the first scene image, and the image acquired by the second camera is processed to obtain and display the second scene image, where the first scene image is a scene image from the first-person perspective and the second scene image is a scene image from the third-person perspective. Thus, the embodiment can provide the user with scene images from two perspectives, first-person and third-person, and can enhance the user's gaming experience.
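As an illustration of the first and second display regions described above, the sketch below lays out a full-screen first-person region with a smaller third-person inset in one corner. The inset fraction, the margin and the Rect structure are assumptions for this sketch; the patent only requires two regions, which may or may not overlap and may differ in size.

```python
# Illustrative sketch: computing a main first-person region and a smaller
# third-person inset on the display screen. All layout values are assumed.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def layout_regions(screen_w: int, screen_h: int, inset_fraction: float = 0.25):
    """Return (first_region, second_region) for the display screen."""
    first_region = Rect(0, 0, screen_w, screen_h)          # full-screen first-person view
    inset_w, inset_h = int(screen_w * inset_fraction), int(screen_h * inset_fraction)
    margin = 16
    second_region = Rect(screen_w - inset_w - margin, margin, inset_w, inset_h)
    return first_region, second_region
```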
< apparatus embodiment >
The embodiment of the disclosure provides a boundary generating device, which is applied to a head-mounted display device, wherein the head-mounted display device is provided with a first camera, and the head-mounted display device is in communication connection with a second camera arranged outside. As shown in fig. 3, the boundary generating apparatus 300 may include a first obtaining module 310, a generating module 320, a displaying module 330, a second obtaining module 340, and an updating module 350.
The first obtaining module 310 may be configured to obtain a first image captured by the first camera and a second image captured by the second camera, where the second image includes a wearer of the head-mounted display device;
the generating module 320 may be configured to generate a three-dimensional picture of a space where the wearer is located according to the first image and the second image;
the display module 330 may be configured to display a three-dimensional picture of a space where the wearer is located;
the second obtaining module 340 may be configured to obtain a target boundary in a case that a boundary delineation instruction input by a user is received;
the updating module 350 may be configured to update the three-dimensional picture according to the target boundary, so as to obtain an updated three-dimensional picture.
In an embodiment, the display module 330 is further configured to display the target boundary in the updated three-dimensional frame.
In one embodiment, the apparatus further comprises:
a first determining module, configured to determine, according to the updated three-dimensional picture, a first distance between the wearer and the target boundary in a process of using the head-mounted display device by the wearer;
and the first prompt module is used for outputting first prompt information under the condition that the first distance is smaller than or equal to a first threshold value.
In one embodiment, the apparatus further comprises:
the third acquisition module is used for acquiring the real world image acquired by the first camera under the condition that the first distance is smaller than or equal to a second threshold;
the display module 330 is further configured to display the real world image acquired by the first camera;
wherein the second threshold is less than the first threshold.
In one embodiment, the first determining module includes:
the first acquisition unit is used for acquiring a third image acquired by the second camera in the process that a wearer uses the head-mounted display equipment, wherein the third image comprises the wearer;
a first generating unit, configured to generate contour information of the wearer according to the third image;
a first determining unit, configured to determine a first distance between the wearer and the target boundary according to the contour information of the wearer and the updated three-dimensional picture.
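As an illustration of how a contour-based first distance could be obtained, the following OpenCV sketch segments the wearer from the external camera's image by background subtraction, takes the contour centroid as a representative position, and measures the signed distance to the boundary polygon. The segmentation method and the assumption that contour and boundary are expressed in the same floor-plane coordinates are introduced here for illustration only; they are not the patented algorithm.

    # Hypothetical contour-and-distance sketch using OpenCV.
    import cv2

    def wearer_contour(third_image_bgr, background_bgr):
        """Rough contour of the wearer via background subtraction and thresholding."""
        diff = cv2.absdiff(third_image_bgr, background_bgr)
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea) if contours else None

    def distance_to_boundary(contour, boundary_polygon):
        """Signed distance from the contour centroid to the boundary polygon.
        boundary_polygon: (N, 2) float32 array of boundary vertices, assumed to be
        in the same (already aligned) floor-plane coordinates as the contour."""
        m = cv2.moments(contour)
        centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
        # measureDist=True: positive inside the polygon, negative outside.
        return cv2.pointPolygonTest(boundary_polygon, centroid, True)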
In one embodiment, the apparatus further comprises:
a second determining module, configured to determine, according to the first image and the second image, whether a first object is included in the updated three-dimensional picture, where the first object is an object other than the wearer;
a third determining module, configured to determine, when a first object is included in the updated three-dimensional picture, contour information of the first object and position information of the first object with respect to the wearer from the first image and the second image;
and the adding module is used for adding the three-dimensional model of the first object in the updated three-dimensional picture according to the position information of the first object relative to the wearer and the contour information of the first object.
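The next sketch shows one simple way the adding module's step could be realized: extrude the detected object's floor contour into a prism and translate it to the object's position relative to the wearer. The scene.add_mesh call and the prism approximation are assumptions for illustration only.

    # Hypothetical sketch: add a placeholder 3D model of the first object to the scene.
    import numpy as np

    def add_object_model(scene, object_contour_xy, object_height,
                         position_rel_wearer_xyz, wearer_position_xyz):
        """Build a simple prism from the object's floor contour and place it in the
        updated 3D picture at the object's position relative to the wearer."""
        world_pos = (np.asarray(wearer_position_xyz, dtype=float)
                     + np.asarray(position_rel_wearer_xyz, dtype=float))
        base = np.asarray(object_contour_xy, dtype=float)            # (N, 2) floor contour
        bottom = np.hstack([base, np.zeros((len(base), 1))])          # z = 0 plane
        top = np.hstack([base, np.full((len(base), 1), float(object_height))])
        vertices = np.vstack([bottom, top]) + world_pos               # translate into place
        scene.add_mesh(vertices)                                       # assumed scene API
        return vertices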
In one embodiment, the apparatus further comprises:
a fourth determining module for determining a second distance between a wearer and the first object during use of the head mounted display device by the wearer;
and the second prompting module is used for outputting second prompting information under the condition that the second distance is smaller than or equal to a third threshold value.
In one embodiment, the apparatus further comprises:
the fourth acquisition module is used for acquiring the real world image acquired by the first camera under the condition that the second distance is smaller than or equal to a fourth threshold;
the display module 330 is further configured to display the real-world image acquired by the first camera;
wherein the fourth threshold is less than the third threshold.
In one embodiment, the fourth determination module includes:
the second acquisition unit is used for acquiring a fourth image acquired by the second camera in the process that a wearer uses the head-mounted display equipment, wherein the fourth image comprises the wearer;
a second generating unit configured to generate contour information of the wearer from the fourth image;
a second determining unit for determining a second distance between the wearer and the first object according to the contour information of the wearer and the three-dimensional model of the first object.
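A minimal sketch of the second distance follows, under the assumption that the wearer is reduced to a single representative point derived from the contour and the first object to the vertices of its three-dimensional model.

    # Hypothetical nearest-vertex distance between the wearer and the first object.
    import numpy as np

    def second_distance(wearer_position_xyz, object_vertices_xyz):
        wearer = np.asarray(wearer_position_xyz, dtype=float)   # shape (3,)
        verts = np.asarray(object_vertices_xyz, dtype=float)    # shape (M, 3)
        return float(np.min(np.linalg.norm(verts - wearer, axis=1)))

The same tiered prompt and pass-through behaviour sketched earlier for the target boundary would then be applied against the third and fourth thresholds.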
In one embodiment, the apparatus further comprises:
the fifth acquisition module is used for acquiring a first scene image according to the image acquired by the first camera and acquiring a second scene image according to the image acquired by the second camera in the process that the wearer uses the head-mounted display device;
the display module 330 is further configured to display the first scene image in a first area of a display screen of the head-mounted display device, and display the second scene image in a second area of the display screen;
the first scene image is a scene image of a first person perspective, and the second scene image is a scene image of a third person perspective.
According to the embodiment of the disclosure, when the user sets the target boundary, the target boundary can be set according to the three-dimensional picture, displayed by the head-mounted display device, of the space where the wearer is located, which facilitates the user's operation and improves the accuracy of boundary demarcation. Furthermore, because the three-dimensional picture of the space where the wearer is located is updated according to the target boundary, safety reminders can be provided to the wearer based on the updated three-dimensional picture while the wearer experiences the head-mounted display device. This is more accurate, improves the safety of using the head-mounted display device, and solves the problem in the prior art that the protection effect is poor because only a two-dimensional boundary is delimited on the ground.
< device embodiment >
Fig. 4 is a hardware configuration diagram of a head-mounted display device according to an embodiment. As shown in fig. 4, the head-mounted display device 400 includes a memory 410, a processor 420, and a first camera 430.
The memory 410 may be used to store executable computer instructions.
The processor 420 may be configured to execute the boundary generation method according to the embodiments of the disclosed method under the control of the executable computer instructions.
The processor 420 is communicatively connected to the first camera 430 and to an externally arranged second camera, respectively, to acquire images captured by the first camera and the second camera.
The head-mounted display device 400 may be the head-mounted display device 1000 shown in fig. 1, or may be a device having another hardware structure, which is not limited herein. The head-mounted display device 400 may be, for example, a VR device, an AR device, an MR device, etc., which are not limited in this disclosure.
The head-mounted display device 400 may also include a controller, and the controller is provided with an indicator light. For example, the controller is a handle on which an LED lamp or an infrared lamp is arranged.
In further embodiments, the head mounted display device 400 may include the above boundary generating apparatus 300.
In one embodiment, the modules of the above boundary generating apparatus 300 may be implemented by the processor 420 executing computer instructions stored in the memory 410.
According to the embodiment of the disclosure, when the user sets the target boundary, the target boundary can be set according to the three-dimensional picture, displayed by the head-mounted display device, of the space where the wearer is located, which facilitates the user's operation and improves the accuracy of boundary demarcation. Furthermore, because the three-dimensional picture of the space where the wearer is located is updated according to the target boundary, safety reminders can be provided to the wearer based on the updated three-dimensional picture while the wearer experiences the head-mounted display device. This is more accurate, improves the safety of using the head-mounted display device, and solves the problem in the prior art that the protection effect is poor because only a two-dimensional boundary is delimited on the ground.
< computer-readable storage Medium >
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer instructions, which, when executed by a processor, perform the boundary generation method provided by the disclosed embodiments.
The disclosed embodiments may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement aspects of embodiments of the disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of embodiments of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA) can be personalized with state information of the computer-readable program instructions and can execute the computer-readable program instructions, thereby implementing aspects of the embodiments of the present disclosure.
Various aspects of embodiments of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the embodiments of the present disclosure is defined by the appended claims.

Claims (13)

1. A boundary generating method, applied to a head-mounted display device, wherein the head-mounted display device is provided with a first camera and is in communication connection with a second camera arranged outside, the method comprising the following steps:
acquiring a first image acquired by the first camera and a second image acquired by the second camera, wherein the second image comprises a wearer of the head-mounted display device;
generating and displaying a three-dimensional picture of the space where the wearer is located according to the first image and the second image;
under the condition that a boundary delimiting instruction input by a user is received, acquiring a target boundary;
and updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture.
2. The method according to claim 1, wherein after said updating the three-dimensional picture according to the target boundary, resulting in an updated three-dimensional picture, the method further comprises:
and displaying the target boundary in the updated three-dimensional picture.
3. The method according to claim 1, wherein after said updating the three-dimensional picture according to the target boundary, resulting in an updated three-dimensional picture, the method further comprises:
determining a first distance between the wearer and the target boundary according to the updated three-dimensional picture in the process of using the head-mounted display equipment by the wearer;
and outputting first prompt information when the first distance is smaller than or equal to a first threshold value.
4. The method of claim 3, wherein after determining the first distance between the wearer and the target boundary from the updated three-dimensional view during use of the head-mounted display device by the wearer, the method further comprises:
under the condition that the first distance is smaller than or equal to a second threshold value, acquiring and displaying a real world image acquired by the first camera;
wherein the second threshold is less than the first threshold.
5. The method of claim 3, wherein determining the first distance between the wearer and the target boundary from the updated three-dimensional picture during use of the head-mounted display device by the wearer comprises:
acquiring a third image acquired by the second camera in the process that a wearer uses the head-mounted display equipment, wherein the third image comprises the wearer;
generating contour information of the wearer according to the third image;
and determining a first distance between the wearer and the target boundary according to the contour information of the wearer and the updated three-dimensional picture.
6. The method according to any of claims 1-5, wherein after said updating said three-dimensional picture according to said target boundary resulting in an updated three-dimensional picture, said method further comprises:
determining whether a first object is included in the updated three-dimensional picture according to the first image and the second image, wherein the first object is an object other than the wearer;
determining contour information of a first object and position information of the first object relative to the wearer from the first image and the second image in a case where the first object is included within the updated three-dimensional picture;
adding a three-dimensional model of the first object within the updated three-dimensional picture according to the position information of the first object relative to the wearer and the contour information of the first object.
7. The method of claim 6, wherein after said adding the three-dimensional model of the first object within the updated three-dimensional picture in accordance with the position information of the first object relative to the wearer and the contour information of the first object, the method further comprises:
determining a second distance between the wearer and the first object during use of the head mounted display device by the wearer;
and outputting second prompt information when the second distance is smaller than or equal to a third threshold value.
8. The method of claim 7, wherein after determining the second distance between the wearer and the first object during use of the head mounted display device by the wearer, the method further comprises:
under the condition that the second distance is smaller than or equal to a fourth threshold value, acquiring and displaying an image of the real world acquired by the first camera;
wherein the fourth threshold is less than the third threshold.
9. The method of claim 7, wherein determining the second distance between the wearer and the first object during use of the head mounted display device by the wearer comprises:
acquiring a fourth image acquired by the second camera in the process that the wearer uses the head-mounted display equipment, wherein the fourth image comprises the wearer;
generating contour information of the wearer according to the fourth image;
determining a second distance between the wearer and the first object from the contour information of the wearer and the three-dimensional model of the first object.
10. The method according to claim 1, wherein after said updating the three-dimensional picture according to the target boundary, resulting in an updated three-dimensional picture, the method further comprises:
in the process that a wearer uses the head-mounted display device, a first scene image is obtained according to an image collected by the first camera, and a second scene image is obtained according to an image collected by the second camera;
displaying the first scene image in a first area of a display screen of the head-mounted display device and displaying the second scene image in a second area of the display screen;
the first scene image is a scene image of a first person perspective, and the second scene image is a scene image of a third person perspective.
11. A boundary generating device is applied to a head-mounted display device, and is characterized in that the head-mounted display device is provided with a first camera and is in communication connection with a second camera arranged outside; the device comprises:
a first obtaining module, configured to obtain a first image collected by the first camera and a second image collected by the second camera, where the second image includes a wearer of the head-mounted display device;
the generating module is used for generating a three-dimensional picture of the space where the wearer is located according to the first image and the second image;
the display module is used for displaying a three-dimensional picture of the space where the wearer is located;
the second acquisition module is used for acquiring a target boundary under the condition of receiving a boundary demarcation instruction input by a user;
and the updating module is used for updating the three-dimensional picture according to the target boundary to obtain an updated three-dimensional picture.
12. A head-mounted display device comprising a first camera, the head-mounted display device further comprising:
a memory for storing executable computer instructions;
a processor for performing the boundary generation method of any one of claims 1-10 under the control of the executable computer instructions;
the processor is in communication connection with the first camera and a second camera arranged outside respectively to acquire images shot by the first camera and the second camera.
13. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the boundary generation method of any one of claims 1-10.
CN202111676556.9A 2021-12-31 2021-12-31 Boundary generation method and device, head-mounted display equipment and readable storage medium Pending CN114442804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111676556.9A CN114442804A (en) 2021-12-31 2021-12-31 Boundary generation method and device, head-mounted display equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114442804A true CN114442804A (en) 2022-05-06

Family

ID=81366560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111676556.9A Pending CN114442804A (en) 2021-12-31 2021-12-31 Boundary generation method and device, head-mounted display equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114442804A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105705210A (en) * 2013-09-30 2016-06-22 索尼电脑娱乐公司 Camera based safety mechanisms for users of head mounted displays
CN106295581A (en) * 2016-08-15 2017-01-04 联想(北京)有限公司 Obstacle detection method, device and virtual reality device
CN109840947A (en) * 2017-11-28 2019-06-04 广州腾讯科技有限公司 Implementation method, device, equipment and the storage medium of augmented reality scene
CN108040247A (en) * 2017-12-29 2018-05-15 湖南航天捷诚电子装备有限责任公司 A kind of wear-type augmented reality display device and method
CN110874818A (en) * 2018-08-31 2020-03-10 阿里巴巴集团控股有限公司 Image processing and virtual space construction method, device, system and storage medium
CN113574591A (en) * 2019-03-29 2021-10-29 索尼互动娱乐股份有限公司 Boundary setting device, boundary setting method, and program
CN113760086A (en) * 2020-06-04 2021-12-07 宏达国际电子股份有限公司 Method for dynamically displaying real world scene, electronic device and readable storage medium
US20210382305A1 (en) * 2020-06-04 2021-12-09 Htc Corporation Method for dynamically displaying real-world scene, electronic device, and computer readable medium
CN111866492A (en) * 2020-06-09 2020-10-30 青岛小鸟看看科技有限公司 Image processing method, device and equipment based on head-mounted display equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221206

Address after: No. 500, Songling Road, Laoshan District, Qingdao, Shandong 266101

Applicant after: GOERTEK TECHNOLOGY Co.,Ltd.

Address before: 261061 workshop 1, phase III, Geer Photoelectric Industrial Park, 3999 Huixian Road, Yongchun community, Qingchi street, high tech Zone, Weifang City, Shandong Province

Applicant before: GoerTek Optical Technology Co.,Ltd.