CN111708432A - Safety region determination method and apparatus, head-mounted display device, and storage medium - Google Patents

Safety region determination method and apparatus, head-mounted display device, and storage medium

Info

Publication number
CN111708432A
Authority
CN
China
Prior art keywords
head
mounted display
display device
user
image frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010436848.4A
Other languages
Chinese (zh)
Other versions
CN111708432B (en)
Inventor
Wu Tao (吴涛)
Current Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Original Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Xiaoniao Kankan Technology Co Ltd
Priority to CN202010436848.4A
Publication of CN111708432A
Application granted
Publication of CN111708432B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a safety region determination method and apparatus, a head-mounted display device, and a storage medium. The safety region determination method comprises the following steps: when the perspective mode of the head-mounted display device is turned on, presenting, on a screen of the head-mounted display device, prompt information prompting the user to perform head movement; acquiring image frames obtained by the binocular camera of the head-mounted display device scanning the real environment while the user's head moves, together with the head movement data corresponding to the image frames, and storing the image frames and the head movement data; and determining, from the stored image frames and head movement data, a virtual safety region indicating a safety region in the real environment. According to the embodiments of the application, the setting of the safety region can be completed quickly without much user interaction or intervention, improving both the setting efficiency of the safety region and user satisfaction.

Description

Safety region determining method and device, head-mounted display equipment and storage medium
Technical Field
The present application relates to the field of head-mounted display device technologies, and in particular, to a method and an apparatus for determining a safety zone, a head-mounted display device, and a storage medium.
Background
With the progress and development of head-mounted display devices such as VR (Virtual Reality) headsets, a VR all-in-one machine can now support 6DoF (Degrees of Freedom) scene usage; that is, a user wearing the VR all-in-one machine can move around freely to experience its various contents. Because the user experiences the virtual scene while wearing the VR headset, especially when using it at home, the user cannot see the real environment and, while walking around in it, may collide with obstacles such as a wall, a table, a stool, or other objects in the real environment. This poses a hidden danger to the user's safety. Therefore, a safety reminding mechanism is needed that displays a safety hazard warning in the VR headset before the user is about to collide with an object in the real environment, so as to ensure the user's safety.
Displaying a safe area in the head-mounted display device is one such safety reminding mechanism: by determining the safe area and reminding the user of its range, safety hazards during use of the VR headset can be largely avoided. However, the safe region initially set by a VR headset manufacturer currently has a fixed area, which cannot adapt to the user's actual use environment, resulting in a poor user experience.
Disclosure of Invention
The embodiments of the application provide a safety region determination method and apparatus, a head-mounted display device, and a storage medium, which can quickly complete the setting of a user-defined safety region without many interactive operations or interventions by the user, thereby ensuring that the safety region is adapted to the user's actual use environment and improving both the setting efficiency of the safety region and user satisfaction.
The embodiment of the application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a safety region determining method, which is applied to a head-mounted display device, and includes:
when the perspective mode of the head-mounted display device is turned on, presenting, on a screen of the head-mounted display device, prompt information prompting the user to perform head movement;
acquiring image frames obtained by the binocular camera of the head-mounted display device scanning the real environment while the user's head moves, together with the head movement data corresponding to the image frames, and storing the image frames and the head movement data;
a virtual safety zone indicating a safety zone in the real environment is determined from the stored image frames and the head movement data.
In a second aspect, an embodiment of the present application further provides a safety region determining apparatus, which is applied to a head-mounted display device, and includes:
the prompting module is used for presenting, on a screen of the head-mounted display device, prompt information prompting the user to perform head movement when the perspective mode of the head-mounted display device is turned on;
the acquisition module is used for acquiring image frames obtained by the binocular camera of the head-mounted display device scanning the real environment while the user's head moves, together with the head movement data corresponding to the image frames, and for storing the image frames and the head movement data;
and the determining module is used for determining a virtual safety region indicating a safety region in the real environment according to the stored image frames and the head movement data.
In a third aspect, an embodiment of the present application further provides a head-mounted display device, including: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the method of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing one or more programs that, when executed by a head-mounted display device including a plurality of application programs, cause the head-mounted display device to perform the method of the first aspect of the embodiments of the present application.
The embodiments of the application adopt at least one technical solution that can achieve the following beneficial effects: after the perspective mode of the head-mounted display device is turned on, prompt information prompting the user to perform head movement is presented on the screen of the head-mounted display device to guide the user's head movement; image frames obtained by the binocular camera of the head-mounted display device scanning the real environment while the user's head moves, and the head movement data corresponding to those image frames, are acquired and stored; and a virtual safety region indicating a safety region in the real environment is determined from the stored image frames and head movement data. Thus, the user can complete the setting of a user-defined safety region simply and quickly just by moving the head, without much interactive operation or intervention. This improves the setting efficiency of the safety region and user satisfaction, ensures that the safety region is adapted to the user's actual use environment, and enhances the market competitiveness of the head-mounted display device.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a security area determination method according to an embodiment of the present application;
fig. 2 is a block diagram of a security area determination apparatus according to an embodiment of the present application;
FIG. 3 is a reference diagram of a usage status of a head mounted display device in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a head-mounted display device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Safety hazards during use of the VR all-in-one machine can be largely avoided by determining a safe area and reminding the user of its range. However, the area of the safe region initially set by a VR headset manufacturer is currently fixed. In a spatial scene such as a home, the fixed safe area may be too large, so that it still contains some obstacles, or it may be too small. In short, there is the technical problem that the safe region is not adapted to the user's actual use environment.
To solve this technical problem, one scheme requires the user to manually complete the setting of a user-defined safety region step by step according to prompts given by the VR headset manufacturer. The operation steps are cumbersome, ordinary users such as the elderly and children at home find it difficult to complete them quickly and autonomously, and the user experience is poor.
Therefore, the embodiments of the application provide a technical solution for determining the safety region that simplifies the setting process of the user-defined safety region, does not occupy the user's hands, requires little interaction or intervention from the user, and improves the setting efficiency of the user-defined safety region.
Fig. 1 is a flowchart of a safety region determining method according to an embodiment of the present application, and referring to fig. 1, the safety region determining method according to the embodiment of the present application is applied to a head-mounted display device, and includes the following steps:
and step S110, when the perspective mode of the head-mounted display device is started, presenting prompt information for prompting the user to carry out head movement on a screen of the head-mounted display device.
The prompt information here is information prompting the user how to move the head, such as prompts to lower the head, turn the head to the left, and the like.
Step S120, acquiring image frames obtained by the binocular camera of the head-mounted display device scanning the real environment while the user's head moves, together with the head movement data corresponding to the image frames, and storing the image frames and the head movement data.
The binocular camera of the embodiment of the application comprises a left camera corresponding to the left eye of a user and a right camera corresponding to the right eye of the user. The image collected by the left camera is a left eye image, and the image collected by the right camera is a right eye image.
In step S130, a virtual safety region indicating a safety region in the real environment is determined from the stored image frames and the head movement data.
The virtual safety region of the embodiments of the application can be a three-dimensional virtual safety fence containing ground and height information, so that the virtual safety fence matches the three-dimensional virtual scene in the head-mounted display device, preserving the immersion of the head-mounted display device.
As shown in fig. 1, in the safety region determination method according to the embodiments of the application, prompt information prompting the user to perform head movement is output to instruct the user to move the head accordingly; the binocular camera acquires image frames by scanning the real environment while the user's head moves; the head movement data corresponding to the image frames is acquired; the image frames and head movement data are stored; and the virtual safety region indicating a safety region in the real environment is computed from the stored image frames and head movement data. Thus, the user can complete the setting of a user-defined safety region simply and quickly just by moving the head according to the prompt information, without occupying the hands and without much interactive operation or intervention. This improves the setting efficiency of the safety region and user satisfaction, and enhances the market competitiveness of the head-mounted display device. In addition, the method also avoids the technical problem of a safety region that is too large or too small and thus not adapted to the user's actual use environment.
The safety region determining method in the embodiment of the application is based on computer vision and image processing technologies, and can quickly complete user-defined safety region setting.
Step one, data acquisition.
It should be noted that the head-mounted display device of the embodiments of the application, such as a VR headset, has a perspective mode. In the perspective mode (see-through), scene video content of the external real environment is captured by two environment-capturing cameras on the VR headset that simulate the human eyes and is presented on the screen of the VR headset, so that the user can see the external real environment through the screen.
Based on this, in the safety region determination method according to the embodiments of the application, when the perspective mode of the head-mounted display device is turned on, prompt information prompting the user to perform head movement is presented on the screen of the head-mounted display device, and image frames obtained by the binocular camera of the head-mounted display device scanning the real environment while the user's head moves are acquired; for example, image frames captured while the user moves the head in sequence through preset directions, the preset directions comprising front, back, left, right, and down.
That is to say, after the perspective mode of the head-mounted display device (hereinafter, a VR headset is used as an example) is turned on and before the safety fence is set, the content provided by the see-through function is presented on the screen of the VR headset for the user to watch; that is, the user sees the scene video content of the external real environment captured by the two binocular cameras built into the VR headset that simulate human eyes. In this process, the embodiments of the application prompt the user through some simple and friendly user interface (UI) elements: the user is prompted to stand in the middle of the environment space where the actual experience is desired, and then, following the UI prompts, to look forward, backward, left, and right and to lower the head, completing the scanning of the image data of the environment space to obtain the image frames.
Note: in the embodiments of the present application, image data is acquired while the user's head moves in the front, back, left, right, and downward directions, but the order of the head movements is not limited; for example, the image data may be acquired while the user moves in the order head down → look forward → look right → look backward → look left, or in any other order covering these directions.
In addition, the embodiments of the application acquire the head motion data corresponding to each image frame in real time, where the head motion data comprises the rotation matrix R_HMD and the translation vector T_HMD of the VR headset relative to the world coordinate system while the user looks forward, backward, left, and right and lowers the head. The acquisition time of the head motion data is synchronized with the acquisition time of the image frames. For example, if 20 image frames are acquired, head movement data corresponding one-to-one to those 20 image frames is acquired accordingly. Note: the acquisition of head motion data is prior art and is not described in detail here.
After data acquisition is completed, the environment scanning data (namely, the image frames) and the head movement data are stored in the data queue ImageQueue, and the process proceeds to step two.
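As an illustration only (the patent gives no code), the synchronized storage of stereo frames and poses in ImageQueue might be sketched as follows; the `ScanSample` and `record_frame` names are hypothetical:

```python
from collections import deque
from dataclasses import dataclass

import numpy as np

@dataclass
class ScanSample:
    """One synchronized capture: a stereo image pair plus the headset
    pose (R_HMD, T_HMD) recorded at the same instant."""
    left_image: np.ndarray
    right_image: np.ndarray
    R_hmd: np.ndarray  # 3x3 rotation of the headset relative to the world frame
    T_hmd: np.ndarray  # 3-vector translation of the headset in the world frame

image_queue = deque()  # stands in for the ImageQueue of the text

def record_frame(left, right, R, T):
    """Store one stereo pair together with its synchronized pose."""
    image_queue.append(ScanSample(left, right, R, T))
```

Keeping the pose inside the same record as the images guarantees the one-to-one frame/pose correspondence the text requires.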
And step two, calculating a user-defined safety area.
Building on step one, while the user looks forward, backward, left, and right and lowers the head, the image frames of the real environment captured by the binocular camera built into the VR headset and the head movement data corresponding to those image frames are acquired in real time, and the user-defined safety region is computed from the image frames and head movement data.
Specifically, determining a virtual safety region indicating a safety region in the real environment from the stored image frames and head motion data includes: computing a three-dimensional point cloud from the current-frame left eye image, the current-frame right eye image, and the head motion data corresponding to the current frame; performing ground plane fitting on all three-dimensional point clouds based on a plane fitting algorithm; screening the fitted ground planes according to preset conditions and merging the screened ground planes; and clustering the merged ground planes according to their height values and determining, from the clustering result, a virtual safety region indicating a safety region in the real environment.
For example, the custom secure enclave setting and calculation process may be divided into the following steps:
and (2.1) calculating the three-dimensional coordinates of the feature points in a world coordinate system.
Specifically, computing the three-dimensional point cloud from the current-frame left eye image, the current-frame right eye image, and the head motion data corresponding to the current frame comprises: detecting feature points in the current-frame left eye image acquired by the left camera; calculating the position coordinates of the corresponding feature points in the current-frame right eye image according to the binocular stereo imaging principle, an image matching algorithm, and the camera parameters relating the left camera to the right camera; obtaining the three-dimensional space coordinates of each feature point in a common camera coordinate system based on the position coordinates of the feature points in the current-frame left and right eye images and the spatial position of the right camera; and calculating the corresponding three-dimensional coordinates of each feature point in the world coordinate system from the three-dimensional space coordinates in the common camera coordinate system and the head motion data corresponding to the current frame, obtaining the three-dimensional point cloud.
For example, the aforementioned step (2.1) is subdivided into the following three steps:
and step1, detecting characteristic points.
And detecting the feature points on the left eye image of the current frame in real time by adopting a feature point detection algorithm.
It can be understood that feature points on the current frame right eye image may also be detected, and then the feature points on the right eye image are uniformly mapped to the camera coordinate system corresponding to the left camera.
Feature point detection algorithms include FAST (Features from Accelerated Segment Test) corner detection, SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features) feature extraction, and the like, where SURF is an improvement on SIFT whose main characteristic is speed. Since these are all conventional feature point detection algorithms, they are not described in detail here. In consideration of the actual use scenario, the feature point detection algorithm adopted in the embodiments of the application is the FAST algorithm.
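For illustration, a minimal pure-NumPy version of the FAST segment test might look as follows; a production system would use an optimized library implementation (e.g. OpenCV's FastFeatureDetector), and the function names here are hypothetical:

```python
import numpy as np

# 16-pixel Bresenham circle of radius 3 around a candidate, in order
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def _max_run(flags):
    """Longest run of True in the circular sequence `flags`."""
    if all(flags):
        return len(flags)
    best = run = 0
    for f in flags + flags:  # doubling the list handles wraparound
        run = run + 1 if f else 0
        best = max(best, run)
    return min(best, len(flags))

def fast_corners(img, thresh=20, n_contig=9):
    """Return (row, col) FAST corners: pixels with >= n_contig contiguous
    circle pixels all brighter or all darker than the center by `thresh`."""
    h, w = img.shape
    corners = []
    for r in range(3, h - 3):
        for c in range(3, w - 3):
            center = int(img[r, c])
            vals = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
            brighter = [v > center + thresh for v in vals]
            darker = [v < center - thresh for v in vals]
            if _max_run(brighter) >= n_contig or _max_run(darker) >= n_contig:
                corners.append((r, c))
    return corners
```

On a synthetic image containing a bright square, this detects the square's corners but rejects pixels in the flat interior, which is the behavior the segment test is designed for.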
And step2, calculating three-dimensional coordinates in a camera coordinate system.
After the feature points of the current-frame left eye image are detected in Step 1, the position of each left-eye feature point in the right eye image is calculated based on the binocular stereo imaging principle, an image matching algorithm such as the NCC (Normalized Cross-Correlation) algorithm, and the rotation-translation matrix T (a 4 × 4 matrix) of the left eye camera relative to the right eye camera. Then, the three-dimensional space coordinate Featurepoint of each feature point in a common camera coordinate system (namely, the camera coordinate system of the left camera) is calculated according to the triangulation principle, that is, from the triangle formed by the position coordinates of the feature point in the current-frame left and right eye images and the spatial position of the right camera.
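As a simplified illustration of this triangulation step, the sketch below assumes an ideal rectified stereo pair with identical intrinsics and a purely horizontal baseline, which is a special case of the general left-to-right rotation-translation used in the text; the function name is hypothetical:

```python
import numpy as np

def triangulate_rectified(uv_left, uv_right, K, baseline):
    """Depth from disparity for a rectified stereo pair.
    uv_*: (u, v) pixel coordinates of the same feature in each image;
    K: 3x3 intrinsic matrix shared by both cameras; baseline in metres.
    Returns the 3-D point in the left-camera coordinate system."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    disparity = uv_left[0] - uv_right[0]   # horizontal shift between views
    z = fx * baseline / disparity          # similar-triangles depth
    x = (uv_left[0] - cx) * z / fx
    y = (uv_left[1] - cy) * z / fy
    return np.array([x, y, z])
```

For non-rectified cameras, the same triangle is solved with the full rotation-translation between the two cameras (e.g. via a linear triangulation routine).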
It should be noted that, before the system operates, the embodiments of the application calibrate the left and right binocular cameras built into the VR headset to obtain the calibration parameters of the left and right cameras. For example, the intrinsic parameters K_left and K_right of the left and right cameras and the rotation-translation matrix T_Left2Right of the left eye camera relative to the right eye camera are obtained using the classical Zhang Zhengyou calibration method.
And step3, calculating three-dimensional coordinates in a world coordinate system.
According to the three-dimensional space coordinates Featurepoint of the feature points of each image frame in the camera coordinate system obtained in Step 2 and the head motion data R_HMD and T_HMD of the corresponding image frame, the three-dimensional space coordinate Featurepoint is converted into the world coordinate system. The specific formula is as follows:
Featurepoint_W = R_HMD * Featurepoint + T_HMD
In this way, the three-dimensional coordinates Featurepoint_W in the world coordinate system corresponding to the feature points of all frames in the data queue ImageQueue can be obtained.
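This rigid transform is direct to express in code; a minimal sketch (the function name is hypothetical):

```python
import numpy as np

def to_world(points_cam, R_hmd, T_hmd):
    """Apply Featurepoint_W = R_HMD * Featurepoint + T_HMD to an
    (N, 3) array of camera-frame points in one vectorized step."""
    # (R @ p) for each row p is equivalent to points @ R.T
    return points_cam @ R_hmd.T + T_hmd
```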
And (2.2) plane fitting.
In the embodiments of the application, performing ground plane fitting on all three-dimensional point clouds based on a plane fitting algorithm comprises: fitting all ground planes satisfied by the three-dimensional point clouds using the random sample consensus (RANSAC) algorithm, where the minimum number of point cloud points required to fit each ground plane with the RANSAC algorithm is 10.
Following the above example, iterative regression is performed on all three-dimensional point cloud points Featurepoint_W using the RANSAC (Random Sample Consensus) algorithm, and the regression fitting outputs all ground planes satisfied by the Featurepoint_W point cloud. The three-dimensional point cloud is a set of three-dimensional points on the visible surfaces of the objects in the environment, and the points corresponding to the same object are close to one another. Therefore, the RANSAC algorithm iterates over all the three-dimensional points Featurepoint_W according to the distances between them, extracting clusters of points whose mutual distances are within a preset value as the point set corresponding to one plane, thereby identifying and fitting the planes of all objects on the ground in the real environment.
In order to improve the precision and stability of the plane fitting, when the RANSAC algorithm is used for iterative regression, the minimum number of points for fitting a plane is 10; that is, at least 10 point cloud points are needed to fit each ground plane. A ground plane here is a plane on the ground in the environment, excluding planes in three-dimensional space such as walls.
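A minimal sketch of a RANSAC plane fit with the 10-point minimum described above; the iteration count, distance threshold, and function name are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.02,
                 min_inliers=10, rng=None):
    """Fit one plane to an (N, 3) point cloud with RANSAC.
    Returns (unit normal, d, inlier indices) for the best plane
    n.x + d = 0, or None if no plane reaches min_inliers."""
    rng = rng or np.random.default_rng(0)
    best = None
    for _ in range(n_iters):
        # a plane is determined by 3 non-collinear sample points
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, retry
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)          # point-to-plane distances
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) >= min_inliers and (
                best is None or len(inliers) > len(best[2])):
            best = (normal, d, inliers)
    return best
```

In the full pipeline this would be run repeatedly, removing each plane's inliers before fitting the next, until no plane with at least 10 supporting points remains.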
And (2.3) plane merging.
Because each plane contains many point cloud points, the number of nearby points indicates whether two planes are likely to belong to the same object: the more points of two planes lie close together, the more likely the two planes are the same object plane, and the small remaining gap between them is likely caused by device error. Therefore, in order to maximize the area of the safety region and meet the user's need to walk around freely, while avoiding misjudgment and improving the plane recognition accuracy, the embodiments of the application screen the ground planes according to preset conditions and merge the screened ground planes. Specifically, this comprises: screening, from all the ground planes and according to a preset angle condition and a point cloud number condition, pairs of planes whose mutual inclination angle is smaller than a preset angle threshold and whose number of target point cloud points is larger than a preset value, where the target point cloud points are those nearest three-dimensional points between the two ground planes whose distance is smaller than a preset distance threshold; and merging each screened pair of ground planes into one ground plane.
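The screening conditions just described might be sketched as follows; the 8° angle and 10 cm distance defaults follow the example values given in this section, while the function name and exact pair-counting rule are illustrative assumptions:

```python
import numpy as np

def should_merge(pts_a, normal_a, pts_b, normal_b,
                 max_angle_deg=8.0, max_dist=0.10, min_close_pairs=10):
    """Merge two fitted ground planes when their tilt differs by less
    than max_angle_deg AND more than min_close_pairs cross-plane point
    pairs lie within max_dist of each other."""
    # angle condition: abs() because fitted normals may point either way
    cos_angle = abs(float(normal_a @ normal_b))
    if cos_angle < np.cos(np.radians(max_angle_deg)):
        return False
    # distance condition: count pairs (one point from each plane) closer
    # than max_dist
    diff = pts_a[:, None, :] - pts_b[None, :, :]
    close = np.linalg.norm(diff, axis=2) < max_dist
    return int(close.sum()) > min_close_pairs
```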
Following the above example, all the fitted ground planes are screened and judged: if the inclination angle between two planes is within a preset angle threshold, for example 8°, and the number of point pairs whose nearest three-dimensional points on the two planes lie within a preset value, for example 10 cm, exceeds 10 (for example only), the two planes are merged into one plane. All the fitted ground planes are judged and merged according to this screening process.
And (2.4) clustering and determining the safety region.
The embodiments of the application cluster the merged ground planes and determine, from the clustering result, a virtual safety region indicating the safety region in the real environment, which specifically comprises: clustering the merged ground planes according to their height values using the K-nearest-neighbor (KNN) algorithm to obtain a clustering result, and determining, from the clustering result, the ground plane with the largest area in the class with the largest height value as the virtual safety region indicating the safety region in the real environment.
That is, for all the fitted planes, clustering is performed with the KNN algorithm according to the heights of the planes along the direction of gravity, and the plane with the largest area is found in the class with the largest height value; this largest-area plane is the safety region the user is interested in.
Each plane is obtained by fitting three-dimensional point clouds that satisfy certain conditions, and each three-dimensional point carries height information. The height value can be understood as the distance between an object and the binocular camera of the head-mounted display device: the larger the height value, the farther the object is from the head-mounted display device. For example, suppose the fitted planes include the ground and a tea-table surface; obviously the ground is farther from the head-mounted display device than the tea-table surface, i.e., the height value of the ground is larger. Therefore, in the embodiment of the present application, all the fitted planes are clustered according to their height values, and the ground indicating the extent of the safety region is determined in the class with the largest height value.
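A hedged sketch of this step: the text names the KNN algorithm, but since only a one-dimensional grouping of plane heights is needed, the sketch below stands in with a simple height-tolerance clustering; the plane representation, the `height_tol` value, and the use of a precomputed `area` field as the selection key are illustrative assumptions.

```python
def pick_safe_plane(planes, height_tol=0.15):
    """Group merged ground planes by height (a 1-D stand-in for the
    KNN clustering named in the text), then return the plane with the
    largest area inside the cluster with the largest height value.
    Each plane is assumed to be {'height': float, 'area': float}."""
    order = sorted(planes, key=lambda p: p["height"])
    clusters, current = [], [order[0]]
    for p in order[1:]:
        # Planes whose heights are close enough fall into one cluster.
        if p["height"] - current[-1]["height"] <= height_tol:
            current.append(p)
        else:
            clusters.append(current)
            current = [p]
    clusters.append(current)
    # The cluster with the largest height value corresponds to the ground,
    # i.e. the surface farthest from the head-mounted camera.
    top = max(clusters, key=lambda c: max(p["height"] for p in c))
    return max(top, key=lambda p: p["area"])
```

The largest-area plane in that top cluster is then taken as the virtual safety region.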
It should be noted that, the clustering algorithm is not limited in the embodiments of the present application, and any feasible clustering algorithm can be selected according to actual application requirements.
Thus, the determination of the security area according to the embodiment of the present application is completed.
Therefore, the safety region determining method solves the technical problem that the safety region does not adapt to the user's environment. The determination process does not occupy the user's hands and requires little interaction or intervention, which simplifies the operation steps, allows ordinary users such as the elderly and children to complete it quickly and independently, and optimizes the user experience.
Fig. 2 is a block diagram of a safety region determining apparatus according to an embodiment of the present application. Referring to fig. 2, the safety region determining apparatus 200 of the embodiment of the present application is applied to a head-mounted display device and includes:
the prompting module 210 is configured to present, on a screen of the head-mounted display device, prompting information for prompting a user to perform head movement when the perspective mode of the head-mounted display device is turned on.
The obtaining module 220 is configured to obtain image frames obtained by scanning the real environment with the binocular camera of the head-mounted display device while the user's head moves, together with the head movement data corresponding to the image frames, and to store the image frames and the head movement data.
A determining module 230, configured to determine a virtual safety zone indicating a safety zone in the real environment according to the saved image frames and the head movement data.
In an embodiment of the present application, the obtaining module 220 is specifically configured to acquire image frames obtained by scanning the real environment while the binocular camera of the head-mounted display device moves sequentially through preset directions; the preset directions include front, rear, left, right and down.
In an embodiment of the present application, the determining module 230 is specifically configured to calculate to obtain a three-dimensional point cloud according to a current frame left eye image, a current frame right eye image, and head motion data corresponding to the current frame; performing ground plane fitting on all three-dimensional point clouds based on a plane fitting algorithm to fit a ground plane; screening ground planes according to preset conditions, and combining the screened ground planes; and clustering the merged ground planes according to the height value of the merged ground planes, and determining a virtual safety region indicating a safety region in the real environment according to a clustering result.
In an embodiment of the present application, the determining module 230 is specifically configured to perform feature point detection on a current frame left eye image acquired by a left camera, and determine feature points on the current frame left eye image; calculating the position coordinates of the feature points corresponding to the feature points on the current frame right eye image according to a binocular stereo imaging principle, an image matching algorithm and camera parameters of a left camera; obtaining three-dimensional space coordinates of each feature point under the same camera coordinate system based on the position coordinates of the feature points on the left eye image and the right eye image of the current frame and the space position of the right camera; and calculating the corresponding three-dimensional coordinates of each feature point in a world coordinate system according to the three-dimensional space coordinates of each feature point in the same camera coordinate system and the head motion data corresponding to the current frame to obtain the three-dimensional point cloud.
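The triangulation performed by this module can be illustrated with the standard rectified-stereo relations (depth from disparity followed by a pose transform into the world frame). The patent does not spell out these formulas, so the parameter names and the rectified pinhole-camera assumptions below are illustrative:

```python
import numpy as np

def triangulate(xl, yl, xr, f, cx, cy, baseline):
    """Recover a 3-D point in the left-camera frame from a matched
    feature point in a rectified binocular pair (standard stereo
    geometry; a sketch, not the patent's exact computation).
    xl, yl: pixel coordinates in the left image; xr: matched x in the
    right image; f: focal length in pixels; (cx, cy): principal point;
    baseline: distance between the two cameras in metres."""
    d = xl - xr                  # disparity in pixels
    Z = f * baseline / d         # depth along the optical axis
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    return np.array([X, Y, Z])

def to_world(p_cam, R, t):
    """Map a camera-frame point into the world coordinate system using
    the head pose (R, t) recovered from the head motion data."""
    return R @ p_cam + t
```

Running this over every matched feature point of every saved frame yields the three-dimensional point cloud used for the subsequent plane fitting.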
In an embodiment of the present application, the determining module 230 is specifically configured to perform ground plane fitting on all three-dimensional point clouds by using a random sample consensus (RANSAC) algorithm to fit all ground planes supported by the three-dimensional point clouds; the minimum number of points required to fit each ground plane with the RANSAC algorithm is 10.
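A minimal sketch of the RANSAC ground-plane fit, assuming the point cloud is an N×3 array; the iteration count and inlier distance threshold are illustrative assumptions, while the 10-point minimum matches the value stated in the text:

```python
import numpy as np

def ransac_plane(points, iters=200, inlier_thresh=0.05, min_inliers=10, seed=0):
    """RANSAC plane fit over a 3-D point cloud: repeatedly fit a plane
    through 3 random points and keep the plane with the most inliers.
    Returns (normal, d, inlier_mask) for the plane n·x + d = 0, or None
    if fewer than `min_inliers` points support the best plane."""
    rng = np.random.default_rng(seed)
    best, best_mask = None, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        n = n / norm
        d = -np.dot(n, p1)
        dist = np.abs(points @ n + d)    # point-to-plane distances
        mask = dist < inlier_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best, best_mask = (n, d), mask
    if best_mask is None or best_mask.sum() < min_inliers:
        return None
    return best[0], best[1], best_mask
```

In practice the fit would be run repeatedly, removing each plane's inliers from the cloud, until no plane with at least 10 supporting points remains.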
In an embodiment of the present application, the determining module 230 is specifically configured to screen, for all ground planes and according to a preset angle condition and a point cloud number condition, ground planes whose inter-plane inclination angle is smaller than a preset angle threshold and whose number of target point clouds is larger than a preset value, where the target point clouds are the points, among the nearest three-dimensional point clouds between the two ground planes, whose distance is smaller than a preset distance threshold; and to merge the two screened ground planes into one ground plane.
In an embodiment of the application, the determining module 230 is specifically configured to employ a K-nearest neighbor KNN algorithm, cluster the merged ground planes according to height values of the merged ground planes, and determine the ground plane with the largest area in a class with the largest height value as a virtual safety region indicating a safety region in a real environment.
It can be understood that the safety region determining apparatus can implement the steps of the safety region determining method provided in the foregoing embodiment, and the related explanations about the safety region determining method are applicable to the safety region determining apparatus, and are not described herein again.
Fig. 3 is a reference diagram of a usage state of a head-mounted display device in an embodiment of the present application. Referring to fig. 3, the head-mounted display device 301 of the embodiment includes a binocular camera 302. Since the user experiences the scene content provided by the head-mounted display device while walking about freely, the user's eyes cannot acquire real environment information, so there is a danger of colliding with objects in the real environment.
To sum up, the technical scheme of the embodiment of the application does not occupy the user's hands and does not require excessive interactive operation or intervention; the user-defined safety region can be set quickly, which simplifies the operation steps, improves the setting efficiency of the safety region and the satisfaction of the user, and enhances the market competitiveness of the head-mounted display device.
Fig. 4 is a schematic structural diagram of a head-mounted display device in an embodiment of the present application. Referring to fig. 4, at the hardware level, the head-mounted display device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the head-mounted display device may also include the hardware needed for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 4, but that does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads a corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form a safety region determination device on a logic level. The processor is used for executing the program stored in the memory and is specifically used for executing the following operations:
when the perspective mode of the head-mounted display device is started, prompt information prompting a user to perform head movement is presented on a screen of the head-mounted display device.
Image frames obtained by scanning the real environment with the binocular camera of the head-mounted display device while the user's head moves, together with the head movement data corresponding to the image frames, are acquired, and the image frames and the head movement data are stored.
A virtual safety zone indicating a safety zone in the real environment is determined from the stored image frames and the head movement data.
The method performed by the safety region determining apparatus according to the embodiment shown in fig. 2 of the present application may be applied to or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The head-mounted display device may further execute the method executed by the safety region determining apparatus in fig. 2, and implement the functions of the safety region determining apparatus in the embodiment shown in fig. 2, which are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which, when executed by a head-mounted display device including multiple application programs, enable the head-mounted display device to perform the method performed by the safety region determination apparatus in the embodiment shown in fig. 2, and are specifically configured to perform:
when the perspective mode of the head-mounted display device is started, prompt information prompting a user to perform head movement is presented on a screen of the head-mounted display device.
Image frames obtained by scanning the real environment with the binocular camera of the head-mounted display device while the user's head moves, together with the head movement data corresponding to the image frames, are acquired, and the image frames and the head movement data are stored.
A virtual safety zone indicating a safety zone in the real environment is determined from the stored image frames and the head movement data.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A safety area determination method is applied to a head-mounted display device and is characterized by comprising the following steps:
when the perspective mode of the head-mounted display device is started, presenting prompt information for prompting a user to carry out head movement on a screen of the head-mounted display device;
acquiring image frames obtained by scanning a real environment with a binocular camera of the head-mounted display device while the head of the user moves, and head movement data corresponding to the image frames, and storing the image frames and the head movement data;
a virtual safety zone indicating a safety zone in the real environment is determined from the stored image frames and the head movement data.
2. The method of claim 1, wherein the acquiring image frames scanned from a real environment by a binocular camera of a head mounted display device while a user's head is moving comprises:
acquiring image frames obtained by scanning the real environment while the binocular camera of the head-mounted display device moves sequentially through preset directions; the preset directions comprise front, rear, left, right and down.
3. The method of claim 2, wherein determining a virtual safety zone indicative of a safety zone in the real environment based on the saved image frames and the head motion data comprises:
calculating to obtain three-dimensional point cloud according to the current frame left eye image, the current frame right eye image and the head motion data corresponding to the current frame;
performing ground plane fitting on all three-dimensional point clouds based on a plane fitting algorithm to fit a ground plane;
screening ground planes according to preset conditions, and combining the screened ground planes;
and clustering the merged ground planes according to the height value of the merged ground planes, and determining a virtual safety region indicating a safety region in the real environment according to a clustering result.
4. The method of claim 3, wherein the calculating the three-dimensional point cloud according to the current frame left eye image, the current frame right eye image and the head motion data corresponding to the current frame comprises:
detecting feature points of a current frame left eye image acquired by a left camera, and determining the feature points of the current frame left eye image;
calculating the position coordinates of the feature points corresponding to the feature points on the current frame right eye image according to a binocular stereo imaging principle, an image matching algorithm and camera parameters of a left camera;
obtaining three-dimensional space coordinates of each feature point under the same camera coordinate system based on the position coordinates of the feature points on the left eye image and the right eye image of the current frame and the space position of the right camera;
and calculating the corresponding three-dimensional coordinates of each feature point in a world coordinate system according to the three-dimensional space coordinates of each feature point in the same camera coordinate system and the head motion data corresponding to the current frame to obtain the three-dimensional point cloud.
5. The method of claim 3, wherein the plane-based fitting algorithm performs a ground plane fit to all three-dimensional point clouds, the fitting out of the ground plane comprising:
performing ground plane fitting on all three-dimensional point clouds by using a random sample consensus (RANSAC) algorithm to fit all ground planes supported by the three-dimensional point clouds; the minimum number of points required to fit each ground plane with the RANSAC algorithm is 10.
6. The method of claim 3, wherein the screening ground planes according to the preset condition, and the merging the screened ground planes comprises:
for all ground planes, screening, according to a preset angle condition and a point cloud number condition, ground planes whose inter-plane inclination angle is smaller than a preset angle threshold and whose number of target point clouds is larger than a preset value; the target point clouds are the points, among the nearest three-dimensional point clouds between the two ground planes, whose distance is smaller than a preset distance threshold;
and merging the screened two ground planes into one ground plane.
7. The method of claim 3, wherein clustering the merged ground planes and determining a virtual safety zone indicative of a safety zone in the real environment based on the clustering comprises:
and clustering the merged ground planes according to the height values of the merged ground planes by adopting a K nearest neighbor KNN algorithm, and determining the ground plane with the largest area in the class with the largest height value as a virtual safety region indicating a safety region in the real environment.
8. A safety area determination device applied to a head-mounted display device comprises:
the prompting module is used for presenting prompting information for prompting a user to carry out head movement on a screen of the head-mounted display device when the perspective mode of the head-mounted display device is started;
the acquisition module is used for acquiring image frames obtained by scanning a real environment with the binocular camera of the head-mounted display device while the head of the user moves, and head movement data corresponding to the image frames, and for storing the image frames and the head movement data;
and the determining module is used for determining a virtual safety region indicating a safety region in the real environment according to the stored image frames and the head movement data.
9. A head-mounted display device, comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any of claims 1 to 7.
10. A computer readable storage medium storing one or more programs which, when executed by a head-mounted display device including a plurality of application programs, cause the head-mounted display device to perform the method of any of claims 1-7.
CN202010436848.4A 2020-05-21 2020-05-21 Security area determination method and device, head-mounted display device and storage medium Active CN111708432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010436848.4A CN111708432B (en) 2020-05-21 2020-05-21 Security area determination method and device, head-mounted display device and storage medium

Publications (2)

Publication Number Publication Date
CN111708432A true CN111708432A (en) 2020-09-25
CN111708432B CN111708432B (en) 2023-08-25

Family

ID=72537882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010436848.4A Active CN111708432B (en) 2020-05-21 2020-05-21 Security area determination method and device, head-mounted display device and storage medium

Country Status (1)

Country Link
CN (1) CN111708432B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112462937A (en) * 2020-11-23 2021-03-09 青岛小鸟看看科技有限公司 Local perspective method and device of virtual reality equipment and virtual reality equipment
CN113660395A (en) * 2021-08-06 2021-11-16 海信视像科技股份有限公司 Safety prompting method and equipment based on target identification
CN115047977A (en) * 2022-07-13 2022-09-13 北京字跳网络技术有限公司 Method, device, equipment and storage medium for determining safety area
CN115098524A (en) * 2022-07-20 2022-09-23 北京字跳网络技术有限公司 Method, device, equipment and medium for updating safety area
WO2023002652A1 (en) * 2021-07-21 2023-01-26 株式会社ソニー・インタラクティブエンタテインメント Information processing device, information processing method, and computer program
WO2024156209A1 (en) * 2023-01-29 2024-08-02 腾讯科技(深圳)有限公司 Security boundary generation method and apparatus, device, storage medium and program product

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104539929A (en) * 2015-01-20 2015-04-22 刘宛平 Three-dimensional image coding method and coding device with motion prediction function
CN106020498A (en) * 2016-07-27 2016-10-12 深圳市金立通信设备有限公司 Safety early-warning method and terminal
CN106569605A (en) * 2016-11-03 2017-04-19 腾讯科技(深圳)有限公司 Virtual reality-based control method and device
CN106774944A (en) * 2017-01-18 2017-05-31 珠海市魅族科技有限公司 A kind of safe early warning method and system
CN108415562A (en) * 2018-02-12 2018-08-17 四川斐讯信息技术有限公司 A kind of cursor control method and cursor control system
CN108764080A (en) * 2018-05-17 2018-11-06 中国电子科技集团公司第五十四研究所 A kind of unmanned plane vision barrier-avoiding method based on cloud space binaryzation
CN108830943A (en) * 2018-06-29 2018-11-16 歌尔科技有限公司 A kind of image processing method and virtual reality device
CN109813317A (en) * 2019-01-30 2019-05-28 京东方科技集团股份有限公司 A kind of barrier-avoiding method, electronic equipment and virtual reality device
CN110008941A (en) * 2019-06-05 2019-07-12 长沙智能驾驶研究院有限公司 Drivable region detection method, device, computer equipment and storage medium
CN110503001A (en) * 2019-07-25 2019-11-26 青岛小鸟看看科技有限公司 A kind of Virtual Reality equipment and its barrier-avoiding method, device
US10535199B1 (en) * 2018-06-18 2020-01-14 Facebook Technologies, Llc Systems and methods for determining a safety boundary for a mobile artificial reality user
CN110879401A (en) * 2019-12-06 2020-03-13 南京理工大学 Unmanned platform real-time target 3D detection method based on camera and laser radar
CN111091609A (en) * 2019-12-11 2020-05-01 云南电网有限责任公司保山供电局 Transformer substation field operation management and control system and method based on three-dimensional dynamic modeling
CN111105500A (en) * 2019-10-31 2020-05-05 青岛小鸟看看科技有限公司 Safe region drawing method and device under virtual reality scene and virtual reality system


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112462937A (en) * 2020-11-23 2021-03-09 青岛小鸟看看科技有限公司 Local perspective method and device of virtual reality equipment and virtual reality equipment
CN112462937B (en) * 2020-11-23 2022-11-08 青岛小鸟看看科技有限公司 Local perspective method and device of virtual reality equipment and virtual reality equipment
US11861071B2 (en) 2020-11-23 2024-01-02 Qingdao Pico Technology Co., Ltd. Local perspective method and device of virtual reality equipment and virtual reality equipment
WO2023002652A1 (en) * 2021-07-21 2023-01-26 株式会社ソニー・インタラクティブエンタテインメント Information processing device, information processing method, and computer program
CN113660395A (en) * 2021-08-06 2021-11-16 海信视像科技股份有限公司 Safety prompting method and equipment based on target identification
CN115047977A (en) * 2022-07-13 2022-09-13 北京字跳网络技术有限公司 Method, device, equipment and storage medium for determining safety area
CN115098524A (en) * 2022-07-20 2022-09-23 北京字跳网络技术有限公司 Method, device, equipment and medium for updating safety area
WO2024156209A1 (en) * 2023-01-29 2024-08-02 腾讯科技(深圳)有限公司 Security boundary generation method and apparatus, device, storage medium and program product

Also Published As

Publication number Publication date
CN111708432B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN111708432B (en) Security area determination method and device, head-mounted display device and storage medium
US11747893B2 (en) Visual communications methods, systems and software
US10674142B2 (en) Optimized object scanning using sensor fusion
US9426447B2 (en) Apparatus and method for eye tracking
US20170076430A1 (en) Image Processing Method and Image Processing Apparatus
US9691152B1 (en) Minimizing variations in camera height to estimate distance to objects
JP2017531221A (en) Countering stumbling when immersed in a virtual reality environment
KR101769177B1 (en) Apparatus and method for eye tracking
IL308285B1 (en) System and method for augmented and virtual reality
CN108090463B (en) Object control method, device, storage medium and computer equipment
KR102450236B1 (en) Electronic apparatus, method for controlling thereof and the computer readable recording medium
US11138743B2 (en) Method and apparatus for a synchronous motion of a human body model
US20200242335A1 (en) Information processing apparatus, information processing method, and recording medium
US20220277512A1 (en) Generation apparatus, generation method, system, and storage medium
WO2014008320A1 (en) Systems and methods for capture and display of flex-focus panoramas
US9478068B2 (en) Computer-readable medium, image processing device, image processing system, and image processing method
WO2019085519A1 (en) Method and device for facial tracking
CN112114664A (en) Safety reminding method and device based on virtual reality and head-mounted all-in-one machine
WO2018042074A1 (en) A method, apparatus and computer program product for indicating a seam of an image in a corresponding area of a scene
KR20210133674A (en) Augmented reality device and method for controlling the same
CN113282167B (en) Interaction method and device of head-mounted display equipment and head-mounted display equipment
CN115698923A (en) Information processing apparatus, information processing method, and program
US10783853B2 (en) Image provision device, method and program that adjusts eye settings based on user orientation
CN112578983B (en) Finger orientation touch detection
US20220413295A1 (en) Electronic device and method for controlling electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant