CN114816048A - Virtual reality system control method and device and virtual reality system


Info

Publication number: CN114816048A
Authority: CN (China)
Prior art keywords: virtual reality, space, reality device, marker, visual field
Legal status: Pending
Application number: CN202210334094.0A
Other languages: Chinese (zh)
Inventors: 于国星, 戴景文, 贺杰
Current Assignee: Guangdong Virtual Reality Technology Co Ltd
Original Assignee: Guangdong Virtual Reality Technology Co Ltd
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN202210334094.0A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a control method and apparatus for a virtual reality system, and the virtual reality system itself. The virtual reality system comprises a plurality of markers and a virtual reality device. The method comprises the following steps: acquiring pose information of the virtual reality device in a target space; determining a visual field space based on the pose information; and controlling the light-emitting devices of the markers in the visual field space to emit light. On the one hand, the visual field space is the space covered by the image acquisition range of the virtual reality device, so once the markers in the visual field space are controlled to emit light, the virtual reality device can reliably capture an image containing the light-emitting markers. On the other hand, the visual field space is smaller than the target space, so the light-emitting devices of markers located inside the target space but outside the visual field space do not emit light, which reduces the power consumption of the markers.

Description

Virtual reality system control method and device and virtual reality system
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to a method and an apparatus for controlling a virtual reality system, and a virtual reality system.
Background
In recent years, with advances in science and technology, Virtual Reality (VR) technology has been widely applied in fields such as education, industry, entertainment, and medical care. For example, in the medical field, virtual reality technology provides surgeons with surgical simulation and virtual reality therapy design, improving the success rate of surgery.
In some application scenarios, a virtual reality device (e.g., virtual reality glasses) needs to recognize specific markers in order to work. For example, virtual reality glasses display corresponding virtual images by recognizing different markers. Illustratively, in a virtual zoo scene, each marker corresponds to a virtual animal, and the user changes the virtual animal displayed in the glasses by switching markers.
To ensure that the virtual reality device can reliably acquire images containing the markers, the markers are usually actively luminous. However, in some large-space virtual reality application scenes, the number of markers multiplies as the scene space expands; if all markers in the scene space are kept in a light-emitting state, the power consumption of the virtual reality system becomes excessive.
Disclosure of Invention
Embodiments of the present application provide a control method and apparatus for a virtual reality system, and a virtual reality system.
In a first aspect, some embodiments of the present application provide a method for controlling a virtual reality system. The virtual reality system comprises a plurality of markers and a virtual reality device; the markers are fixedly arranged in a target space and each comprises a light-emitting device. The method comprises the following steps: acquiring pose information of the virtual reality device in the target space, wherein the pose information comprises position information and attitude information; the position information characterizes the position of the virtual reality device in the target space, and the attitude information characterizes the angle of the virtual reality device relative to a reference plane of the target space; determining a visual field space based on the pose information, wherein the visual field space is the space covered by the image acquisition range of the virtual reality device and is smaller than the target space; and controlling the light-emitting devices of the markers in the visual field space to emit light.
In a second aspect, some embodiments of the present application further provide a control apparatus for a virtual reality system. The virtual reality system comprises a plurality of markers and a virtual reality device; the markers are fixedly arranged in a target space and each comprises a light-emitting device. The apparatus comprises a pose information acquisition module, a visual field space determination module, and a light-emitting control module. Specifically, the pose information acquisition module is used to acquire pose information of the virtual reality device in the target space, the pose information comprising position information and attitude information; the position information characterizes the position of the virtual reality device within the target space, and the attitude information characterizes the angle of the virtual reality device relative to a reference plane of the target space. The visual field space determination module is used to determine a visual field space based on the pose information, the visual field space being the space covered by the image acquisition range of the virtual reality device and smaller than the target space. The light-emitting control module is used to control the light-emitting devices of the markers in the visual field space to emit light.
In a third aspect, some embodiments of the present application further provide a virtual reality system. The virtual reality system comprises: one or more processors, a memory, a plurality of markers, a virtual reality device, and one or more applications. Each marker includes a light-emitting device; the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the above control method of the virtual reality system.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing computer program instructions. The computer program instructions can be invoked by a processor to execute the above control method of the virtual reality system.
In a fifth aspect, an embodiment of the present application further provides a computer program product which, when executed, implements the above control method of the virtual reality system.
The application provides a control method and apparatus for a virtual reality system, and a virtual reality system. The virtual reality system comprises a plurality of markers and a virtual reality device. In the method, the visual field space of the virtual reality device is determined from the acquired pose information, and the light-emitting devices of the markers in the visual field space are then controlled to emit light. On the one hand, the visual field space is the space covered by the image acquisition range of the virtual reality device, so once the markers in the visual field space emit light, the virtual reality device can reliably capture an image containing the light-emitting markers, ensuring that the device operates smoothly. On the other hand, the visual field space is smaller than the target space, so the light-emitting devices of markers inside the target space but outside the visual field space do not emit light, which reduces the power consumption of the markers and saves energy for the virtual reality system.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows an application environment schematic diagram of a control method of a virtual reality system according to an embodiment of the present application.
Fig. 2 shows a schematic flowchart of a control method of a virtual reality system according to a first embodiment of the present application.
Fig. 3 shows a flowchart of a control method of a virtual reality system according to a second embodiment of the present application.
Fig. 4 shows a flowchart of a control method of a virtual reality system according to a third embodiment of the present application.
Fig. 5 is a schematic flowchart illustrating a method for obtaining calibration data according to an embodiment of the present application.
Fig. 6 shows a block diagram of a control device of a virtual reality system according to an embodiment of the present application.
Fig. 7 shows a block diagram of a virtual reality system provided in an embodiment of the present application.
FIG. 8 illustrates a block diagram of modules of a computer-readable storage medium provided by embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended to explain the present application, and are not to be construed as limiting it.
To help those skilled in the art better understand the technical solutions of the present application, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings in the embodiments. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
The application provides a control method and apparatus for a virtual reality system, and a virtual reality system. The virtual reality system comprises a plurality of markers and a virtual reality device. In the method, the visual field space of the virtual reality device is determined from the acquired pose information, and the light-emitting devices of the markers in the visual field space are then controlled to emit light. On the one hand, the visual field space is the space covered by the image acquisition range of the virtual reality device, so once the markers in the visual field space emit light, the virtual reality device can reliably capture an image containing the light-emitting markers, ensuring that the device operates smoothly. On the other hand, the visual field space is smaller than the target space, so the light-emitting devices of markers inside the target space but outside the visual field space do not emit light, which reduces the power consumption of the markers and saves energy for the virtual reality system.
To facilitate a detailed description of the scheme of the present application, the application environment of the control method provided by the examples of the present application is described below with reference to the accompanying drawings. The method is applied to a virtual reality system comprising a plurality of markers and a virtual reality device. Referring to fig. 1, fig. 1 is a schematic diagram of the application environment of a control method of a virtual reality system according to an embodiment of the present disclosure. In fig. 1, the virtual reality device 100 is worn on the user's head. In particular, the virtual reality device 100 may be a head-mounted display device, such as virtual reality glasses or a virtual reality helmet. In other embodiments, the virtual reality device 100 may be a handheld display device, such as a smartphone or tablet.
In the present embodiment, the virtual reality device 100 includes an image acquisition module 110 used to acquire images containing the luminescent markers 200. The image acquisition module 110 may be a camera module (e.g., a monocular or binocular camera module) or an infrared camera module. In the present application, the type of image acquisition module 110 is chosen according to the light-emitting devices of the markers 200 within the target space: if the light source of the light-emitting device is visible (for example, an LED lamp), the image acquisition module 110 is a camera module; if the light source is invisible (for example, an infrared lamp), the image acquisition module 110 is an infrared camera module.
In the embodiment of the present application, the virtual reality device 100 further includes a pose information acquisition device 120 configured to acquire the position information and attitude information of the virtual reality device 100 during movement. As an embodiment, the pose information acquisition device 120 is an Inertial Measurement Unit (IMU), a combined unit consisting of at least three accelerometers and at least three gyroscopes. The accelerometers acquire acceleration information along at least three axes; the gyroscopes acquire angular rate information about at least three axes, from which attitude angles are derived. In some embodiments, the IMU further includes an information processing unit configured to compute the position information and attitude information of the virtual reality device 100 from the acceleration and angular rate measurements. In other embodiments, this computation is performed by a processor in the virtual reality device 100.
In some embodiments, the virtual reality device 100 further has a visual field space determination function for determining, given the pose information, the space covered by the image acquisition range of the virtual reality device 100. This function may be implemented by a processor in the virtual reality device 100.
In some embodiments, the virtual reality device 100 further has a marker determination function and a control instruction transmission function. The marker determination function determines, once the visual field space is known, which markers lie within it, and may be implemented by a processor in the virtual reality device 100. The control instruction transmission function transmits a control instruction to the markers in the visual field space, instructing their light-emitting devices to emit light; it may be implemented by a signal transmitter (e.g., a Wi-Fi signal transmitter) in the virtual reality device 100.
In other embodiments, the virtual reality system further includes a control center, and the control center is configured to receive the pose information sent by the virtual reality device 100, determine a visual field space corresponding to the virtual reality device 100 based on the pose information, further determine a marker in the visual field space, and send a control instruction to the marker in the visual field space. The control center can be one server, a server cluster formed by a plurality of servers, or a cloud computing service center.
In the present embodiment, a plurality of markers 200 are fixedly disposed within the target space, which may be a classroom, a game venue, or the like. Each marker 200 includes a light-emitting device, a signal receiver (e.g., a Wi-Fi signal receiver), and a processor. The signal receiver receives control instructions and, upon receiving one, sends a start signal to the processor; the processor then turns on the light-emitting device so that it is in a light-emitting state.
Referring to fig. 2, fig. 2 schematically illustrates a control method of a virtual reality system according to a first embodiment of the present application. The virtual reality system comprises a plurality of markers and a virtual reality device, the markers are fixedly arranged in a target space, and each marker comprises a light-emitting device. Specifically, the method may include the following steps S210 to S230.
Step S210, obtaining pose information of the virtual reality device in the target space.
The pose information includes position information and attitude information. The position information characterizes the position of the virtual reality device within the target space, and the attitude information characterizes the angle of the virtual reality device relative to a reference plane of the target space. The reference planes are determined by the coordinate system in which the attitude information is expressed; taking a Cartesian rectangular coordinate system with x, y, and z axes as an example, the reference planes are the xy-plane, the xz-plane, and the yz-plane.
In some embodiments, the virtual reality system acquires the pose information of the virtual reality device in the target space at a preset interval. The preset interval may be any duration greater than 1 second, for example 3 seconds. In other embodiments, a plurality of human body sensors are arranged in the target space, and the pose information is acquired whenever the virtual reality system receives a human body signal from one of them. The distance between a human body sensor and its marker is smaller than a preset distance (for example, 10 meters), so a captured human body signal indicates that the user is near the marker, and the pose information of the virtual reality device in the target space is then acquired. The human body sensor may be a pressure sensor, an infrared sensor, or the like. Acquiring the pose information only when a human body signal is captured avoids overly frequent acquisition and saves computing resources of the virtual reality system.
In the embodiment of the present application, the pose information is acquired by the IMU in the virtual reality device. On the one hand, the position information in the pose information can be calculated from the acceleration measured by the IMU. Optionally, given the position information of the previous measurement period, the virtual reality system integrates the acceleration measured in the current measurement period twice over the interval between two adjacent measurement periods to determine the position information of the current period. The position information can be represented by a coordinate point; taking a Cartesian rectangular coordinate system as an example, it consists of the coordinate values along the x, y, and z axes.
On the other hand, the attitude information in the pose information can be calculated from the angular rates measured by the IMU. The attitude information can be represented by Euler angles which, again taking a Cartesian rectangular coordinate system as an example, describe the rotation of the virtual reality device around the three coordinate axes (x, y, and z). The attitude information may be computed by the Euler angle method or the quaternion method.
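As a concrete illustration of the double integration just described, the following is a minimal sketch. All names, and the trapezoidal integration scheme, are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def update_position(prev_position, prev_velocity, acceleration, dt):
    """Integrate IMU acceleration twice over one measurement period.

    prev_position / prev_velocity: shape-(3,) state from the previous
    measurement period; acceleration: shape-(3,) linear acceleration
    measured in the current period (gravity assumed already compensated);
    dt: interval between two adjacent measurement periods, in seconds.
    """
    # First integration: acceleration -> velocity.
    velocity = prev_velocity + acceleration * dt
    # Second integration: velocity -> position (trapezoidal rule).
    position = prev_position + 0.5 * (prev_velocity + velocity) * dt
    return position, velocity

# Example: device starts at the origin, at rest, accelerating 0.5 m/s^2 along x.
p, v = update_position(np.zeros(3), np.zeros(3),
                       np.array([0.5, 0.0, 0.0]), dt=0.01)
```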
Step S220, determining a visual field space based on the pose information.
In the embodiment of the application, on the one hand, the visual field space is the space covered by the image acquisition range of the virtual reality device, so that after the markers in the visual field space are controlled to emit light, the virtual reality device can reliably acquire an image containing the light-emitting markers and work smoothly. On the other hand, the visual field space is smaller than the target space, so the light-emitting devices of markers arranged in the target space but outside the visual field space do not emit light, which reduces the power consumption of the markers and saves energy for the virtual reality system. A specific implementation of determining the visual field space from the pose information is set forth in the following embodiments.
Step S230, controlling the light emitting device of the marker in the visual field space to emit light.
As an embodiment, the position information of each marker is stored in the virtual reality system, and the visual field space can be expressed by a spatial constraint equation. If a marker's position information satisfies the spatial constraint equation, the marker is determined to be within the visual field space; otherwise, it is determined to be outside it.
The virtual reality system transmits a control command to the markers in the visual field space, and each such marker controls its light-emitting device to emit light upon receiving the command. Note that a marker that has not received the control command keeps its light-emitting device off; that is, the light-emitting devices of markers outside the visual field space do not emit light.
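The patent does not state the spatial constraint equation explicitly. The sketch below assumes the cone-shaped visual field space described in the second embodiment further on (vertex at the device position, axis along the viewing direction) and treats the cone half-angle as half the field angle; function and marker names are illustrative.

```python
import numpy as np

def in_view_cone(marker_pos, apex, axis_dir, half_angle_rad, height):
    """Spatial-constraint test: is a marker inside the cone-shaped view space?

    apex: device position (cone vertex); axis_dir: unit vector along the
    cone's height direction, derived from the attitude information;
    half_angle_rad: cone half-angle; height: cone depth limit.
    """
    v = np.asarray(marker_pos, dtype=float) - np.asarray(apex, dtype=float)
    depth = float(np.dot(v, axis_dir))     # projection onto the cone axis
    if depth < 0 or depth > height:        # behind the device, or too far
        return False
    radial = np.linalg.norm(v - depth * np.asarray(axis_dir))
    return radial <= depth * np.tan(half_angle_rad)

def markers_in_view(markers, apex, axis_dir, half_angle_rad, height):
    """Filter stored marker positions down to those inside the view space."""
    return [m_id for m_id, pos in markers.items()
            if in_view_cone(pos, apex, axis_dir, half_angle_rad, height)]

markers = {"m1": (1.0, 0.2, 0.1), "m2": (-3.0, 0.0, 0.0)}
lit = markers_in_view(markers, apex=(0.0, 0.0, 0.0),
                      axis_dir=np.array([1.0, 0.0, 0.0]),
                      half_angle_rad=np.radians(30), height=10.0)
# lit == ["m1"]; a control command would then be sent only to these markers.
```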
The application provides a control method of a virtual reality system comprising a plurality of markers and a virtual reality device. In the method, the visual field space of the virtual reality device is determined from the acquired pose information, and the light-emitting devices of the markers in that space are then controlled to emit light. On the one hand, the visual field space is the space covered by the image acquisition range of the virtual reality device, so once the markers in it emit light, the device can reliably capture an image containing the light-emitting markers and thus operate smoothly. On the other hand, because the visual field space is smaller than the target space, the light-emitting devices of markers inside the target space but outside the visual field space do not emit light, saving energy for the virtual reality system.
Referring to fig. 3, fig. 3 schematically illustrates a method for controlling a virtual reality system according to a second embodiment of the present application. The virtual reality system comprises a plurality of markers and a virtual reality device, the markers are fixedly arranged in a target space and comprise light emitting devices, and the virtual reality device comprises an image acquisition module. Specifically, the method may include the following steps S310 to S360.
Step S310, the pose information of the virtual reality device in the target space is obtained.
For a specific implementation of step S310, reference may be made to the specific description in step S210, and details are not repeated here.
Step S320, acquiring hardware parameters of the image acquisition module.
The hardware parameters include at least one of the following parameters of the image acquisition module: field angle, focal length. In the embodiment of the application, the hardware parameters can be determined by reading the default parameters stored in the memory of the virtual reality device.
In some embodiments, the hardware parameters include the field angle, whose size determines the image acquisition range of the image acquisition module. The field angle and the image acquisition range are positively correlated: the larger the field angle, the larger the image acquisition range.
In some embodiments, the hardware parameters include the focal length, which also determines the image acquisition range of the image acquisition module. The focal length and the image acquisition range are negatively correlated: the shorter the focal length, the larger the image acquisition range.
Step S330, determining a visual field space based on the pose information and the hardware parameters.
In some embodiments, the hardware parameters include the field angle, and the virtual reality system determines the visual field space from the pose information and the field angle. Illustratively, the visual field space is the space covered by a cone region: the vertex of the cone is the coordinate point representing the position information in the pose information; the direction of the cone's axis (its height direction) is determined from the attitude information, i.e., the angles between the axis and the reference planes follow from the attitude information; and the angle between the cone's generatrix and its axis is the field angle. The height of the cone is a default focal-length value derived by the developers from extensive experimental data.
In other embodiments, the hardware parameters include the focal length, and the virtual reality system determines the visual field space from the pose information and the focal length. The cone is constructed as above, except that its height is the focal length and the angle between its generatrix and its axis is a default field-angle value, likewise derived from extensive experimental data.
In still other embodiments, the hardware parameters include both the focal length and the field angle, and the virtual reality system determines the visual field space from the pose information, the focal length, and the field angle.
In the present application, the visual field space is described by a spatial constraint equation: once the pose information, focal length, and field angle are determined, the virtual reality system derives the constraint equation of the cone region from solid geometry.
In some embodiments of the present application, the virtual reality device includes a plurality of image acquisition modules, for example a binocular camera module and an infrared camera module. Before determining the visual field space, the virtual reality system additionally acquires the light source type of the markers' light-emitting devices, determines the target image acquisition module matching that light source type, and determines the visual field space from the pose information and the hardware parameters of the target module, as sketched below. Illustratively, if the light source type is invisible light (e.g., infrared light), the matched target module is the infrared camera module, and the visual field space is then determined from the pose information and the infrared camera module's hardware parameters. The light source type of a light-emitting device can be determined by reading the default parameters stored in the marker's memory.
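A minimal sketch of this light-source-type dispatch; the module identifiers and mapping keys are hypothetical stand-ins for whatever identifiers the system actually stores.

```python
# Hypothetical module identifiers; the text only requires that the module
# whose sensor matches the marker's light source type is selected.
CAMERA_MODULE = "binocular_camera"
IR_CAMERA_MODULE = "infrared_camera"

def select_target_module(light_source_type: str) -> str:
    """Pick the image acquisition module matching the marker's light source."""
    mapping = {
        "visible": CAMERA_MODULE,      # e.g. LED markers
        "infrared": IR_CAMERA_MODULE,  # e.g. infrared-lamp markers
    }
    return mapping[light_source_type]

assert select_target_module("infrared") == IR_CAMERA_MODULE
```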
Because the visual field space in this embodiment is determined from the hardware parameters of the target image acquisition module, and the virtual reality system selects that module according to the light source type of the light-emitting devices, the virtual reality device can work in the target space regardless of marker type, ensuring flexibility and convenience of use.
In step S340, distance information between the marker and the virtual reality device in the visual field space is acquired.
The specific way of determining the markers in the visual field space is described in step S230. In the embodiment of the application, the position information of the markers is stored in the virtual reality system; once the markers in the visual field space are determined, their first position information is known, and the second position information of the virtual reality device can be read from the pose information. The distance information between each marker in the visual field space and the virtual reality device is then computed from the first and second position information by a preset distance formula, for example the Euclidean distance formula.
In step S350, based on the distance information, the operating parameters are determined.
In the embodiment of the present application, the operating parameters include at least one of the light-emitting brightness and the light-emitting duration.
In some embodiments, the operating parameters include the light-emitting brightness, and step S350 includes step S3510.
Step S3510, determining the light-emitting brightness based on the distance information and a preset first mapping relationship.
The first mapping relationship represents a positive correlation between the distance information and the light-emitting brightness. It may be embodied as a first mapping function or a first mapping table. Taking the first mapping table as an example, once the distance information is determined, the virtual reality system looks up the corresponding light-emitting brightness in the table pre-stored in memory.
In the embodiment of the present application, the distance information and the light-emitting brightness are positively correlated: the farther a marker is from the virtual reality device, the brighter it emits. Thus even a marker far from the virtual reality device is clearly visible in the captured image and can be accurately recognized in subsequent processing.
In some embodiments, the operating parameters include the light-emitting duration, and step S350 includes step S3520.
Step S3520, determining the light-emitting duration based on the distance information and a preset second mapping relationship.
The second mapping relationship represents a positive correlation between the distance information and the light-emitting duration. It may be embodied as a second mapping function or a second mapping table. Taking the second mapping table as an example, once the distance information is determined, the virtual reality system looks up the corresponding light-emitting duration in the table pre-stored in memory.
In the embodiment of the present application, the distance information and the light-emitting duration are positively correlated: the farther a marker is from the virtual reality device, the longer it stays lit. Thus, if a distant marker in the visual field space cannot yet be recognized in the captured image, it remains in a light-emitting state long enough to be recognized as the user moves toward it.
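A sketch of steps S340 through S3520 under illustrative assumptions: the Euclidean distance formula from step S340, and step-function lookup tables standing in for the first and second mapping tables (the patent does not specify their contents).

```python
import bisect

# Hypothetical lookup tables standing in for the pre-stored first and second
# mapping tables; both encode a positive correlation with distance (metres).
BRIGHTNESS_TABLE = [(2.0, 20), (5.0, 50), (10.0, 100)]     # distance -> brightness (%)
DURATION_TABLE = [(2.0, 5.0), (5.0, 15.0), (10.0, 30.0)]   # distance -> seconds

def euclidean_distance(p1, p2):
    """Preset distance formula between marker and device positions."""
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5

def lookup(table, distance):
    """Return the value of the first row whose distance bound covers the
    input; beyond the last row, clamp to the largest value."""
    keys = [d for d, _ in table]
    i = min(bisect.bisect_left(keys, distance), len(table) - 1)
    return table[i][1]

d = euclidean_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))  # 5.0 m
brightness = lookup(BRIGHTNESS_TABLE, d)  # 50 (%)
duration = lookup(DURATION_TABLE, d)      # 15.0 (s)
```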
Step S360, controlling the light-emitting devices of the markers in the visual field space to emit light according to the operating parameters.
In some embodiments, the operating parameters include the light-emitting brightness and the light-emitting duration. Having determined both, the virtual reality system sends a control instruction containing the brightness and duration information to the markers in the visual field space. Upon receiving the instruction, each marker in the visual field space controls its light-emitting device to emit light at the specified brightness for the specified duration. Because the light-emitting device operates only for the light-emitting duration, the marker is in a non-light-emitting state outside that period, which reduces its power consumption. A marker-side sketch follows.
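This sketch assumes the signal receiver delivers already-parsed brightness and duration values; the class layout and timer mechanism are illustrative and not taken from the patent.

```python
import threading

class Marker:
    """Marker-side logic: the signal receiver delivers a control
    instruction; the processor drives the light-emitting device."""

    def __init__(self, marker_id):
        self.marker_id = marker_id
        self.brightness = 0  # 0 means the light-emitting device is off

    def on_control_instruction(self, brightness, duration_s):
        # Turn the light-emitting device on at the requested brightness...
        self.brightness = brightness
        print(f"marker {self.marker_id}: emitting at {brightness}%")
        # ...and schedule it to turn off after the light-emitting duration,
        # so the marker does not stay lit outside that period.
        threading.Timer(duration_s, self._turn_off).start()

    def _turn_off(self):
        self.brightness = 0
        print(f"marker {self.marker_id}: off")

Marker("m1").on_control_instruction(brightness=50, duration_s=1.0)
```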
The application provides a control method of a virtual reality system that determines the visual field space from the hardware parameters of the image acquisition module in the virtual reality device, ensuring that the visual field space is determined accurately. The light-emitting duration and brightness of each marker are determined from its distance to the virtual reality device within the visual field space, so that the image acquisition module can reliably capture an image containing the light-emitting markers.
Referring to fig. 4, fig. 4 schematically illustrates a method for controlling a virtual reality system according to a third embodiment of the present application. The virtual reality system comprises a plurality of markers and a virtual reality device; the markers are fixedly arranged in a target space and each comprises a light-emitting device. In this method, the pose information of the virtual reality device is expressed in a first coordinate system, while the pose information of the markers in a second coordinate system is pre-stored in the virtual reality system. Specifically, the method may include the following steps S410 to S460.
Step S410, acquiring pose information of the virtual reality device in the target space.
Here, the pose information of the virtual reality device is expressed in the first coordinate system, which may be a Cartesian rectangular coordinate system. The specific implementation of step S410 may refer to the description in step S210 and is not repeated here.
Step S420, converting the pose information of the virtual reality device in the first coordinate system into the pose information of the virtual reality device in the second coordinate system through the calibration data.
The calibration data includes the mapping relationship between the first coordinate system and the second coordinate system. The pose information of the markers is expressed in the second coordinate system, which may likewise be a Cartesian rectangular coordinate system. In the embodiment of the application, the two coordinate systems differ in at least one coordinate element, such as the coordinate origin, the positive direction of an axis, or the unit length of an axis; this is not specifically limited in the present application. In other words, the pose information of the virtual reality device and that of the markers are not described in the same coordinate system. In one specific example, the first coordinate system is established with the position of the virtual reality device as its origin, and the second coordinate system with the position of a marker as its origin.
In the embodiment of the application, the virtual reality system converts the pose information of the virtual reality device in the first coordinate system into the pose information of the virtual reality device in the second coordinate system through calibration data, that is, the converted pose information of the virtual reality device and the pose information of the markers are both information in the second coordinate system.
As an embodiment, the calibration data may be a calibration matrix, and the pose information in the second coordinate system is obtained by multiplying the calibration matrix with the pose information in the first coordinate system. Illustratively, if the pose information in the first coordinate system is a 6 x 1 vector and the calibration matrix is 6 x 6, their matrix product is again a 6 x 1 vector: the pose information in the second coordinate system. The method for obtaining the calibration data is described in the following embodiments. A minimal sketch of this conversion follows.
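Note that a frame change is more commonly expressed as a 4 x 4 homogeneous transform; the plain 6 x 6 product below simply mirrors the formulation in the text, with illustrative names and a toy identity matrix.

```python
import numpy as np

def to_second_frame(pose_first, calibration_matrix):
    """Convert a 6x1 pose vector [x, y, z, roll, pitch, yaw] expressed in
    the first coordinate system into the second coordinate system, as the
    plain matrix product with the 6x6 calibration matrix described above."""
    pose_first = np.asarray(pose_first, dtype=float).reshape(6, 1)
    assert calibration_matrix.shape == (6, 6)
    return calibration_matrix @ pose_first  # result is again 6x1

# Toy calibration matrix: the identity means the two frames coincide.
C = np.eye(6)
pose_second = to_second_frame([1.0, 2.0, 0.5, 0.0, 0.1, 0.0], C)
```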
The application provides a control method of a virtual reality system. In the method, the pose information of the virtual reality device and the pose information of the markers are unified into the same coordinate system through the calibration data, which simplifies the computation required to determine the markers in the visual field space, enables the virtual reality system to determine them smoothly in subsequent steps, and allows the virtual reality device to work.
Step S430, determining a visual field space based on the pose information of the virtual reality device in the second coordinate system and the hardware parameters.
Step S440, controlling the light-emitting devices of the markers in the visual field space to emit light.
For details of steps S430 to S440, refer to the descriptions of steps S330 to S360, which are not repeated here.
The following describes a method for acquiring the calibration data. Referring to fig. 5, fig. 5 schematically illustrates a method for obtaining calibration data according to an embodiment of the present application. Specifically, the method further includes steps A100 to A200 before step S210.
Step A100, a target image is acquired through an image acquisition module on the virtual reality device.
The target image is an image acquired by the virtual reality device at a preset position and contains a designated marker among the plurality of markers. In one embodiment, the target image is captured by the image acquisition module when the virtual reality device receives a power-on signal. As another embodiment, a human body sensor is arranged at the preset position, and the target image is acquired when the virtual reality system receives a human body signal from it. The human body sensor may be a pressure sensor, an infrared sensor, or the like. The preset position may be any position in the target space, for example its entrance.
Step A200, based on the target image, obtaining calibration data.
The calibration data includes the mapping relationship between the first coordinate system and the second coordinate system. In an embodiment of the application, the virtual reality system determines the relative orientation information between the designated marker and the virtual reality device based on the target image, and then determines the calibration data between the two coordinate systems based on that relative orientation information. The relative orientation information includes relative position information and relative rotation information. Specifically, step A200 includes steps A210 to A230.
Step A210, obtaining the image characteristics of the designated marker in the target image.
The image features characterize the feature data of at least one feature point of the designated marker. In one embodiment, a target recognition algorithm is provided in the virtual reality system and is used to determine the position of the designated marker in the target image. The algorithm may be, for example, a sliding-window-based detector or an R-CNN-style neural network.
In the embodiment of the present application, the image features of the designated marker are the pixel coordinates of at least one of its feature points, i.e., the coordinates of the feature points in the image coordinate system of the target image. In some embodiments, the designated marker is composed of several regular patterns, and a feature point may be a vertex or the center of such a pattern. Once the position of the designated marker is determined, the virtual reality system identifies its feature points and determines their pixel coordinates in the target image. Illustratively, the system stores a pattern matching algorithm and a pre-stored image of the designated marker in which the feature points to be identified are pre-marked; based on these, the matching algorithm finds the feature points at the corresponding positions of the designated marker in the target image, as sketched below. The matching algorithm may be a template matching algorithm, a feature matching algorithm, a deep-learning-based matcher, or the like.
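A sketch of the template-matching variant using OpenCV's matchTemplate; the threshold value and the assumption that images arrive as 2-D grayscale arrays are illustrative.

```python
import cv2

def locate_feature(target_image_gray, feature_template_gray, threshold=0.8):
    """Find the pixel coordinates in the target image of a feature point
    pre-marked in the stored image of the designated marker.
    Returns the (x, y) centre of the best match, or None if no region
    matches the template well enough."""
    result = cv2.matchTemplate(target_image_gray, feature_template_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # feature not recognised in the target image
    h, w = feature_template_gray.shape
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)
```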
In some embodiments, if the virtual reality system does not recognize the designated marker in the target image through the target recognition algorithm, which may mean the designated marker is outside the image acquisition range of the image acquisition module, the virtual reality system may send a prompt to the virtual reality device asking the user to adjust its angle. Illustratively, the prompt may be "no marker identified, please adjust the angle of the virtual reality device".
In some embodiments, once the position of the designated marker is determined, the step of identifying its feature points is preceded by an image preprocessing step. In one embodiment, the virtual reality system extracts a sub-image containing the designated marker based on its position information and processes it with a preset image processing algorithm, such as contrast enhancement or sharpening. Preprocessing makes the color and contour difference between the feature points and the background more pronounced, so the feature points can subsequently be extracted quickly and accurately.
Step A220, obtaining the relative orientation information between the designated marker and the virtual reality device according to the image characteristics and the preset position.
In the embodiment of the present application, the relative orientation information is determined from the preset position of the virtual reality device, the pixel coordinates of the feature points, the physical coordinates of the feature points, and the hardware parameters of the image acquisition module. The physical coordinates of a feature point are its coordinates in the second coordinate system in which the designated marker is located; they represent the actual physical position of the feature point on the marker, and the virtual reality system obtains them by reading the position of the designated marker in the second coordinate system and the position of the feature point relative to the marker. The hardware parameters of the image acquisition module are the parameters used when acquiring the target image, for example the focal length and field angle.
Given the preset position, the pixel coordinates, the physical coordinates, and the hardware parameters, the virtual reality system computes the relative orientation information between the designated marker and the virtual reality device. As an embodiment, the relative orientation information includes relative position information and relative rotation information: the relative position information describes the translational degrees of freedom of the virtual reality device along the axes of the second coordinate system, and the relative rotation information describes its rotational degrees of freedom about those axes. Together they form the six-degree-of-freedom information of the virtual reality device in the second coordinate system, characterizing its rotation and movement in that system. As one embodiment, the relative orientation information is computed by a preset algorithm (e.g., an SVD-based algorithm).
In other embodiments, after acquiring the pixel coordinates of the feature points in step A210, the virtual reality system determines the relative positional relationship among the feature points in the target image from those pixel coordinates, from which the spatial attitude of the designated marker (e.g., its attitude relative to the virtual reality device) is obtained; it then computes the spatial distance of the designated marker (or of each feature point) from the virtual reality device from the pixel sizes of the feature points in the target image; finally, the relative orientation information of the designated marker with respect to the virtual reality device is obtained from the spatial attitude and the spatial distance. An illustrative example of such a computation follows.
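The patent names an SVD-based algorithm without detailing it. A standard stand-in for recovering relative rotation and position from feature-point pixel coordinates, physical coordinates, and camera intrinsics is a PnP solve; the sketch below uses OpenCV's solvePnP with illustrative values throughout.

```python
import cv2
import numpy as np

# object_points: physical coordinates of four feature points in the second
# coordinate system (the marker's frame, here a 10 cm square); image_points:
# their pixel coordinates in the target image; camera_matrix: intrinsics from
# the image acquisition module's hardware parameters. All values are toy data.
object_points = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]], dtype=np.float64)
image_points = np.array([[320, 240], [420, 238],
                         [422, 338], [318, 340]], dtype=np.float64)
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
# rvec encodes the relative rotation information and tvec the relative
# position information, i.e. the six-degree-of-freedom relative orientation
# between the marker frame and the camera.
```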
Step A230, determining calibration data based on the relative orientation information.
In the embodiment of the present application, the calibration result is realized as a calibration matrix. As an embodiment, once the relative orientation information is determined, the controller stores it in matrix form. The calibration matrix comprises a position matrix, which represents the relative position information, and a rotation matrix, which represents the relative rotation information.
It should be noted that once the calibration matrix corresponding to the virtual reality device is determined, the virtual reality system can, in subsequent operation, convert pose information acquired in the first coordinate system into pose information in the second coordinate system based on the calibration matrix; that is, the calibration matrix realizes the information conversion between the first and second coordinate systems.
The application provides a calibration data acquisition method. The calibration data is determined from the target image collected by the virtual reality device at the preset position, and in subsequent operation the virtual reality system uses it to convert information between the first and second coordinate systems, so that the markers in the visual field space can be determined accurately and the virtual reality device can work smoothly.
Referring to fig. 6, a block diagram of a control device 600 of a virtual reality system according to an embodiment of the present disclosure is shown. Wherein, the virtual reality system includes a plurality of markers and virtual reality device, and a plurality of markers are fixed to be set up in target space, and the marker includes light emitting device, and the device 600 includes: a pose information acquisition module 610, a view space determination module 620, and a lighting control module 630. Specifically, the pose information acquiring module 610 is configured to acquire pose information of a virtual reality device in a target space, where the pose information includes position information and pose information; the position information characterizes a position of the virtual reality device within the target space, and the pose information characterizes an angle of the virtual reality device relative to a reference plane of the target space. The visual field space determination module 620 is configured to determine a visual field space based on the pose information, the visual field space representing a space covered by an image acquisition range of the virtual reality device, the visual field space having a range smaller than a range of the target space. The light emitting control module 630 is used for controlling the light emitting device of the marker in the visual field space to emit light.
In some embodiments, the virtual reality device includes an image acquisition module, and the visual field space determination module 620 is further configured to acquire hardware parameters of the image acquisition module, the hardware parameters including at least one parameter of the image acquisition module: angle of view, focal length. And determining a visual field space based on the pose information and the hardware parameters.
In some embodiments, the pose information of the virtual reality device refers to pose information in a first coordinate system, and the pose information of the marker in a second coordinate system is prestored in the virtual reality system. The apparatus 600 further includes an information conversion module (not shown in the figure) configured to convert the pose information of the virtual reality device in the first coordinate system into the pose information of the virtual reality device in the second coordinate system through calibration data, where the calibration data includes a mapping relationship between the first coordinate system and the second coordinate system, and at least one coordinate element of the first coordinate system is different from that of the second coordinate system. The visual field space determining module 620 is further configured to determine a visual field space based on the pose information and the hardware parameters of the virtual reality device in the second coordinate system.
In some embodiments, the apparatus 600 further includes a target image acquisition module (not shown) and a calibration data acquisition module (not shown). The target image acquisition module is configured to acquire a target image through the image acquisition module on the virtual reality device; the target image is an image acquired by the virtual reality device at a preset position and includes a designated marker among the plurality of markers. The calibration data acquisition module is configured to acquire the calibration data based on the target image.
In some embodiments, the calibration data acquisition module is further configured to acquire an image feature of the designated marker in the target image, where the image feature represents feature data of at least one feature point of the designated marker; to acquire relative orientation information between the designated marker and the virtual reality device according to the image feature and the preset position; and to determine the calibration data based on the relative orientation information.
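For illustration, relative orientation information of this kind is commonly recovered with a perspective-n-point (PnP) solve over the feature correspondences. The sketch below uses OpenCV's solvePnP as one possible realization; the disclosure names no library, and the marker geometry and camera intrinsics shown are placeholders.

```python
import numpy as np
import cv2

def estimate_relative_orientation(feature_points_2d, marker_points_3d,
                                  camera_matrix, dist_coeffs=None):
    """Estimate the pose of the designated marker relative to the camera.

    feature_points_2d: Nx2 pixel coordinates of feature points in the target image.
    marker_points_3d:  Nx3 coordinates of the same points on the physical marker.
    camera_matrix:     3x3 intrinsic matrix of the image acquisition module.
    At least 4 correspondences are required by the default solver.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an undistorted image
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_points_3d, dtype=np.float64),
        np.asarray(feature_points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP solve failed; check the feature correspondences")
    rotation, _ = cv2.Rodrigues(rvec)  # rotation matrix, marker frame -> camera frame
    return rotation, tvec              # the relative orientation information
```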
In some embodiments, the light emitting control module 630 is further configured to acquire distance information between a marker in the visual field space and the virtual reality device; determine an operating parameter based on the distance information; and control the light emitting device of the marker in the visual field space to emit light in accordance with the operating parameter.
In some embodiments, the operating parameter includes a light-emitting brightness, and the light emitting control module 630 is further configured to determine the light-emitting brightness based on the distance information and a preset first mapping relationship, where the first mapping relationship represents a positive correlation between the distance information and the light-emitting brightness.
In some embodiments, the operating parameter includes an operating duration, and the light emitting control module 630 is further configured to determine the operating duration based on the distance information and a preset second mapping relationship, where the second mapping relationship represents a positive correlation between the distance information and the operating duration.
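A minimal sketch of the two mapping relationships follows; the linear form and the numeric bounds are assumptions, since the disclosure requires only that the light-emitting brightness and the operating duration be positively correlated with the distance information.

```python
def determine_operating_parameters(distance_m,
                                   min_brightness=0.2, max_brightness=1.0,
                                   min_duration_ms=5.0, max_duration_ms=20.0,
                                   max_distance_m=10.0):
    """Map marker-to-device distance to light-emitting operating parameters.

    Both mappings are positively correlated with distance: a farther marker
    is driven brighter and for longer so its image remains detectable.
    All numeric bounds are illustrative placeholders.
    """
    # Normalise distance into [0, 1]; values beyond max_distance_m are clamped.
    t = min(max(distance_m / max_distance_m, 0.0), 1.0)
    brightness = min_brightness + t * (max_brightness - min_brightness)
    duration_ms = min_duration_ms + t * (max_duration_ms - min_duration_ms)
    return brightness, duration_ms
```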
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
The application provides a control device of a virtual reality system. The virtual reality system includes a plurality of markers and a virtual reality device. The control device determines the visual field space of the virtual reality device by acquiring its pose information, and then controls the light emitting devices of the markers in the visual field space to emit light. On the one hand, the visual field space is the space covered by the image acquisition range of the virtual reality device, so after the markers in the visual field space are controlled to emit light, the virtual reality device can smoothly capture an image containing the light-emitting markers, which ensures that the virtual reality device can operate smoothly. On the other hand, the range of the visual field space is smaller than that of the target space, so the light emitting devices of markers disposed inside the target space but outside the visual field space do not emit light, which reduces the power consumption of the markers and saves energy for the virtual reality system.
Referring to fig. 7, an embodiment of the present application further provides a virtual reality system 700. The virtual reality system 700 includes one or more processors 710, a memory 720, a plurality of markers 730, a virtual reality device 740, and one or more applications. Each marker 730 includes a light emitting device; the one or more applications are stored in the memory 720, configured to be executed by the one or more processors 710, and configured to perform the methods described in the above embodiments.
Processor 710 may include one or more processing cores. The processor 710 connects the various parts of the virtual reality system 700 through various interfaces and lines, and performs the functions of the system and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 720 and invoking the data stored in the memory 720. Optionally, the processor 710 may be implemented in hardware in at least one of the following forms: Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 710 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU renders and draws display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 710 and may instead be implemented by a separate communication chip.
The memory 720 may include a Random Access Memory (RAM) and/or a Read-Only Memory (ROM). The memory 720 may be used to store instructions, programs, code sets, or instruction sets. The memory 720 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the device in use (such as a phone book, audio and video data, and chat records) and the like.
Referring to fig. 8, a computer-readable storage medium 800 is further provided according to an embodiment of the present application, in which computer program instructions 810 are stored in the computer-readable storage medium 800, and the computer program instructions 810 can be called by a processor to execute the method described in the above embodiment.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 800 has storage space for the computer program instructions 810 that perform any of the method steps described above. The computer program instructions 810 may be read from or written into one or more computer program products.
Although the present application has been described with reference to the preferred embodiments, it is to be understood that the present application is not limited to the disclosed embodiments, but rather, the present application is intended to cover various modifications, equivalents and alternatives falling within the spirit and scope of the present application.

Claims (10)

1. A method for controlling a virtual reality system, the virtual reality system comprising a plurality of markers fixedly disposed in a target space and a virtual reality device, the markers comprising light emitting devices, the method comprising:
acquiring pose information of a virtual reality device in the target space, wherein the pose information comprises position information and attitude information; the position information characterizes a position of the virtual reality device within the target space, and the attitude information characterizes an angle of the virtual reality device relative to a reference plane of the target space;
determining a visual field space based on the pose information, the visual field space representing a space covered by an image acquisition range of the virtual reality device, the range of the visual field space being smaller than the range of the target space;
and controlling the light emitting device of the marker in the visual field space to emit light.
2. The method of claim 1, wherein the virtual reality device comprises an image acquisition module, and wherein determining the view space based on the pose information comprises:
acquiring hardware parameters of the image acquisition module, wherein the hardware parameters comprise at least one of the following parameters of the image acquisition module: an angle of view and a focal length;
determining the view space based on the pose information and the hardware parameters.
3. The method according to claim 2, wherein the pose information of the virtual reality device is pose information in a first coordinate system, and the pose information of the marker in a second coordinate system is pre-stored in the virtual reality system; after the obtaining of the pose information of the virtual reality device in the target space, the method further includes:
converting the pose information of the virtual reality device in the first coordinate system into the pose information of the virtual reality device in the second coordinate system through calibration data, wherein the calibration data comprise a mapping relation between the first coordinate system and the second coordinate system, and at least one coordinate element of the first coordinate system is different from that of the second coordinate system;
the determining the view space based on the pose information and the hardware parameters comprises:
and determining the visual field space based on the pose information of the virtual reality device in the second coordinate system and the hardware parameters.
4. The method according to any one of claims 1 to 3, wherein before the acquiring pose information of a virtual reality device in the target space, the method further comprises:
acquiring a target image through an image acquisition module on the virtual reality device, wherein the target image is acquired by the virtual reality device at a preset position and comprises a designated marker in a plurality of markers;
and acquiring calibration data based on the target image.
5. The method of claim 4, wherein the acquiring calibration data based on the target image comprises:
acquiring image characteristics of the specified marker in the target image, wherein the image characteristics represent characteristic data of at least one characteristic point in the specified marker;
acquiring relative orientation information between the designated marker and the virtual reality device according to the image characteristics and the preset position;
and determining the calibration data based on the relative orientation information.
6. The method according to any one of claims 1 to 3, wherein the controlling the light emission of the light emitting device of the marker in the visual field space comprises:
acquiring distance information between a marker in the visual field space and the virtual reality device;
determining a working parameter based on the distance information;
and the light-emitting device for controlling the marker in the visual field space emits light according to the working parameters.
7. The method of claim 6, wherein the operating parameter comprises a light emission brightness, and wherein determining the operating parameter based on the distance information comprises:
and determining the light-emitting brightness based on the distance information and a preset first mapping relation, wherein the first mapping relation represents a positive correlation between the distance information and the light-emitting brightness.
8. The method of claim 6, wherein the operating parameter comprises a duration of operation, and wherein determining the operating parameter based on the distance information comprises:
and determining the working duration based on the distance information and a preset second mapping relation, wherein the second mapping relation represents a positive correlation between the distance information and the working duration.
9. A control device for a virtual reality system, the virtual reality system comprising a plurality of markers fixedly disposed in a target space and a virtual reality device, the markers comprising light emitting devices, the control device comprising:
a pose information acquisition module, configured to acquire pose information of a virtual reality device in the target space, wherein the pose information comprises position information and attitude information; the position information characterizes a position of the virtual reality device within the target space, and the attitude information characterizes an angle of the virtual reality device relative to a reference plane of the target space;
a visual field space determining module, configured to determine a visual field space based on the pose information, the visual field space representing a space covered by an image acquisition range of the virtual reality device, the range of the visual field space being smaller than the range of the target space;
and the light-emitting control module is used for controlling the light-emitting device of the marker in the visual field space to emit light.
10. A virtual reality system, comprising:
one or more processors;
a memory;
a plurality of markers including a light emitting device;
a virtual reality device;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-8.

Priority Applications (1)

Application Number: CN202210334094.0A
Priority Date: 2022-03-30
Filing Date: 2022-03-30
Title: Virtual reality system control method and device and virtual reality system

Publications (1)

Publication Number: CN114816048A
Publication Date: 2022-07-29

Family

ID: 82532464

Family Applications (1)

Application Number: CN202210334094.0A
Title: Virtual reality system control method and device and virtual reality system
Status: Pending
Priority Date: 2022-03-30
Filing Date: 2022-03-30

Country Status (1)

Country: CN
Publication: CN114816048A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination