CN117197223A - Space calibration method, device, equipment, medium and program - Google Patents

Space calibration method, device, equipment, medium and program

Info

Publication number
CN117197223A
CN117197223A
Authority
CN
China
Prior art keywords
calibration
corner line
space
corner
ceiling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310988948.1A
Other languages
Chinese (zh)
Inventor
闫新阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202310988948.1A priority Critical patent/CN117197223A/en
Publication of CN117197223A publication Critical patent/CN117197223A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present application provides a space calibration method, apparatus, device, medium and program. The method includes: acquiring an environment image of a space, and determining a video see-through image of the space according to the environment image; calibrating the floor and the ceiling in the space according to the position of a virtual ray emitted by a virtual object in the video see-through image; determining the positions of corner lines in the space according to the environment image and the calibration results of the floor and the ceiling; and displaying the calibration lines of the floor and the ceiling and the corner lines in the video see-through image. The method calibrates the space in a semi-automatic manner: the floor and the ceiling are calibrated manually, while the corner lines are calibrated automatically. This simplifies the space calibration workflow and improves calibration efficiency, and because the corner lines are calibrated automatically, deviations in the calibration result caused by the user's hand tremor or visual error are avoided.

Description

Space calibration method, device, equipment, medium and program
Technical Field
Embodiments of the present application relate to the field of artificial intelligence, and in particular to a space calibration method, apparatus, device, medium and program.
Background
Extended Reality (XR) is a collective term for technologies such as Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR), which use a computer to combine the real and the virtual and create a virtual environment that humans can interact with. By integrating the visual interaction technologies of all three, XR gives the experiencer an "immersive" sense of seamless transition between the virtual world and the real world.
In MR applications, room calibration is used to calibrate a room, including calibrating the positions of the floor, the walls and the ceiling in the room, typically by means of rays emitted by the handle of the MR device. The calibration result can be applied to virtual house viewing, virtual furniture placement, virtual decoration and the like, and can also be applied to games.
However, existing calibration methods require the user to calibrate manually; the calibration operations are cumbersome, and the user's hand tremor or visual error lowers the accuracy of the calibration result.
Disclosure of Invention
Embodiments of the present application provide a space calibration method, apparatus, device, medium and program for calibrating a space in a semi-automatic manner: the floor and the ceiling are calibrated manually, while the corner lines are calibrated automatically. This simplifies the space calibration workflow and improves calibration efficiency, and because the corner lines are calibrated automatically, deviations in the calibration result caused by the user's hand tremor or visual error are avoided.
In a first aspect, an embodiment of the present application provides a space calibration method, the method comprising:
acquiring an environment image of a space, and determining a video see-through image of the space according to the environment image;
calibrating the floor and the ceiling in the space according to the position of a virtual ray emitted by a virtual object in the video see-through image;
determining the position of a corner line in the space according to the environment image and the calibration results of the floor and the ceiling;
and displaying the calibration lines of the floor and the ceiling and the corner line in the video see-through image.
In some embodiments, determining the position of the corner line in the space according to the environment image and the calibration results of the floor and the ceiling includes:
detecting, with a line detection algorithm, candidate straight lines in the space that are perpendicular to the floor or the ceiling;
and determining the corner line in the space from the candidate straight lines according to the calibration results of the floor and the ceiling.
In some embodiments, after displaying the calibration lines of the floor and the ceiling and the corner line in the video see-through image, the method further comprises:
replacing a target corner line in the space according to a first user operation; and/or
adding a new corner line in the space according to a second user operation.
In some embodiments, replacing the target corner line in the space according to the first user operation includes:
generating a replacement corner line corresponding to a target corner point according to a calibration operation performed by the user on the target corner point corresponding to the target corner line, and displaying the replacement corner line, where the starting point of the replacement corner line is the target corner point, the target corner point is a corner point formed by the floor and the walls, and the replacement corner line is used to replace the target corner line.
In some embodiments, before generating the replacement corner line corresponding to the target corner line according to the user's calibration operation on the target corner point corresponding to the target corner line, the method further includes:
receiving a delete instruction for the target corner line;
and deleting the target corner line according to the delete instruction.
In some embodiments, replacing the target corner line in the space according to the first user operation includes:
generating the replacement corner line according to a calibration operation performed by the user on the replacement corner line corresponding to the target corner line.
In some embodiments, before generating the replacement corner line according to the user's calibration operation on the replacement corner line corresponding to the target corner line, the method further includes:
receiving a delete instruction for the target corner line;
and deleting the target corner line according to the delete instruction.
In some embodiments, adding a new corner line in the space according to the second user operation includes:
generating a new corner line corresponding to a first corner point according to a calibration operation performed by the user on the first corner point in the space, and displaying the new corner line, where the first corner point is a corner point formed by the floor and the walls.
In some embodiments, the method runs on an extended reality (XR) device on which a mixed reality (MR) application and an MR service run, and the method further comprises:
when a start operation performed by the user on the current application is detected, the MR service acquires category configuration information of the current application, where the category configuration information indicates whether the current application is an MR application;
when the category configuration information indicates that the current application is an MR application, determining to start the MR service;
when the category configuration information indicates that the current application is not an MR application, determining not to start the MR service.
In some embodiments, the MR service acquiring the category configuration information of the current application includes:
the MR service reading the category configuration information of the current application from the manifest file of the current application.
In some embodiments, the MR service acquiring the category configuration information of the current application includes:
starting the current application according to the start operation;
after the current application is started, if the current application is an MR application, the MR application sending the category configuration information of the current application to the MR service;
and the MR service receiving the category configuration information of the current application.
In another aspect, an embodiment of the present application provides a space calibration apparatus, comprising:
an acquisition module, configured to acquire an environment image of a space and determine a video see-through image of the space according to the environment image;
a manual calibration module, configured to calibrate the floor and the ceiling in the space according to the position of a virtual ray emitted by a virtual object in the video see-through image;
an automatic calibration module, configured to determine the position of a corner line in the space according to the environment image and the calibration results of the floor and the ceiling;
and a display module, configured to display the calibration lines of the floor and the ceiling and the corner line in the video see-through image.
In another aspect, an embodiment of the present application provides an XR device, comprising: a processor and a memory, the memory being configured to store a computer program, and the processor being configured to call and run the computer program stored in the memory to perform the method described in any one of the above.
In another aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program causing a computer to perform the method described in any one of the above.
In another aspect, an embodiment of the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the method described in any one of the above.
Embodiments of the present application provide a space calibration method, apparatus, device, medium and program. The method includes: acquiring an environment image of a space, and determining a video see-through image of the space according to the environment image; calibrating the floor and the ceiling in the space according to the position of a virtual ray emitted by a virtual object in the video see-through image; determining the positions of corner lines in the space according to the environment image and the calibration results of the floor and the ceiling; and displaying the calibration lines of the floor and the ceiling and the corner lines in the video see-through image. The method calibrates the space in a semi-automatic manner: the floor and the ceiling are calibrated manually, while the corner lines are calibrated automatically. This simplifies the space calibration workflow and improves calibration efficiency, and because the corner lines are calibrated automatically, deviations in the calibration result caused by the user's hand tremor or visual error are avoided.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic view of a scenario for spatial calibration;
FIG. 2 is a flow chart of a space calibration method according to a first embodiment of the present application;
FIG. 3 is a flow chart of a space calibration method according to a second embodiment of the present application;
FIG. 4 is a flow chart of a space calibration method according to a third embodiment of the present application;
FIG. 5 is a schematic structural diagram of a space calibration device according to a fourth embodiment of the present application;
FIG. 6 is a schematic structural diagram of an XR device according to a fifth embodiment of the present application.
Detailed Description
The following clearly and completely describes the embodiments of the present application with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without inventive effort fall within the protection scope of the present application.
It should be noted that the terms "first", "second" and the like in the description, claims and drawings of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprise", "include" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, product or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, product or device.
To facilitate understanding of the embodiments of the present application, some concepts involved in all embodiments of the present application are first explained, as follows:
The space calibration method provided by the embodiments of the present application can be applied to an XR device, where XR devices include but are not limited to VR devices, AR devices and MR devices.
VR: the technology of creating and experiencing a virtual world. VR computes and generates a virtual environment, a multi-source-information simulation of fused, interactive three-dimensional dynamic views and entity behaviors (the virtual reality referred to here includes at least visual perception, and may also include auditory perception, tactile perception, motion perception, and even gustatory perception, olfactory perception and the like). It immerses the user in the simulated virtual environment and is applied in a variety of virtual environments such as maps, games, video, education, medical treatment, simulation, collaborative training, sales, manufacturing assistance, and maintenance and repair.
AR: an AR setting refers to a simulated setting in which at least one virtual object is superimposed over a physical setting or a representation thereof. For example, an electronic system may have an opaque display and at least one imaging sensor for capturing images or video of the physical setting, which are representations of the physical setting. The system combines the images or video with virtual objects and displays the combination on the opaque display. An individual uses the system to view the physical setting indirectly via the images or video of the physical setting and to observe the virtual objects superimposed over the physical setting. When the system captures images of the physical setting using one or more image sensors and presents the AR setting on the opaque display using those images, the displayed images are called video pass-through. Alternatively, the electronic system for displaying the AR setting may have a transparent or translucent display through which the individual can view the physical setting directly. The system may display virtual objects on the transparent or translucent display so that the individual observes the virtual objects superimposed over the physical setting. As another example, the system may include a projection system that projects virtual objects into the physical setting, for example onto a physical surface or as a hologram, so that the individual observes the virtual objects superimposed over the physical setting. More specifically, AR is a technology that, while a camera captures images, computes the camera's pose parameters in the real world (or three-dimensional world) in real time and adds virtual elements to the captured images according to those pose parameters. Virtual elements include but are not limited to images, video and three-dimensional models. The goal of AR technology is to overlay the virtual world on the real world on a screen and allow interaction with it.
MR: by presenting virtual scene information in a real scene, MR builds an interactive feedback loop among the real world, the virtual world and the user, to enhance the realism of the user experience. For example, computer-created sensory input (e.g., virtual objects) is integrated with sensory input from the physical setting or a representation thereof in a simulated setting; in some MR settings, the computer-created sensory input may adapt to changes in the sensory input from the physical setting. In addition, some electronic systems for presenting MR settings may monitor orientation and/or position relative to the physical setting so that virtual objects can interact with real objects (i.e., physical elements from the physical setting or representations thereof). For example, the system may monitor motion so that a virtual plant appears stationary relative to a physical building.
A virtual reality device is a terminal that realizes virtual reality effects. It may be provided in the form of glasses, a head-mounted display (Head Mounted Display, HMD) or contact lenses, to realize visual perception and other forms of perception. The form of the virtual reality device is not limited to these, and it can be further miniaturized or enlarged according to actual needs.
Optionally, the virtual reality device (i.e., XR device) described in the embodiments of the present application may include, but is not limited to, the following types:
1) A mobile virtual reality device, which supports mounting a mobile terminal (such as a smartphone) in various ways (such as a head-mounted display provided with a dedicated card slot). Connected to the mobile terminal by wire or wirelessly, the mobile terminal performs the computation related to the virtual reality function and outputs data to the mobile virtual reality device, for example to watch virtual reality video through an APP on the mobile terminal.
2) An all-in-one virtual reality device, which has a processor for performing the computation related to virtual reality functions and therefore has independent virtual reality input and output capabilities. It needs no connection to a PC or mobile terminal and offers a high degree of freedom in use.
3) A PC-side virtual reality (PCVR) device, which uses the PC to perform the computation related to the virtual reality function and the data output; the external PCVR device uses the data output by the PC to realize the virtual reality effect.
A premise for using MR applications is room calibration. Room calibration includes calibration of the space and calibration of the furniture in the space, where the space is a three-dimensional space consisting of the floor, the walls and the ceiling, and space calibration includes calibrating the floor, the walls and the ceiling. Typically, the furniture in the space is calibrated after the space calibration is completed.
The XR device has a video see-through (VST) function. VST technology captures a real-time view of the surrounding environment through the cameras of the head-mounted device (i.e., the XR device) and then, combined with computer processing, presents it on an opaque display, giving the human eye the sensation of seeing the surrounding real world directly through the headset. It is therefore also called a see-through function, and it increases the user's perception of the surrounding environment.
When a user wears the head-mounted device indoors and performs space calibration with the VST function enabled, the user can see the calibration process and the calibration result in real time, which enhances interactivity.
At present, space calibration is performed by ray calibration, which means calibrating with a virtual ray emitted by a virtual controller. The virtual controller is usually controlled by the user; for example, the user moves the virtual controller through a physical controller (such as a handle), and the virtual ray moves with the virtual controller. The starting point of the virtual ray is the position of the virtual controller, and its end point is the wall, floor, ceiling or other surface to be calibrated.
FIG. 1 is a schematic view of a space calibration scene. As shown in FIG. 1, the user wears a head-mounted device whose camera collects environment data of the space and generates a corresponding video see-through image. Using the handle, the user moves the end point of the virtual ray in the video see-through image to each position to be calibrated, forming a calibration frame for the floor, a calibration frame for the ceiling, and a calibration frame for each wall. Each calibration frame is generally rectangular, and adjacent calibration frames are connected so that a closed space is finally formed.
Taking floor calibration as an example, in one exemplary manner, forming a calibration frame requires the following three steps: move the end point of the virtual ray to a corner of the floor and click to confirm, forming the first calibration point; control the virtual ray to draw a line and click to confirm, forming the second calibration point; and pull out a plane and click to confirm, forming the third calibration point, where the edge from the second calibration point to the third calibration point is perpendicular to the straight line formed by the first and second calibration points, so that a rectangular calibration frame is formed.
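As a geometric illustration of these three steps (not code from the patent itself): once the first two calibration points fix one edge and the third point is pulled out from the second, the fourth corner of the rectangular frame follows by vector addition. A minimal Python sketch, in which all names and the projection step are illustrative assumptions:

```python
import numpy as np

def complete_rectangle(p1, p2, p3):
    """Complete a rectangular calibration frame from three calibration points.

    p1 and p2 are the first two confirmed points (one edge of the frame);
    p3 is the point pulled out from p2. The offset p2->p3 is first projected
    so it is perpendicular to p1->p2, then the fourth corner is p1 + offset.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    edge = (p2 - p1) / np.linalg.norm(p2 - p1)
    # Drop any component of p2->p3 along the first edge to keep the frame rectangular.
    offset = (p3 - p2) - np.dot(p3 - p2, edge) * edge
    return p1, p2, p2 + offset, p1 + offset  # four corners in cyclic order

corners = complete_rectangle((0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (4.1, 3.0, 0.0))
```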
At present, in the space calibration process, the floor, the ceiling and the walls all need to be calibrated manually by the user. Manual calibration means that the user must interact during the calibration process and that the calibration is completed according to the user's operations. After the walls are calibrated manually, the intersection line of two adjacent walls is a corner line, and the position of the corner line is obtained from the wall calibration operations. However, the manual calibration process is cumbersome, and the user's hand tremor or visual error during manual calibration makes the calibration result inaccurate.
To solve the problems in the prior art, an embodiment of the present application provides a space calibration method. FIG. 2 is a flowchart of the space calibration method provided in the first embodiment of the present application; the method is applied to an XR device. As shown in FIG. 2, the method provided in this embodiment includes the following steps.
S101, acquiring an environment image of the space, and determining a video see-through image of the space according to the environment image.
The camera of the head-mounted device collects the environment image of the space in real time, i.e., an image of the real environment in which the space is located, and the video see-through image of the space is determined from the environment image. The environment image is a 2D image, and the video see-through image is a 3D image obtained after processing; the head-mounted device displays the video see-through image to the user on its display, so that the user can see the outside world.
S102, calibrating the floor and the ceiling in the space according to the position of the virtual ray emitted by the virtual object in the video see-through image.
The user can calibrate the floor and the ceiling manually, where manual calibration means that the user must interact during the calibration process and that the calibration is completed according to the user's operations.
In an exemplary embodiment, a virtual object and the virtual ray it emits are displayed in the video see-through image, and the user calibrates the floor and the ceiling in the space with the virtual ray. The virtual object includes but is not limited to a virtual controller, a virtual hand model, or a virtual hand model plus a virtual controller (presented, for example, as the virtual hand model holding the virtual controller). The starting point of the virtual ray is the position of the virtual object, and the virtual ray moves with the virtual object. The movement of the virtual object is operated by the user and can equally be understood as the movement of the virtual ray; the user can control it through one or more of a physical controller of the XR device (such as a handle), voice interaction, gesture interaction and eye tracking.
Taking floor calibration as an example: when the end point of the virtual ray moves to a corner of the floor, the first calibration point is formed according to a confirmation instruction. The end point of the virtual ray is then moved further until it reaches the next corner of the floor; while the end point moves, a straight line is rendered in the video see-through image along the movement track of the end point (from the user's perspective, a line is being pulled out), and the second calibration point is formed according to a confirmation instruction. The user continues to move the end point of the virtual ray toward the next corner of the floor. Since the first and second calibration points already form a straight line, a plane is now rendered in the video see-through image, expanding or contracting as the end point moves (from the user's perspective, a plane is being pulled out, similar to pulling out the line formed by the first and second calibration points). When the end point of the virtual ray reaches the position of the third calibration point, the third calibration point, and with it the fourth, is formed according to the confirmation instruction for the third calibration point.
The XR device estimates the 3D coordinates of the end point of the virtual ray from the inertial measurement unit (IMU) data of the controller and the coordinates of the start and end points of the virtual ray in the image. The 3D coordinates of the end point of the virtual ray are the 3D coordinates, in the world coordinate system, of the calibration point corresponding to that end point; in this way the 3D coordinates of every calibration point of the floor and the ceiling are obtained.
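One common way to turn a confirmed ray pose into a 3D calibration point is to intersect the ray with the plane being calibrated. The patent does not spell this computation out, so the following Python sketch is only an assumption about how such a point could be derived from a ray origin and direction (for example, obtained from IMU-based pose tracking) and the target plane:

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect a virtual ray with a plane (e.g., the floor being calibrated).

    Returns the 3D calibration point in world coordinates, or None if the
    ray is parallel to the plane or points away from it. Illustrative only;
    all parameter names are assumptions.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:  # ray parallel to the plane
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction if t > 0 else None

# A ray from a hand-held controller hitting a horizontal floor plane at z = 0.
point = ray_plane_intersection((0.2, 0.1, 1.5), (0.1, 0.6, -1.0), (0, 0, 0), (0, 0, 1))
```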
S103, determining the position of the corner line in the space according to the environment image and the calibration results of the floor and the ceiling.
The calibration results of the floor and the ceiling are the 3D coordinates of their calibration points; the floor and the ceiling are calibrated mainly to determine the height of the floor and the height of the ceiling.
In this embodiment, the corner lines are calibrated automatically. In contrast to manual calibration, automatic calibration means that no user operation is required during the calibration process; the calibration result is generated automatically from the environment image. The position of a corner line can be represented by the positions of its two end points, which are also referred to as the corner points corresponding to the corner line. That is, the position of a corner line is represented by its two corner points, and determining a corner line also determines the positions of the corner points corresponding to it.
In one exemplary manner, a line detection algorithm is used to detect candidate straight lines in the space that are perpendicular to the floor or the ceiling, and the corner line in the space is determined from the candidate lines according to the calibration results of the floor and the ceiling.
Line detection is a classic low-level vision task, and the embodiments of the present application may use any existing line detection algorithm to detect the lines in the space from images of the space. Commonly used line detection algorithms fall into two categories: traditional algorithms and deep learning algorithms.
Traditional algorithms include but are not limited to the Hough transform and Line Segment Detection (LSD). LSD is a "perceptual grouping" method that relies on carefully designed image features and detection strategies; its accuracy and algorithmic complexity compare favorably with Hough line detection.
Deep learning algorithms include but are not limited to the Wireframe network and the Line Convolutional Neural Network (L-CNN).
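For illustration, a traditional detector of the kind listed above can be run with OpenCV. The sketch below uses the probabilistic Hough transform (OpenCV also ships an LSD detector, cv2.createLineSegmentDetector, in some builds); it is not the patent's implementation, only an example of the detection step, and all parameter values are assumptions:

```python
import cv2
import numpy as np

def detect_line_segments(image_path):
    """Detect straight line segments in an environment image (illustrative).

    Runs Canny edge detection followed by the probabilistic Hough transform;
    each returned segment is (x1, y1, x2, y2) in image coordinates.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=60, maxLineGap=5)
    return [] if segments is None else [tuple(s[0]) for s in segments]
```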
The XR device may use one or more environment images to detect the lines in the space. The detected lines may include corner lines, top corner lines, skirting lines, furniture edges, door and window edges, and so on. A corner line is the straight line between two walls, a skirting line is the straight line between the floor and a wall, and a top corner line is the straight line between a wall and the ceiling.
The starting point of a corner line is on the floor and its end point is on the ceiling; that is, a corner line is perpendicular to the floor and the ceiling. Lines perpendicular to the floor or the ceiling can therefore be selected as candidate lines from the lines detected in the space. The corner lines are then determined from the candidate lines according to the floor height and the ceiling height: the length of a corner line equals the height difference between the floor and the ceiling, so lines whose length is equal or close to this height difference are first selected from the candidates, and whether a given line is a corner line is then determined from the relative positional relationships between the lines.
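A possible shape for this filtering step, assuming the detected segments have already been lifted to 3D world coordinates (the patent does not specify how) with the floor at height floor_z and the ceiling at ceil_z; every name and tolerance below is a hypothetical choice:

```python
import numpy as np

def corner_line_candidates(segments_3d, floor_z, ceil_z,
                           vert_tol=0.05, len_tol=0.10):
    """Filter 3D line segments down to corner-line candidates.

    Keeps segments that are nearly vertical (perpendicular to the floor and
    the ceiling) and whose length is close to the floor-to-ceiling height
    difference obtained from the manual calibration.
    """
    height = abs(ceil_z - floor_z)
    candidates = []
    for a, b in segments_3d:
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        length = np.linalg.norm(b - a)
        if length < 1e-6:
            continue
        d = (b - a) / length
        vertical = abs(abs(d[2]) - 1.0) < vert_tol   # direction close to (0, 0, ±1)
        full_height = abs(length - height) < len_tol * height
        if vertical and full_height:
            candidates.append((tuple(a), tuple(b)))
    return candidates
```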
There are multiple corner lines in a space: a roughly rectangular space has four corner lines, while a space with additional corners has six or more.
It will be appreciated that this is by way of example only, and that other ways of detecting corner lines in space may be employed.
S104, displaying the calibration lines of the floor and the ceiling and the corner lines in the video see-through image.
In this embodiment, the XR device generates and displays the video see-through image of the space in real time from the environment image collected by the camera, and displays the calibration result in real time during the calibration process. After the user calibrates the ceiling and the floor, their calibration lines are displayed in the video see-through image. At this point the user does not need to calibrate the corner lines manually: the XR device automatically generates the corner lines from the environment image and displays them in the video see-through image. The calibration lines of the floor and the ceiling and the corner lines connect to form a closed space, completing the calibration of the space.
In this embodiment, an environment image of a space is acquired, and a video see-through image of the space is determined according to the environment image; the floor and the ceiling in the space are calibrated according to the position of the virtual ray emitted by the virtual object in the video see-through image; the position of the corner line in the space is determined according to the environment image and the calibration results of the floor and the ceiling; and the calibration lines of the floor and the ceiling and the corner line are displayed in the video see-through image. The method calibrates the space in a semi-automatic manner: the floor and the ceiling are calibrated manually, while the corner lines are calibrated automatically. This simplifies the space calibration workflow and improves calibration efficiency, and because the corner lines are calibrated automatically, deviations in the calibration result caused by the user's hand tremor or visual error are avoided.
On the basis of the first embodiment, optionally, after the corner lines are generated automatically, if an automatically generated corner line is inaccurate, the target corner line in the space can be replaced according to a first user operation, where the target corner line is the corner line that needs to be replaced. Alternatively, if a corner line is missing from the automatically generated corner lines, a new corner line can be added to the space according to a second user operation. The first user operation and the second user operation do not each denote a single operation and may include a series of ordered operations.
FIG. 3 is a flowchart of the space calibration method provided in the second embodiment of the present application. As shown in FIG. 3, the method provided in this embodiment includes the following steps.
S201, acquiring an environment image of the space, and determining a video see-through image of the space according to the environment image.
S202, calibrating the floor and the ceiling in the space according to the position of the virtual ray emitted by the virtual object in the video see-through image.
S203, determining the position of the corner line in the space according to the environment image and the calibration results of the floor and the ceiling.
S204, displaying the calibration lines of the floor and the ceiling and the corner lines in the video see-through image.
S205, receiving a delete instruction for the target corner line, and deleting the target corner line according to the delete instruction.
The user judges whether the positions of the corner lines are accurate from the corner lines displayed in the video see-through image. If the user finds that the position of a corner line is inaccurate, the user can select that corner line; the selected corner line is the target corner line. The user then deletes the target corner line, and the XR device generates a delete instruction according to the delete operation.
Deleting the target corner line includes not only removing the target corner line displayed in the video see-through image but also deleting its calibration result, i.e., deleting the position information of the target corner line stored in the map of the space.
For example, the user moves the cursor to the target corner line and clicks it to select it; when the user clicks it again, a delete option is displayed (for example, a prompt asking whether to delete). When the user selects "yes", the target corner line is deleted, and after deletion it is no longer displayed in the video see-through image.
S206, generating a replacement corner line corresponding to the target corner point according to the user's calibration operation on the target corner point, and displaying the replacement corner line, where the replacement corner line is used to replace the target corner line.
After the target corner line is deleted, a replacement corner line can be generated according to a user operation. The user controls the end point of the virtual ray to move to the target corner point, which is the intended starting position of the replacement corner line. When a confirmation instruction input by the user is received, the 3D coordinates of the target corner point are determined, and a replacement corner line is generated with the target corner point as its starting point, the direction of the replacement corner line being perpendicular to the floor and the ceiling.
It should be noted that a corner point in the embodiments of the present application is the intersection point formed by the floor and two walls, i.e., a corner point on the floor; it can also be the intersection point formed by the ceiling and two walls, i.e., a corner point on the ceiling. The floor and the ceiling are two parallel planes, and a corner line has two end points: one is a corner point on the floor, and the other is a corner point on the ceiling.
When generating a corner line, the user can calibrate either a corner point on the floor or a corner point on the ceiling. If the user calibrates a corner point on the floor, the replacement corner line is generated with that corner point as its starting point, and its end point lies on the ceiling. If the user calibrates a corner point on the ceiling, the replacement corner line is generated with that corner point as its starting point, and its end point lies on the floor.
Optionally, after the user inputs the confirmation instruction for the target corner point, the generation of the replacement corner line is displayed dynamically in the video see-through image: starting from the target corner point, the replacement corner line extends upward, perpendicular to the floor, to the ceiling, or extends downward, perpendicular to the ceiling, to the floor.
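To illustrate the geometry of this step (an assumption, not code from the patent): given a confirmed corner point and the calibrated floor and ceiling heights, the replacement corner line is a vertical segment from the point to the opposite plane. A minimal Python sketch with hypothetical names:

```python
def replacement_corner_line(corner_point, floor_z, ceil_z):
    """Generate a replacement corner line from a user-calibrated corner point.

    The line starts at the corner point and runs perpendicular to the floor
    and the ceiling: a point on the floor extends up to the ceiling height,
    and a point on the ceiling extends down to the floor height.
    """
    x, y, z = corner_point
    on_floor = abs(z - floor_z) < abs(z - ceil_z)
    end_z = ceil_z if on_floor else floor_z
    return (x, y, z), (x, y, end_z)  # (start corner point, end point)

# A corner point confirmed on the floor of a room 2.6 m high.
line = replacement_corner_line((3.0, 0.0, 0.0), floor_z=0.0, ceil_z=2.6)
```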
In this embodiment, the user deletes the target corner line before calibrating the target corner point. Optionally, in other embodiments of the present application, the user may instead not delete the target corner line before calibrating the target corner point; the target corner line can then be deleted automatically after the replacement corner line corresponding to the target corner point is generated.
Optionally, the replacement corner line may be displayed differently from the automatically generated corner lines; for example, the replacement corner line may be a green line while the automatically generated corner lines are yellow.
Before the corner line is replaced, the calibrated floor, ceiling and corner lines form a closed space; likewise, after the replacement corner line is generated, it connects with the adjacent calibration lines in the space to form a closed space.
S207, generating a new corner line corresponding to the first corner point according to the user's calibration operation on the first corner point in the space, and displaying the new corner line.
The user may add a new corner line because some corner lines were not automatically identified, due to inaccurately captured environment images or problems with the detection algorithm. After the user calibrates the first corner point, the new corner line corresponding to the first corner point is generated automatically. For the calibration of the first corner point and the generation of the new corner line, refer to the calibration of the target corner point and the generation of the replacement corner line in step S206, which are not repeated here.
In the actual calibration process, it is possible to perform only steps S205 and S206 without step S207, to perform only step S207 without steps S205 and S206, or to perform steps S205, S206 and S207 all together.
In this embodiment, after the corner lines of the space are generated automatically, the target corner line in the space can be replaced according to a first user operation, and/or a new corner line can be added to the space according to a second user operation. This allows the corner lines in the space to be adjusted, so that the finally calibrated corner lines are more accurate.
Optionally, in other embodiments of the present application, step S206 may be replaced by step S206'.
S206', generating the replacement corner line according to the user's calibration operation on the replacement corner line corresponding to the target corner line.
In step S206, the user calibrates a corner point, and a corner line with that corner point as its starting point can be generated automatically from the corner point calibrated by the user. In this embodiment, the user is instead required to calibrate a corner line.
The user controls the end point of the virtual ray to move to a second corner point on the floor, which is the intended starting position of the replacement corner line, and inputs a confirmation instruction. The user then continues to move the end point of the virtual ray to draw a straight line; when the end point of the virtual ray reaches a third corner point on the ceiling, the user inputs a confirmation instruction, and a replacement corner line with the second corner point as its starting point and the third corner point as its end point is generated.
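For this manual variant, a sketch (again with hypothetical names) that builds the replacement corner line from the two confirmed corner points and rejects a line that is clearly not perpendicular to the floor and the ceiling:

```python
import numpy as np

def corner_line_from_points(p_floor, p_ceiling, vert_tol=0.05):
    """Build a replacement corner line from a floor corner point and a
    ceiling corner point confirmed by the user (illustrative check only)."""
    a = np.asarray(p_floor, dtype=float)
    b = np.asarray(p_ceiling, dtype=float)
    d = (b - a) / np.linalg.norm(b - a)
    if abs(abs(d[2]) - 1.0) > vert_tol:  # not vertical enough to be a corner line
        raise ValueError("calibrated line is not perpendicular to floor/ceiling")
    return tuple(a), tuple(b)

line = corner_line_from_points((3.0, 0.0, 0.0), (3.02, 0.01, 2.6))
```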
On the basis of the first and second embodiments, the third embodiment of the present application provides a space calibration method; this embodiment describes the complete process from starting the MR application to the end of space calibration. FIG. 4 is a flowchart of the space calibration method provided in the third embodiment of the present application. As shown in FIG. 4, the method provided in this embodiment includes the following steps.
S301, the user opens an application.
On the XR device, not only 3D applications but also 2D applications may run. A 2D application is a conventional application that runs on an electronic device such as a mobile phone, computer or tablet, and the images it displays to the user are 2D images, for example conventional video playing applications, short video applications, mobile games and computer games. A 3D application presents 3D images to the user and can provide the user with a virtual scene, a real scene, or a hybrid scene in which virtual and real scenes are superimposed, all in 3D. 3D applications include MR applications, which use the VST function to calibrate the real-scene space and use the space calibration result for games or other functions.
S302, the MR service judges whether the application is an MR application.
MR applications and MR services run on the XR device. The MR service acquires and processes basic data provided by system services, manages the relevant business logic and data of MR applications (such as data isolation, persistent storage and lifecycle management), and provides business support for the software development kit (SDK) layer of the game engine.
The game engine is the development tool for MR applications; Unity or Unreal Engine (UE) may be used, and a developer uses the game engine to develop MR applications with the semi-automatic calibration function.
Specifically, after detecting the user's start operation on the current application, the MR service acquires the category configuration information of the current application, where the category configuration information indicates whether the current application is an MR application. The MR service determines from the category configuration information whether the current application is an MR application. If it is an MR application, the MR service is started and step S303 is executed; if it is not an MR application, it is determined that the MR service is not started, and step S304 is executed.
In one implementation, the category configuration information of the current application is located in the manifest file of the current application, and the MR service reads the category configuration information of the current application from the manifest file.
The manifest file is the AndroidManifest.xml file; every Android application must have one, located in the app/manifest directory.
The game engine writes the category configuration information of the application into the AndroidManifest.xml file. When the user's start operation on the current application is detected, for example a click on the application icon of the current application, the current application notifies the MR service through the runtime, and once the MR service learns that the current application has been started, it reads the category configuration information of the current application from the AndroidManifest.xml file.
In another implementation, the current application is started according to the start operation; after the current application has started, if it is an MR application, the MR application sends its category configuration information to the MR service, and the MR service receives it.
In the former way, the MR service can acquire the category configuration information of the current application earlier, that is, before the current application is started; in the latter way, the category configuration information can only be acquired after the current application has started. The earlier the MR service obtains the category configuration information, the earlier it can determine whether to start. Since starting the MR service takes a certain amount of time, starting it earlier allows the MR application to run sooner and reduces the user's waiting time.
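As an illustration of this decision flow (the manifest meta-data key, the file path and the helper names below are all hypothetical; the patent does not disclose them), an MR-service-side check could look like:

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def read_category(manifest_path):
    """Read the app's category configuration from its AndroidManifest.xml.

    Assumes a hypothetical <meta-data android:name="app.category" .../>
    entry; the actual key used by the implementation is not disclosed.
    """
    root = ET.parse(manifest_path).getroot()
    for meta in root.iter("meta-data"):
        if meta.get(ANDROID_NS + "name") == "app.category":
            return meta.get(ANDROID_NS + "value")
    return None

def should_start_mr_service(manifest_path):
    """Start the MR service only when the launched app is an MR application."""
    return read_category(manifest_path) == "MR"
```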
S303, creating or loading a calibration space.
If the current application is an MR application, calibration can begin after the MR application and the MR service are started. If the space has not been calibrated, a calibration space is created; creating the calibration space includes creating an identity (ID) for it, and the ID of the calibration space is the map ID of the calibration space. If the space has already been calibrated, the calibration space is loaded; the loaded calibration space includes the map ID of the space and the calibration result.
S304, the MR service does nothing.
If the current application is not an MR application, the MR service is not started and only the current application is started; therefore the MR service does nothing.
S305, selecting a calibration scene and a calibration mode.
The MR application may provide multiple calibration scenes for the user to choose from; for example, different family members under the same account may select different calibration scenes, and the calibration results for the same space may differ between calibration scenes. After the calibration scene is selected, a calibration mode can be selected; the calibration modes include semi-automatic calibration and manual calibration. In the following steps, S306-S310 are the semi-automatic calibration process, and S306'-S312' are the manual calibration process.
S306, starting semi-automatic calibration.
S307, calibrating the floor.
S308, calibrating the ceiling.
S309, automatically generating corner lines.
S310, stopping semi-automatic calibration.
Steps S306-S310 are the semi-automatic calibration process provided in the embodiment of the present application, in which the floor and the ceiling are calibrated manually and the corner lines are calibrated automatically. For the specific implementation, refer to the descriptions of the foregoing embodiments, which are not repeated here.
S306', starting manual calibration.
S307', calibrating the floor.
S308', calibrating the ceiling.
S309', calibrating the wall surface.
S310', calibrating corners.
S311', judging whether the calibration result is accurate.
If the calibration result is accurate, step S312' is executed; if the calibration result is not accurate, the process may return to step S306' for recalibration.
S312', stopping calibration.
S306'-S312' are the manual calibration flow. In manual calibration, the floor, the ceiling and the walls must all be calibrated manually by the user. After the walls are calibrated, the positions of the corners are also uniquely determined. After the corners are calibrated, the user can judge whether the calibration result is accurate from the positions of the calibration lines displayed in the video see-through image; if the user considers the calibration result inaccurate, calibration can be restarted, and if the user considers it accurate, calibration ends.
To facilitate better implementation of the space calibration method of the embodiments of the present application, an embodiment of the present application also provides a space calibration apparatus. FIG. 5 is a schematic structural diagram of the space calibration apparatus provided in the fourth embodiment of the present application. As shown in FIG. 5, the space calibration apparatus 100 may include:
an acquisition module 11, configured to acquire an environment image of a space and determine a video see-through image of the space according to the environment image;
a manual calibration module 12, configured to calibrate the floor and the ceiling in the space according to the position of the virtual ray emitted by the virtual object in the video see-through image;
an automatic calibration module 13, configured to determine the position of the corner line in the space according to the environment image and the calibration results of the floor and the ceiling;
and a display module 14, configured to display the calibration lines of the floor and the ceiling and the corner line in the video see-through image.
In some embodiments, the automatic calibration module 13 is specifically configured to:
detecting, with a line detection algorithm, candidate straight lines in the space that are perpendicular to the floor or the ceiling;
and determining the corner line in the space from the candidate straight lines according to the calibration results of the floor and the ceiling.
In some embodiments, the apparatus 100 further comprises an adjustment module, configured to:
replace a target corner line in the space according to a first user operation; and/or
add a new corner line in the space according to a second user operation.
In some embodiments, the adjustment module is specifically configured to:
generate a replacement corner line corresponding to a target corner point according to a calibration operation performed by the user on the target corner point corresponding to the target corner line, and display the replacement corner line, where the starting point of the replacement corner line is the target corner point, the target corner point is a corner point formed by the floor and the walls, and the replacement corner line is used to replace the target corner line.
In some embodiments, the adjustment module is further configured to:
before generating the replacement corner line corresponding to the target corner line according to the user's calibration operation on the target corner point corresponding to the target corner line, receive a delete instruction for the target corner line, and delete the target corner line according to the delete instruction.
In some embodiments, the adjustment module is specifically configured to:
generate the replacement corner line according to a calibration operation performed by the user on the replacement corner line corresponding to the target corner line.
In some embodiments, the adjustment module is further configured to:
before generating the replacement corner line according to the user's calibration operation on the replacement corner line corresponding to the target corner line, receive a delete instruction for the target corner line, and delete the target corner line according to the delete instruction.
In some embodiments, the adjustment module is specifically configured to:
generate a new corner line corresponding to a first corner point according to a calibration operation performed by the user on the first corner point in the space, and display the new corner line, where the first corner point is a corner point formed by the floor and the walls.
In some embodiments, the method runs on an extended reality (XR) device on which a mixed reality (MR) application and an MR service run;
the MR service is configured to acquire category configuration information of the current application after detecting the user's start operation on the current application, where the category configuration information indicates whether the current application is an MR application;
when the category configuration information indicates that the current application is an MR application, it is determined to start the MR service;
when the category configuration information indicates that the current application is not an MR application, it is determined not to start the MR service.
In some embodiments, the MR service is specifically configured to: read the category configuration information of the current application from the manifest file of the current application.
In some embodiments, the MR service is specifically configured to: receive the category configuration information of the current application sent by the MR application, where the category configuration information of the current application is sent after the MR application has started.
It should be understood that apparatus embodiments and method embodiments may correspond with each other and that similar descriptions may refer to the method embodiments. To avoid repetition, no further description is provided here.
The apparatus 100 of the embodiment of the present application is described above from the perspective of functional modules with reference to the accompanying drawings. It should be understood that the functional modules may be implemented in hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments of the present application may be completed by integrated logic circuits of hardware in a processor and/or instructions in software form, and the steps of the methods disclosed in the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. Optionally, the software modules may be located in a storage medium that is mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads information from the memory and completes the steps of the above method embodiments in combination with its hardware.
An embodiment of the present application further provides an XR device. Fig. 6 is a schematic diagram of an XR device provided by the fifth embodiment of the present application. As shown in Fig. 6, the XR device 200 may comprise:
a memory 21 and a processor 22, the memory 21 being adapted to store a computer program and to transfer the program code to the processor 22. In other words, the processor 22 may call and run a computer program from the memory 21 to implement the method in an embodiment of the present application.
For example, the processor 22 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the present application, the processor 22 may include, but is not limited to:
a general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the present application, the memory 21 includes, but is not limited to:
volatile memory and/or nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be divided into one or more modules, which are stored in the memory 21 and executed by the processor 22 to complete the methods provided by the present application. The one or more modules may be a series of computer program instruction segments capable of completing particular functions, and the instruction segments are used to describe the execution process of the computer program in the XR device.
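Purely as an illustration of this split (the module names mirror the functional modules of apparatus 100 described above; the wiring is an assumption, not code from the patent):

```python
from typing import Any, Callable, List

class SpaceCalibrationProgram:
    """Hypothetical division of the computer program into instruction
    segments stored in memory and executed in turn by the processor."""

    def __init__(self, modules: List[Callable[[Any], Any]]):
        # e.g. acquisition, manual calibration, automatic calibration, display
        self.modules = modules

    def run(self, context: Any) -> Any:
        # Each segment completes its function and hands its result on.
        for module in self.modules:
            context = module(context)
        return context
```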
As shown in fig. 6, the XR device may further comprise: a transceiver 23, the transceiver 23 being connectable to the processor 22 or the memory 21.
The processor 22 may control the transceiver 23 to communicate with other devices, and in particular, may send information or data to other devices or receive information or data sent by other devices. The transceiver 23 may include a transmitter and a receiver. The transceiver 23 may further include antennas, the number of which may be one or more.
It will be appreciated that, although not shown in Fig. 6, the XR device 200 may further include a camera module, a Wi-Fi module, a positioning module, a Bluetooth module, a display, a controller, and the like, which are not described in detail herein.
It will be appreciated that the various components in the XR device are connected by a bus system comprising, in addition to a data bus, a power bus, a control bus and a status signal bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, embodiments of the present application also provide a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of the method embodiments described above.
The present application also provides a computer program product comprising a computer program stored in a computer-readable storage medium. A processor of the XR device reads the computer program from the computer-readable storage medium and executes it, so that the XR device performs the corresponding flow of the space calibration method in the embodiments of the present application; for brevity, details are not repeated here.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional modules in various embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The foregoing is merely a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method of spatial calibration, comprising:
acquiring an environmental image of a space, and determining a video perspective image of the space according to the environmental image of the space;
calibrating the ground and the ceiling in the space according to the position of the virtual ray sent by the virtual object in the video perspective image;
determining the position of a corner line in the space according to the environmental image and the calibration results of the ground and the ceiling;
and displaying the calibration lines of the ground and the ceiling and the corner line in the video perspective image.
2. The method of claim 1, wherein the determining the position of the corner line in the space according to the environmental image and the calibration results of the ground and the ceiling comprises:
detecting candidate straight lines perpendicular to the ground or the ceiling in the space by adopting a straight line detection algorithm;
and determining the corner line in the space from the candidate straight lines according to the calibration results of the ground and the ceiling.
3. The method of claim 1 or 2, further comprising, after displaying the calibration lines of the ground and the ceiling and the corner line in the video perspective image:
replacing a target corner line in the space according to a first user operation; and/or
adding a new corner line in the space according to a second user operation.
4. The method of claim 3, wherein the replacing a target corner line in the space according to a first user operation comprises:
generating a replacement corner line corresponding to the target corner line according to a calibration operation of a user on a target corner point corresponding to the target corner line, and displaying the replacement corner line, wherein a starting point of the replacement corner line is the target corner point, the target corner point is a corner point formed by the ground and the wall surface, and the replacement corner line is used for replacing the target corner line.
5. The method of claim 4, wherein before generating the replacement corner line corresponding to the target corner line according to the calibration operation of the user on the target corner point corresponding to the target corner line, the method further comprises:
receiving a deletion instruction for the target corner line;
and deleting the target corner line according to the deletion instruction.
6. The method of claim 3, wherein the replacing a target corner line in the space according to a first user operation comprises:
generating the replacement corner line according to a calibration operation of the user on the replacement corner line corresponding to the target corner line.
7. The method of claim 6, wherein before generating the replacement corner line according to the calibration operation of the user on the replacement corner line corresponding to the target corner line, the method further comprises:
receiving a deletion instruction for the target corner line;
and deleting the target corner line according to the deletion instruction.
8. The method of claim 3, wherein the adding a new corner line in the space according to a second user operation comprises:
generating a new corner line corresponding to a first corner point according to a calibration operation of a user on the first corner point in the space, and displaying the new corner line, wherein the first corner point is a corner point formed by the ground and the wall surface.
9. The method of claim 1, wherein the method runs in an extended reality (XR) device on which a mixed reality (MR) application and an MR service run, and the method further comprises:
when the starting operation of a user on the current application is detected, acquiring, by the MR service, category configuration information of the current application, wherein the category configuration information is used for indicating whether the current application is an MR application;
when the category configuration information indicates that the current application is an MR application, determining to start the MR service;
and when the category configuration information indicates that the current application is not an MR application, determining not to start the MR service.
10. The method of claim 9, wherein the acquiring, by the MR service, the category configuration information of the current application comprises:
the MR service reads the category configuration information of the current application from a manifest file of the current application.
11. The method of claim 9, wherein the acquiring, by the MR service, the category configuration information of the current application comprises:
starting the current application according to the starting operation;
after the current application is started, if the current application is an MR application, the MR application sends the category configuration information of the current application to the MR service;
the MR service receives the category configuration information of the current application.
12. A spatial calibration apparatus, the apparatus comprising:
the acquisition module is used for acquiring an environment image of a space and determining a video perspective image of the space according to the environment image of the space;
the manual calibration module is used for calibrating the ground and the ceiling in the space according to the position of the virtual ray sent by the virtual object in the video perspective image;
the automatic calibration module is used for determining the position of a corner line in the space according to the environmental image and the calibration results of the ground and the ceiling;
and the display module is used for displaying the calibration lines of the ground and the ceiling and the corner line in the video perspective image.
13. An extended reality device, comprising:
a processor and a memory for storing a computer program, the processor being for invoking and running the computer program stored in the memory to perform the method of any of claims 1 to 11.
14. A computer readable storage medium storing a computer program for causing a computer to perform the method of any one of claims 1 to 11.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1 to 11.
CN202310988948.1A 2023-08-07 2023-08-07 Space calibration method, device, equipment, medium and program Pending CN117197223A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310988948.1A CN117197223A (en) 2023-08-07 2023-08-07 Space calibration method, device, equipment, medium and program

Publications (1)

Publication Number Publication Date
CN117197223A (en) 2023-12-08

Family

ID=89000626

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination