CN113538700A - Augmented reality device calibration method and device, electronic device and storage medium - Google Patents

Augmented reality device calibration method and device, electronic device and storage medium

Info

Publication number
CN113538700A
Authority
CN
China
Prior art keywords
image
calibration object
calibration
equipment
real scene
Prior art date
Legal status
Pending
Application number
CN202110721295.1A
Other languages
Chinese (zh)
Inventor
李晨
刘浩敏
黄晓鹏
章国锋
Current Assignee
Zhejiang Shangtang Technology Development Co Ltd
Zhejiang Sensetime Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202110721295.1A
Publication of CN113538700A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Abstract

The disclosure relates to a calibration method and apparatus for an augmented reality (AR) device, an electronic device, and a storage medium. The method includes: acquiring a first captured image of a calibration object in a real scene; displaying, in the AR device, a predicted image corresponding to the calibration object according to the first captured image; in response to a moving operation for the AR device, acquiring a second captured image of the calibration object in the real scene when the predicted image matches the calibration object; and calibrating the AR device based on the second captured image. The embodiments of the disclosure can reduce the calibration cost of the AR device and improve calibration efficiency and convenience.

Description

Augmented reality device calibration method and device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a calibration method and apparatus for augmented reality devices, an electronic device, and a storage medium.
Background
Since an Augmented Reality (AR) device can interact with a real scene, calibration is often required to align a virtual object and the real scene. For example, the AR glasses may respectively transmit the virtual image to both eyes of the user to achieve fusion of the virtual image and the real scene. In order to align the virtual image presented in the AR glasses with the real scene, the AR glasses need to be calibrated.
However, in the related art, calibrating an AR device often requires a specially made three-dimensional calibration block, a specially made calibration plate, or additional expensive auxiliary equipment. How to calibrate an AR device conveniently and at low cost has therefore become an urgent problem.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a calibration scheme for augmented reality devices.
According to an aspect of the present disclosure, a calibration method for an augmented reality (AR) device is provided, including:
acquiring a first captured image of a calibration object in a real scene; displaying, in the AR device, a predicted image corresponding to the calibration object according to the first captured image; in response to a moving operation for the AR device, acquiring a second captured image of the calibration object in the real scene when the predicted image matches the calibration object; and calibrating the AR device based on the second captured image.
In a possible implementation manner, the displaying, in the AR device according to the first captured image, a predicted image corresponding to the calibration object includes: determining a pose of the calibration object relative to the AR device according to the first captured image; predicting a target display state of the calibration object in the AR device according to the pose, and generating the predicted image according to the target display state, where the target display state includes a target display position and/or a target display size of the predicted image; and displaying the predicted image in the AR device.
In one possible implementation, the first captured image includes at least two captured images, and determining, according to the first captured image, the pose of the calibration object relative to the AR device includes: determining the pose of the calibration object relative to the AR device by triangulation according to the positions of corner points of the calibration object in the at least two captured images.
In one possible implementation, the first captured image includes at least one captured image, and determining, according to the first captured image, the pose of the calibration object relative to the AR device includes: determining the pose of the calibration object relative to the AR device by a Perspective-n-Point (PnP) method, according to a preset size of the calibration object in combination with the at least one captured image.
In one possible implementation, in a case where the target display state includes the target display position, displaying the predicted image in the AR device includes: fixedly displaying the predicted image at the target display position of the AR device in response to a fixing instruction operation for the AR device.
In a possible implementation manner, acquiring the second captured image of the calibration object in the real scene includes: acquiring a second captured image of the calibration object in the real scene in response to a matching confirmation operation for the AR device; and/or acquiring a second captured image of the calibration object in the real scene when the duration for which the AR device is in a non-moving state reaches a preset time threshold.
In one possible implementation, the predicted image matching the calibration object includes: the calibration object in the predicted image being aligned with the calibration object in the real scene; and/or corner points of the calibration object in the predicted image being aligned with corner points of the calibration object in the real scene.
In one possible implementation, the method further includes: stopping the calibration of the AR device when no moving operation is performed on the AR device and the predicted image matches the calibration object.
According to an aspect of the present disclosure, there is provided a calibration apparatus for an augmented reality device, including:
a first captured image acquiring module, configured to acquire a first captured image of a calibration object in a real scene; a prediction module, configured to display, in the AR device, a predicted image corresponding to the calibration object according to the first captured image; a second captured image acquiring module, configured to acquire, in response to a moving operation for the AR device, a second captured image of the calibration object in the real scene when the predicted image matches the calibration object; and a calibration module, configured to calibrate the AR device based on the second captured image.
In one possible implementation, the prediction module is configured to: determining the pose of the calibration object relative to the AR equipment according to the first acquired image; predicting a target display state of the calibration object in the AR device according to the pose, and generating a predicted image according to the target display state, wherein the target display state comprises a target display position and/or a target display size of the predicted image; displaying the predicted image in the AR device.
In one possible implementation, the first captured image includes at least two captured images; the prediction module is further to: and determining the pose of the calibration object relative to the AR equipment by a triangulation method according to the positions of the corner points of the calibration object in the at least two acquired images.
In one possible implementation, the first captured image includes at least one captured image; the prediction module is further to: and determining the pose of the calibration object relative to the AR equipment by a multi-point perspective imaging method by combining the at least one acquired image according to the preset size of the calibration object.
In one possible implementation, in response to the target display state including a target display position, the prediction module is further to: fixedly displaying the prediction image at the target display position of the AR device in response to a fixedly instructed operation for the AR device.
In one possible implementation manner, the second captured image acquiring module is configured to: responding to the matching confirmation operation aiming at the AR equipment, and acquiring a second acquisition image of the calibration object in a real scene; and/or acquiring a second acquisition image of the calibration object in the real scene under the condition that the duration of the AR device in the non-moving state reaches a preset time threshold.
In one possible implementation, the matching the predicted image and the calibration object includes: the calibration object in the predicted image is mutually aligned with the calibration object in the real scene; and/or, the corner points of the labeled objects in the predicted image are mutually aligned with the corner points of the labeled objects in the real scene.
In one possible implementation, the apparatus is further configured to: and stopping the calibration of the AR equipment when the movement operation is not carried out by the AR equipment and the predicted image is matched with the calibration object.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, a first captured image of the calibration object in the real scene may be acquired, and a predicted image of the calibration object may be displayed in the AR device according to the first captured image, so that, in response to a moving operation for the AR device, a second captured image of the calibration object in the real scene is acquired when the predicted image matches the calibration object, and the AR device is calibrated based on the second captured image. Through this process, the calibration method and apparatus for an AR device, the electronic device, and the storage medium provided in the embodiments of the present disclosure can realize calibration of the AR device simply and conveniently using the calibration object, reducing the calibration cost of the AR device and improving calibration efficiency and convenience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of a calibration method of an AR device according to an embodiment of the present disclosure.
Fig. 2 shows a flow chart of a calibration method of an AR device according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a calibration apparatus of an AR device according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of an application scenario according to the present disclosure.
Fig. 5 shows a schematic diagram of an application example according to the present disclosure.
Fig. 6 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure.
Fig. 7 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a calibration method for an augmented reality (AR) device according to an embodiment of the present disclosure. The method may be applied to a calibration apparatus for an AR device, and the calibration apparatus may be a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like.
In some possible implementations, the calibration method of the AR device may also be implemented by the processor calling computer-readable instructions stored in the memory.
As shown in fig. 1, in a possible implementation manner, the method for calibrating an AR device may include:
Step S11, a first captured image of the calibration object in the real scene is acquired.
The AR device may be any device that implements AR functions and requires calibration, such as AR glasses, an AR helmet, an AR all-in-one machine, or a smart device with AR functions such as a smartphone. The following embodiments are described by taking AR glasses as the AR device; cases in which the AR device is another type of device can be extended by analogy with the following embodiments and are not described in detail.
The real scene may be the physical space in which the AR glasses are located. The AR glasses may capture images of objects in the real scene through an image acquisition device arranged at any position on the glasses, or the real scene may be observed directly through see-through lenses and the like.
The calibration object may be any object in the real scene that is used to assist the AR glasses in calibration, and the first captured image may be an image obtained by the AR glasses capturing an image of the calibration object. In some possible implementations, the AR glasses may capture an image of the calibration object through an image acquisition device arranged at any position on the glasses to obtain the first captured image. The image capturing manner is not limited in the embodiments of the present disclosure and can be determined flexibly according to actual conditions. In some possible implementations, a single image of the calibration object may be captured by a monocular vision sensor arranged at any part of the glasses to obtain one first captured image; in some possible implementations, multiple images of the calibration object may be captured by a binocular vision sensor arranged at any position on the glasses to obtain multiple first captured images.
In some possible implementations, the calibration object may include a two-dimensional calibration object. The two-dimensional calibration object may be any calibration object in two-dimensional form, and its implementation form can be determined flexibly according to actual conditions and is not limited to the following disclosed embodiments. In some possible implementations, the two-dimensional calibration object may include a two-dimensional code and/or a checkerboard image used for visual calibration; in one example, the two-dimensional calibration object may be an AprilTag used for visual calibration. In some possible implementations, the two-dimensional calibration object may be presented on a piece of paper in the real scene by printing, for example, printing one or more AprilTags as the calibration object; in some possible implementations, the two-dimensional calibration object may also be displayed on a display device in the real scene, for example, an AprilTag displayed as the calibration object on the screen of a computer or a mobile phone.
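As an illustrative sketch (not part of the original disclosure), the corner points of such a printed tag can be located in a captured image using, for example, the ArUco module of opencv-contrib, which includes an AprilTag 36h11 dictionary; the function name detect_apriltag_corners below is a placeholder, and the exact ArUco API differs slightly between OpenCV versions:

    import cv2

    def detect_apriltag_corners(image_bgr):
        # Convert to grayscale and detect AprilTag 36h11 markers with the
        # opencv-contrib ArUco module (pre-4.7 functional API shown).
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
        corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
        # corners: one 1x4x2 array of corner pixel coordinates per detected tag.
        return corners, ids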
Step S12, displaying a predicted image corresponding to the calibration object in the AR device according to the first captured image.
The predicted image may be a virtual image of the calibration object presented in the AR glasses. The predicted image of the calibration object displayed in the AR glasses should theoretically visually coincide with the calibration object in the real scene. However, since the AR glasses are in the calibration stage, the corresponding relationship between the virtual object and the real scene may be inaccurate, and thus the position where the predicted image is displayed in the AR glasses may be inaccurate, so that the predicted image cannot be aligned with or coincide with the calibration object in the real scene.
According to the first captured image of the calibration object in the real scene, the position, size, and the like of the calibration object in the real scene can be predicted to obtain the predicted image, and the predicted image is displayed in the AR glasses. The prediction manner and the display position and manner in the AR glasses can be determined flexibly according to the actual situation; for example, the predicted image may be displayed at the corresponding position of the AR glasses according to the predicted position of the calibration object in the real scene. Various possible implementations of step S12 are detailed in the following disclosed embodiments and are not expanded here.
Step S13, in response to a moving operation for the AR device, a second captured image of the calibration object in the real scene is acquired when the predicted image matches the calibration object.
A moving operation for the AR glasses may shift the position of the AR glasses. The implementation of the moving operation is not limited in the embodiments of the present disclosure; it may be implemented by the user moving the head on which the AR glasses are worn, by the user manually adjusting the position of the AR glasses, and the like.
As described in the above-described disclosed embodiments, the position where the predicted image is displayed in the AR glasses may be inaccurate, so that there may be a positional deviation between the predicted image and the calibration object in the real scene. Therefore, in response to the moving operation for the AR glasses, the position of the predicted image displayed in the AR glasses with respect to the real scene may also change, and in some possible implementations, in the case that the AR glasses are moved to a certain specific position, the predicted image displayed in the AR glasses may be matched with the calibration object. The matching between the predicted image and the calibration object may be that the predicted image and the calibration object are overlapped and/or aligned as a whole, or that the vertex or a part of the predicted image and the calibration object are overlapped and/or aligned.
In the case that the predicted image matches the calibration object, a second captured image of the calibration object in the real scene may be obtained, that is, the second captured image may be an image obtained by capturing an image of the calibration object in the case that the predicted image matches the calibration object. The manner of obtaining the second captured image may refer to the manner of obtaining the first captured image in the above-described embodiments, and details are not repeated herein. It should be noted that "first" and "second" in the first captured image and the second captured image are only used to distinguish the images acquired under different conditions, and the order of acquiring the images or the size of the images are not limited.
Step S14, the AR device is calibrated based on the second captured image.
As described in the above disclosed embodiments, the position of the calibration object in the real scene may be predicted based on the first captured image. Since the position predicted from the first captured image may be inaccurate, and the second captured image is an image of the calibration object captured when the prediction has been accurately aligned, the position transformation relationship between the AR glasses and the real scene may be corrected according to the deviation between the second captured image and the first captured image, thereby calibrating the AR glasses. The specific calibration process is not limited in the embodiments of the present disclosure; any calibration method used in computer vision may serve as the implementation of step S14, and it is not illustrated or described here.
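The disclosure does not prescribe a particular correction formula. As a hedged illustration only, one simple scheme re-estimates the pose of the calibration object from the second captured image and folds the residual between the predicted pose and the newly observed pose into the display-to-camera transform used for drawing; the names T_dc, T_cm_pred, and T_cm_obs below are assumptions introduced for this sketch:

    import numpy as np

    def refine_display_extrinsic(T_dc, T_cm_pred, T_cm_obs):
        # T_dc:      current 4x4 display-from-camera transform used to draw the prediction.
        # T_cm_pred: 4x4 pose of the calibration object in the camera frame estimated
        #            from the first captured image (used to generate the prediction).
        # T_cm_obs:  4x4 pose re-estimated from the second captured image, i.e. after
        #            the user moved the glasses until prediction and object matched.
        # The residual between the two poses is absorbed into the display extrinsic,
        # so the next prediction lands closer to the real calibration object.
        correction = T_cm_pred @ np.linalg.inv(T_cm_obs)
        return T_dc @ correction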
In the embodiments of the present disclosure, a first captured image of the calibration object in the real scene may be acquired, and a predicted image of the calibration object may be displayed in the AR device according to the first captured image, so that, in response to a moving operation for the AR device, a second captured image of the calibration object in the real scene is acquired when the predicted image matches the calibration object, and the AR device is calibrated accordingly. Through this process, calibration of the AR device can be realized simply and conveniently using the calibration object, which reduces the calibration cost of the AR device and improves calibration efficiency and convenience.
Fig. 2 shows a flowchart of a calibration method of augmented reality AR glasses according to an embodiment of the present disclosure, and as shown in the drawing, in one possible implementation, step S12 may include:
step S121, determining the pose of the calibration object relative to the AR equipment according to the first collected image;
step S122, predicting the target display state of the calibration object in the AR glasses according to the pose, and generating a prediction image according to the target display state;
in step S123, the prediction image is displayed in the AR device.
The pose of the calibration object relative to the AR glasses may be the three-dimensional position of the calibration object relative to the AR glasses in the real scene. The method for determining the pose of the calibration object according to the first captured image is not limited in the embodiments of the present disclosure; any spatial positioning method that performs three-dimensional positioning based on a two-dimensional image may serve as the implementation of step S121. Some possible implementations of step S121 are detailed in the following disclosed embodiments and are not expanded here.
According to the pose determined in step S121, a target display state of the calibration object in the AR glasses may be predicted to generate the predicted image. The target display state may describe how the calibration object is displayed in the AR glasses, and the types of state it includes are not limited in the embodiments of the present disclosure. In some possible implementations, the target display state may include a target display position and/or a target display size of the predicted image.
The target display position may be the position at which the predicted image is presented in the AR glasses, and the target display size may be the size at which the predicted image is presented in the AR glasses; the target display size may be consistent with or match the real size of the calibration object, so that the predicted image observed through the AR glasses can be aligned and/or coincide with the calibration object in the real scene.
Based on the target display state determined in step S122, the predicted image can be displayed in step S123. In one possible implementation, the predicted image may be displayed at the target display size and at the target display position of the AR glasses.
Through steps S121 to S123, prediction from the two-dimensional image to three-dimensional space can be realized according to the first captured image, so as to obtain the pose of the calibration object in the real scene, and the predicted image of the calibration object is displayed in the AR glasses based on that pose. In this way the predicted image of the calibration object is presented simply and conveniently using the transformation relationship between two-dimensional and three-dimensional space, which reduces the amount of computation in the calibration process and improves the convenience of the whole calibration procedure.
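As an illustrative sketch (not part of the original disclosure), the target display position and target display size can be obtained by projecting the estimated pose of the calibration object into the display through an assumed camera-to-display transform T_dc and an assumed display projection matrix K_d; both names are placeholders for the transformation parameters between the AR glasses and the human eye:

    import numpy as np

    def predict_display_box(R_cm, t_cm, T_dc, K_d, tag_size):
        # R_cm, t_cm: pose of the calibration object in the camera frame (e.g. from PnP).
        # T_dc:       assumed 4x4 camera-to-display (eye) rigid transform.
        # K_d:        assumed 3x3 projection matrix of the virtual display.
        # tag_size:   side length of the calibration object, in metres.
        s = tag_size / 2.0
        corners_obj = np.array([[-s,  s, 0], [ s,  s, 0],
                                [ s, -s, 0], [-s, -s, 0]], dtype=np.float64)
        corners_cam = (R_cm @ corners_obj.T).T + t_cm.reshape(1, 3)   # object -> camera
        corners_h = np.hstack([corners_cam, np.ones((4, 1))])
        corners_disp = (T_dc @ corners_h.T).T[:, :3]                  # camera -> display
        uv = (K_d @ corners_disp.T).T
        uv = uv[:, :2] / uv[:, 2:3]                                   # pinhole projection
        top_left = uv.min(axis=0)                                     # target display position
        size = uv.max(axis=0) - top_left                              # target display size
        return uv, top_left, size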
In a possible implementation manner, the first captured image may include at least two captured images, and step S121 may include: determining the pose of the calibration object relative to the AR device by triangulation according to the positions of the corner points of the calibration object in the at least two captured images.
The at least two captured images may be images of the calibration object in the real scene captured by the AR glasses from different angles at the same time. In some possible implementations, multiple image acquisition devices may be arranged at different positions on the AR glasses, or a binocular or multi-view sensor may be arranged on the AR glasses; in this case, the AR glasses can obtain at least two captured images from different angles as the first captured image.
Based on the at least two captured images, the pose of the calibration object relative to the AR glasses can be determined by triangulation according to the positions of the calibration object in the two images.
Triangulation computes the three-dimensional position of the calibration object in the real scene from the parallax of the calibration object between the at least two captured images, combined with the camera parameters of the image acquisition devices. The detailed triangulation procedure is not described in the embodiments of the present disclosure. In some possible implementations, since the calibration object may include a two-dimensional calibration object such as a two-dimensional code or a checkerboard, the triangulation may, to reduce the amount of computation, be performed only on the positions of the corner points of the calibration object in the at least two captured images, where the corner points may be detected interest points of the two-dimensional code or checkerboard.
In the embodiments of the present disclosure, determining the pose of the calibration object relative to the AR device by triangulation, according to the positions of the corner points of the calibration object in at least two captured images, on the one hand effectively reduces the amount of computation in the pose determination process, lowering the calibration difficulty and improving the calibration efficiency; on the other hand, a pose computed from at least two captured images has higher precision, which effectively improves the calibration accuracy, reduces the number of calibration rounds, and improves the calibration efficiency.
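As an illustrative sketch under the same assumptions (not part of the original disclosure), the corner positions detected in a stereo pair can be triangulated with OpenCV, given the pre-calibrated intrinsic and extrinsic parameters of the two cameras:

    import cv2
    import numpy as np

    def triangulate_corners(K_left, K_right, R_lr, t_lr, corners_left, corners_right):
        # K_left, K_right: 3x3 intrinsics of the two pre-calibrated cameras.
        # R_lr, t_lr:      pose of the right camera relative to the left camera.
        # corners_*:       Nx2 pixel coordinates of the same corner points in each image.
        P_left = K_left @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P_right = K_right @ np.hstack([R_lr, t_lr.reshape(3, 1)])
        pts_l = np.asarray(corners_left, dtype=np.float64).reshape(-1, 2).T
        pts_r = np.asarray(corners_right, dtype=np.float64).reshape(-1, 2).T
        X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)   # 4xN homogeneous points
        return (X_h[:3] / X_h[3]).T                                  # Nx3, in the left-camera frame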
In one possible implementation, the first captured image may include at least one captured image, and step S121 may include: determining the pose of the calibration object relative to the AR device by a Perspective-n-Point (PnP) method, according to the preset size of the calibration object in combination with the at least one captured image.
The at least one captured image may be one or more images captured by the AR glasses. In some possible implementations, only a single image acquisition device may be provided on the AR glasses, or multiple devices may be provided but only one of them used; in this case, the AR glasses may obtain at least one image captured by that single device as the first captured image.
In a possible implementation, when the at least one image includes a single image, the pose of the calibration object relative to the AR glasses can be determined from the single captured image by the Perspective-n-Point (PnP) method according to the preset size of the calibration object.
The preset size of the calibration object may be its known size in the real scene. The PnP method computes the pose of the image acquisition device relative to the real scene from the positions of known three-dimensional points in the real scene and the positions of the corresponding points in a two-dimensional image; the pose of the calibration object relative to the AR glasses can therefore be solved from the preset size of the calibration object combined with the size and position of the calibration object in a single captured image.
When the at least one image includes multiple images, the single-image pose determination method provided in the disclosed embodiments can be applied to each image separately to obtain multiple pose solutions, and the pose of the calibration object in the real scene relative to the AR glasses can then be obtained by averaging or weighted averaging of these poses.
In the embodiments of the present disclosure, determining the pose of the calibration object relative to the AR device by the PnP method, according to the preset size of the calibration object in combination with at least one captured image, allows the three-dimensional pose of the calibration object in the real scene to be determined from as little as one first captured image; this lowers the hardware requirements on the AR device while preserving calibration accuracy, reduces the calibration difficulty, and improves the practicality and feasibility of the whole method.
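As an illustrative sketch (not part of the original disclosure), the single-image case maps directly onto cv2.solvePnP: the four tag corners at the pre-measured size give the known 3D points, and their detected pixel positions give the 2D points:

    import cv2
    import numpy as np

    def tag_pose_from_single_image(corners_px, tag_size, K, dist):
        # corners_px: 4x2 pixel coordinates of the tag corners in one captured image.
        # tag_size:   pre-measured side length of the printed tag, in metres.
        # K, dist:    camera intrinsics and distortion coefficients.
        s = tag_size / 2.0
        obj = np.array([[-s,  s, 0], [ s,  s, 0],            # corner coordinates in the
                        [ s, -s, 0], [-s, -s, 0]],           # tag's own frame (z = 0 plane),
                       dtype=np.float64)                     # ordered to match the detector
        img = np.asarray(corners_px, dtype=np.float64).reshape(4, 1, 2)
        ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist)
        if not ok:
            raise RuntimeError("PnP solution failed")
        R, _ = cv2.Rodrigues(rvec)                           # rotation vector -> matrix
        return R, tvec.reshape(3)                            # pose of the tag in the camera frame

When several captured images are available, the same routine can be run on each image and the resulting poses averaged or weighted-averaged, as described above.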
In one possible implementation, step S123 may include: fixedly displaying the predicted image at the target display position of the AR device in response to a fixing instruction operation for the AR device.
The fixing instruction operation may be an operation instructing the AR glasses to fixedly display the predicted image, and its form can be set flexibly according to the actual situation; for example, one or more buttons on the AR glasses may be clicked, or certain parts of the AR glasses may be tapped a preset number of times, to issue the fixing instruction operation. The value of the preset number of times is not limited in the embodiments of the present disclosure and may be one or more.
In some possible implementations, as the AR glasses move, the position at which the predicted image is displayed on the AR glasses may also move; to achieve matching between the predicted image and the calibration object in the real scene, the predicted image needs to be fixedly displayed at the predicted target display position. Accordingly, in one possible implementation, the predicted image may be fixedly displayed at the target display position of the AR glasses in response to a fixing instruction operation for the AR glasses.
By fixedly displaying the predicted image at the target display position of the AR device in response to the fixing instruction operation, matching between the predicted image and the calibration object can be realized conveniently later, which improves the convenience and feasibility of the calibration process.
In some possible implementations, the AR glasses may also be configured to directly and fixedly display the predicted image at the target display position, which reduces the user's operations and further improves calibration convenience.
In some possible implementations, the matching of the predicted image and the calibration object in step S13 may include:
the calibration object in the predicted image being aligned with the calibration object in the real scene; and/or
the corner points of the calibration object in the predicted image being aligned with the corner points of the calibration object in the real scene.
Aligning the calibration object in the predicted image with the calibration object in the real scene means that the calibration object drawn in the predicted image displayed in the AR glasses and the calibration object seen in the real scene coincide with 6 degrees of freedom; this matching manner offers higher precision and improves the calibration accuracy.
Aligning the corner points of the calibration object in the predicted image with the corners of the calibration object in the real scene means that one or more 2D points in the predicted image displayed in the AR glasses coincide with one or more 3D points on the calibration object in the real scene, producing a 2-degree-of-freedom alignment.
In some possible implementations, when the predicted image matches the calibration object, the AR device may also output a prompt for the user; the form of the prompt can be selected flexibly according to the actual situation and may include, for example, one or more of emitting a prompt sound, displaying prompt text on the display interface, or displaying a prompt image frame.
Through the different matching manners described above, whether the predicted image matches the calibration object can be determined flexibly according to the actual conditions, which improves the flexibility of matching.
In some possible implementations, acquiring the second captured image of the calibration object in the real scene in step S13 may include:
acquiring a second captured image of the calibration object in the real scene in response to a matching confirmation operation for the AR device; and/or
acquiring a second captured image of the calibration object in the real scene when the duration for which the AR device is in a non-moving state reaches a preset time threshold.
The matching confirmation operation may be a confirmation operation issued to the AR glasses when the predicted image matches the calibration object. Its form can also be set flexibly according to the actual situation, and its possible implementations may refer to the fixing instruction operation mentioned in the above disclosed embodiments, which are not repeated here. It should be noted that, when both a fixing instruction operation and a matching confirmation operation are used, they should differ from each other; for example, they may correspond to different buttons, or to different numbers and/or positions of taps.
In response to the matching confirmation operation, it may be indicated that the predicted image is matched with the calibration object, in this case, a second captured image of the calibration object in the real scene may be obtained, and a manner of obtaining the second captured image may be described in detail in the foregoing embodiments, and is not described herein again.
In some possible implementations, the predicted image may be determined to match the calibration object when the duration for which the AR glasses have been in a non-moving state reaches a preset time threshold, so that the second captured image is acquired. The AR glasses being in the non-moving state may mean that the AR glasses have stopped moving and are stationary. How to determine whether the AR glasses are in the non-moving state can be decided flexibly according to the actual situation and is not limited to the following disclosed embodiments. In a possible implementation, the current motion state of the AR glasses may be determined through an attitude sensor or a positioning interface built into the AR glasses, so as to determine whether they are in the non-moving state.
The length of the preset time threshold can be determined flexibly according to the actual situation and is not limited in the embodiments of the present disclosure; for example, it may be from several seconds to several tens of seconds. In some possible implementations, a countdown is displayed in the AR glasses to remind the user while the AR glasses are in the non-moving state, and the second captured image is acquired automatically after the countdown ends.
By acquiring the second captured image in response to the matching confirmation operation, and/or acquiring it when the duration for which the AR device has been in the non-moving state reaches the preset time threshold, the timing for acquiring the second captured image can be determined flexibly in different ways: responding to a matching confirmation operation yields a second captured image with a more accurate position and improves calibration precision, while acquiring it based on the duration of the non-moving state reduces user operations and improves the convenience and feasibility of calibration.
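As an illustrative sketch (not part of the original disclosure), the non-moving state can be detected by requiring the angular-rate magnitude reported by a built-in attitude sensor to stay below a threshold for the preset hold time; read_gyro is a hypothetical callable standing in for the sensor interface:

    import time

    def wait_until_still(read_gyro, rate_thresh=0.05, hold_seconds=3.0, timeout=60.0):
        # read_gyro(): hypothetical callable returning the current angular-rate
        #              magnitude (rad/s) from the glasses' attitude sensor.
        # hold_seconds plays the role of the preset time threshold in the text.
        start = time.monotonic()
        still_since = None
        while time.monotonic() - start < timeout:
            if read_gyro() < rate_thresh:
                if still_since is None:
                    still_since = time.monotonic()
                if time.monotonic() - still_since >= hold_seconds:
                    return True          # countdown elapsed: trigger the second capture
            else:
                still_since = None       # motion detected: restart the countdown
            time.sleep(0.02)
        return False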
In some possible implementations, the calibration method proposed in the embodiments of the present disclosure may be repeated to improve the calibration accuracy; that is, the process from step S11 to step S14 may be repeated multiple times. In some possible implementations, the calibration may be stopped when the number of repetitions of steps S11 to S14 reaches a preset count threshold, whose value can be set flexibly according to the actual situation, for example 3 to 10 times, and is not limited in the embodiments of the present disclosure. In some possible implementations, the calibration may also be ended by the user manually confirming or ending the calibration, for example by clicking one or more buttons on the AR glasses, or by tapping certain parts of the AR glasses once or several times.
In one possible implementation, the method provided in the embodiments of the present disclosure may further include: stopping the calibration of the AR device when no moving operation is performed on the AR device and the predicted image matches the calibration object.
If the predicted image displayed by the AR glasses already matches the calibration object without the user moving the AR glasses, the position transformation relationship between the AR glasses and the real scene can be considered accurate, so the calibration of the AR glasses can be stopped.
Stopping the calibration may be confirmed manually by the user or, as described in the above disclosed embodiments, the calibration may be stopped when the duration of the non-moving state of the AR glasses reaches the preset time threshold; the implementation can be determined flexibly according to the actual situation and is not limited in the embodiments of the present disclosure.
By stopping the calibration of the AR device when no moving operation is performed and the predicted image matches the calibration object, whether calibration can be stopped can be judged from the calibration effect observed in real time, which improves the calibration precision, saves calibration rounds, and improves calibration efficiency.
Fig. 3 shows a block diagram of a calibration apparatus 20 of an AR device according to an embodiment of the present disclosure, which includes, as shown in fig. 3:
the first captured image acquiring module 21 is configured to acquire a first captured image of the calibration object in the real scene.
The prediction module 22 is configured to display, in the AR device, a predicted image corresponding to the calibration object according to the first captured image.
The second captured image acquiring module 23 is configured to acquire, in response to a moving operation for the AR device, a second captured image of the calibration object in the real scene when the predicted image matches the calibration object.
The calibration module 24 is configured to calibrate the AR device based on the second captured image.
In one possible implementation, the prediction module is configured to: determine the pose of the calibration object relative to the AR device according to the first captured image; predict the target display state of the calibration object in the AR device according to the pose, and generate the predicted image according to the target display state, where the target display state includes the target display position and/or the target display size of the predicted image; and display the predicted image in the AR device.
In one possible implementation, the first captured image includes at least two captured images, and the prediction module is further configured to determine the pose of the calibration object relative to the AR device by triangulation according to the positions of the corner points of the calibration object in the at least two captured images.
In one possible implementation, the first captured image includes at least one captured image, and the prediction module is further configured to determine the pose of the calibration object relative to the AR device by a Perspective-n-Point (PnP) method, according to the preset size of the calibration object in combination with the at least one captured image.
In one possible implementation, in a case where the target display state includes the target display position, the prediction module is further configured to fixedly display the predicted image at the target display position of the AR device in response to a fixing instruction operation for the AR device.
In one possible implementation, the second captured image acquiring module is configured to: acquire a second captured image of the calibration object in the real scene in response to a matching confirmation operation for the AR device; and/or acquire a second captured image of the calibration object in the real scene when the duration for which the AR device is in a non-moving state reaches a preset time threshold.
In one possible implementation, the predicted image matching the calibration object includes: the calibration object in the predicted image being aligned with the calibration object in the real scene; and/or the corner points of the calibration object in the predicted image being aligned with the corner points of the calibration object in the real scene.
In one possible implementation, the apparatus is further configured to stop the calibration of the AR device when no moving operation is performed on the AR device and the predicted image matches the calibration object.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Application scenario example
Fig. 4 is a schematic diagram of an application scenario according to the present disclosure. As shown in fig. 4, in an application example of the present disclosure, optical see-through AR glasses may transmit a calibration object in the real scene (the printed AprilTag in the figure) to the human eye through a semi-transmissive, semi-reflective lens, and reflect a virtual predicted image generated based on the AprilTag into the human eye, thereby producing the AR effect.
This application example of the present disclosure provides a calibration method for AR glasses that can calibrate the AR glasses in the application scenario of fig. 4. Fig. 5 shows a schematic diagram of an application example according to the present disclosure; as shown in the figure, the calibration method for AR glasses provided by this application example may include the following process:
step S31, identifying, by a camera built in the AR glasses, a pose of a calibration object (AprilTag) in the real scene in the field of view with respect to the camera:
when the AR glasses are provided with at least a left camera and a right camera, at least two first captured images are captured by the at least two cameras, the AprilTag is detected based on these images, and the pose of the AprilTag relative to the cameras is then computed by triangulation from the positions of the AprilTag corner points in the at least two first captured images and the pre-calibrated intrinsic and extrinsic parameters of the two cameras;
when the AR glasses are configured with only one camera, or with multiple cameras of which only one is used, a single first captured image may be acquired by that camera, the AprilTag is detected based on the single first captured image, and the pose of the AprilTag relative to the camera is solved from the pre-measured size of the AprilTag in combination with the PnP method.
Step S32, according to the pose and in combination with the position transformation parameters between the AR glasses and the human eye, the predicted image corresponding to the calibration object is drawn in the display interface of the AR glasses.
Step S33, fixedly displaying the currently drawn predicted image in the AR glasses:
the user can fix the currently drawn predicted image by clicking a button; in one example, the AR glasses may also fix the currently drawn predicted image automatically.
Step S34, the user moves the head to align the drawn predicted image with the AprilTag seen in the real scene; once aligned, a second captured image of the AprilTag is acquired:
the alignment between the predicted image and the AprilTag may be alignment of the whole AprilTag image in the predicted image with the AprilTag in the real scene, or alignment of a certain corner of the AprilTag in the predicted image with a corner of the AprilTag in the real scene;
in one example, the user may confirm that the predicted image is aligned with the AprilTag by clicking a button on the AR glasses; in another example, whether the AR glasses are in a static state may be confirmed by calling an attitude sensor or a positioning interface built into the AR glasses; in the static state the predicted image may be considered aligned with the AprilTag, and the AR glasses may display a countdown on the screen, after which the program automatically acquires the second captured image.
Step S35, the process returns to step S32; whether the drawn predicted image is aligned with the AprilTag in the real scene is observed at a different distance, and if so, the calibration exits; otherwise, steps S32 to S35 are repeated.
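As an overall sketch of steps S31 to S35 (not part of the original disclosure), every helper used below (capture, detect_corners, estimate_pose, draw_prediction, wait_for_alignment, refine_display_extrinsic) is a hypothetical placeholder rather than an API of any particular AR SDK:

    def calibrate(glasses, max_rounds=5):
        for _ in range(max_rounds):
            image1 = glasses.capture()                              # first captured image
            pose_pred = estimate_pose(detect_corners(image1))       # S31: tag pose w.r.t. camera
            draw_prediction(glasses, pose_pred, fixed=True)         # S32/S33: draw and fix prediction
            moved = wait_for_alignment(glasses)                     # S34: user aligns by moving head
            if not moved:
                break                                               # already aligned without moving: exit (S35)
            image2 = glasses.capture()                              # second captured image
            pose_obs = estimate_pose(detect_corners(image2))
            glasses.T_dc = refine_display_extrinsic(glasses.T_dc,   # S35: correct the display transform,
                                                    pose_pred,      # then repeat at a different distance
                                                    pose_obs)
        return glasses.T_dc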
With this application example, calibration of the AR glasses can be achieved with a simply printed AprilTag image, dispensing with additional equipment or devices, which reduces the calibration cost and improves the feasibility and convenience of calibration; at the same time, the user can observe the calibration effect in real time to judge whether calibration can be stopped, saving calibration rounds and improving calibration efficiency; moreover, for AR glasses equipped with only a single camera, higher-precision calibration can still be achieved, improving the universality of the calibration.
It can be understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the underlying principles and logic; details are omitted here due to space limitations.
Those skilled in the art will understand that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible internal logic.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code which, when run on a device, executes instructions for implementing a method as provided by any of the above embodiments.
Embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed, cause a computer to perform the operations of the method provided by any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 6 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 6, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 7 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 7, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by memory 1932, for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical-user-interface-based operating system (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. A calibration method for an Augmented Reality (AR) device is characterized by comprising the following steps:
acquiring a first captured image of a calibration object in a real scene;
displaying, in the AR device, a predicted image corresponding to the calibration object according to the first captured image;
in response to a moving operation for the AR device, acquiring a second captured image of the calibration object in the real scene in a case that the predicted image matches the calibration object; and
calibrating the AR device based on the second captured image.
2. The method according to claim 1, wherein the displaying, in the AR device, a predicted image corresponding to the calibration object according to the first captured image includes:
determining, according to the first captured image, a pose of the calibration object relative to the AR device;
predicting a target display state of the calibration object in the AR device according to the pose, and generating the predicted image according to the target display state, wherein the target display state comprises a target display position and/or a target display size of the predicted image;
displaying the predicted image in the AR device.
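As a non-limiting illustration of the prediction step in claim 2, the following Python sketch projects the calibration object's 3D corners into the camera image using the estimated pose and derives a predicted on-screen position and size from the projected points. The function name, the use of OpenCV's projectPoints, and the choice of centre and bounding box as the display state are assumptions made for illustration only, not details taken from the disclosure.

    import numpy as np
    import cv2

    def predict_display_state(K, dist, R, t, object_corners_3d):
        # Project the 3D corners of the calibration object into the image plane
        # using the pose (R, t) of the object relative to the AR device camera.
        rvec, _ = cv2.Rodrigues(R)                         # 3x3 rotation -> rotation vector
        img_pts, _ = cv2.projectPoints(object_corners_3d, rvec, t, K, dist)
        img_pts = img_pts.reshape(-1, 2)
        center = img_pts.mean(axis=0)                      # predicted target display position (px)
        size = img_pts.max(axis=0) - img_pts.min(axis=0)   # predicted target display size (w, h) in px
        return center, size, img_pts

The predicted image can then be rendered at that position and scale in the AR display, so the user can judge how well it lines up with the real calibration object.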
3. The method of claim 2, wherein the first captured image comprises at least two captured images;
the determining, according to the first captured image, a pose of the calibration object relative to the AR device comprises:
determining the pose of the calibration object relative to the AR device by triangulation according to the positions of corner points of the calibration object in the at least two captured images.
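A minimal sketch of the triangulation referred to in claim 3, assuming the two captured images come with known camera poses (for example from the AR device's own tracking) and that the same corner points have been detected in both images; the helper below uses OpenCV's triangulatePoints and is illustrative rather than the method fixed by the disclosure.

    import numpy as np
    import cv2

    def triangulate_corners(K, pose1, pose2, corners1, corners2):
        # pose1/pose2: 3x4 [R|t] camera poses for the two captured images.
        # corners1/corners2: (N, 2) pixel coordinates of matching corner points.
        P1 = K @ pose1                                     # projection matrix, first view
        P2 = K @ pose2                                     # projection matrix, second view
        pts1 = np.asarray(corners1, dtype=np.float64).T    # OpenCV expects 2xN arrays
        pts2 = np.asarray(corners2, dtype=np.float64).T
        pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous points
        return (pts4d[:3] / pts4d[3]).T                    # (N, 3) corner positions in 3D

The pose of the calibration object relative to the AR device can then be obtained by fitting the known corner layout of the object to the triangulated 3D points.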
4. The method of claim 2, wherein the first captured image comprises at least one captured image;
the determining, according to the first captured image, a pose of the calibration object relative to the AR device comprises:
determining the pose of the calibration object relative to the AR device by a multi-point perspective imaging method, in combination with the at least one captured image, according to a preset size of the calibration object.
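A multi-point perspective imaging step of this kind is commonly realised with a perspective-n-point (PnP) solver once the physical size of the calibration object is known. The sketch below assumes a planar rectangular calibration object of known width and height and uses OpenCV's solvePnP; this is an illustrative choice of solver and corner ordering, not details specified by the disclosure.

    import numpy as np
    import cv2

    def pose_from_known_size(K, dist, corner_pixels, board_w, board_h):
        # 3D corners of the calibration object in its own frame (same units as board_w/board_h),
        # ordered to match the detected corner_pixels.
        object_pts = np.array([[0.0,     0.0,     0.0],
                               [board_w, 0.0,     0.0],
                               [board_w, board_h, 0.0],
                               [0.0,     board_h, 0.0]])
        image_pts = np.asarray(corner_pixels, dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                                      flags=cv2.SOLVEPNP_IPPE)  # solver for planar targets
        if not ok:
            raise RuntimeError("PnP pose estimation failed")
        R, _ = cv2.Rodrigues(rvec)                          # rotation vector -> 3x3 matrix
        return R, tvec                                      # pose of the object relative to the camera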
5. The method according to any one of claims 2 to 4, wherein, in a case that the target display state comprises a target display position, the displaying the predicted image in the AR device comprises:
fixedly displaying the predicted image at the target display position of the AR device in response to a fixing instruction operation for the AR device.
6. The method according to any one of claims 1 to 5, wherein the acquiring a second captured image of the calibration object in the real scene comprises:
acquiring a second captured image of the calibration object in the real scene in response to a matching confirmation operation for the AR device; and/or
acquiring a second captured image of the calibration object in the real scene in a case that a duration for which the AR device stays in a non-moving state reaches a preset time threshold.
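One way to realise the "non-moving for a preset time threshold" trigger of claim 6 is to watch the device's inertial readings and fire once they stay below small thresholds for the required duration. The class below is a sketch; the threshold values and the use of gyroscope/accelerometer magnitudes are illustrative assumptions, not values or signals taken from the disclosure.

    import time

    class StillnessTrigger:
        # Fires once the device has stayed approximately still for hold_s seconds.
        def __init__(self, hold_s=2.0, gyro_thresh=0.05, accel_thresh=0.2):
            self.hold_s = hold_s              # preset time threshold (s)
            self.gyro_thresh = gyro_thresh    # angular-rate threshold (rad/s)
            self.accel_thresh = accel_thresh  # residual linear-acceleration threshold (m/s^2)
            self._still_since = None

        def update(self, gyro_norm, accel_norm, now=None):
            now = time.monotonic() if now is None else now
            if gyro_norm > self.gyro_thresh or accel_norm > self.accel_thresh:
                self._still_since = None      # device is moving; restart the timer
                return False
            if self._still_since is None:
                self._still_since = now
            return (now - self._still_since) >= self.hold_s

When update(...) returns True, the second captured image can be acquired automatically, without requiring an explicit matching confirmation operation from the user.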
7. The method according to any of claims 1 to 6, wherein the matching of the predicted image to the calibration object comprises:
the calibration object in the predicted image is aligned with the calibration object in the real scene; and/or
corner points of the calibration object in the predicted image are aligned with corner points of the calibration object in the real scene.
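The corner-alignment condition of claim 7 can be checked numerically by comparing the corners of the rendered predicted image with the corners detected in the live camera frame. The pixel tolerance below is an illustrative assumption; the disclosure does not fix a particular threshold.

    import numpy as np

    def corners_aligned(predicted_corners, detected_corners, max_px_error=3.0):
        # Both inputs are (N, 2) pixel coordinates in the same image space,
        # with corners listed in the same order.
        predicted = np.asarray(predicted_corners, dtype=np.float64)
        detected = np.asarray(detected_corners, dtype=np.float64)
        errors = np.linalg.norm(predicted - detected, axis=1)  # per-corner error (px)
        return bool(np.all(errors <= max_px_error)), errors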
8. The method according to any one of claims 1 to 7, further comprising:
stopping the calibration of the AR device in a case that the AR device is not subjected to the moving operation and the predicted image matches the calibration object.
9. A calibration apparatus for an Augmented Reality (AR) device, the apparatus comprising:
a first captured image acquisition module, configured to acquire a first captured image of a calibration object in a real scene;
a prediction module, configured to display, in the AR device, a predicted image corresponding to the calibration object according to the first captured image;
a second captured image acquisition module, configured to acquire, in response to a moving operation for the AR device, a second captured image of the calibration object in a real scene in a case that the predicted image matches the calibration object; and
a calibration module, configured to calibrate the AR device based on the second captured image.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 8.
11. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 8.
CN202110721295.1A 2021-06-28 2021-06-28 Augmented reality device calibration method and device, electronic device and storage medium Pending CN113538700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110721295.1A CN113538700A (en) 2021-06-28 2021-06-28 Augmented reality device calibration method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110721295.1A CN113538700A (en) 2021-06-28 2021-06-28 Augmented reality device calibration method and device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113538700A true CN113538700A (en) 2021-10-22

Family

ID=78126122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110721295.1A Pending CN113538700A (en) 2021-06-28 2021-06-28 Augmented reality device calibration method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113538700A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937478A (en) * 2022-12-26 2023-04-07 北京字跳网络技术有限公司 Calibration information determining method and device, electronic equipment and storage medium
CN115937478B (en) * 2022-12-26 2023-11-17 北京字跳网络技术有限公司 Calibration information determining method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
EP3038345B1 (en) Auto-focusing method and auto-focusing device
CN111551191B (en) Sensor external parameter calibration method and device, electronic equipment and storage medium
EP3667365A1 (en) Camera module and ranging radar
US10178379B2 (en) Method and apparatus for testing virtual reality head display device
US20210158560A1 (en) Method and device for obtaining localization information and storage medium
CN111323007B (en) Positioning method and device, electronic equipment and storage medium
CN110989901B (en) Interactive display method and device for image positioning, electronic equipment and storage medium
CN110928627A (en) Interface display method and device, electronic equipment and storage medium
CN111563138B (en) Positioning method and device, electronic equipment and storage medium
CN114019473A (en) Object detection method and device, electronic equipment and storage medium
CN111860373B (en) Target detection method and device, electronic equipment and storage medium
CN113052919A (en) Calibration method and device of visual sensor, electronic equipment and storage medium
CN112541971A (en) Point cloud map construction method and device, electronic equipment and storage medium
CN112184787A (en) Image registration method and device, electronic equipment and storage medium
CN112950712B (en) Positioning method and device, electronic equipment and storage medium
US8988507B2 (en) User interface for autofocus
CN110989884A (en) Image positioning operation display method and device, electronic equipment and storage medium
CN113345000A (en) Depth detection method and device, electronic equipment and storage medium
CN113538700A (en) Augmented reality device calibration method and device, electronic device and storage medium
CN111496782B (en) Measuring system, method, processing device and storage medium for robot tool point
CN113344999A (en) Depth detection method and device, electronic equipment and storage medium
CN112860061A (en) Scene image display method and device, electronic equipment and storage medium
CN111078346B (en) Target object display method and device, electronic equipment and storage medium
CN114519794A (en) Feature point matching method and device, electronic equipment and storage medium
CN112461245A (en) Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination