CN109992111B - Augmented reality extension method and electronic device - Google Patents


Info

Publication number
CN109992111B
CN109992111B (application CN201910228203.9A)
Authority
CN
China
Prior art keywords
pose
electronic device
information
identified
image information
Prior art date
Legal status
Active
Application number
CN201910228203.9A
Other languages
Chinese (zh)
Other versions
CN109992111A (en)
Inventor
邓建
吕向博
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201910228203.9A
Publication of CN109992111A
Application granted
Publication of CN109992111B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The present disclosure provides an augmented reality extension method, including: identifying first image information that contains an object to be identified and was acquired at a first pose of a first electronic device, to obtain object identification information and pose association information of the object to be identified corresponding to the first pose; obtaining pose association information of the object to be identified corresponding to a second pose of a second electronic device, based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information corresponding to the first pose; and outputting the object identification information and the pose association information of the object to be identified corresponding to the second pose. The present disclosure also provides an electronic device.

Description

Augmented reality extension method and electronic device
Technical Field
The disclosure relates to an augmented reality extension method and an electronic device.
Background
Augmented Reality (AR), also called mixed reality, is a technology that calculates the position and angle of a camera image in real time and adds corresponding information such as images, videos, and stereoscopic models. That is, it applies virtual information to the real world, for example by superimposing the real environment and virtual information onto the same screen or space in real time.
The camera is usually fixed on the AR device; for example, the camera of AR glasses is fixed at the user's head. As a result, when the AR glasses are used for object recognition in a poor recognition scene, for example when the object to be recognized is at a low position, has a small volume, sits at an angle from which it is hard to recognize, or is in dim light, the user of the AR glasses must bend down close to the object or change the camera angle before recognition succeeds. The operability of the AR device is therefore poor, and it cannot provide a good user experience.
Disclosure of Invention
One aspect of the present disclosure provides an augmented reality extension method applied to a first electronic device. The method includes the following operations: first, identifying first image information that contains an object to be identified and was acquired at a first pose of the first electronic device, to obtain object identification information and pose association information of the object to be identified corresponding to the first pose; then, obtaining pose association information of the object to be identified corresponding to a second pose of a second electronic device, based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information corresponding to the first pose; and then outputting the object identification information and the pose association information of the object to be identified corresponding to the second pose.
According to the augmented reality extension method provided by the disclosure, when a second electronic device, such as an AR device, cannot conveniently capture an image of the object to be recognized directly, image capture and object recognition can be performed by the first electronic device. Because the first pose of the first electronic device and the second pose of the second electronic device are both obtained in this process, and a conversion relationship exists between them, the image and object recognition information obtained at the first pose can be converted into image information and object recognition information at the second pose. The object recognition information can therefore be added correctly in the second electronic device, which effectively extends the application range of the second electronic device and improves user experience.
Optionally, the method may further include the following operations: first, determining a first user operation at least based on the amount of change in the first pose of the first electronic device within a preset time period, and then inputting the first user operation to the second electronic device to control it. In the prior art, a dedicated control device such as a remote control or a handle is provided to improve the operability of the electronic device, but such a control device increases its cost.
Optionally, the pose association information of the object to be identified corresponding to the first pose includes at least one of: a specified number of degrees-of-freedom data of the object to be identified relative to the first electronic device, and image information containing the object to be identified. Through the pose association information, the information at the first pose can be converted into the information at the second pose simply, conveniently, and quickly, and the conversion accuracy can be ensured.
Optionally, identifying the first image information containing the object to be identified, acquired at the first pose of the first electronic device, to obtain the object identification information and the pose association information of the object to be identified corresponding to the first pose may include the following operations: first, obtaining first image information of the object to be recognized at the first pose of the first electronic device; then sending the first image information to a third electronic device that is communicatively connected to the first electronic device; and then receiving the object recognition information and the pose association information of the object to be identified corresponding to the first pose, which are determined by the third electronic device at least based on the first image information. Because image identification, information conversion between poses, and similar processing are performed in the third electronic device, the power consumption of the first electronic device can be effectively reduced, and an electronic device with lower computing capability can serve as the first electronic device, which facilitates adoption of the technique.
Optionally, the method may further include the following operations: first, receiving a second user operation that controls the first electronic device to acquire image information, and then acquiring and outputting the image information in response to the second user operation. In this way, the first electronic device and the second electronic device can acquire and display real-time real images of the user, which solves the prior-art problem that neither party can see a real-time image of the other during interaction between two AR devices, effectively improving user experience.
Another aspect of the present disclosure provides an augmented reality extension method applied to a second electronic device. The method may include the following operations: first, obtaining second image information containing an object to be recognized at a second pose of the second electronic device; then obtaining the pose association information of the object to be identified and the object recognition information corresponding to the second pose, so that the object recognition information can be added at the corresponding position in the second image information according to the pose association information corresponding to the second pose. The object identification information is obtained by identifying first image information containing the object to be identified that was acquired at a first pose of a first electronic device, and the pose association information corresponding to the second pose is obtained by the first electronic device based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information corresponding to the first pose.
Optionally, the method may further comprise the operations of: receiving a first user operation determined at least based on the variation of the first pose of the first electronic equipment in a preset time period, and then executing a function corresponding to the first user operation.
Optionally, the method may further comprise the operations of: receiving image information, wherein the image information is acquired by the first electronic device in response to a second user operation to control the first electronic device, and then outputting the image information.
Another aspect of the present disclosure provides an augmented reality extension method applied to an augmented reality system that includes at least one first electronic device and a second electronic device. The method may include the following operations: first, obtaining second image information containing an object to be recognized at a second pose of the second electronic device, and obtaining first image information containing the object to be recognized at a first pose of the first electronic device; then identifying the first image information to obtain object recognition information and pose association information of the object to be identified corresponding to the first pose; next, obtaining the pose association information corresponding to the second pose based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information corresponding to the first pose; and then adding the object recognition information at the corresponding position in the second image information according to the pose association information corresponding to the second pose.
Another aspect of the present disclosure provides an augmented reality extension apparatus, which may include a first information acquisition module, a first conversion module, and an output module. The first information acquisition module is used to identify first image information that contains an object to be identified and was acquired at a first pose of the first electronic device, to obtain object identification information and pose association information of the object to be identified corresponding to the first pose. The first conversion module is used to obtain pose association information of the object to be identified corresponding to a second pose of a second electronic device, based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information corresponding to the first pose. The output module is used to output the object identification information and the pose association information of the object to be identified corresponding to the second pose.
Optionally, the apparatus may further include an operation determination module configured to determine a first user operation based on at least an amount of change in the first pose of the first electronic device within a preset time period, and an input module configured to input the first user operation to the second electronic device to control the second electronic device.
Optionally, the first information obtaining module may include an obtaining unit, a sending unit, and a receiving unit. The obtaining unit is configured to obtain first image information of the object to be recognized at the first pose of the first electronic device; the sending unit is configured to send the first image information to a third electronic device communicatively connected to the first electronic device; and the receiving unit is configured to receive the object identification information and the pose association information of the object to be recognized corresponding to the first pose, which are determined by the third electronic device at least based on the first image information.
Optionally, the apparatus may further include an operation acquisition module configured to receive a second user operation, the second user operation being used to control the first electronic device to acquire image information, and an image output module configured to acquire and output the image information in response to the second user operation.
Another aspect of the present disclosure provides an augmented reality extension apparatus, which may include an image acquisition module, a second information acquisition module, and an information adding module. The image acquisition module is used to obtain second image information containing an object to be recognized at a second pose of the second electronic device; the second information acquisition module is used to obtain the pose association information of the object to be identified and the object recognition information corresponding to the second pose; and the information adding module is used to add the object recognition information at the corresponding position in the second image information according to the pose association information corresponding to the second pose. The object identification information is obtained by identifying first image information containing the object to be identified that was acquired at a first pose of a first electronic device, and the pose association information corresponding to the second pose is obtained by the first electronic device based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information corresponding to the first pose.
Optionally, the apparatus may further include an operation receiving module configured to receive a first user operation determined based on at least an amount of change in the first pose of the first electronic device within a preset time period, and an executing module configured to execute a function corresponding to the first user operation.
Another aspect of the present disclosure provides an augmented reality extension system including at least a first electronic device and a second electronic device. The system may include a third information acquisition module, a fourth information acquisition module, a second conversion module, and a second information adding module. The third information acquisition module is used to obtain second image information containing an object to be recognized at a second pose of the second electronic device, and first image information containing the object to be recognized at a first pose of the first electronic device; the fourth information acquisition module is used to identify the first image information to obtain object recognition information and pose association information of the object to be identified corresponding to the first pose; the second conversion module is used to obtain the pose association information corresponding to the second pose based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information corresponding to the first pose; and the second information adding module is used to add the object identification information at the corresponding position in the second image information according to the pose association information corresponding to the second pose.
Another aspect of the present disclosure provides an electronic device including: one or more processors, a computer-readable storage medium, wherein the computer-readable storage medium is for storing one or more computer programs that, when executed by the processors, implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of an augmented reality extension method and an electronic device according to an embodiment of the present disclosure;
fig. 2A schematically illustrates a flow chart of an augmented reality extension method according to an embodiment of the present disclosure;
fig. 2B schematically illustrates identifying an object to be identified according to an embodiment of the present disclosure;
fig. 2C schematically shows a flow chart of an augmented reality extension method according to another embodiment of the present disclosure;
fig. 3 schematically shows a flow chart of an augmented reality extension method according to another embodiment of the present disclosure;
fig. 4 schematically shows a flow chart of an augmented reality extension method according to another embodiment of the present disclosure;
fig. 5A schematically illustrates a block diagram of an augmented reality extension apparatus according to an embodiment of the present disclosure;
fig. 5B schematically illustrates a block diagram of an augmented reality extension apparatus according to an embodiment of the present disclosure;
fig. 6 schematically illustrates a block diagram of an augmented reality extension system according to an embodiment of the present disclosure; and
fig. 7 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand it (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together). Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is likewise intended in the sense one having skill in the art would understand it (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
The embodiments of the disclosure provide an augmented reality extension method and an electronic device to which it can be applied. The method includes an object recognition process and an information conversion process. In the object recognition process, first image information containing an object to be identified, acquired at the first pose of the first electronic device, is identified to obtain object identification information and pose association information of the object to be identified corresponding to the first pose. After the object recognition process is completed, the information conversion process is performed: based on the association relationship between the first pose and a second pose of the second electronic device, the first pose, the second pose, and the pose association information corresponding to the first pose, the pose association information corresponding to the second pose is obtained. The object identification information and the pose association information corresponding to the second pose can then be output. In other words, the first electronic device assists the second electronic device in object recognition, and the second electronic device displays the identification information of the object to be identified based on its own pose.
Fig. 1 schematically illustrates an application scenario of an augmented reality extension method and an electronic device according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a user performs object recognition using an AR device (e.g., head-mounted AR glasses). When the camera of the AR glasses captures an image of objects to be recognized, such as a table and a cup placed on it, image recognition is performed, and the recognition results, such as "cup" and "table", are labeled at the corresponding positions in the image, so that the user can learn the attribute information of the recognized objects.
Information interaction may be performed between AR devices or between an AR device and, for example, a server (not shown), to receive or transmit information, and the like. Various messaging client applications may be installed on the AR device, such as a shopping-like application, a web browser application, a search-like application, an instant messaging tool, a mailbox client, social platform software, and so forth (by way of example only).
In addition, there may be a control input device (not shown) used in cooperation with the AR device, such as a handle supporting 3-degree-of-freedom (3DoF) or 6DoF data acquisition and transmission; a voice-input control device may also be included.
The AR device may be any of various electronic devices that have a display screen and an image acquisition apparatus and support multiple-degrees-of-freedom data interaction and information display. The control input device includes, but is not limited to, a handle, a smartphone, a tablet computer, or a laptop computer supporting 3DoF or 6DoF data acquisition and transmission, or a device supporting voice recognition.
Fig. 2A schematically shows a flow chart of an augmented reality extension method according to an embodiment of the present disclosure. In this embodiment, the method is applied to a first electronic device that supports multiple-degrees-of-freedom data interaction and is capable of image acquisition, such as a smartphone.
As shown in fig. 2A, the method includes operations S201 to S203.
In operation S201, first image information containing an object to be recognized, acquired at a first pose of the first electronic device, is identified to obtain object recognition information and pose association information of the object to be recognized corresponding to the first pose.
In this embodiment, the first electronic device may serve as a manipulation input device for the AR device and also extends the AR device's functions. The first electronic device needs to support data with multiple degrees of freedom, for example 6DoF data; specifically, it may be any of various electronic devices that support ARCore and/or ARKit platform applications and have an image acquisition function. The 6DoF data may include the coordinates of the device along each axis of the ground inertial coordinate system and its attitude angles, the attitude angles including at least one of: a pitch angle θ, a yaw angle ψ, and a roll angle φ.
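To make the 6DoF representation concrete, the following minimal Python sketch models such a pose as position coordinates plus the three attitude angles. The class and field names are illustrative assumptions, not anything defined in the patent.

```python
# A minimal sketch of the 6DoF data described above: coordinates in the
# ground inertial coordinate system plus the three attitude angles.
# Names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    x: float       # position along each axis of the ground inertial frame
    y: float
    z: float
    pitch: float   # attitude angle θ
    yaw: float     # attitude angle ψ
    roll: float    # attitude angle φ

    def delta(self, other: "Pose6DoF") -> "Pose6DoF":
        """Component-wise change from this pose to `other`."""
        return Pose6DoF(other.x - self.x, other.y - self.y, other.z - self.z,
                        other.pitch - self.pitch, other.yaw - self.yaw,
                        other.roll - self.roll)
```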
In one embodiment, the smartphone is used as the first electronic device: smartphones are widespread, generally carry a multi-axis inertial measurement unit (e.g., 3-axis, 6-axis, or 9-axis), and therefore generally support 6DoF data. No additional operation input device is then required; the smartphone can interact with the AR device through its various functions, and its data processing capability can assist the AR device in improving data processing performance and battery life. The first electronic device and the AR device may be connected via a network (not shown) that provides the medium for their communication link; the network may include various connection types, such as wired or wireless communication links or fiber-optic cables.
For example, the image is preprocessed (e.g., analog-to-digital conversion, binarization, image smoothing, enhancement, filtering), features are extracted (the extracted features can characterize the category of the object to be recognized), and the features are then processed with a trained recognition model to obtain the category to which the object belongs, which can serve as the object recognition information. Of course, any image recognition technique applicable in the prior art can be used; no limitation is imposed here. Because the existing image recognition process is performed on the second electronic device, while in this embodiment it is performed on the first electronic device, the computing resource and power consumption of the second electronic device can be effectively reduced and its battery life improved.
The pose association information of the object to be identified corresponding to the first pose may include at least one of the following: a specified number of degrees-of-freedom data of the object to be recognized relative to the first electronic device, and image information containing the object to be recognized. Specifically, the pose association information corresponding to the first pose may be the pose information of the object to be recognized in the coordinate system of the first electronic device, for example the coordinates (x, y, z) of the object in the first electronic device's coordinate system and the attitude angles (θ, ψ, φ) of the object relative to the first electronic device. Of course, the coordinates may be replaced with per-axis velocity or acceleration values, in which case the relative coordinates at each time can be obtained by integrating those values.
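As a minimal illustration of the remark that relative coordinates can be recovered by integrating velocity or acceleration values, the sketch below performs naive dead reckoning. The fixed sample interval, zero initial velocity, and absence of drift correction are simplifying assumptions.

```python
# Naive dead reckoning: integrate acceleration samples twice to obtain a
# relative displacement. Real IMU pipelines also correct for bias and drift.
def integrate_acceleration(accels, dt):
    """accels: iterable of (ax, ay, az) samples taken at fixed interval dt.
    Returns the relative displacement (x, y, z) from the starting point."""
    vel = [0.0, 0.0, 0.0]
    pos = [0.0, 0.0, 0.0]
    for a in accels:
        for i in range(3):
            vel[i] += a[i] * dt    # v += a * dt
            pos[i] += vel[i] * dt  # x += v * dt
    return tuple(pos)
```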
It should be noted that the pose association information of the object to be recognized corresponding to the first pose may be obtained through image processing, for example. Alternatively, the pose information of the object relative to the first electronic device can be acquired by a sensor on the object itself (such as a 6-axis inertial measurement unit) and sent to the first electronic device. For example, an initial pose transformation relationship exists between the object and the first electronic device, and the object determines its pose relative to the first electronic device based on that initial relationship and the detected amount of pose change.
In addition, the image identification process and the pose associated information acquisition process based on the image can be realized locally or remotely.
In an embodiment, identifying the first image information containing the object to be identified, acquired at the first pose of the first electronic device, to obtain the object identification information and the pose association information corresponding to the first pose may include the following operations.
First, first image information of the object to be recognized at the first pose of the first electronic device is acquired. For example, the first electronic device obtains the first image information containing the object through its own image sensor while in the first pose.
Then, the first image information is sent to a third electronic device that is communicatively connected to the first electronic device. The third electronic device may be a server, a server cluster, or the like. Because a server has strong data processing capability and few constraints on size and energy consumption, using the third electronic device for image recognition helps reduce the power and performance cost on the first electronic device, so the method provided by the disclosure can work with a first electronic device that lacks high data processing capability.
Then, the object identification information and the pose association information of the object to be identified corresponding to the first pose are received; they are determined by the third electronic device at least based on the first image information. The third electronic device may recognize the image as described above. In this way, the first electronic device can obtain the object identification information and the pose association information corresponding to the first pose in a low-consumption state.
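A hedged sketch of this offloading exchange follows: the first device uploads the frame captured at the first pose and receives back the recognition result plus the pose association information. The endpoint URL, field names, and response shape are illustrative assumptions; the patent does not specify a wire protocol.

```python
# Sketch: offload recognition to the third electronic device (a server).
# URL and JSON fields are hypothetical.
import requests

def recognize_remotely(jpeg_bytes, first_pose):
    """Send the captured frame and the first pose; return the object label
    and the pose association information corresponding to the first pose."""
    resp = requests.post(
        "https://recognizer.example.com/recognize",  # hypothetical endpoint
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        data={"pose": ",".join(str(v) for v in first_pose)},
        timeout=5.0,
    )
    resp.raise_for_status()
    result = resp.json()
    # e.g. {"label": "cup", "pose_assoc": [20, 22, 60, 20, -10, 10]}
    return result["label"], result["pose_assoc"]
```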
In operation S202, the pose association information of the object to be recognized corresponding to the second pose is obtained, based on the association relationship between the first pose and a second pose of a second electronic device, the first pose, the second pose, and the pose association information of the object to be recognized corresponding to the first pose.
In this embodiment, the association relationship between the first pose and the second pose of the second electronic device may be calibration information. For example, the pose information of the first and second electronic devices is initialized, yielding the correspondence between their initialized poses. During initialization, the first and second electronic devices may be placed at a preset first calibration position and second calibration position, respectively; a pose-information conversion relationship (such as a displacement relationship and an attitude-angle relationship, i.e., the calibration information) exists between the two calibration positions, through which the pose information of the two devices is associated.
The first pose may be pose information of the first electronic device relative to a ground inertial coordinate system, the second pose may be pose information of the second electronic device relative to the first electronic device, and the pose associated information of the object to be recognized corresponding to the first pose is pose information of the object to be recognized relative to the first electronic device.
For example, the pose information (X_A0, Y_A0, Z_A0, θ_A0, ψ_A0, φ_A0) of the first calibration position is (0, 0, 0, 0, 0, 0), and the pose information (X_B0, Y_B0, Z_B0, θ_B0, ψ_B0, φ_B0) of the second calibration position is (10, 0, 0, 0, 0, 0). The second pose information of the second electronic device after initialization at the first calibration position is (0, 0, 0, 0, 0, 0), and the first pose information of the first electronic device after initialization at the second calibration position is (10, 0, 0, 0, 0, 0). When the change in the first electronic device's pose information after movement is (20, 50, -3, 10, 30, -20), the pose information of the first electronic device relative to the second electronic device is (30, 50, -3, 10, 30, -20). If the pose information of the object to be recognized relative to the first electronic device is (20, 22, 60, 20, -10, 10), the pose information of the object relative to the second electronic device is (50, 72, 57, 30, 20, -10). If the second electronic device moves during this period and its pose information becomes (10, 10, 10, 10, 10, 10), the pose information of the object relative to the second electronic device becomes (60, 82, 67, 40, 30, 0). The position of the object to be recognized in the current display image of the second electronic device can therefore be determined from the object's pose information relative to the second electronic device.
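The arithmetic in this example can be checked with a short sketch. Note that it reproduces the document's simplified component-wise tuple arithmetic on (x, y, z, θ, ψ, φ); a production AR pipeline would compose full SE(3) rigid-body transforms rather than adding pose tuples.

```python
# Reproduce the worked example above with component-wise pose arithmetic.
def combine(p, q):
    """Component-wise combination of two 6DoF tuples, per the example."""
    return tuple(a + b for a, b in zip(p, q))

# Initialization: first device at the second calibration position.
first_device = (10, 0, 0, 0, 0, 0)

# The first device then moves by (20, 50, -3, 10, 30, -20).
first_rel_second = combine(first_device, (20, 50, -3, 10, 30, -20))
assert first_rel_second == (30, 50, -3, 10, 30, -20)

# Object pose relative to the first device (from image processing).
obj_rel_first = (20, 22, 60, 20, -10, 10)
obj_rel_second = combine(first_rel_second, obj_rel_first)
assert obj_rel_second == (50, 72, 57, 30, 20, -10)

# The second device itself then moves to (10, 10, 10, 10, 10, 10); the
# example updates the relative pose by the same component-wise rule.
obj_rel_second = combine(obj_rel_second, (10, 10, 10, 10, 10, 10))
assert obj_rel_second == (60, 82, 67, 40, 30, 0)
```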
Therefore, the pose association information of the object to be recognized corresponding to the second pose, i.e., the pose of the object relative to the second electronic device (e.g., the AR device), can be obtained through a simple geometric transformation, which yields, for example, the position of the object (which area, which pixels, etc.) in the image currently displayed by the AR device. In addition, the image of the object captured by the first electronic device at the first pose may be converted into the image of the object as seen by the second electronic device at the second pose through stereoscopic 3D model processing (e.g., a stereoscopic model of the object obtained by processing multiple image frames); of course, the stereoscopic model of the object may also be provided directly.
In operation S203, the object identification information and pose association information of the object to be identified corresponding to the second pose are output.
In this embodiment, the object identification information and the to-be-identified object pose associated information corresponding to the second pose may be sent to the second electronic device, so that the second electronic device adds the object identification information to a correct position of an image currently displayed by the second electronic device based on the current second pose and the to-be-identified object pose associated information corresponding to the second pose. In addition, the stereoscopic model of the object to be recognized can be sent to the second electronic device for presentation.
Fig. 2B schematically illustrates a schematic diagram of identifying an object to be identified according to an embodiment of the present disclosure.
As shown in fig. 2B, the first electronic device is a smartphone and the second electronic device is AR glasses, as an example. When a user uses AR glasses to identify an object to be identified (such as a cup), displaying the extended information of the cup is inconvenient in scenes such as the following: there are many objects in the scene; the cup is smaller than the other objects; the cup is small or far from the user; or the cup's current angle is not a good recognition angle. In such scenes it is hard for the user to recognize the cup and display its label information at the correct position in the picture shown on the AR glasses. In fig. 2B, the scene contains a television, a television cabinet, a tea table, a door, several appliances, and a cup, placed closely together (if all extended information were displayed, the displayed image would be cluttered). If the user does not know of, or cannot see, the cup placed on the tea table, the user wants the AR glasses to identify that object and label its information, but does not want the AR glasses to display label information on the other objects in the environment. In this embodiment, the user can simply move the smartphone to a position from which the cup can be photographed clearly (images may be captured from multiple angles, with a flash or other light supplementation as needed). The smartphone then obtains, using its own data processing capability or a server's, the identification information of the cup, the cup's pose association information corresponding to the first pose, and the cup's pose association information corresponding to the second pose, and sends the identification information and the pose association information corresponding to the second pose to the AR device, so that the label "cup" can be added at the correct position in the picture displayed by the AR device. The angle at which the AR device views the cup may differ from the angle and distance at which the phone photographed it, so the resulting images of the cup may differ, but they can be converted into one another based on the pose information.
It should be noted that the image of the cup in the picture displayed by the AR device may be an image captured by the AR device itself, or an image obtained by geometrically converting the image captured by the phone. The two should be the same, but the converted image may contain some error depending on the accuracy of the pose information. The cup in the displayed picture may also be a stereoscopic model of the cup (sized, based on the pose information, to match the image captured by the AR device), or a window may be opened in the displayed picture to show the stereoscopic model; no limitation is imposed here. In addition, the phone can display its own captured image, the stereoscopic model, and the recognition result in real time, as auxiliary display content for the AR device; for example, by moving a finger on the screen the user can drag, zoom, and rotate the cup to examine its structure from every direction.
According to the augmented reality extension method, by means of the first electronic device, e.g., a smartphone's camera and 6DoF data, the distance between the camera and the object to be recognized can be changed flexibly and the object can be approached from different angles, so the AR device user can perform object recognition more conveniently. Moreover, the recognition algorithm runs on the first electronic device, which effectively reduces power consumption on the AR device.
Fig. 2C schematically shows a flow chart of an augmented reality extension method according to another embodiment of the present disclosure.
In one embodiment, as shown in fig. 2C, the augmented reality extension method may further include operations S204 to S205.
In operation S204, a first user operation is determined based on at least an amount of change in the first pose of the first electronic device within a preset time period.
In this embodiment, the first electronic device is a smartphone, as an example. The smartphone can acquire and output 6DoF data and, based on that data, provide an air-mouse function as the operation input device of the AR glasses.
For example, after the operation input function is enabled, if the phone's movement path forms a zigzag and is completed within a preset time threshold, the first user operation may be determined to be the photo-taking function. The function corresponding to the first user operation may be preset or user-defined; no limitation is imposed here.
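A minimal sketch of this gesture-to-command mapping follows, under stated assumptions: the window length and threshold are invented, and the zigzag detection is reduced to a net yaw-swing heuristic rather than true path matching.

```python
# Sketch of operation S204: classify a user operation from the change in
# the first device's pose within a preset time window. Thresholds, window
# length, and the gesture-to-function mapping are illustrative assumptions.
import time

WINDOW_S = 1.5  # preset time period (assumed)

def classify_gesture(pose_samples):
    """pose_samples: list of (timestamp, (x, y, z, pitch, yaw, roll)).
    Returns a user-operation name, or None if no gesture is recognized."""
    cutoff = time.time() - WINDOW_S
    recent = [p for t, p in pose_samples if t >= cutoff]
    if len(recent) < 2:
        return None
    delta = [b - a for a, b in zip(recent[0], recent[-1])]
    if abs(delta[4]) > 60:       # large net yaw swing (degrees, assumed)
        return "take_photo"      # mapping may be preset or user-defined
    return None
```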
In operation S205, the first user operation is input to the second electronic device to control the second electronic device.
The process of controlling the second electronic device based on the first user operation may refer to an existing process of controlling the second electronic device based on a handle, and is not described herein again.
According to the augmented reality extension method, functions such as an air mouse are provided by means of the first electronic device, e.g., a smartphone's 6DoF data, which then serves as the control input device of the AR device; this improves input convenience without equipping the AR device with an additional dedicated control input device.
In another embodiment, as shown in fig. 2C, the augmented reality extension method may further include operations S206 to S207. In the prior art, when a user makes a video call with an AR device, neither party can see a real image of the other (for example, an AR-glasses wearer cannot film themselves), which harms user experience; the following operations can solve this problem.
In operation S206, a second user operation for controlling the first electronic device to acquire image information is received.
For example, the user taps a video-capture button displayed on the phone screen. As another example, the user starts video capture with an air gesture (a gesture determined from the 6DoF data). Of course, during a video call the AR glasses may also directly invoke the smartphone's camera for image acquisition; no limitation is imposed here.
In operation S207, the image information is acquired and output in response to the second user operation.
Specifically, the image information may be sent to the second electronic device and then forwarded by the second electronic device to the other party of the video call. In addition, the first electronic device can also display this image information, or the other party's image information, in real time, improving user experience.
Fig. 3 schematically shows a flow chart of an augmented reality extension method according to another embodiment of the present disclosure. In this embodiment, the augmented reality extension method may be applied to the second electronic device.
As shown in fig. 3, the augmented reality extension method may include operations S301 to S303.
In operation S301, second image information containing an object to be recognized at a second pose of the second electronic device is acquired.
For example, the second electronic device, while at the second pose, acquires second image information containing the object to be recognized through its camera. The second electronic device may be an AR device, such as AR glasses.
In operation S302, pose association information and object identification information of the object to be identified corresponding to the second pose are obtained.
The object identification information is obtained by identifying first image information which is acquired under a first pose of first electronic equipment and contains an object to be identified, and the pose associated information of the object to be identified corresponding to a second pose is acquired by the first electronic equipment based on the association relationship between the first pose and the second pose, the first pose, the second pose and the pose associated information of the object to be identified corresponding to the first pose.
In operation S303, the object identification information is added to the corresponding position in the second image information according to the pose correlation information of the object to be identified corresponding to the second pose.
Specifically, referring to FIG. 2B, the annotation information "cup" is correctly added to the image displayed by the AR device.
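One way the label can land at the right pixels, sketched under assumptions the patent leaves open: treat the pose association information as the object's 3D position in the AR device's camera frame and project it through an assumed pinhole camera model (OpenCV is used here only for drawing).

```python
# Sketch of operation S303: place the annotation in the displayed frame.
# The intrinsic matrix K is an illustrative assumption.
import cv2
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed pinhole intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def label_object(frame, obj_xyz_in_device, text="cup"):
    """Project the object's position (meters, AR-device camera frame) to
    pixel coordinates and draw the annotation there."""
    x, y, z = obj_xyz_in_device
    if z <= 0:                          # behind the camera: nothing to draw
        return frame
    u, v, w = K @ np.array([x, y, z])
    cv2.putText(frame, text, (int(u / w), int(v / w)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return frame
```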
It should be noted that, for the process of acquiring the pose association information of the object to be identified and the object identification information corresponding to the second pose, refer to the method described above; details are not repeated here.
The augmented reality extension method provided by the disclosure can flexibly change the distance between the camera and the object to be recognized by means of the first electronic device's camera and 6DoF data, and can approach the object from different angles, so the AR device user can recognize objects more conveniently. In addition, the object identification information can be added correctly to the image displayed by the AR device based on the pose information, which greatly improves operating convenience. Finally, the recognition algorithm runs on the first electronic device, effectively reducing power consumption on the AR device.
In another embodiment, the method may further include the following operations.
First, a first user operation is received, where the first user operation is determined at least based on the amount of change in the first pose of the first electronic device within a preset time period. Then, the function corresponding to the first user operation is executed. In this way, a device supporting 6DoF data, such as a smartphone, can serve as the operation input device of the second electronic device, and the smartphone's rich functions can extend the AR device without adding a new operation input device.
In another embodiment, the method may further include the following operations.
First, image information is received, where the image information is acquired by the first electronic device in response to a second user operation controlling it. Then, the image information is output. In this way, the AR device can obtain real images of the user in real time, so both parties to a call can see each other's real image (in the prior art, an AR-glasses wearer cannot film themselves).
Fig. 4 schematically shows a flow chart of an augmented reality extension method according to another embodiment of the present disclosure. In this embodiment, the method is applied to an augmented reality system that includes at least one first electronic device and a second electronic device.
As shown in fig. 4, the method may include operations S401 to S404.
In operation S401, second image information containing an object to be recognized at a second pose of the second electronic device is acquired, and first image information containing the object to be recognized at a first pose of the first electronic device is acquired.
In operation S402, the first image information is identified to obtain object identification information and pose association information of the object to be identified corresponding to the first pose.
In operation S403, based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information of the object to be recognized corresponding to the first pose, the pose association information of the object to be recognized corresponding to the second pose is obtained.
In operation S404, the object identification information is added to the corresponding position in the second image information according to the pose associated information of the object to be identified corresponding to the second pose.
In this embodiment, the operations may refer to the related contents shown in fig. 2A to fig. 3, and are not described herein again.
Fig. 5A schematically illustrates a block diagram of an augmented reality extension apparatus according to an embodiment of the present disclosure.
As shown in fig. 5A, the augmented reality extension apparatus 500 may be an electronic device such as a smart phone supporting 6DoF data. Specifically, the augmented reality extension apparatus 500 may include: a first information acquisition module 510, a first conversion module 520, and an output module 530.
The first information obtaining module 510 is configured to identify first image information that is obtained in a first pose of the first electronic device and includes an object to be identified, so as to obtain object identification information and pose association information of the object to be identified, where the pose association information corresponds to the first pose.
The first conversion module 520 is configured to obtain the pose association information of the object to be identified corresponding to a second pose of a second electronic device, based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information of the object to be identified corresponding to the first pose.
The output module 530 is configured to output the object identification information and pose association information of the object to be identified corresponding to the second pose.
For example, the pose association information of the object to be recognized corresponding to the first pose includes at least one of the following: a specified number of degrees-of-freedom data of the object to be recognized relative to the first electronic device, and image information containing the object to be recognized.
In one embodiment, the apparatus 500 may further include an operation determination module 540 and an input module 550.
The operation determining module 540 is configured to determine a first user operation at least based on an amount of change in the first pose of the first electronic device within a preset time period.
The input module 550 is configured to input the first user operation to the second electronic device to control the second electronic device.
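A minimal sketch of the logic such an operation-determining module might apply is given below. The gesture names and the distance threshold are assumptions made for illustration; the disclosure only requires that the first user operation be derived from the amount of change of the first pose within the preset time period.

```python
# Hedged sketch: classify a user operation from the change of the first pose
# sampled over the preset time period. Threshold and labels are assumptions.
import numpy as np

def classify_operation(positions: np.ndarray, swipe_threshold_m: float = 0.15) -> str:
    """positions: (N, 3) device positions sampled within the preset time period."""
    delta = positions[-1] - positions[0]      # net positional change over the window
    if np.linalg.norm(delta) < swipe_threshold_m:
        return "hold"                         # pose barely changed: treat as press/hold
    axis = int(np.argmax(np.abs(delta)))      # dominant axis of motion
    sign = "+" if delta[axis] > 0 else "-"
    return ("swipe_x", "swipe_y", "swipe_z")[axis] + sign

# Usage: a 20 cm motion mostly along +x maps to "swipe_x+".
print(classify_operation(np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.2, 0.01, 0.0]])))
```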
Alternatively, the first information obtaining module 510 may include an obtaining unit, a sending unit, and a receiving unit.
The acquiring unit is configured to acquire first image information containing an object to be identified in a first pose of the first electronic device. The sending unit is configured to send the first image information to a third electronic device, where the third electronic device is communicatively connected to the first electronic device. The receiving unit is configured to receive the object identification information and the pose association information of the object to be identified corresponding to the first pose, both of which are determined by the third electronic device at least based on the first image information.
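The exchange between the first and third electronic devices might look like the following hypothetical sketch. The HTTP endpoint, route, and reply schema are all assumptions for illustration; the disclosure only requires a communication connection between the two devices.

```python
# Hypothetical offloaded-recognition round trip; the endpoint URL and the
# reply schema are illustrative assumptions, not specified by the disclosure.
import requests

RECOGNIZER_URL = "http://third-device.local:8080/recognize"  # assumed endpoint

def recognize_remotely(jpeg_bytes: bytes) -> dict:
    """Send the first image information and receive identification + pose info."""
    resp = requests.post(
        RECOGNIZER_URL,
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=5.0,
    )
    resp.raise_for_status()
    # Assumed reply shape: {"label": str, "pose_associated_info": [...]}
    return resp.json()
```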
In another embodiment, the apparatus 500 may further include an operation acquisition module 560 and an image output module 570.
The operation acquiring module 560 is configured to receive a second user operation, where the second user operation is used to control the first electronic device to acquire image information.
The image output module 570 is configured to obtain and output the image information in response to the second user operation.
Fig. 5B schematically illustrates a block diagram of an augmented reality extension apparatus according to another embodiment of the present disclosure.
As shown in fig. 5B, the augmented reality extension apparatus 5000 may be an electronic device supporting augmented reality technology. Specifically, the augmented reality extension apparatus 5000 may include an image acquisition module 580, a second information acquisition module 590, and an information adding module 511.
The image acquisition module 580 is configured to obtain second image information containing an object to be identified in a second pose of the second electronic device.
The second information obtaining module 590 is configured to obtain pose associated information of the object to be identified and object identification information corresponding to the second pose.
The information adding module 511 is configured to add the object identification information to the corresponding position in the second image information according to the pose associated information of the object to be identified corresponding to the second pose.
It should be noted that the object identification information is obtained by identifying first image information that is acquired by a first electronic device in a first pose and contains the object to be identified. The pose association information of the object to be identified corresponding to the second pose is obtained by the first electronic device based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information of the object to be identified corresponding to the first pose.
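How the information adding module could place the label can be sketched as a pinhole projection: once the object's position is expressed in the second device's camera frame (see the conversion sketch after fig. 4), its pixel location in the second image information follows from the camera intrinsics. The intrinsic matrix K below is an illustrative assumption.

```python
# Minimal sketch of placing the identification label: project the object's
# 3D position (in the second device's camera frame) to a pixel. K is assumed.
import numpy as np

def project_to_pixel(p_cam: np.ndarray, K: np.ndarray):
    """Pinhole projection of a 3D point to integer pixel coordinates (u, v)."""
    x, y, z = p_cam
    if z <= 0:
        return None                          # behind the camera: nothing to draw
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return int(round(u)), int(round(v))

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])             # illustrative intrinsics for a 1280x720 frame
print(project_to_pixel(np.array([0.1, -0.05, 1.2]), K))  # -> (707, 327)
```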
The apparatus 5000 may further include an operation receiving module 512 and an executing module 513.
The operation receiving module 512 is configured to receive a first user operation, where the first user operation is determined based on at least an amount of change of a first pose of the first electronic device within a preset time period.
The executing module 513 is configured to execute a function corresponding to the first user operation.
Fig. 6 schematically shows a block diagram of an augmented reality extension system according to an embodiment of the present disclosure.
As shown in fig. 6, the augmented reality extension apparatus 600 may be implemented as an augmented reality system, where the augmented reality system includes at least one first electronic device and at least one second electronic device; the first electronic device may be an electronic device such as a smart phone supporting 6DoF data, and the second electronic device may be an electronic device supporting augmented reality technology.
Specifically, the augmented reality extension apparatus 600 may include: a third information acquisition module 610, a fourth information acquisition module 620, a second conversion module 630 and a second information addition module 640.
The third information obtaining module 610 is configured to obtain second image information containing an object to be identified in a second pose of the second electronic device, and to obtain first image information containing the object to be identified in a first pose of the first electronic device.
The fourth information obtaining module 620 is configured to identify the first image information, so as to obtain object identification information and pose association information of the object to be identified corresponding to the first pose.
The second conversion module 630 is configured to obtain pose association information of the object to be identified corresponding to the second pose, based on the association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information of the object to be identified corresponding to the first pose.
The second information adding module 640 is configured to add the object identification information to a corresponding position in the second image information according to the pose associated information of the object to be identified corresponding to the second pose.
According to the embodiments of the present disclosure, for the image recognition process and the pose information conversion process of the augmented reality extension apparatus, reference may be made to the above description; details are not repeated here.
Any number of the modules or units according to the embodiments of the present disclosure, or at least part of the functionality of any of them, may be implemented in one module, and any one of them may conversely be split into a plurality of modules for implementation. Any one or more of the modules or units may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, or an Application Specific Integrated Circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or by any one of, or a suitable combination of, the three implementations of software, hardware, and firmware. Alternatively, one or more of the modules or units according to the embodiments of the present disclosure may be implemented at least partly as computer program modules which, when executed, may perform the corresponding functions.
For example, any plurality of the first information obtaining module 510, the first converting module 520, the outputting module 530, the operation determining module 540, the inputting module 550, the operation obtaining module 560 and the image outputting module 570 may be combined in one module to be implemented, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the first information obtaining module 510, the first converting module 520, the output module 530, the operation determining module 540, the input module 550, the operation obtaining module 560 and the image output module 570 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware and firmware, or in a suitable combination of any of them. Alternatively, at least one of the first information obtaining module 510, the first converting module 520, the outputting module 530, the operation determining module 540, the inputting module 550, the operation obtaining module 560 and the image outputting module 570 may be at least partially implemented as a computer program module, which may perform a corresponding function when executed.
Fig. 7 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 includes: one or more processors 710 and a computer-readable storage medium 720. The electronic device may perform a method according to an embodiment of the present disclosure.
In particular, processor 710 may comprise, for example, a general purpose microprocessor, an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 710 may also include on-board memory for caching purposes. Processor 710 may be a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
Computer-readable storage medium 720, for example, may be a non-volatile computer-readable storage medium, specific examples including, but not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); memory such as Random Access Memory (RAM) or flash memory, etc.
The computer-readable storage medium 720 may include a program 721, which program 721 may include code/computer-executable instructions that, when executed by the processor 710, cause the processor 710 to perform a method according to an embodiment of the disclosure, or any variation thereof.
The program 721 may be configured with, for example, computer program code including computer program modules. For example, in an example embodiment, the code in program 721 may include one or more program modules, such as program module 721A, program module 721B, and so on. It should be noted that the division and number of the program modules are not fixed; those skilled in the art may use suitable program modules or program module combinations according to the actual situation, so that when these program modules are executed by the processor 710, the processor 710 can carry out the method according to the embodiments of the present disclosure or any variation thereof.
According to embodiments of the present disclosure, the processor 710 may interact with the computer readable storage medium 720 to perform a method according to embodiments of the present disclosure or any variant thereof.
According to an embodiment of the present disclosure, at least one of the first information obtaining module 510, the first converting module 520, the outputting module 530, the operation determining module 540, the inputting module 550, the operation obtaining module 560 and the image outputting module 570 may be implemented as a program module described with reference to fig. 7, which, when executed by the processor 710, may implement the corresponding operation described above.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure can be combined and/or integrated in various ways, even if such combinations or integrations are not expressly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims may be combined and/or integrated in various ways without departing from the spirit and teaching of the present disclosure. All such combinations and/or integrations fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (10)

1. An augmented reality extension method, applied to a first electronic device, the method comprising:
identifying first image information that is acquired in a first pose of the first electronic device and contains an object to be identified, to obtain object identification information and pose association information of the object to be identified corresponding to the first pose;
acquiring pose association information of the object to be identified corresponding to a second pose of a second electronic device, based on an association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information of the object to be identified corresponding to the first pose; and
outputting the object identification information and the pose association information of the object to be identified corresponding to the second pose, so that the second electronic device adds the object identification information, according to the pose association information of the object to be identified corresponding to the second pose, at a position in second image information corresponding to the object to be identified.
2. The method of claim 1, further comprising:
determining a first user operation at least based on the variation of the first pose of the first electronic device within a preset time period; and
inputting the first user operation into the second electronic device to control the second electronic device.
3. The method of claim 1, wherein:
the pose association information of the object to be identified corresponding to the first pose comprises at least one of the following: a specified number of degrees-of-freedom data of the object to be identified relative to the first electronic device, and image information containing the object to be identified.
4. The method according to claim 1, wherein identifying the first image information that is acquired in the first pose of the first electronic device and contains the object to be identified, to obtain the object identification information and the pose association information of the object to be identified corresponding to the first pose, comprises:
acquiring the first image information containing the object to be identified in the first pose of the first electronic device;
sending the first image information to a third electronic device, wherein the third electronic device is communicatively connected to the first electronic device; and
receiving the object identification information and the pose association information of the object to be identified corresponding to the first pose, wherein the object identification information and the pose association information of the object to be identified corresponding to the first pose are determined by the third electronic device at least based on the first image information.
5. The method of claim 1, further comprising:
receiving a second user operation, wherein the second user operation is used to control the first electronic device to acquire image information; and
in response to the second user operation, acquiring and outputting the image information.
6. An augmented reality extension method, applied to a second electronic device, the method comprising:
acquiring second image information containing an object to be identified in a second pose of the second electronic device;
acquiring pose association information and object identification information of the object to be identified corresponding to the second pose, wherein
the object identification information is obtained by identifying first image information that is acquired in a first pose of a first electronic device and contains the object to be identified, and
the pose association information of the object to be identified corresponding to the second pose is obtained by the first electronic device based on an association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information of the object to be identified corresponding to the first pose; and
adding the object identification information at the corresponding position in the second image information according to the pose association information of the object to be identified corresponding to the second pose.
7. The method of claim 6, further comprising:
receiving a first user operation, wherein the first user operation is determined at least based on the variation of the first pose of the first electronic device in a preset time period; and
executing a function corresponding to the first user operation.
8. The method of claim 6, further comprising:
receiving image information, wherein the image information is acquired by the first electronic device in response to a second user operation for controlling the first electronic device; and
outputting the image information.
9. An augmented reality extension method, applied to an augmented reality system comprising at least one first electronic device and at least one second electronic device, the method being performed by one of the first electronic device and the second electronic device and comprising:
acquiring second image information containing an object to be identified in a second pose of the second electronic device, and acquiring first image information containing the object to be identified in a first pose of the first electronic device;
identifying the first image information to obtain object identification information and pose association information of the object to be identified corresponding to the first pose;
acquiring pose association information of the object to be identified corresponding to the second pose based on an association relationship between the first pose and the second pose, the first pose, the second pose, and the pose association information of the object to be identified corresponding to the first pose; and
adding the object identification information at the corresponding position in the second image information according to the pose association information of the object to be identified corresponding to the second pose.
10. An electronic device, comprising:
one or more processors;
a computer-readable storage medium storing one or more computer programs which, when executed by the one or more processors, implement the method of any one of claims 1-9.
CN201910228203.9A 2019-03-25 2019-03-25 Augmented reality extension method and electronic device Active CN109992111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910228203.9A CN109992111B (en) 2019-03-25 2019-03-25 Augmented reality extension method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910228203.9A CN109992111B (en) 2019-03-25 2019-03-25 Augmented reality extension method and electronic device

Publications (2)

Publication Number Publication Date
CN109992111A CN109992111A (en) 2019-07-09
CN109992111B true CN109992111B (en) 2021-02-19

Family

ID=67131392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910228203.9A Active CN109992111B (en) 2019-03-25 2019-03-25 Augmented reality extension method and electronic device

Country Status (1)

Country Link
CN (1) CN109992111B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837297B (en) * 2019-10-31 2021-07-16 联想(北京)有限公司 Information processing method and AR equipment
CN111077999B (en) * 2019-11-14 2021-08-13 联想(北京)有限公司 Information processing method, equipment and system
CN113223129B (en) * 2020-01-20 2024-03-26 华为技术有限公司 Image rendering method, electronic equipment and system
CN114461072A (en) * 2022-02-10 2022-05-10 湖北星纪时代科技有限公司 Display method, display device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107646126A (en) * 2015-07-16 2018-01-30 谷歌有限责任公司 Camera Attitude estimation for mobile device
CN108734736A (en) * 2018-05-22 2018-11-02 腾讯科技(深圳)有限公司 Camera posture method for tracing, device, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10089790B2 (en) * 2015-06-30 2018-10-02 Ariadne's Thread (Usa), Inc. Predictive virtual reality display system with post rendering correction
KR102233807B1 (en) * 2016-11-15 2021-03-30 구글 엘엘씨 Input Controller Stabilization Technique for Virtual Reality System
CN107340861B (en) * 2017-06-26 2020-11-20 联想(北京)有限公司 Gesture recognition method and device thereof
CN107747941B (en) * 2017-09-29 2020-05-15 歌尔股份有限公司 Binocular vision positioning method, device and system
CN109126121B (en) * 2018-06-01 2022-01-04 成都通甲优博科技有限责任公司 AR terminal interconnection method, system, device and computer readable storage medium
CN109087359B (en) * 2018-08-30 2020-12-08 杭州易现先进科技有限公司 Pose determination method, pose determination apparatus, medium, and computing device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107646126A (en) * 2015-07-16 2018-01-30 谷歌有限责任公司 Camera Attitude estimation for mobile device
CN108734736A (en) * 2018-05-22 2018-11-02 腾讯科技(深圳)有限公司 Camera posture method for tracing, device, equipment and storage medium

Also Published As

Publication number Publication date
CN109992111A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
US11394950B2 (en) Augmented reality-based remote guidance method and apparatus, terminal, and storage medium
CN109992111B (en) Augmented reality extension method and electronic device
US20210201520A1 (en) Systems and methods for simulatenous localization and mapping
US11145083B2 (en) Image-based localization
CN109313812B (en) Shared experience with contextual enhancements
US11625841B2 (en) Localization and tracking method and platform, head-mounted display system, and computer-readable storage medium
CN107820593B (en) Virtual reality interaction method, device and system
US9639988B2 (en) Information processing apparatus and computer program product for processing a virtual object
US9256986B2 (en) Automated guidance when taking a photograph, using virtual objects overlaid on an image
US11893702B2 (en) Virtual object processing method and apparatus, and storage medium and electronic device
US9268410B2 (en) Image processing device, image processing method, and program
US11288871B2 (en) Web-based remote assistance system with context and content-aware 3D hand gesture visualization
CN110866977B (en) Augmented reality processing method, device, system, storage medium and electronic equipment
EP3968131A1 (en) Object interaction method, apparatus and system, computer-readable medium, and electronic device
WO2015093130A1 (en) Information processing device, information processing method, and program
CN110296686A (en) Localization method, device and the equipment of view-based access control model
US20150244984A1 (en) Information processing method and device
US20220107704A1 (en) Virtual paintbrush implementing method and apparatus, and computer readable storage medium
US11869195B2 (en) Target object controlling method, apparatus, electronic device, and storage medium
US20210245368A1 (en) Method for virtual interaction, physical robot, display terminal and system
CN114067087A (en) AR display method and apparatus, electronic device and storage medium
US20170026617A1 (en) Method and apparatus for real-time video interaction by transmitting and displaying user interface correpsonding to user input
CN109542218B (en) Mobile terminal, human-computer interaction system and method
CN113780045A (en) Method and apparatus for training distance prediction model
CN108171802B (en) Panoramic augmented reality implementation method realized by combining cloud and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant