WO2021082801A1 - Augmented reality processing method and apparatus, system, storage medium and electronic device - Google Patents

Augmented reality processing method and apparatus, system, storage medium and electronic device

Info

Publication number
WO2021082801A1
WO2021082801A1 (PCT/CN2020/116290)
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
current frame
point information
frame image
dimensional feature
Prior art date
Application number
PCT/CN2020/116290
Other languages
French (fr)
Chinese (zh)
Inventor
黄锋华
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2021082801A1 publication Critical patent/WO2021082801A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation

Definitions

  • the present disclosure relates to the field of augmented reality technology, and in particular to an augmented reality processing method, an augmented reality processing device, an augmented reality processing system, a storage medium, and electronic equipment.
  • Augmented Reality is a technology that integrates the virtual world and the real world. This technology has been widely used in education, games, medical care, the Internet of Things, intelligent manufacturing and other fields.
  • the relocation effect plays a crucial role in the AR experience.
  • because the shooting angles of the mapping device and the relocation device are not the same, feature mismatches may occur in the process of determining the pose relationship between the mapping device and the relocation device, resulting in a poor relocation result.
  • an augmented reality processing method applied to a first device, the method including: determining image parameters of a current frame image of the first device; acquiring image parameters of a reference image determined by a second device; determining the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image; and determining the relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and the posture information of the first device when the current frame image was collected, so that the relative pose relationship between the first device and the second device can be used to perform augmented reality processing operations.
  • an augmented reality processing device applied to a first device.
  • the augmented reality processing device includes a first image parameter determination module, a second image parameter determination module, and a first relative pose determination module And the second relative pose determination module.
  • the first image parameter determination module is used to determine the image parameters of the current frame image of the first device; the second image parameter determination module is used to obtain the image parameters of the reference image determined by the second device; the first relative pose determination module is used to determine the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image; and the second relative pose determination module is used to determine the relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and the posture information of the first device when the current frame image was collected, so that the relative pose relationship between the first device and the second device can be used to perform augmented reality processing operations.
  • an augmented reality processing system applied to a first device.
  • the augmented reality processing system includes a camera module, a depth sensing module, an inertial measurement unit, a real-time positioning and map construction unit, and Augmented reality processing device.
  • the camera module is used to collect the current frame image; the depth sensing module is used to collect depth information corresponding to the current frame image; the inertial measurement unit is used to measure inertial information of the first device; the instant positioning and map construction unit is used to obtain the current frame image and the inertial information and to generate posture information of the first device based on the inertial information; the augmented reality processing device is used to determine the image parameters of the current frame image, obtain the image parameters of the reference image determined by the second device, determine the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image, and determine the relative pose relationship between the first device and the second device in combination with the posture information of the first device when the current frame image was collected.
  • a storage medium on which a computer program is stored, and when the computer program is executed by a processor, the above-mentioned augmented reality processing method is implemented.
  • an electronic device including a processor and a memory for storing executable instructions of the processor; the processor is configured to execute the aforementioned augmented reality processing method by executing the executable instructions.
  • FIG. 1 shows a schematic diagram of a scenario architecture suitable for implementing exemplary embodiments of the present disclosure
  • Fig. 2 shows a schematic structural diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present invention
  • FIG. 3 schematically shows an architecture diagram of an augmented reality processing system according to an exemplary embodiment of the present disclosure
  • FIG. 4 schematically shows a flowchart of an augmented reality processing method according to an exemplary embodiment of the present disclosure
  • FIG. 5 schematically shows a flowchart of determining the relative pose relationship between the first device and the second device by using the iterative closest point method according to the present disclosure
  • Fig. 6 schematically shows a block diagram of an augmented reality processing device according to an exemplary embodiment of the present disclosure
  • FIG. 7 schematically shows a block diagram of a first relative pose determination module according to an exemplary embodiment of the present disclosure
  • Fig. 8 schematically shows a block diagram of a first relative pose determination module according to another exemplary embodiment of the present disclosure
  • FIG. 9 schematically shows a block diagram of an augmented reality processing apparatus according to another exemplary embodiment of the present disclosure.
  • Fig. 1 shows a schematic diagram of a scenario architecture suitable for implementing exemplary embodiments of the present disclosure.
  • the augmented reality processing solution architecture of the exemplary embodiment of the present disclosure may include a first device 1001 and a second device 1002.
  • the second device 1002 is used to map the scene in which it is located
  • the first device 1001 is a terminal device that is currently to perform an augmented reality processing operation in the scene.
  • the first device 1001 and the second device 1002 may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, portable computers, smart wearable devices, and the like.
  • the first device 1001 and the second device 1002 may be communicatively connected. Specifically, the connection can be established through Bluetooth, hotspot, WiFi, mobile network, etc., so that the second device 1002 can directly transmit data with the first device 1001 without the data going through the server.
  • the video frame image can be obtained based on the camera module of the second device 1002, and the depth information corresponding to each video frame image can be obtained based on the depth sensing module.
  • the second device 1002 can determine two-dimensional feature point information and three-dimensional feature point information.
  • the second device 1002 can send the two-dimensional feature point information and three-dimensional feature point information of each frame image or key frame image to the first device 1001.
  • when the first device 1001 performs an augmented reality processing operation, the first device 1001 can obtain the current frame image taken by its own camera module and, based on the corresponding depth information, determine the two-dimensional feature point information and the three-dimensional feature point information of the current frame image. Subsequently, the first device 1001 may match the two-dimensional feature point information and the three-dimensional feature point information of the current frame image with the two-dimensional feature point information and the three-dimensional feature point information of the images determined by the second device 1002, and determine the pose of the current frame image relative to the second device 1002 based on the matching result. Next, the first device 1001, in combination with its own posture information, can determine the relative pose relationship between the first device 1001 and the second device 1002.
  • if the second device 1002 configured anchor point information during mapping, the first device 1001 can obtain the anchor point information, so that the first device 1001 and the second device 1002 can display virtual objects at the same position in the scene and perform other augmented reality processing operations.
  • the second device 1002 can also send information to the first device 1001 by means of the server 1003.
  • the server 1003 may be a cloud server.
  • the second device 1002 may send the two-dimensional feature point information and the three-dimensional feature point information of each frame of image in the mapping process to the server 1003, and may also send the configured anchor point information to the server 1003.
  • the first device 1001 can obtain the two-dimensional feature point information and the three-dimensional feature point information of each frame image generated during the mapping process from the server 1003, and match it with the information of the current frame image to determine the relative pose relationship between the first device 1001 and the second device 1002.
  • the number of terminal devices and servers in FIG. 1 is only illustrative, and any number of terminal devices and servers may be provided according to implementation needs.
  • the server 1003 may be a server cluster composed of multiple servers.
  • multi-person AR interaction with more than three persons can also be realized.
  • the terminal device used for mapping is described as the second device, and the terminal device currently used for processing operations of augmented reality is described as the first device for distinction. It should be understood that the second device may be a terminal device currently performing processing operations in some scenarios, and the first device may also be a terminal device for mapping in some scenarios.
  • the augmented reality processing method of the exemplary embodiment of the present disclosure is executed by the first device, and accordingly, the augmented reality processing device and the augmented reality processing system described below may be configured in the first device.
  • FIG. 2 shows a schematic structural diagram of a computer system suitable for implementing an electronic device according to an exemplary embodiment of the present disclosure. That is to say, FIG. 2 exemplarily shows a schematic diagram of the computer structure of the above-mentioned first device.
  • the computer system 200 includes a central processing unit (CPU) 201, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 202 or a program loaded from a storage part 208 into a random access memory (RAM) 203.
  • in the RAM 203, various programs and data required for system operation are also stored.
  • the CPU 201, the ROM 202, and the RAM 203 are connected to each other through a bus 204.
  • An input/output (I/O) interface 205 is also connected to the bus 204.
  • the following components are connected to the I/O interface 205: an input part 206 including a keyboard, a mouse, etc.; an output part 207 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage part 208 including a hard disk, etc.; and a communication section 209 including a network interface card such as a LAN card, a modem, and the like. The communication section 209 performs communication processing via a network such as the Internet.
  • the drive 210 is also connected to the I/O interface 205 as needed.
  • the removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 210 as needed, so that the computer program read from it is installed into the storage part 208 as needed.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication part 209, and/or installed from the removable medium 211.
  • when the computer program is executed by the central processing unit (CPU) 201, various functions defined in the system of the present application are executed.
  • the computer-readable medium shown in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the above-mentioned module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • in some alternative implementations, the functions marked in the blocks may also occur in an order different from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram or flowchart, and the combination of blocks in the block diagram or flowchart, can be implemented by a dedicated hardware-based system that performs the specified function or operation, or by a combination of dedicated hardware and computer instructions.
  • the units described in the embodiments of the present disclosure may be implemented in software or hardware, and the described units may also be provided in a processor. Among them, the names of these units do not constitute a limitation on the unit itself under certain circumstances.
  • this application also provides a computer-readable medium.
  • the computer-readable medium may be included in the electronic device described in the above-mentioned embodiments, or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by an electronic device, the electronic device realizes the method described in the following embodiments.
  • the augmented reality processing system of the exemplary embodiment of the present disclosure may include an inertial measurement unit 31, a camera module 32, a depth sensing module 33, an instant positioning and map construction unit 34, and an augmented reality processing device 35.
  • the inertial measurement unit 31 may include a gyroscope and an accelerometer, which respectively measure the angular velocity and acceleration of the first device, and thereby determine the inertial information of the first device.
  • the camera module 32 can be used to collect video frame images, where the video frame images are RGB images. During the execution of the following augmented reality processing, the camera module 32 can acquire the current frame image for subsequent processing.
  • the depth sensing module 33 may be used to collect depth information.
  • the depth sensing module may be a dual camera module, a structured light module, or a TOF (Time-Of-Flight) module. The present disclosure does not impose special restrictions on this.
  • the real-time positioning and map construction unit 34 can be used to obtain the inertial information sent by the inertial measurement unit 31 and the image sent by the camera module 32, and perform the mapping and relocation process.
  • the augmented reality processing device 35 can obtain the current frame image sent by the instant positioning and map construction unit 34, determine the image parameters of the current frame image, obtain the image parameters of the reference image determined by the second device, determine the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image, and determine the relative pose relationship between the first device and the second device in combination with the posture information of the first device when the current frame image was collected.
  • the augmented reality processing device 35 may obtain the current frame image and the corresponding posture information sent by the instant positioning and map construction unit 34, extract the two-dimensional feature point information of the current frame image, obtain the depth information corresponding to the current frame image from the depth sensing module 33, determine the three-dimensional feature point information of the current frame image according to the depth information corresponding to the two-dimensional feature point information, use the two-dimensional feature point information and the three-dimensional feature point information of the current frame image together with the two-dimensional feature point information and the three-dimensional feature point information of the image determined by the second device to determine the pose of the current frame image relative to the second device, and, in combination with the posture information of the first device, determine the relative pose relationship between the first device and the second device, so as to use the relative pose relationship between the first device and the second device to perform augmented reality processing operations.
  • the augmented reality processing device 35 may also include an anchor point acquisition module.
  • the anchor point acquisition module is used to acquire the anchor point information configured by the second device, so as to display the virtual object corresponding to the anchor point information on the first device based on the relative pose relationship between the first device and the second device.
  • the first device can also add anchor point information in the scene so that other devices can display and perform interactive operations.
  • the augmented reality processing system may further include an anchor point adding unit.
  • the anchor point adding unit may be used to add anchor point information in the scene where the first device is located.
  • the anchor point adding unit may include an application program 36 as shown in FIG. 3, and the user holding the first device may implement the addition of anchor point information by means of the application program 36.
  • the second device involved in the exemplary embodiment of the present disclosure may also have a system architecture as shown in FIG. 3.
  • the augmented reality processing method may include the following steps: determining the image parameters of the current frame image of the first device (step S42); acquiring the image parameters of the reference image determined by the second device (step S44); determining the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image (step S46); and determining the relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and the posture information of the first device when the current frame image is collected.
  • the image parameter of the current frame image may include two-dimensional feature point information and three-dimensional feature point information of the current frame image.
  • the first device can obtain the current frame image taken by the camera module, and perform feature extraction on the current frame image to determine the two-dimensional feature point information of the current frame image. Specifically, the two-dimensional feature point information of the current frame image can be extracted based on the combination of the feature extraction algorithm and the feature descriptor.
  • the feature extraction algorithms adopted by the exemplary embodiments of the present disclosure may include, but are not limited to, FAST feature point detection algorithm, DOG feature point detection algorithm, Harris feature point detection algorithm, SIFT feature point detection algorithm, SURF feature point detection algorithm, etc.
  • Feature descriptors may include, but are not limited to, BRIEF feature point descriptors, BRISK feature point descriptors, FREAK feature point descriptors, and so on.
  • the combination of the feature extraction algorithm and the feature descriptor may be the FAST feature point detection algorithm and the BRIEF feature point descriptor. According to other embodiments of the present disclosure, the combination of the feature extraction algorithm and the feature descriptor may be a DOG feature point detection algorithm and a FREAK feature point descriptor.
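As an illustration of this feature-extraction step, the following sketch uses OpenCV (a library assumption; the disclosure does not name one) with ORB, whose FAST keypoints and BRIEF-style descriptors correspond to one of the detector/descriptor combinations mentioned above:

```python
import cv2

def extract_2d_features(frame_bgr):
    """Extract 2D feature point information from the current frame image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # ORB = FAST keypoint detection + a rotated-BRIEF descriptor (illustrative stand-in)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```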
  • the first device may respond to the user's operation to perform the process of acquiring the current frame image and extracting the two-dimensional feature. For example, when the user starts the AR application, the first device may respond to the AR application startup operation, turn on the camera module, obtain the current frame image captured by the camera module, and extract the two-dimensional feature point information.
  • the depth information corresponding to the two-dimensional feature point information can be combined to determine the three-dimensional feature point information of the current frame image.
  • the depth information corresponding to the current frame image can be acquired through the depth sensing module.
  • the depth sensing module may be any one of a dual camera module (for example, a color camera and a telephoto camera), a structured light module, and a TOF module.
  • the current frame image and the corresponding depth information can be registered to determine the depth information of each pixel on the current frame image.
  • the internal and external parameters of the camera module and the depth sensing module need to be calibrated in advance.
  • the internal parameter matrix of the depth sensing module can be used to obtain the coordinate P_ir of the pixel point in the depth sensing module coordinate system.
  • P_ir can be multiplied by a rotation matrix R, and a translation vector T is added to convert P_ir to the coordinate system of the RGB camera to obtain P_rgb.
  • P_rgb can be multiplied by the internal parameter matrix H_rgb of the camera module to obtain p_rgb.
  • p_rgb is also a three-dimensional vector, denoted as (x0, y0, z0), where x0 and y0 are the pixel coordinates of the point in the RGB image; the pixel value of that pixel is extracted and matched with the corresponding depth information.
  • this completes the alignment of the two-dimensional image information and the depth information for one pixel; performing the above process for each pixel completes the registration process.
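The per-pixel registration described above can be sketched as follows, assuming the pre-calibrated intrinsic matrices K_ir and K_rgb and the depth-to-RGB extrinsics R and T are available as NumPy arrays; the variable names mirror the text (P_ir, P_rgb, p_rgb) rather than any particular SDK:

```python
import numpy as np

def register_depth_pixel(u, v, depth, K_ir, K_rgb, R, T):
    """Map one depth pixel (u, v, depth) into RGB pixel coordinates."""
    # Back-project the pixel into the depth-sensor coordinate system: P_ir
    P_ir = depth * (np.linalg.inv(K_ir) @ np.array([u, v, 1.0]))
    # Rotate and translate into the RGB camera coordinate system: P_rgb
    P_rgb = R @ P_ir + T
    # Project with the RGB internal parameter matrix (H_rgb in the text); normalize by depth
    p_rgb = K_rgb @ P_rgb
    x0, y0 = p_rgb[0] / p_rgb[2], p_rgb[1] / p_rgb[2]
    return x0, y0, P_rgb[2]  # pixel coordinates in the RGB image and the associated depth
```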
  • the depth information corresponding to the two-dimensional feature point information can then be determined from the registered depth map, and the two-dimensional feature point information can be combined with this depth information to obtain the three-dimensional feature point information of the current frame image.
  • the depth information can also be denoised to remove obviously wrong depth values in the depth information.
  • a deep neural network may be used to remove noise in the TOF image, which is not particularly limited in this exemplary embodiment.
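A minimal sketch of removing obviously wrong depth values is shown below; the thresholds are assumptions, and the deep-neural-network denoising mentioned above is not reproduced here:

```python
import numpy as np

def clean_depth(depth_map, min_depth=0.1, max_depth=5.0):
    """Zero out depth values that are clearly invalid (range and NaN/inf checks only)."""
    depth = depth_map.astype(np.float32).copy()
    invalid = (depth < min_depth) | (depth > max_depth) | ~np.isfinite(depth)
    depth[invalid] = 0.0  # mark unusable pixels so later steps can skip them
    return depth
```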
  • the image parameters of the reference image may include two-dimensional feature point information and three-dimensional feature point information of the reference image.
  • the second device can generate two-dimensional feature point information and three-dimensional feature point information of each frame of image or key frame image.
  • when the first device is in the scene mapped by the second device and performs an augmented reality processing operation, the two-dimensional feature point information and the three-dimensional feature point information of these images can be acquired.
  • the reference image described in the present disclosure is each frame image or key frame image generated by the second device in the mapping.
  • the second device can send the two-dimensional feature point information and three-dimensional feature point information of the reference image to the first device through Bluetooth, hotspot, WiFi, mobile network, etc.
  • the second device may also send the two-dimensional feature point information and the three-dimensional feature point information of the reference image to the cloud server, so that the first device can obtain the two-dimensional feature point information and the three-dimensional feature point information of the reference image from the cloud server.
  • the order of step S42 and step S44 can be interchanged; that is, the solution of the present disclosure may also execute step S44 first and then execute step S42.
  • the pose of the current frame image relative to the second device can then be determined, that is, the pose of the current frame image in the second device coordinate system can be determined.
  • the present disclosure provides three implementation manners, which will be described one by one below.
  • the relationship between the two-dimensional feature point information of the current frame image and the two-dimensional feature point information of the reference image can be determined through feature matching or descriptor matching. If it is determined that the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, the Iterative Closest Point (ICP) method can be used to determine the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image.
  • the three-dimensional feature point information of the current frame image is the point cloud information corresponding to the current frame image
  • the three-dimensional feature point information of the reference image is the point cloud information of the reference image.
  • the two sets of point cloud information can be used as input, and with a specified pose as the initial value, the optimal relative pose after aligning the two point clouds is obtained by the iterative closest point method; that is, the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image is determined.
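A hedged sketch of this ICP alignment using Open3D (a library assumption; the disclosure only specifies the iterative closest point method) might look like:

```python
import numpy as np
import open3d as o3d

def icp_relative_pose(points_current, points_reference, init_pose=np.eye(4)):
    """Align the current frame's 3D feature points to the reference image's 3D feature points."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_current))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_reference))
    result = o3d.pipelines.registration.registration_icp(
        src, dst,
        max_correspondence_distance=0.05,   # assumed threshold, in meters
        init=init_pose,                     # the specified initial pose
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 pose of the current frame in the reference frame
```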
  • in this embodiment, the relationship between the two-dimensional information is determined first. Because determining the two-dimensional relationship usually relies on feature matching or descriptor matching, the process is simple; as a result, the entire matching process can be accelerated, the accuracy can be improved, and erroneous matches can be screened out in advance.
  • the exemplary embodiment of the present disclosure may also include a solution for removing mismatched points.
  • the RANSAC (Random Sample Consensus) method can be used to eliminate mismatched feature point information. Specifically, a certain number of matching pairs (for example, 7 pairs, 8 pairs, etc.) are randomly selected from the matching pairs between the two-dimensional feature points of the current frame image and the two-dimensional feature points of the reference image, and the fundamental matrix or essential matrix between the current frame image and the reference image is calculated from the selected matching pairs. Based on the epipolar constraint, if a two-dimensional feature point is far from the corresponding epipolar line, for example, farther than a threshold, the two-dimensional feature point can be considered a mismatched point. By iterating the random sampling process a certain number of times, the random sampling result with the largest number of inliers is selected as the final matching result. On this basis, the mismatched feature point information can be eliminated from the three-dimensional feature point information of the current frame image.
  • the three-dimensional feature point information from which the mismatched feature point information is eliminated can be used to determine the pose of the current frame image relative to the second device.
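The mismatch rejection described above could be sketched with OpenCV's RANSAC-based fundamental-matrix estimation, consistent with the epipolar-constraint description; the threshold values are assumptions:

```python
import cv2
import numpy as np

def filter_matches_ransac(pts_cur_2d, pts_ref_2d, pts_cur_3d, epipolar_threshold=1.0):
    """Keep matches consistent with the epipolar constraint; drop the rest from the 3D points too."""
    _, inlier_mask = cv2.findFundamentalMat(
        pts_cur_2d, pts_ref_2d, cv2.FM_RANSAC, epipolar_threshold, 0.99)
    inliers = inlier_mask.ravel().astype(bool)
    return pts_cur_2d[inliers], pts_ref_2d[inliers], pts_cur_3d[inliers]
```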
  • in another embodiment, if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, the two-dimensional feature point information of the current frame image is associated with the three-dimensional feature point information of the reference image to obtain point pair information.
  • the point pair information can be used as input to solve the Perspective-n-Point (PnP) problem.
  • PnP is a classic method in the field of machine vision, which can determine the relative pose between the camera and the object according to n feature points on the object. Specifically, the rotation matrix and translation vector between the camera and the object can be determined according to the n feature points on the object. In addition, n may be determined to be 4 or more, for example.
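A minimal PnP sketch with OpenCV is shown below, assuming the 2D feature points of the current frame have already been associated with the matching 3D feature points of the reference image and that the intrinsic matrix K of the first device's camera is known:

```python
import cv2
import numpy as np

def solve_pnp_pose(ref_points_3d, cur_points_2d, K, dist_coeffs=None):
    """Solve the perspective-n-point problem from 2D-3D point pair information."""
    dist = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec = cv2.solvePnP(ref_points_3d.astype(np.float64),
                                  cur_points_2d.astype(np.float64),
                                  K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix between the camera and the mapped points
    return ok, R, tvec
```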
  • in yet another embodiment, the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image obtained from the PnP solution result of the previous embodiment can be used as the initial pose input for iteration, and the iterative closest point method then determines the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image, so as to determine the pose of the current frame image relative to the second device. It is easy to see that, in this embodiment, PnP is combined with ICP to improve the accuracy of determining the pose relationship.
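This combined scheme can be sketched by packing the PnP result into a 4x4 transform and passing it to the ICP routine as the initial pose; solve_pnp_pose and icp_relative_pose are the illustrative helpers from the earlier sketches, not APIs from the disclosure:

```python
import numpy as np

def refine_pose_pnp_then_icp(ref_points_3d, cur_points_2d, K, cur_cloud_3d, ref_cloud_3d):
    """Use the PnP estimate as the initial pose for ICP refinement."""
    ok, R, tvec = solve_pnp_pose(ref_points_3d, cur_points_2d, K)
    T_cur_from_ref = np.eye(4)
    T_cur_from_ref[:3, :3] = R
    T_cur_from_ref[:3, 3] = tvec.ravel()
    # PnP maps reference points into the current camera frame; the ICP sketch above aligns
    # the current cloud to the reference cloud, so the inverse is used as its initial pose.
    init_pose = np.linalg.inv(T_cur_from_ref)
    return icp_relative_pose(cur_cloud_3d, ref_cloud_3d, init_pose=init_pose)
```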
  • the inertial measurement unit of the first device can obtain the inertial information of the first device, and thereby, the 6DOF (6 Degrees Of Freedom) attitude information of the first device can be obtained. Based on the posture information of the first device and the posture of the current frame image relative to the second device determined in step S46, the relative posture relationship between the first device and the second device can be obtained.
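The final composition of poses can be illustrated as plain 4x4 matrix algebra; the world-from-camera convention used here is an assumption:

```python
import numpy as np

def device_relative_pose(T_second_from_cam, T_first_from_cam):
    """Relative pose taking first-device coordinates into second-device coordinates.

    T_second_from_cam: pose of the current frame's camera in the second device's map frame
                       (the result of step S46).
    T_first_from_cam:  the same camera pose expressed in the first device's own frame
                       (the 6DOF posture information from the IMU / SLAM unit).
    """
    return T_second_from_cam @ np.linalg.inv(T_first_from_cam)
```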
  • the first device may extract the two-dimensional feature point information of the current frame image.
  • the DOG feature point detection algorithm and the FREAK feature point descriptor may be used for feature extraction.
  • the first device can obtain the depth information input by the TOF module; in step S506, the two-dimensional feature point information can be registered with the depth information to obtain the point cloud data of the current frame image.
  • the first device may determine whether the two-dimensional feature point information determined in step S502 matches the two-dimensional feature point information of the reference image; if it matches, the step of determining the three-dimensional point cloud data of the reference image in step S510 is executed. If it does not match, the process returns to step S502, and the feature extraction process of the next frame image can be executed, or the feature extraction process of the current frame image can be executed again.
  • the ICP may be used to determine the relative pose of the point cloud of the current frame image and the point cloud of the reference image, and then determine the pose of the current frame image in the second device coordinate system.
  • the inertial measurement unit can be used to determine the posture information of the first device; in step S516, the relative pose of the first device and the second device can be determined based on the pose of the current frame image in the second device coordinate system and the posture information of the first device.
  • the first device may perform augmented reality processing operations based on the relative pose relationship.
  • the first device may obtain the anchor point information configured by the second device in the scene.
  • the anchor point information may include, but is not limited to, attribute information (color, size, type, etc.), identification, position, and posture of the virtual object.
  • based on the anchor point information, the first device can adjust the virtual object to the corresponding position and display it.
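Purely as an illustration, the anchor point information and its transfer into the first device's coordinate system could be organized as below; the field names and layout are hypothetical, not the disclosure's data format:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Anchor:
    anchor_id: str               # identification of the virtual object
    color: str                   # attribute information
    size: float
    obj_type: str
    pose_in_second: np.ndarray   # 4x4 position and posture in the second device's frame

def anchor_pose_in_first_device(anchor: Anchor, T_second_from_first: np.ndarray):
    """Express the anchor's pose in the first device's coordinate system."""
    return np.linalg.inv(T_second_from_first) @ anchor.pose_in_second
```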
  • processing operations of augmented reality may also include rendering operations on real objects.
  • the first device may also display the real object after color rendering.
  • although the above-mentioned augmented reality processing method is described by taking one terminal device as an example, in some scenarios the method may be applied to multiple terminal devices.
  • because depth information is less affected by the environment, the problem of a poor relocation effect caused by the surrounding texture, illumination, viewing angle, and other factors is overcome, the robustness of multi-person AR relocation is improved, and the multi-person AR experience is thereby enhanced.
  • this exemplary embodiment also provides an augmented reality processing device.
  • FIG. 6 schematically shows a block diagram of an augmented reality processing apparatus according to an exemplary embodiment of the present disclosure.
  • the augmented reality processing device 6 may include a first image parameter determination module 61, a second image parameter determination module 63, a first relative pose determination module 65, and a second relative pose determination module 67.
  • the first image parameter determination module 61 may be used to determine the image parameters of the current frame image of the first device; the second image parameter determination module 63 may be used to obtain the image parameters of the reference image determined by the second device;
  • the first relative pose determination module 65 can be used to determine the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image;
  • the second relative pose determination module 67 can be used to determine the relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and the posture information of the first device when the current frame image is collected.
  • with the augmented reality processing apparatus, by determining the pose of the current frame image relative to the second device and combining it with the posture information of the first device when the current frame image is collected, the relative pose relationship between the first device and the second device is determined; the relocation effect is good, and the scheme is universal and easy to implement.
  • the image parameters of the current frame image include two-dimensional feature point information and three-dimensional feature point information of the current frame image.
  • the first image parameter determination module 61 may be configured to: acquire the current frame image and perform feature extraction on the current frame image to determine the two-dimensional feature point information of the current frame image; and acquire the depth information corresponding to the two-dimensional feature point information, and determine the three-dimensional feature point information of the current frame image according to the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information.
  • the process of determining the three-dimensional feature point information of the current frame image by the first image parameter determination module 61 may include: acquiring the depth information corresponding to the current frame image collected by the depth sensing module; registering the current frame image with the depth information corresponding to the current frame image to determine the depth information of each pixel of the current frame image; determining the depth information corresponding to the two-dimensional feature point information from the depth information of each pixel of the current frame image; and determining the three-dimensional feature point information of the current frame image from the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information.
  • the image parameters of the reference image include two-dimensional feature point information and three-dimensional feature point information of the reference image.
  • the first relative pose determination module 65 may include a first relative pose determining unit 701.
  • the first relative pose determining unit 701 may be configured to execute: if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, the iterative closest point method is used to determine the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image, so as to obtain the pose of the current frame image relative to the second device.
  • the first relative pose determining unit 701 may further be configured to perform: before determining the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image, determining the mismatched feature point information in the two-dimensional feature point information of the current frame image and the two-dimensional feature point information of the reference image, and removing the mismatched feature point information from the three-dimensional feature point information of the current frame image, so as to determine the three-dimensional feature point information of the current frame image from which the mismatched feature point information has been eliminated.
  • the image parameters of the reference image include two-dimensional feature point information and three-dimensional feature point information of the reference image.
  • the first relative pose determination module 65 may include a second relative pose determining unit 801.
  • the second relative pose determining unit 801 may be configured to execute: if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, associating the two-dimensional feature point information of the current frame image with the three-dimensional feature point information of the reference image to obtain point pair information; and using the point pair information to solve the perspective-n-point problem, and determining the pose of the current frame image relative to the second device according to the three-dimensional feature point information of the current frame image and the solution result.
  • the process of the second relative pose determining unit 801 determining the pose of the current frame image relative to the second device in combination with the solution result may include: determining, according to the solution result, the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image; using the relative pose relationship determined according to the solution result as the initial pose input, and using the iterative closest point method to determine the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image, so as to determine the pose of the current frame image relative to the second device.
  • the augmented reality processing device 9 may further include a virtual object display module 91.
  • the virtual object display module 91 may be configured to execute: obtaining the anchor point information configured by the second device, so as to display the virtual object corresponding to the anchor point information on the first device based on the relative pose relationship between the first device and the second device.
  • the example embodiments described here can be implemented by software, or by combining software with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, U disk, mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute the method according to the embodiments of the present disclosure.
  • although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides an augmented reality processing method, an augmented reality processing device, an augmented reality processing system, a storage medium, and an electronic device, and relates to the technical field of augmented reality. The augmented reality processing method comprises: determining image parameters of a current frame image of a first device; acquiring image parameters of a reference image determined by a second device; determining a pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image; and determining the relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and the pose information of the first device when the current frame image is acquired, so as to perform an augmented reality processing operation by using the relative pose relationship between the first device and the second device. The present application can improve the relocation effect.

Description

Augmented reality processing method and device, system, storage medium and electronic equipment
Cross-reference to related applications
This application claims priority to Chinese patent application No. 201911055871.2, filed on October 31, 2019 and titled "Augmented reality processing method and device, system, storage medium and electronic equipment", the entire content of which is incorporated herein by reference.
Technical field
The present disclosure relates to the field of augmented reality technology, and in particular to an augmented reality processing method, an augmented reality processing device, an augmented reality processing system, a storage medium, and an electronic device.
Background
Augmented Reality (AR) is a technology that integrates the virtual world and the real world. It has been widely applied in education, games, medical care, the Internet of Things, intelligent manufacturing, and other fields.
In multi-person AR solutions, the relocation effect plays a crucial role in the AR experience. However, because the shooting angles of the mapping device and the relocation device are not the same, feature mismatches may occur in the process of determining the pose relationship between the mapping device and the relocation device, resulting in a poor relocation result.
Summary of the invention
According to a first aspect of the present disclosure, an augmented reality processing method applied to a first device is provided. The method includes: determining image parameters of a current frame image of the first device; acquiring image parameters of a reference image determined by a second device; determining the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image; and determining the relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and the posture information of the first device when the current frame image was collected, so that the relative pose relationship between the first device and the second device can be used to perform augmented reality processing operations.
According to a second aspect of the present disclosure, an augmented reality processing device applied to a first device is provided. The augmented reality processing device includes a first image parameter determination module, a second image parameter determination module, a first relative pose determination module, and a second relative pose determination module.
Specifically, the first image parameter determination module is used to determine the image parameters of the current frame image of the first device; the second image parameter determination module is used to obtain the image parameters of the reference image determined by the second device; the first relative pose determination module is used to determine the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image; and the second relative pose determination module is used to determine the relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and the posture information of the first device when the current frame image was collected, so that the relative pose relationship between the first device and the second device can be used to perform augmented reality processing operations.
According to a third aspect of the present disclosure, an augmented reality processing system applied to a first device is provided. The augmented reality processing system includes a camera module, a depth sensing module, an inertial measurement unit, a real-time positioning and map construction unit, and an augmented reality processing device.
Specifically, the camera module is used to collect the current frame image; the depth sensing module is used to collect depth information corresponding to the current frame image; the inertial measurement unit is used to measure inertial information of the first device; the real-time positioning and map construction unit is used to obtain the current frame image and the inertial information and to generate posture information of the first device based on the inertial information; and the augmented reality processing device is used to determine the image parameters of the current frame image, obtain the image parameters of the reference image determined by the second device, determine the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image, and determine the relative pose relationship between the first device and the second device in combination with the posture information of the first device when the current frame image was collected.
According to a fourth aspect of the present disclosure, a storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the above-mentioned augmented reality processing method is implemented.
According to a fifth aspect of the present disclosure, an electronic device is provided, which includes a processor and a memory for storing executable instructions of the processor; the processor is configured to execute the above-mentioned augmented reality processing method by executing the executable instructions.
Description of the drawings
FIG. 1 shows a schematic diagram of a scenario architecture suitable for implementing exemplary embodiments of the present disclosure;
FIG. 2 shows a schematic structural diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present invention;
FIG. 3 schematically shows an architecture diagram of an augmented reality processing system according to an exemplary embodiment of the present disclosure;
FIG. 4 schematically shows a flowchart of an augmented reality processing method according to an exemplary embodiment of the present disclosure;
FIG. 5 schematically shows a flowchart of determining the relative pose relationship between the first device and the second device by using the iterative closest point method according to the present disclosure;
FIG. 6 schematically shows a block diagram of an augmented reality processing device according to an exemplary embodiment of the present disclosure;
FIG. 7 schematically shows a block diagram of a first relative pose determination module according to an exemplary embodiment of the present disclosure;
FIG. 8 schematically shows a block diagram of a first relative pose determination module according to another exemplary embodiment of the present disclosure;
FIG. 9 schematically shows a block diagram of an augmented reality processing device according to another exemplary embodiment of the present disclosure.
Detailed description of embodiments
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in various forms and should not be construed as being limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be more comprehensive and complete and so that the concept of the example embodiments will be fully conveyed to those skilled in the art. The described features, structures, or characteristics can be combined in one or more embodiments in any suitable way. In the following description, many specific details are provided to give a thorough understanding of the embodiments of the present disclosure. However, those skilled in the art will realize that the technical solutions of the present disclosure can be practiced while omitting one or more of the specific details, or that other methods, components, devices, steps, etc. can be used. In other cases, well-known technical solutions are not shown or described in detail, so as to avoid obscuring aspects of the present disclosure.
In addition, the drawings are only schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the figures denote the same or similar parts, and their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in the form of software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flowcharts shown in the drawings are only exemplary and do not necessarily include all the steps. For example, some steps can be decomposed, and some steps can be combined or partially combined, so the actual execution order may change according to actual conditions.
FIG. 1 shows a schematic diagram of a scenario architecture suitable for implementing exemplary embodiments of the present disclosure.
As shown in FIG. 1, the architecture of the augmented reality processing solution of the exemplary embodiments of the present disclosure may include a first device 1001 and a second device 1002. The second device 1002 is used to map the scene in which it is located, and the first device 1001 is a terminal device that is currently to perform augmented reality processing operations in that scene.
The first device 1001 and the second device 1002 may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, portable computers, smart wearable devices and the like.
The first device 1001 and the second device 1002 may be communicatively connected. Specifically, the connection may be established via Bluetooth, a hotspot, WiFi, a mobile network or the like, so that the second device 1002 can transmit data directly to the first device 1001 without the data passing through a server.
For the mapping process of the second device 1002, video frame images can be acquired by the camera module of the second device 1002, and depth information corresponding to each video frame image can be acquired by the depth sensing module. Thus, for each frame image, the second device 1002 can determine two-dimensional feature point information and three-dimensional feature point information. When mapping is performed using Simultaneous Localization and Mapping (SLAM), the second device 1002 can send the two-dimensional feature point information and the three-dimensional feature point information of each frame image or of key frame images to the first device 1001.
When the first device 1001 performs augmented reality processing operations, the first device 1001 can acquire the current frame image captured by its own camera module and, based on the corresponding depth information, determine the two-dimensional feature point information and the three-dimensional feature point information of the current frame image. Subsequently, the first device 1001 can match the two-dimensional feature point information and the three-dimensional feature point information of the current frame image with the two-dimensional feature point information and the three-dimensional feature point information of the images determined by the second device 1002, and determine the pose of the current frame image relative to the second device 1002 based on the matching result. Next, the first device 1001 can determine the relative pose relationship between the first device 1001 and the second device 1002 by combining its own posture information. In this case, if the second device 1002 configured anchor point information when creating the map, the first device 1001 can obtain the anchor point information, so that the first device 1001 and the second device 1002 can display a virtual object at the same position in the scene and perform other augmented reality processing operations.
In addition to direct data communication between the first device 1001 and the second device 1002, the second device 1002 can also send information to the first device 1001 by means of a server 1003. The server 1003 may be a cloud server.
Specifically, the second device 1002 can send the two-dimensional feature point information and the three-dimensional feature point information of each frame image in the mapping process to the server 1003, and can also send the configured anchor point information to the server 1003. When the first device 1001 is in the scene mapped by the second device 1002, the first device 1001 can obtain, from the server 1003, the two-dimensional feature point information and the three-dimensional feature point information of each frame image in the mapping process, and match them with the information of the current frame image to determine the relative pose relationship between the first device 1001 and the second device 1002.
It should be understood that the numbers of terminal devices and servers in FIG. 1 are merely illustrative, and any number of terminal devices and servers may be provided according to implementation needs. For example, the server 1003 may be a server cluster composed of multiple servers. In addition, multi-person AR interaction involving three or more users can also be realized.
For convenience of the following description, the terminal device used for mapping is described as the second device, and the terminal device currently used to perform augmented reality processing operations is described as the first device, so as to distinguish between them. It should be understood that in some scenarios the second device may act as the terminal device currently performing the processing operations, and in some scenarios the first device may also be the terminal device performing the mapping.
It should be noted that the augmented reality processing method of the exemplary embodiments of the present disclosure is executed by the first device; accordingly, the augmented reality processing apparatus and the augmented reality processing system described below may be configured in the first device.
FIG. 2 shows a schematic structural diagram of a computer system suitable for implementing an electronic device according to exemplary embodiments of the present disclosure. That is, FIG. 2 exemplarily shows a schematic diagram of the computer structure of the above first device.
It should be noted that the computer system 200 of the electronic device shown in FIG. 2 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 2, the computer system 200 includes a central processing unit (CPU) 201, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 202 or a program loaded from a storage portion 208 into a random access memory (RAM) 203. The RAM 203 also stores various programs and data required for system operation. The CPU 201, the ROM 202 and the RAM 203 are connected to one another through a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.
The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse and the like; an output portion 207 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage portion 208 including a hard disk and the like; and a communication portion 209 including a network interface card such as a LAN card or a modem. The communication portion 209 performs communication processing via a network such as the Internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 210 as needed, so that a computer program read therefrom is installed into the storage portion 208 as needed.
In particular, according to the embodiments of the present disclosure, the processes described below with reference to the flowcharts can be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 209, and/or installed from the removable medium 211. When the computer program is executed by the central processing unit (CPU) 201, various functions defined in the system of the present application are executed.
It should be noted that the computer-readable medium shown in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and it may send, propagate or transmit the program for use by or in combination with the instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment or part of code contains one or more executable instructions for realizing the specified logic function. It should also be noted that in some alternative implementations the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in a block diagram or flowchart, and a combination of blocks in a block diagram or flowchart, can be implemented by a dedicated hardware-based system that performs the specified function or operation, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or hardware, and the described units may also be provided in a processor. In some cases, the names of these units do not constitute a limitation on the units themselves.
As another aspect, the present application further provides a computer-readable medium. The computer-readable medium may be included in the electronic device described in the above embodiments, or it may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to implement the method described in the following embodiments.
The augmented reality processing system of the exemplary embodiments of the present disclosure will be described below with reference to FIG. 3.
Referring to FIG. 3, the augmented reality processing system of the exemplary embodiments of the present disclosure may include an inertial measurement unit 31, a camera module 32, a depth sensing module 33, a simultaneous localization and mapping unit 34, and an augmented reality processing apparatus 35.
The inertial measurement unit 31 may include a gyroscope and an accelerometer, which respectively measure the angular velocity and the acceleration of the first device, so as to determine the inertial information of the first device.
The camera module 32 may be used to collect video frame images, where the video frame images are RGB images. During execution of the augmented reality processing described below, the camera module 32 can acquire the current frame image for subsequent processing.
The depth sensing module 33 may be used to collect depth information. Specifically, the depth sensing module may be a dual-camera module, a structured light module, or a TOF (Time-of-Flight) module. The present disclosure does not impose special restrictions on this.
The simultaneous localization and mapping unit 34 may be used to obtain the inertial information sent by the inertial measurement unit 31 and the images sent by the camera module 32, and to perform the mapping and relocalization processes.
The augmented reality processing apparatus 35 can obtain the current frame image sent by the simultaneous localization and mapping unit 34, determine the image parameters of the current frame image, obtain the image parameters of the reference image determined by the second device, determine the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image, and determine the relative pose relationship between the first device and the second device in combination with the posture information of the first device when the current frame image was captured.
Specifically, the augmented reality processing apparatus 35 can obtain the current frame image and the corresponding posture information sent by the simultaneous localization and mapping unit 34; extract the two-dimensional feature point information of the current frame image; obtain the depth information corresponding to the current frame image from the depth sensing module 33, and determine the three-dimensional feature point information of the current frame image according to the depth information corresponding to the two-dimensional feature point information; determine the pose of the current frame image relative to the second device by using the two-dimensional feature point information and the three-dimensional feature point information of the current frame image as well as the two-dimensional feature point information and the three-dimensional feature point information of the images determined by the second device; and determine the relative pose relationship between the first device and the second device in combination with the posture information of the first device, so that augmented reality processing operations can be performed by using the relative pose relationship between the first device and the second device.
In addition, the augmented reality processing apparatus 35 may further include an anchor point acquisition module. The anchor point acquisition module is used to acquire the anchor point information configured by the second device, so that a virtual object corresponding to the anchor point information is displayed on the first device based on the relative pose relationship between the first device and the second device.
In addition, the first device can also add anchor point information to the scene, so that other devices can display it and perform interactive operations. In this case, the augmented reality processing system may further include an anchor point adding unit. The anchor point adding unit may be used to add anchor point information in the scene where the first device is located. Specifically, the anchor point adding unit may include an application program 36 as shown in FIG. 3, and a user holding the first device can add anchor point information by means of the application program 36.
It should be noted that the second device involved in the exemplary embodiments of the present disclosure may also have the system architecture shown in FIG. 3.
The augmented reality processing method of the exemplary embodiments of the present disclosure will be described below. Referring to FIG. 4, the augmented reality processing method may include the following steps:
S42. Determine image parameters of the current frame image of the first device.
In the exemplary embodiments of the present disclosure, the image parameters of the current frame image may include two-dimensional feature point information and three-dimensional feature point information of the current frame image.
The first device can obtain the current frame image captured by the camera module and perform feature extraction on the current frame image to determine the two-dimensional feature point information of the current frame image. Specifically, the two-dimensional feature point information of the current frame image can be extracted based on a combination of a feature extraction algorithm and a feature descriptor.
The feature extraction algorithm adopted in the exemplary embodiments of the present disclosure may include, but is not limited to, the FAST feature point detection algorithm, the DOG feature point detection algorithm, the Harris feature point detection algorithm, the SIFT feature point detection algorithm, the SURF feature point detection algorithm, and the like. The feature descriptor may include, but is not limited to, the BRIEF feature point descriptor, the BRISK feature point descriptor, the FREAK feature point descriptor, and the like.
According to an embodiment of the present disclosure, the combination of the feature extraction algorithm and the feature descriptor may be the FAST feature point detection algorithm and the BRIEF feature point descriptor. According to other embodiments of the present disclosure, the combination of the feature extraction algorithm and the feature descriptor may be the DOG feature point detection algorithm and the FREAK feature point descriptor.
It should be understood that different combinations can be adopted for scenes with different textures. For example, for strongly textured scenes, the FAST feature point detection algorithm and the BRIEF feature point descriptor can be used for feature extraction; for weakly textured scenes, the DOG feature point detection algorithm and the FREAK feature point descriptor can be used for feature extraction.
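Merely as an illustrative sketch and not as part of the claimed method, such a feature extraction step could be implemented with the OpenCV library roughly as follows, where ORB (a FAST corner detector with a BRIEF-style binary descriptor) stands in for the FAST + BRIEF combination and SIFT (a DoG-based detector) stands in for the DOG-based combination suggested for weakly textured scenes; the function choices and parameter values are assumptions for illustration.

```python
import cv2

def extract_2d_features(image_bgr, strong_texture=True):
    """Extract 2D feature points (keypoints) and descriptors from the current frame image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    if strong_texture:
        # ORB: FAST corners with a BRIEF-style binary descriptor, suited to strongly textured scenes.
        detector = cv2.ORB_create(nfeatures=1000)
    else:
        # SIFT: a DoG-based detector, more robust on weakly textured scenes.
        detector = cv2.SIFT_create(nfeatures=1000)
    keypoints, descriptors = detector.detectAndCompute(gray, None)
    # keypoint.pt gives the (u, v) pixel coordinates of each 2D feature point.
    return keypoints, descriptors
```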
In addition, the first device may perform the process of acquiring the current frame image and extracting the two-dimensional features in response to a user operation. For example, when the user starts an AR application, the first device may, in response to the AR application startup operation, turn on the camera module, acquire the current frame image captured by the camera module, and extract the two-dimensional feature point information.
When the two-dimensional feature point information of the current frame image has been determined, the three-dimensional feature point information of the current frame image can be determined by combining the depth information corresponding to the two-dimensional feature point information.
Specifically, when the current frame image is acquired, the depth information corresponding to the current frame image can be collected by the depth sensing module. The depth sensing module may be any one of a dual-camera module (for example, a color camera and a telephoto camera), a structured light module, or a TOF module.
After the current frame image and the corresponding depth information are obtained, the current frame image and the depth information can be registered to determine the depth information of each pixel in the current frame image.
For the registration process, the intrinsic and extrinsic parameters of the camera module and the depth sensing module need to be calibrated in advance.
Specifically, a three-dimensional vector p_ir = (x, y, z) can be constructed, where x and y denote the pixel coordinates of a pixel and z denotes the depth value of that pixel. Using the intrinsic matrix of the depth sensing module, the coordinate P_ir of the pixel in the coordinate system of the depth sensing module can be obtained. P_ir can then be multiplied by a rotation matrix R and a translation vector T can be added, so that P_ir is transformed into the coordinate system of the RGB camera to obtain P_rgb. Subsequently, P_rgb can be multiplied by the intrinsic matrix H_rgb of the camera module to obtain p_rgb, which is also a three-dimensional vector, denoted (x0, y0, z0), where x0 and y0 are the pixel coordinates of that pixel in the RGB image; the pixel value at that position is extracted and matched with the corresponding depth information. In this way, the alignment of the two-dimensional image information and the depth information of one pixel is completed. The above process is performed for every pixel to complete the registration.
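As an illustrative sketch of the registration described above, one depth pixel could be projected into the RGB image as follows, assuming the depth-module intrinsic matrix (named H_ir here, an assumed name), the RGB camera intrinsic matrix H_rgb, and the rotation R and translation T between the two modules are known from the calibration; the normalization by the third component is made explicit here for clarity.

```python
import numpy as np

def register_depth_pixel(u, v, z, H_ir, H_rgb, R, T):
    """Project a depth pixel (u, v) with depth value z into the RGB image plane."""
    # Back-project the pixel into the depth-module coordinate system to obtain P_ir.
    P_ir = np.linalg.inv(H_ir) @ (np.array([u, v, 1.0]) * z)
    # Rotate and translate into the RGB camera coordinate system to obtain P_rgb.
    P_rgb = R @ P_ir + T
    # Project with the RGB intrinsic matrix and normalize to pixel coordinates (x0, y0).
    p_rgb = H_rgb @ P_rgb
    x0, y0 = p_rgb[0] / p_rgb[2], p_rgb[1] / p_rgb[2]
    return x0, y0, P_rgb[2]  # pixel position in the RGB image and the associated depth
```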
After the depth information of each pixel in the current frame image is determined, the depth information corresponding to the two-dimensional feature point information can be determined from it, and the two-dimensional feature point information can be combined with the corresponding depth information to determine the three-dimensional feature point information of the current frame image.
In addition, after the depth information is acquired from the depth sensing module, the depth information can also be denoised to remove obviously wrong depth values. For example, a deep neural network may be used to remove noise in the TOF image, which is not particularly limited in this exemplary embodiment.
S44. Acquire image parameters of a reference image determined by the second device.
In the exemplary embodiments of the present disclosure, the image parameters of the reference image may include two-dimensional feature point information and three-dimensional feature point information of the reference image.
During the mapping process, the second device can generate the two-dimensional feature point information and the three-dimensional feature point information of each frame image or of key frame images. When the first device is in the scene mapped by the second device and is to perform augmented reality processing operations, the two-dimensional feature point information and the three-dimensional feature point information of these images can be acquired. It should be understood that the reference image described in the present disclosure is one of the frame images or key frame images generated by the second device during mapping.
For the process of acquiring the image parameters of the reference image, in one embodiment the second device can send the two-dimensional feature point information and the three-dimensional feature point information of the reference image to the first device via Bluetooth, a hotspot, WiFi, a mobile network, or the like. In another embodiment, the second device can send the two-dimensional feature point information and the three-dimensional feature point information of the reference image to a cloud server, so that the first device can obtain them from the cloud server.
In addition, the execution order of step S42 and step S44 can be interchanged; that is, in the solution of the present disclosure, step S44 may be executed first and step S42 executed afterwards.
S46. Determine a pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image.
After the image parameters of the current frame image and the image parameters of the reference image are determined, the pose of the current frame image relative to the second device can be determined; that is, the pose of the current frame image in the coordinate system of the second device can be determined. For this process, the present disclosure provides three implementations, which are described one by one below.
According to an embodiment of the present disclosure, the relationship between the two-dimensional feature point information of the current frame image and the two-dimensional feature point information of the reference image can be determined by feature matching or descriptor matching. If it is determined that the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image can be determined by means of iterative closest point (ICP).
Specifically, the three-dimensional feature point information of the current frame image is the point cloud information corresponding to the current frame image, and the three-dimensional feature point information of the reference image is the point cloud information of the reference image. These two point clouds can be taken as input, and, with a specified pose as the initial value, the optimal relative pose after aligning the two point clouds is obtained by iterative closest point; that is, the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image is determined. Then, based on the posture information of the second device when it acquired the reference image, the pose of the current frame image relative to the second device can be determined.
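A minimal sketch of this point cloud alignment is given below, using the Open3D library as one possible implementation; the library choice, the correspondence distance and the point-to-point estimation method are assumptions made only for illustration.

```python
import numpy as np
import open3d as o3d

def align_point_clouds(points_current, points_reference, init_pose=None, max_dist=0.05):
    """Align the current frame's 3D feature points to the reference image's 3D feature points.

    points_current, points_reference: (N, 3) arrays of 3D feature points (point clouds).
    init_pose: 4x4 initial guess of the relative pose (identity if none is specified).
    Returns the 4x4 transformation mapping the current points onto the reference points.
    """
    if init_pose is None:
        init_pose = np.eye(4)
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_current))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_reference))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, init_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```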
It should be understood that the relationship between the two-dimensional information is determined before the point cloud matching is performed. Since the two-dimensional relationship is usually determined by feature matching or descriptor matching, the process is simple. This can accelerate the whole matching process and, while improving accuracy, also achieve the effect of rejecting erroneous matches in advance.
In addition, in the above matching process of the two-dimensional feature point information, mismatches may occur due to limitations of the features and the descriptors. Therefore, the exemplary embodiments of the present disclosure may also include a solution for removing mismatched points.
The RANSAC (Random Sample Consensus) method can be used to eliminate mismatched feature point information. Specifically, a certain number of matching pairs (for example, 7 pairs, 8 pairs, or the like) are randomly selected from the matching pairs between the two-dimensional feature points of the current frame image and the two-dimensional feature points of the reference image, and the fundamental matrix or essential matrix between the current frame image and the reference image is calculated from the selected matching pairs. Based on the epipolar constraint, if a two-dimensional feature point is far from the corresponding epipolar line, for example farther than a threshold, the point can be regarded as a mismatched point. By iterating the random sampling process a certain number of times, the sampling result with the largest number of inliers is selected as the final matching result. On this basis, the mismatched feature point information can be removed from the three-dimensional feature point information of the current frame image.
Thus, the pose of the current frame image relative to the second device can be determined by using the three-dimensional feature point information from which the mismatched feature point information has been removed.
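As an illustrative sketch, the RANSAC-based rejection of mismatched feature points could be performed with OpenCV's fundamental-matrix estimation as follows; the threshold and confidence values are assumptions rather than values prescribed by the present disclosure.

```python
import cv2
import numpy as np

def filter_matches_ransac(pts_current, pts_reference, threshold=3.0):
    """Reject mismatched 2D feature point pairs with a RANSAC epipolar check.

    pts_current, pts_reference: (N, 2) arrays of matched pixel coordinates in the
    current frame image and the reference image. Points lying farther from their
    corresponding epipolar line than the threshold are treated as mismatches.
    """
    pts_current = np.asarray(pts_current, dtype=np.float64)
    pts_reference = np.asarray(pts_reference, dtype=np.float64)
    # Estimate the fundamental matrix with RANSAC; the mask marks inlier matches.
    F, mask = cv2.findFundamentalMat(pts_current, pts_reference,
                                     cv2.FM_RANSAC, threshold, 0.99)
    inliers = mask.ravel().astype(bool)
    return pts_current[inliers], pts_reference[inliers], inliers
```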
According to another embodiment of the present disclosure, first, if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, the two-dimensional feature point information of the current frame image is associated with the three-dimensional feature point information of the reference image to obtain point pair information. Next, the point pair information can be taken as input to solve a Perspective-n-Point (PnP) problem, and the pose of the current frame image relative to the second device is determined according to the three-dimensional feature point information of the current frame image in combination with the solution result.
PnP is a classic method in the field of machine vision, which can determine the relative pose between a camera and an object according to n feature points on the object; specifically, the rotation matrix and the translation vector between the camera and the object can be determined from the n feature points on the object. In addition, n may be set to be greater than or equal to 4, for example.
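As an illustrative sketch, the PnP solution could be obtained with OpenCV as follows, assuming the intrinsic matrix K of the first device's camera is available from calibration; the use of a RANSAC variant of the solver is an assumption for robustness, not a requirement of the method.

```python
import cv2
import numpy as np

def solve_pnp_pose(pts3d_reference, pts2d_current, K, dist_coeffs=None):
    """Estimate the pose of the current frame from 2D-3D point pair information.

    pts3d_reference: (N, 3) 3D feature points of the reference image associated with
    the current frame's 2D feature points; pts2d_current: (N, 2) pixel coordinates.
    Returns a 4x4 pose built from the recovered rotation matrix and translation vector.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(4)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d_reference, dtype=np.float64),
        np.asarray(pts2d_current, dtype=np.float64), K, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)  # convert the rotation vector to a rotation matrix
    pose = np.eye(4)
    pose[:3, :3] = R
    pose[:3, 3] = tvec.ravel()
    return pose
```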
According to yet another embodiment of the present disclosure, the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image obtained from the PnP solution result of the previous embodiment can be taken as the initial pose input for iteration, and the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image is then determined by means of iterative closest point, so as to determine the pose of the current frame image relative to the second device. It is easy to see that this embodiment combines PnP with ICP to improve the accuracy of determining the pose relationship.
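Reusing the illustrative helpers sketched above, this combination could be expressed as using the PnP estimate as the initial value of the iterative closest point refinement, for example:

```python
# Hypothetical composition of the two sketches above: the PnP result initializes the ICP refinement.
init_pose = solve_pnp_pose(pts3d_reference, pts2d_current, K)
refined_pose = align_point_clouds(points3d_current, points3d_reference, init_pose=init_pose)
```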
S48. Determine a relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and the posture information of the first device when the current frame image was collected, so that augmented reality processing operations are performed by using the relative pose relationship between the first device and the second device.
The inertial measurement unit of the first device can obtain the inertial information of the first device, from which the 6DOF (6 Degrees of Freedom) posture information of the first device can be obtained. Based on the posture information of the first device and the pose of the current frame image relative to the second device determined in step S46, the relative pose relationship between the first device and the second device can be obtained.
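One possible way to express this combination, assuming both poses are available as 4x4 homogeneous matrices (the variable names and the chosen coordinate convention are assumptions for illustration), is the following:

```python
import numpy as np

def relative_device_pose(pose_frame_in_second, pose_frame_in_first):
    """Relate the coordinate systems of the two devices.

    pose_frame_in_second: 4x4 pose of the current frame image in the second (mapping)
    device's coordinate system, from the matching step above.
    pose_frame_in_first: 4x4 pose of the first device at the moment the current frame
    was captured, expressed in its own tracking coordinate system (from the IMU/SLAM unit).
    The returned 4x4 matrix maps coordinates expressed in the first device's system into
    the second device's system, which is one way of expressing the relative pose relationship.
    """
    return pose_frame_in_second @ np.linalg.inv(pose_frame_in_first)
```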
Referring to FIG. 5, the process of determining the relative pose relationship between the first device and the second device by means of ICP is described.
In step S502, the first device can extract the two-dimensional feature point information of the current frame image; for a weakly textured scene, the DOG feature point detection algorithm and the FREAK feature point descriptor can be used for feature extraction.
In step S504, the first device can obtain the depth information input by the TOF module; in step S506, the two-dimensional feature point information can be registered with the depth information to obtain the point cloud data of the current frame image.
In step S508, the first device can judge whether the two-dimensional feature point information determined in step S502 matches the two-dimensional features of the reference image. If they match, step S510 of determining the three-dimensional point cloud data of the reference image is executed; if they do not match, the process returns to step S502, and the feature extraction process for the next frame image can be executed, or the feature extraction process for the current frame image can be executed again.
In step S512, ICP can be used to determine the relative pose between the point cloud of the current frame image and the point cloud of the reference image, and the pose of the current frame image in the coordinate system of the second device is then determined.
In step S514, the posture information of the first device can be determined by using the inertial measurement unit; in step S516, the relative pose of the first device and the second device can be determined based on the pose of the current frame image in the coordinate system of the second device and the posture information of the first device.
After the relative pose of the first device and the second device is determined, the first device can perform augmented reality processing operations based on the relative pose relationship.
For example, the first device can obtain the anchor point information configured by the second device in the scene. The anchor point information may include, but is not limited to, attribute information of a virtual object (color, size, type, etc.), an identifier, a position, and a posture. Thus, the first device can be adjusted to the corresponding position and display the virtual object.
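Merely as an illustrative sketch, the anchor point information exchanged between the devices could be held in a simple structure such as the following; the field names are assumptions and do not represent a defined format of the present disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class AnchorInfo:
    """Illustrative container for the anchor point information described above."""
    anchor_id: str                                  # identifier of the anchor
    position: tuple                                 # 3D position in the mapping device's coordinate system
    orientation: tuple                              # posture, e.g. a quaternion (w, x, y, z)
    attributes: dict = field(default_factory=dict)  # color, size, type of the virtual object
```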
In addition, the augmented reality processing operations may also include rendering operations on real objects and the like. For example, after the second device performs color rendering on a real object, the first device may also display the real object after the color rendering.
It should be understood that although the above augmented reality processing method is described by taking one terminal device as an example, in a given scene the above augmented reality processing method can be applied to multiple terminal devices. Since depth information is less affected by the environment, the problem of a poor relocalization effect caused by factors such as the texture, illumination and viewing angle of the surrounding environment is overcome, the robustness of multi-person AR relocalization is improved, and the multi-person AR experience is thereby enhanced.
It should be noted that although the steps of the method in the present disclosure are described in a specific order in the drawings, this does not require or imply that these steps must be performed in that specific order, or that all of the steps shown must be performed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step for execution, and/or one step may be decomposed into multiple steps for execution, and so on.
Further, this exemplary embodiment also provides an augmented reality processing apparatus.
FIG. 6 schematically shows a block diagram of an augmented reality processing apparatus according to an exemplary embodiment of the present disclosure. Referring to FIG. 6, the augmented reality processing apparatus 6 according to the exemplary embodiment of the present disclosure may include a first image parameter determination module 61, a second image parameter determination module 63, a first relative pose determination module 65, and a second relative pose determination module 67.
Specifically, the first image parameter determination module 61 may be used to determine image parameters of a current frame image of the first device; the second image parameter determination module 63 may be used to acquire image parameters of a reference image determined by the second device; the first relative pose determination module 65 may be used to determine the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image; and the second relative pose determination module 67 may be used to determine the relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and the posture information of the first device when the current frame image was collected, so that augmented reality processing operations are performed by using the relative pose relationship between the first device and the second device.
With the augmented reality processing apparatus of the exemplary embodiments of the present disclosure, the relative pose relationship between the first device and the second device is determined by determining the pose of the current frame image relative to the second device and combining it with the posture information of the first device when the current frame image was collected. The relocalization effect is good, and the solution is broadly applicable and easy to implement.
According to an exemplary embodiment of the present disclosure, the image parameters of the current frame image include two-dimensional feature point information and three-dimensional feature point information of the current frame image. In this case, the first image parameter determination module 61 may be configured to: acquire the current frame image, perform feature extraction on the current frame image, and determine the two-dimensional feature point information of the current frame image; and acquire depth information corresponding to the two-dimensional feature point information, and determine the three-dimensional feature point information of the current frame image according to the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information.
According to an exemplary embodiment of the present disclosure, the process in which the first image parameter determination module 61 determines the three-dimensional feature point information of the current frame image may be configured as: acquiring depth information corresponding to the current frame image collected by the depth sensing module; registering the current frame image with the depth information corresponding to the current frame image to determine the depth information of each pixel in the current frame image; determining, from the depth information of each pixel in the current frame image, the depth information corresponding to the two-dimensional feature point information; and determining the three-dimensional feature point information of the current frame image by using the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information.
According to an exemplary embodiment of the present disclosure, the image parameters of the reference image include two-dimensional feature point information and three-dimensional feature point information of the reference image. In this case, referring to FIG. 7, the first relative pose determination module 65 may include a first relative pose determination unit 701.
Specifically, the first relative pose determination unit 701 may be configured to: if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, determine the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image by means of iterative closest point, so as to obtain the pose of the current frame image relative to the second device.
According to an exemplary embodiment of the present disclosure, the first relative pose determination unit 701 may be configured to: before determining the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image, determine mismatched feature point information in the two-dimensional feature point information of the current frame image and the two-dimensional feature point information of the reference image; and remove the mismatched feature point information from the three-dimensional feature point information of the current frame image, so as to determine the relative pose relationship between the three-dimensional feature point information of the current frame image from which the mismatched feature point information has been removed and the three-dimensional feature point information of the reference image from which the mismatched feature point information has been removed.
According to an exemplary embodiment of the present disclosure, the image parameters of the reference image include two-dimensional feature point information and three-dimensional feature point information of the reference image. In this case, referring to FIG. 8, the first relative pose determination module 65 may include a second relative pose determination unit 801.
Specifically, the second relative pose determination unit 801 may be configured to: if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, associate the two-dimensional feature point information of the current frame image with the three-dimensional feature point information of the reference image to obtain point pair information; and solve a perspective-n-point problem by using the point pair information, and determine the pose of the current frame image relative to the second device according to the three-dimensional feature point information of the current frame image in combination with the solution result.
According to an exemplary embodiment of the present disclosure, the process in which the second relative pose determination unit 801 determines the pose of the current frame image relative to the second device in combination with the solution result may include: determining the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image according to the solution result; and taking the relative pose relationship determined according to the solution result as the initial pose input, and determining the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image by means of iterative closest point, so as to determine the pose of the current frame image relative to the second device.
According to an exemplary embodiment of the present disclosure, referring to FIG. 9, compared with the augmented reality processing apparatus 6, the augmented reality processing apparatus 9 may further include a virtual object display module 91.
Specifically, the virtual object display module 91 may be configured to acquire the anchor point information configured by the second device, so that a virtual object corresponding to the anchor point information is displayed on the first device based on the relative pose relationship between the first device and the second device.
Since the functional modules of the augmented reality processing apparatus in the embodiments of the present invention are the same as those in the above method embodiments, they are not repeated here.
Through the description of the above embodiments, those skilled in the art can easily understand that the example embodiments described here can be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, and the software product can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In addition, the above drawings are merely schematic illustrations of the processing included in the method according to the exemplary embodiments of the present invention and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the time order of these processes. It is also easy to understand that these processes may be executed, for example, synchronously or asynchronously in multiple modules.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory. In fact, according to the embodiments of the present disclosure, the features and functions of two or more of the modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into multiple modules or units.
Those skilled in the art will easily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses or adaptive changes of the present disclosure, and these variations, uses or adaptive changes follow the general principles of the present disclosure and include common knowledge or customary technical means in the technical field that are not disclosed in the present disclosure. The specification and the embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are pointed out by the claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the drawings, and various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

  1. An augmented reality processing method, applied to a first device, comprising:
    determining image parameters of a current frame image of the first device;
    acquiring image parameters of a reference image determined by a second device;
    determining a pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image; and
    determining a relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and posture information of the first device when the current frame image is captured, so as to perform augmented reality processing operations using the relative pose relationship between the first device and the second device.
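By way of an editorial sketch only (the claim does not recite any particular implementation), the final pose-composition step of claim 1 can be written as a single matrix product once the relocalized camera pose and the first device's own posture are expressed as 4x4 homogeneous transforms; all names below are assumptions:

```python
import numpy as np

def relative_device_pose(T_first_world_from_cam, T_cam_from_second_map):
    """T_first_world_from_cam: posture of the first device's camera in its own world frame
    at the moment the current frame image was captured (e.g. from SLAM).
    T_cam_from_second_map: pose of the current frame image relative to the second device's map
    (e.g. the refined PnP/ICP result).
    Returns the relative pose relationship between the first device's world frame and the
    second device's map frame."""
    return T_first_world_from_cam @ T_cam_from_second_map
```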
  2. The augmented reality processing method according to claim 1, wherein the image parameters of the current frame image comprise two-dimensional feature point information and three-dimensional feature point information of the current frame image, and determining the image parameters of the current frame image of the first device comprises:
    acquiring the current frame image, performing feature extraction on the current frame image, and determining the two-dimensional feature point information of the current frame image; and
    acquiring depth information corresponding to the two-dimensional feature point information, and determining the three-dimensional feature point information of the current frame image according to the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information.
  3. The augmented reality processing method according to claim 2, wherein acquiring the depth information corresponding to the two-dimensional feature point information and determining the three-dimensional feature point information of the current frame image according to the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information comprises:
    acquiring depth information corresponding to the current frame image collected by a depth sensing module;
    registering the current frame image with the depth information corresponding to the current frame image, and determining depth information of each pixel on the current frame image;
    determining the depth information corresponding to the two-dimensional feature point information from the depth information of each pixel on the current frame image; and
    determining the three-dimensional feature point information of the current frame image using the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information.
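As an illustration of how the back-projection recited in claim 3 is commonly realized (an editorial sketch, not a limitation of the claim), each 2D feature point can be lifted to a 3D feature point using the registered depth map and the camera intrinsics; the names below are assumptions:

```python
import numpy as np

def back_project(pts_2d, depth_map, K):
    """pts_2d:    (N, 2) pixel coordinates of the 2D feature points.
    depth_map: (H, W) depth in meters, already registered to the current frame image.
    K:         (3, 3) intrinsic matrix of the camera module.
    Returns an (M, 3) array of 3D feature points; features without valid depth are skipped."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    pts_3d = []
    for u, v in pts_2d:
        z = float(depth_map[int(round(v)), int(round(u))])
        if z <= 0:                      # no valid depth measurement for this feature point
            continue
        pts_3d.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.asarray(pts_3d, dtype=np.float32)
```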
  4. The augmented reality processing method according to claim 3, wherein the image parameters of the reference image comprise two-dimensional feature point information and three-dimensional feature point information of the reference image, and determining the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image comprises:
    if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, determining a relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image using an iterative closest point method, to obtain the pose of the current frame image relative to the second device.
  5. The augmented reality processing method according to claim 4, wherein before the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image is determined, the augmented reality processing method further comprises:
    determining mismatched feature point information in the two-dimensional feature point information of the current frame image and the two-dimensional feature point information of the reference image; and
    removing the mismatched feature point information from the three-dimensional feature point information of the current frame image, so as to determine a relative pose relationship between the three-dimensional feature point information of the current frame image from which the mismatched feature point information has been removed and the three-dimensional feature point information of the reference image from which the mismatched feature point information has been removed.
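One possible way to obtain the mismatched feature point information of claim 5 (again a hedged, editorial sketch using OpenCV; the claim itself does not require any specific matcher) is a descriptor ratio test followed by a RANSAC geometric check, after which the 3D points of the rejected matches are discarded before ICP:

```python
import numpy as np
import cv2

def filter_mismatches(desc_cur, desc_ref, kpts_cur, kpts_ref, ratio=0.75):
    """Return the geometrically consistent matches; everything else is treated as mismatched."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)          # binary (e.g. ORB-style) descriptors assumed
    knn = matcher.knnMatch(desc_cur, desc_ref, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 8:
        return good
    src = np.float32([kpts_cur[m.queryIdx].pt for m in good])
    dst = np.float32([kpts_ref[m.trainIdx].pt for m in good])
    _, mask = cv2.findFundamentalMat(src, dst, cv2.FM_RANSAC, 3.0, 0.99)
    if mask is None:
        return good
    return [m for m, keep in zip(good, mask.ravel()) if keep]   # RANSAC inliers only
```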
  6. The augmented reality processing method according to claim 3, wherein the image parameters of the reference image comprise two-dimensional feature point information and three-dimensional feature point information of the reference image, and determining the pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image comprises:
    if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, associating the two-dimensional feature point information of the current frame image with the three-dimensional feature point information of the reference image to obtain point-pair information; and
    solving a perspective-n-point problem using the point-pair information, and determining the pose of the current frame image relative to the second device according to the three-dimensional feature point information of the current frame image in combination with a solution result.
  7. The augmented reality processing method according to claim 6, wherein determining the pose of the current frame image relative to the second device in combination with the solution result comprises:
    determining a relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image according to the solution result; and
    taking the relative pose relationship, determined according to the solution result, between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image as an initial pose input, and determining the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image using an iterative closest point method, so as to determine the pose of the current frame image relative to the second device.
  8. The augmented reality processing method according to any one of claims 1 to 7, wherein performing the augmented reality processing operations using the relative pose relationship between the first device and the second device comprises:
    acquiring anchor point information configured by the second device, so that a virtual object corresponding to the anchor point information is displayed on the first device based on the relative pose relationship between the first device and the second device.
  9. An augmented reality processing apparatus, applied to a first device, comprising:
    a first image parameter determination module, configured to determine image parameters of a current frame image of the first device;
    a second image parameter determination module, configured to acquire image parameters of a reference image determined by a second device;
    a first relative pose determination module, configured to determine a pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image; and
    a second relative pose determination module, configured to determine a relative pose relationship between the first device and the second device according to the pose of the current frame image relative to the second device and posture information of the first device when the current frame image is captured, so as to perform augmented reality processing operations using the relative pose relationship between the first device and the second device.
  10. The augmented reality processing apparatus according to claim 9, wherein the image parameters of the current frame image comprise two-dimensional feature point information and three-dimensional feature point information of the current frame image, and the first image parameter determination module is configured to: acquire the current frame image, perform feature extraction on the current frame image, and determine the two-dimensional feature point information of the current frame image; and acquire depth information corresponding to the two-dimensional feature point information, and determine the three-dimensional feature point information of the current frame image according to the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information.
  11. The augmented reality processing apparatus according to claim 10, wherein, to determine the three-dimensional feature point information of the current frame image, the first image parameter determination module is configured to: acquire depth information corresponding to the current frame image collected by a depth sensing module; register the current frame image with the depth information corresponding to the current frame image, and determine depth information of each pixel on the current frame image; determine the depth information corresponding to the two-dimensional feature point information from the depth information of each pixel on the current frame image; and determine the three-dimensional feature point information of the current frame image using the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information.
  12. The augmented reality processing apparatus according to claim 11, wherein the image parameters of the reference image comprise two-dimensional feature point information and three-dimensional feature point information of the reference image, and the first relative pose determination module comprises:
    a first relative pose determination unit, configured to, if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, determine a relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image using an iterative closest point method, to obtain the pose of the current frame image relative to the second device.
  13. The augmented reality processing apparatus according to claim 12, wherein the first relative pose determination unit is further configured to: before determining the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image, determine mismatched feature point information in the two-dimensional feature point information of the current frame image and the two-dimensional feature point information of the reference image; and remove the mismatched feature point information from the three-dimensional feature point information of the current frame image, so as to determine a relative pose relationship between the three-dimensional feature point information of the current frame image from which the mismatched feature point information has been removed and the three-dimensional feature point information of the reference image from which the mismatched feature point information has been removed.
  14. The augmented reality processing apparatus according to claim 11, wherein the image parameters of the reference image comprise two-dimensional feature point information and three-dimensional feature point information of the reference image, and the first relative pose determination module comprises:
    a second relative pose determination unit, configured to, if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, associate the two-dimensional feature point information of the current frame image with the three-dimensional feature point information of the reference image to obtain point-pair information, solve a perspective-n-point problem using the point-pair information, and determine the pose of the current frame image relative to the second device according to the three-dimensional feature point information of the current frame image in combination with a solution result.
  15. The augmented reality processing apparatus according to claim 14, wherein, to determine the pose of the current frame image relative to the second device in combination with the solution result, the second relative pose determination unit is configured to: determine a relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image according to the solution result; and take the relative pose relationship, determined according to the solution result, between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image as an initial pose input, and determine the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image using an iterative closest point method, so as to determine the pose of the current frame image relative to the second device.
  16. An augmented reality processing system, applied to a first device, comprising:
    a camera module, configured to capture a current frame image;
    a depth sensing module, configured to collect depth information corresponding to the current frame image;
    an inertial measurement unit, configured to measure inertial information of the first device;
    a simultaneous localization and mapping unit, configured to acquire the current frame image and the inertial information, and generate posture information of the first device based on the inertial information; and
    an augmented reality processing apparatus, configured to determine image parameters of the current frame image, acquire image parameters of a reference image determined by a second device, determine a pose of the current frame image relative to the second device according to the image parameters of the current frame image and the image parameters of the reference image, and determine a relative pose relationship between the first device and the second device in combination with the posture information of the first device when the current frame image is captured.
  17. The augmented reality processing system according to claim 16, wherein the augmented reality processing apparatus comprises:
    an anchor point acquisition module, configured to acquire anchor point information configured by the second device, so that a virtual object corresponding to the anchor point information is displayed on the first device based on the relative pose relationship between the first device and the second device.
  18. The augmented reality processing system according to claim 16 or 17, further comprising:
    an anchor point adding unit, configured to add anchor point information in a scene where the first device is located.
  19. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the augmented reality processing method according to any one of claims 1 to 8.
  20. An electronic device, comprising:
    a processor; and
    a memory configured to store executable instructions of the processor,
    wherein the processor is configured to execute the augmented reality processing method according to any one of claims 1 to 8 by executing the executable instructions.
PCT/CN2020/116290 2019-10-31 2020-09-18 Augmented reality processing method and apparatus, system, storage medium and electronic device WO2021082801A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911055871.2 2019-10-31
CN201911055871.2A CN110866977B (en) 2019-10-31 2019-10-31 Augmented reality processing method, device, system, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
WO2021082801A1 true WO2021082801A1 (en) 2021-05-06

Family

ID=69653264

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116290 WO2021082801A1 (en) 2019-10-31 2020-09-18 Augmented reality processing method and apparatus, system, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN110866977B (en)
WO (1) WO2021082801A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866977B (en) * 2019-10-31 2023-06-16 Oppo广东移动通信有限公司 Augmented reality processing method, device, system, storage medium and electronic equipment
US20240007590A1 (en) * 2020-09-30 2024-01-04 Beijing Zitiao Network Technology Co., Ltd. Image processing method and apparatus, and electronic device, and computer readable medium
CN112270242A (en) * 2020-10-22 2021-01-26 北京字跳网络技术有限公司 Track display method and device, readable medium and electronic equipment
CN112365530A (en) * 2020-11-04 2021-02-12 Oppo广东移动通信有限公司 Augmented reality processing method and device, storage medium and electronic equipment
CN112270769B (en) 2020-11-11 2023-11-10 北京百度网讯科技有限公司 Tour guide method and device, electronic equipment and storage medium
CN113051424A (en) * 2021-03-26 2021-06-29 联想(北京)有限公司 Positioning method and device based on SLAM map

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130148851A1 (en) * 2011-12-12 2013-06-13 Canon Kabushiki Kaisha Key-frame selection for parallel tracking and mapping
CN106355647A (en) * 2016-08-25 2017-01-25 北京暴风魔镜科技有限公司 Augmented reality system and method
CN108734736A (en) * 2018-05-22 2018-11-02 腾讯科技(深圳)有限公司 Camera posture method for tracing, device, equipment and storage medium
CN109949422A (en) * 2018-10-15 2019-06-28 华为技术有限公司 Data processing method and equipment for virtual scene
CN109976523A (en) * 2019-03-22 2019-07-05 联想(北京)有限公司 Information processing method and electronic equipment
CN110264509A (en) * 2018-04-27 2019-09-20 腾讯科技(深圳)有限公司 Determine the method, apparatus and its storage medium of the pose of image-capturing apparatus
CN110275968A (en) * 2019-06-26 2019-09-24 北京百度网讯科技有限公司 Image processing method and device
CN110286768A (en) * 2019-06-27 2019-09-27 Oppo广东移动通信有限公司 Dummy object display methods, terminal device and computer readable storage medium
CN110866977A (en) * 2019-10-31 2020-03-06 Oppo广东移动通信有限公司 Augmented reality processing method, device and system, storage medium and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7957583B2 (en) * 2007-08-02 2011-06-07 Roboticvisiontech Llc System and method of three-dimensional pose estimation
JP5460499B2 (en) * 2010-07-12 2014-04-02 日本放送協会 Image processing apparatus and computer program
CN107025662B (en) * 2016-01-29 2020-06-09 成都理想境界科技有限公司 Method, server, terminal and system for realizing augmented reality
CN110349213B (en) * 2019-06-28 2023-12-12 Oppo广东移动通信有限公司 Pose determining method and device based on depth information, medium and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937478A (en) * 2022-12-26 2023-04-07 北京字跳网络技术有限公司 Calibration information determining method and device, electronic equipment and storage medium
CN115937478B (en) * 2022-12-26 2023-11-17 北京字跳网络技术有限公司 Calibration information determining method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110866977A (en) 2020-03-06
CN110866977B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
WO2021082801A1 (en) Augmented reality processing method and apparatus, system, storage medium and electronic device
US11145083B2 (en) Image-based localization
CN110457414B (en) Offline map processing and virtual object display method, device, medium and equipment
US8269722B2 (en) Gesture recognition system and method thereof
WO2020186935A1 (en) Virtual object displaying method and device, electronic apparatus, and computer-readable storage medium
EP3968131A1 (en) Object interaction method, apparatus and system, computer-readable medium, and electronic device
JP2020507850A (en) Method, apparatus, equipment, and storage medium for determining the shape of an object in an image
CN112889091A (en) Camera pose estimation using fuzzy features
WO2020248900A1 (en) Panoramic video processing method and apparatus, and storage medium
CN109992111B (en) Augmented reality extension method and electronic device
WO2020034981A1 (en) Method for generating encoded information and method for recognizing encoded information
WO2022237116A1 (en) Image processing method and apparatus
WO2022048428A1 (en) Method and apparatus for controlling target object, and electronic device and storage medium
CN116848556A (en) Enhancement of three-dimensional models using multi-view refinement
CN111192308B (en) Image processing method and device, electronic equipment and computer storage medium
CN112365530A (en) Augmented reality processing method and device, storage medium and electronic equipment
CN109816791B (en) Method and apparatus for generating information
CN115578515B (en) Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device
US11176636B2 (en) Method of plane tracking
CN111260544B (en) Data processing method and device, electronic equipment and computer storage medium
CN112750164A (en) Lightweight positioning model construction method, positioning method and electronic equipment
WO2023279868A1 (en) Simultaneous localization and mapping initialization method and apparatus and storage medium
WO2024087927A1 (en) Pose determination method and apparatus, and computer-readable storage medium and electronic device
WO2023284479A1 (en) Plane estimation method and apparatus, electronic device, and storage medium
WO2023160072A1 (en) Human-computer interaction method and apparatus in augmented reality (ar) scene, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20882491; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20882491; Country of ref document: EP; Kind code of ref document: A1)