WO2021189784A1 - Scene reconstruction method, system and apparatus, and sweeping robot - Google Patents

Scene reconstruction method, system and apparatus, and sweeping robot

Info

Publication number
WO2021189784A1
Authority
WO
WIPO (PCT)
Prior art keywords
pose
current
current moment
difference
image frame
Prior art date
Application number
PCT/CN2020/115921
Other languages
English (en)
Chinese (zh)
Inventor
张涵
于元隆
梁振振
黄志勇
Original Assignee
南京科沃斯机器人技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京科沃斯机器人技术有限公司
Publication of WO2021189784A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Definitions

  • This application relates to the field of data processing technology, and in particular to a scene reconstruction method, system, device and sweeping robot.
  • In the related art, a camera can be used to capture images of a scene, and matching image features can be obtained by extracting image features and performing feature matching between images. Sparse reconstruction can then be performed on the matched features to obtain the camera pose of each image. Finally, dense reconstruction can be performed based on the camera poses to obtain a dense point cloud, which can be used to reconstruct the three-dimensional scene.
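  • The following is a minimal sketch of the feature extraction and matching step described above; the description does not name a particular feature type or library, so ORB features and OpenCV are illustrative assumptions.
    import cv2

    def match_features(img_a, img_b):
        # Detect ORB keypoints and descriptors in two grayscale frames.
        orb = cv2.ORB_create(nfeatures=1000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        # Brute-force Hamming matching with cross-checking, best matches first.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
        return kp_a, kp_b, matches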
  • the purpose of this application is to provide a scene reconstruction method, system, device and sweeping robot, which can improve the accuracy and robustness of scene reconstruction.
  • One aspect of the present application provides a scene reconstruction method. The method includes: acquiring the current environment state; selecting the first pose or the second pose as the pose applicable to the current moment according to the current environment state; and creating a scene model based on the selected first pose and/or second pose, where the first pose is the pose corresponding to the image data at the current moment and the second pose is the pose corresponding to the inertial measurement data at the current moment.
  • another aspect of the present application also provides a scene reconstruction system.
  • The system includes: an environment state acquisition unit, configured to acquire the current environment state; and a pose selection unit, configured to select the first pose or the second pose as the pose applicable to the current moment according to the current environment state and to establish the scene model according to the selected first pose and/or second pose, where the first pose is the pose corresponding to the image data at the current moment and the second pose is the pose corresponding to the inertial measurement data at the current moment.
  • the scene reconstruction device includes a memory and a processor.
  • the memory is used to store a computer program.
  • When the computer program is executed by the processor, it implements the following functions: obtaining the current environment state; selecting the first pose or the second pose as the pose applicable to the current moment according to the current environment state; and establishing a scene model according to the selected first pose and/or second pose, where the first pose is the pose corresponding to the image data at the current moment and the second pose is the pose corresponding to the inertial measurement data at the current moment.
  • Another aspect of the present application also provides a sweeping robot. The sweeping robot includes a memory and a processor, the memory is used to store a computer program, and when the computer program is executed by the processor, it implements the following functions: obtaining the current environment state; selecting the first pose or the second pose as the pose applicable to the current moment according to the current environment state; and establishing a scene model according to the selected first pose and/or second pose, where the first pose is the pose corresponding to the image data at the current moment and the second pose is the pose corresponding to the inertial measurement data at the current moment.
  • The technical solutions provided by one or more embodiments of the present application obtain the current environment state during scene reconstruction, and this environment state can accurately reflect the condition of the indoor scene. Under certain environmental conditions, such as sudden changes in brightness or insufficient visual information, relatively large errors may occur in the pose generated from the image data. In that case, the pose generated from the inertial measurement data of the inertial measurement unit can be used instead, so that the generated three-dimensional model is more accurate. Image data can accurately characterize indoor scenes but is easily affected by the external environment, whereas inertial measurement data is related only to the motion state of the device itself, so the pose it generates is not affected by the external environment; combining the two types of data for scene reconstruction therefore ensures both high accuracy and robustness.
  • FIG. 1 is a schematic diagram of the steps of a scene reconstruction method in an embodiment of the present invention;
  • FIG. 2 is a flowchart of scene reconstruction in an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of pixel point mapping in an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the functional modules of a scene reconstruction system in an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of a scene reconstruction device in an embodiment of the present invention.
  • an image sensor can be installed on the equipment for 3D scene reconstruction, and the image sensor has an image acquisition function.
  • Through the image sensor, image data corresponding to the indoor scene can be collected while the device travels.
  • the image sensor may be an RGB-D sensor. Through the RGB-D sensor, RGB-D data can be collected, and the RGB-D data can include RGB images and depth images.
  • the image data collected by the image sensor can be processed to reconstruct the three-dimensional scene.
  • However, indoor scenes often involve unstable environmental conditions such as sudden brightness changes, insufficient visual information, and excessively fast turning. For example, when the lighting changes abruptly, the image captured by the image sensor may show a sudden change in brightness. In addition, indoor scenes are likely to contain areas with insufficient texture information and insufficient depth variation, such as walls, ceilings, and floors. When the image sensor captures images of these areas, the images cannot be matched accurately because they lack sufficient visual information. Therefore, relying only on the image data collected by the image sensor may not allow high-precision scene reconstruction.
  • an embodiment of the present application provides a scene reconstruction method, which may be executed by a device that performs 3D scene reconstruction, or may be executed by a server specifically responsible for data processing.
  • the equipment for 3D scene reconstruction may be robots, autonomous vehicles, virtual reality glasses, and so on.
  • the sweeping robot can collect various data required, and then use the built-in operating system to process the collected data, thereby completing the process of scene reconstruction.
  • Alternatively, the sweeping robot can communicate with devices that have data processing capabilities, such as a Tmall Genie or a cloud server, and upload the collected data to such a device, where the data is processed to complete the scene reconstruction.
  • the scene reconstruction method provided by an embodiment of the present application may include the following multiple steps.
  • an image sensor and an inertial measurement unit may be installed on the device that performs three-dimensional scene reconstruction.
  • the image sensor may be an RGB-D sensor.
  • the image sensor may also be a sensor in other image formats.
  • it can be a CMYK sensor, CMY sensor, HSL sensor, HSV sensor, YUV sensor, etc.
  • the inertial measurement unit may include an accelerometer and a gyroscope.
  • the accelerometer can measure the acceleration components of the sweeping robot in three orthogonal directions
  • the gyroscope can measure the angular velocity components of the device in three orthogonal directions.
  • the image sensor can collect image data of indoor scenes, and the IMU can generate corresponding inertial measurement data according to the operating status of the sweeping robot.
  • The pose generated based on the image data at the current moment is the first pose, and the pose generated based on the inertial measurement data at the current moment is the second pose.
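  • The second pose can be propagated from the inertial measurements by dead reckoning. The sketch below is a simplified first-order integration written as an assumption for illustration; the application does not specify how the inertial pose is computed.
    import numpy as np

    def integrate_imu(R, p, v, gyro, accel, dt, gravity=np.array([0.0, 0.0, -9.81])):
        # R: 3x3 rotation (world from body), p: position, v: velocity,
        # gyro: angular velocity (rad/s), accel: specific force (m/s^2), dt: step (s).
        # Rotation update from the gyroscope (first-order exponential map).
        wx, wy, wz = gyro * dt
        skew = np.array([[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]])
        R = R @ (np.eye(3) + skew)
        # Velocity and position update from the accelerometer.
        a_world = R @ accel + gravity
        p = p + v * dt + 0.5 * a_world * dt ** 2
        v = v + a_world * dt
        return R, p, v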
  • the device can read image data from the image sensor, and can read inertial measurement data from the IMU.
  • the image data and inertial measurement data can be processed by the server or the data processing module in the device.
  • the image data may include color images and depth images (Depth Image).
  • the format of the color image can be consistent with the image format supported by the image sensor.
  • the color image may be an RGB image.
  • the pixel points in the depth image can represent the distance between the image sensor and each point in the scene.
  • the current environmental status can be obtained from the image data.
  • the environment state can be characterized by the matching residuals between adjacent image frames in the image data.
  • Calculating the matching residual between the image frame at the current moment and the target image frame includes calculating the pixel differences between the pixel points that are mapped to each other in the two frames.
  • Specifically, the image data is first processed according to an existing solution to generate an initial relative pose between the image frame at the current moment and the target image frame. Each pixel in the target image frame is then traversed, and the pixel obtained by mapping it into the current image frame according to this initial relative pose is queried. For example, in FIG. 3, through the initial relative pose, the pixel in the first row and first column of the target image frame can be mapped to the pixel in the fourth row and fifth column of the image frame at the current moment.
  • the two pixels that are mapped to each other should be consistent in brightness or depth.
  • Therefore, the current environment state can be judged by calculating the differences between the mutually mapped pixel points in the image frame at the current moment and the target image frame. Specifically, after the difference for each pair of mutually mapped pixels is calculated, these differences can be summed to obtain the matching residual between the image frame at the current moment and the target image frame. The smaller the matching residual, the more stable the environment at the current moment. The calculated matching residual can therefore be compared with a specified matching residual threshold: if the calculated residual is greater than or equal to the threshold, the two image frames do not match well and the environment state at the current moment is unstable; otherwise, the environment state at the current moment is stable.
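  • A per-pixel sketch of this residual computation is given below; the pinhole intrinsics K, the pose convention, and the dense depth-based warp are assumptions made for illustration, since the description does not fix these details.
    import numpy as np

    def matching_residual(gray_tgt, depth_tgt, gray_cur, depth_cur, K, T_cur_tgt):
        # gray_*: HxW intensity images, depth_*: HxW depth maps (metres),
        # K: 3x3 intrinsics, T_cur_tgt: 4x4 initial relative pose mapping
        # target-frame points into the current frame (assumed convention).
        h, w = gray_tgt.shape
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        res_brightness, res_depth = 0.0, 0.0
        for v in range(h):
            for u in range(w):
                z = depth_tgt[v, u]
                if z <= 0:
                    continue
                # Back-project the target pixel, transform it with the initial
                # relative pose, and project it into the current frame.
                pt = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
                x, y, z2, _ = T_cur_tgt @ pt
                if z2 <= 0:
                    continue
                u2 = int(round(fx * x / z2 + cx))
                v2 = int(round(fy * y / z2 + cy))
                if not (0 <= u2 < w and 0 <= v2 < h):
                    continue
                # Mutually mapped pixels should agree in brightness and depth;
                # accumulate the absolute differences as the matching residual.
                res_brightness += abs(float(gray_cur[v2, u2]) - float(gray_tgt[v, u]))
                res_depth += abs(float(depth_cur[v2, u2]) - float(z2))
        return res_brightness, res_depth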
  • the matching residual threshold may be an empirical value obtained by performing statistics on a large number of normal matching residuals. In practical applications, the size of the matching residual threshold can be flexibly changed.
  • Since the pixel difference includes both a brightness difference and a depth difference, corresponding matching residuals can be calculated separately for these two aspects. For example, the environment state at the current moment may be determined to be unstable only when both matching residuals are greater than or equal to their corresponding thresholds. As another example, the environment state may be determined to be unstable as long as either of the two matching residuals is greater than or equal to its corresponding threshold.
  • Alternatively, different weights can be assigned to the two matching residuals, and a comprehensive matching residual can be obtained by weighted summation, as sketched below.
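  • The following sketch illustrates these three alternatives; the threshold and weight values are illustrative placeholders, not values taken from this application.
    def environment_unstable(res_b, res_d, thr_b=20.0, thr_d=0.05,
                             mode="either", w_b=0.5, w_d=0.5, thr_combined=12.0):
        # res_b / res_d: brightness and depth matching residuals.
        if mode == "both":      # unstable only if both residuals exceed their thresholds
            return res_b >= thr_b and res_d >= thr_d
        if mode == "either":    # unstable if either residual exceeds its threshold
            return res_b >= thr_b or res_d >= thr_d
        # Weighted summation into a single comprehensive matching residual.
        return w_b * res_b + w_d * res_d >= thr_combined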
  • S3: Select the first pose or the second pose as the pose applicable to the current moment according to the current environment state, and build a scene model based on the selected first pose and/or second pose, where the first pose is the pose corresponding to the image data at the current moment and the second pose is the pose corresponding to the inertial measurement data at the current moment.
  • After the current environment state is obtained, the corresponding first pose or second pose can be selected. Specifically, if the matching residual between the image frame at the current moment and the target image frame is greater than or equal to the preset matching residual threshold, this indicates that the current environment state is unstable, and the second pose generated from the inertial measurement data at the current moment is used as the pose applicable to the current moment; if the matching residual between the image frame at the current moment and the target image frame is less than the preset matching residual threshold, this indicates that the current environment state is stable, and the first pose generated from the image data at the current moment is used as the pose applicable to the current moment.
  • In a preferred embodiment, when the environment state is stable, the pose difference between the pose of the image frame at the current moment and the pose of the target image frame is further calculated. If the pose difference between the two is greater than or equal to the preset pose difference, the pose generated based on the image frame at the current moment may not be correct and further verification is needed; if the pose difference between the two frames is less than the preset pose difference, the pose of the image frame at the current moment is correct, and the pose of the image frame at the current moment (that is, the first pose) is used as the pose at the current moment.
  • When the pose of the image frame at the current moment is the pose to be verified, in order to further determine whether it is correct, the first pose and the second pose at the current moment are compared. If the difference between the two is greater than or equal to the specified difference threshold, the current second pose is taken as the pose applicable to the current moment; if the difference between the two is less than the specified difference threshold, the current first pose is taken as the pose applicable to the current moment.
  • The reason for further comparing the first pose and the second pose is that when the pose difference between the image frame at the current moment and the target image frame is greater than or equal to the preset pose difference, this only provides a preliminary indication that the pose generated from the image frame at the current moment may not be correct, so further verification is needed. Since the inertial measurement data is related only to the motion state of the device itself, the pose it generates is not affected by the external environment, and the pose calculated from the inertial measurement data can therefore guarantee a certain accuracy in most cases. Accordingly, the pose generated from the inertial measurement data at the current moment can be compared with the pose generated from the image frame to further determine whether the pose generated from the image frame at the current moment is correct.
  • Similarly, when the matching residual between the image frame at the current moment and the target image frame is greater than or equal to the preset matching residual threshold, this can only indicate that the pose generated from the image frame at the current moment may not be correct. In order to further improve the reliability of the judgment, in another preferred embodiment, if the matching residual between the image frame at the current moment and the target image frame is greater than the preset matching residual threshold, which indicates that the current environment is likely to be unstable, the first pose and the second pose at the current moment are further compared to confirm whether the environment is indeed unstable. If the difference between the two is greater than or equal to the specified difference threshold, the first pose generated from the image data has a large error, and the current second pose should be taken as the pose applicable to the current moment; if the difference between the two is less than the specified difference threshold, the first pose generated from the image data does not have a large error, and the current first pose is taken as the pose applicable to the current moment. A sketch of a selection policy combining these checks is given below.
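  • The following sketch of the pose selection policy is an illustration only; the thresholds and the pose-distance measure are assumed placeholders, not values or functions defined by this application.
    def select_pose(first_pose, second_pose, residual, pose_diff_to_target,
                    pose_distance, residual_thr=1.0, pose_diff_thr=0.1,
                    pose_compare_thr=0.1):
        # first_pose: pose from image data; second_pose: pose from inertial data;
        # residual: matching residual between the current and target image frames;
        # pose_diff_to_target: difference between the current-frame pose and the
        # target-frame pose; pose_distance: function measuring the difference
        # between two poses.
        if residual < residual_thr:
            # Stable environment: trust the image-based pose unless it jumps
            # too far from the target frame's pose (pose to be verified).
            if pose_diff_to_target < pose_diff_thr:
                return first_pose
        # Unstable environment, or a pose to be verified: compare the image-based
        # pose with the inertial pose and keep the safer of the two.
        if pose_distance(first_pose, second_pose) >= pose_compare_thr:
            return second_pose
        return first_pose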
  • After the pose applicable to the current moment is determined, it can be processed according to existing technology to complete the scene reconstruction process. For example, the sparse feature point cloud can be densified according to the generated poses to obtain a dense point cloud, and the dense point cloud can be used for scene reconstruction.
  • In addition, loop detection may be performed during the scene reconstruction process. Specifically, after the current scene is reconstructed, it can be encoded and stored, and the historically reconstructed scenes can be searched for a historical scene similar to the current one. If such a scene exists, a loop closure has occurred, and the pose of the current scene can then be calculated directly from the historical scene, making the pose result more accurate and thus improving the accuracy of scene reconstruction.
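  • A sketch of such a lookup over stored scene encodings is shown below; encoding the scenes as feature vectors and comparing them by cosine similarity are assumptions made for illustration, since the application does not specify the encoding or the similarity measure.
    import numpy as np

    def detect_loop(current_code, history, similarity_thr=0.9):
        # current_code: feature vector encoding the current scene.
        # history: list of (scene_id, code) pairs stored after earlier reconstructions.
        best_id, best_sim = None, similarity_thr
        for scene_id, code in history:
            sim = float(np.dot(current_code, code) /
                        (np.linalg.norm(current_code) * np.linalg.norm(code) + 1e-12))
            if sim >= best_sim:
                best_id, best_sim = scene_id, sim
        return best_id  # None means no loop closure candidate was found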
  • the Dibao robot can maintain a communication connection with the cloud server after completing the network configuration.
  • In this scenario, the Dibao robot can collect indoor image data through an RGB-D camera and collect inertial measurement data through an inertial measurement unit. Both the image data and the inertial measurement data can be uploaded to the cloud server on a regular basis.
  • the cloud server can combine the two aspects of data to reconstruct the indoor scene, and can send the reconstructed indoor model or indoor map to the Dibao robot, so that the Dibao robot can better plan the cleaning path.
  • In an indoor environment, the image data collected by the Dibao robot often contains areas with insufficient texture information and insufficient depth variation, such as walls, ceilings, and floors. The differences between image frames of these areas may be small, so the relative pose generated from the image data may not be accurate enough. In view of this, the relative pose generated from the image data can be corrected by combining it with the relative pose generated from the inertial measurement data, so as to ensure the modeling accuracy of the indoor scene and allow the Dibao robot to plan its cleaning path more accurately according to the generated map.
  • the Dibao robot can directly process the collected image data and inertial measurement data to reconstruct the indoor scene, and can store the reconstructed indoor model or indoor map locally. Later, the user can directly view the indoor map stored in the Dibao robot through the APP, and issue an area cleaning instruction to the Dibao robot.
  • an autonomous vehicle can reconstruct the three-dimensional scene around the driving path by collecting image data in the driving path and the vehicle's own inertial measurement data, and can perform path planning and navigation based on the reconstructed scene.
  • For example, the vehicle may drive from shadow into sunlight, in which case the error of the relative pose generated from the image data will be large. In this case, the relative pose generated from the inertial measurement data can be used to correct the pose, making the reconstructed 3D scene more accurate and thereby ensuring the accuracy of path planning and the safety of autonomous driving.
  • In another application scenario, virtual reality glasses can simultaneously collect image data of the user's environment and the inertial measurement data generated when the user moves, and can reconstruct the user's environment based on the collected data.
  • the user may suddenly turn around or move by a large amount while playing the game.
  • In this case, the difference between adjacent image frames in the image data is large, and the pose generated from the image data may not be accurate enough. Therefore, the relative pose generated from the image data can be corrected in combination with the relative pose generated from the inertial measurement data, so as to ensure the modeling accuracy of the scene.
  • this application also provides a scene reconstruction system, which includes:
  • the environmental state obtaining unit is used to obtain the current environmental state
  • the pose selection unit is used to select the first pose or the second pose as the pose suitable for the current moment according to the current environment state, and establish a scene model according to the selected first pose and/or second pose; Among them, the first pose is the pose corresponding to the image data at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
  • In one embodiment, if the current environment state is stable, the current first pose is used as the pose applicable to the current moment; if the current environment state is not stable, the current second pose is used as the pose applicable to the current moment.
  • the environmental state acquisition unit includes:
  • An image frame reading module configured to read the image frame at the current moment in the image data, and read the target image frame located before the image frame at the current moment and adjacent to the image frame at the current moment;
  • the matching residual calculation module is used to calculate the matching residual between the image frame at the current moment and the target image frame, and obtain the current environmental state according to the matching residual.
  • the matching residual calculation module includes:
  • the pixel difference calculation module is used to calculate the pixel difference between the pixel points mapped to each other in the image frame at the current moment and the target image frame.
  • If the pixel difference between the two frames is greater than or equal to the preset pixel difference, the current environment state is unstable; if the pixel difference between the two frames is less than the preset pixel difference, the current environment state is stable.
  • the pose selection unit includes:
  • the pose difference calculation module is used to calculate the pose difference of the image frame at the current moment and the target image frame when the pixel difference between the two frames is less than the preset pixel difference;
  • the pose judgment module is used to determine the pose of the image frame at the current moment as the pose to be verified if the pose difference between the two is greater than or equal to the preset pose difference, and to determine that the pose of the image frame at the current moment is correct if the pose difference between the two frames is less than the preset pose difference.
  • the pose selection unit further includes:
  • A difference comparison module, used to compare the first pose and the second pose at the current moment if the pose of the image frame at the current moment is the pose to be verified; if the difference between the two is greater than or equal to the specified difference threshold, the current second pose is taken as the pose applicable to the current moment, and if the difference between the two is less than the specified difference threshold, the current first pose is taken as the pose applicable to the current moment.
  • the pose selection unit includes:
  • the pose comparison module is used to compare the first pose and the second pose at the current moment if the current environment is unstable;
  • the pose determination module is configured to use the current second pose as the pose applicable to the current moment if the difference between the two is greater than or equal to the specified difference threshold, and to use the current first pose as the pose applicable to the current moment if the difference between the two is less than the specified difference threshold. A structural sketch of these units and modules is given below.
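  • The following class skeleton is one way the units and modules described above might be organised in code; the class names and method signatures are illustrative assumptions, not taken from this application.
    class EnvironmentStateUnit:
        # Reads the current and target image frames and derives the environment
        # state from their matching residual (see the residual sketch above).
        def get_state(self, image_data):
            raise NotImplementedError

    class PoseSelectionUnit:
        # Chooses the first (image-based) or second (inertial) pose according to
        # the environment state and builds the scene model from the chosen pose.
        def select_and_build(self, state, first_pose, second_pose):
            raise NotImplementedError

    class SceneReconstructionSystem:
        # Assumed top-level wiring of the two units described above.
        def __init__(self):
            self.environment_state_unit = EnvironmentStateUnit()
            self.pose_selection_unit = PoseSelectionUnit()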
  • the present application also provides a scene reconstruction device.
  • the scene reconstruction device includes a memory and a processor.
  • the memory is used to store a computer program.
  • When the computer program is executed by the processor, it is used to implement the following functions: obtaining the current environment state; selecting the first pose or the second pose as the pose applicable to the current moment according to the current environment state; and establishing a scene model according to the selected first pose and/or second pose, where the first pose is the pose corresponding to the image data at the current moment and the second pose is the pose corresponding to the inertial measurement data at the current moment.
  • In addition, the present application also provides a cleaning robot, which includes a memory and a processor. The memory is used to store a computer program, and when the computer program is executed by the processor, it is used to implement the following functions: obtaining the current environment state; selecting the first pose or the second pose as the pose applicable to the current moment according to the current environment state; and establishing a scene model according to the selected first pose and/or second pose, where the first pose is the pose corresponding to the image data at the current moment and the second pose is the pose corresponding to the inertial measurement data at the current moment.
  • the memory may include a physical device for storing information, which is usually digitized and then stored in a medium using electrical, magnetic, or optical methods.
  • The memory may include: devices that use electrical energy to store information, such as RAM and ROM; devices that use magnetic energy to store information, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, bubble memories, and USB flash drives; and devices that store information optically, such as CDs or DVDs.
  • there are other types of memory such as quantum memory, graphene memory, and so on.
  • the processor can be implemented in any suitable manner.
  • For example, the processor may take the form of a microprocessor or a processor together with a computer-readable medium storing computer-readable program code (for example, software or firmware) executable by the (micro)processor, or the form of logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so on.
  • the embodiments of the present invention can be provided as a method, a system, or a computer program product. Therefore, the present invention may adopt a form of a complete hardware implementation, a complete software implementation, or a combination of software and hardware implementations. Moreover, the present invention may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a scene reconstruction method, system and apparatus, and a sweeping robot. The method comprises: acquiring the current environment state (S1); and selecting a first pose or a second pose as the pose applicable to the current moment according to the current environment state, and establishing a scene model according to the selected first pose and/or second pose (S3), the first pose being a pose corresponding to image data at the current moment, and the second pose being a pose corresponding to inertial measurement data at the current moment. The method improves the accuracy and robustness of scene reconstruction.
PCT/CN2020/115921 2020-03-23 2020-09-17 Scene reconstruction method, system and apparatus, and sweeping robot WO2021189784A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010207310.6A CN113436309A (zh) 2020-03-23 2020-03-23 Scene reconstruction method, system and device, and sweeping robot
CN202010207310.6 2020-03-23

Publications (1)

Publication Number Publication Date
WO2021189784A1 (fr) 2021-09-30

Family

ID=77752467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115921 WO2021189784A1 (fr) 2020-03-23 2020-09-17 Scene reconstruction method, system and apparatus, and sweeping robot

Country Status (2)

Country Link
CN (1) CN113436309A (fr)
WO (1) WO2021189784A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116402826A (zh) * 2023-06-09 2023-07-07 深圳市天趣星空科技有限公司 视觉坐标系的修正方法、装置、设备及存储介质
CN116758157A (zh) * 2023-06-14 2023-09-15 深圳市华赛睿飞智能科技有限公司 一种无人机室内三维空间测绘方法、系统及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140333741A1 (en) * 2013-05-08 2014-11-13 Regents Of The University Of Minnesota Constrained key frame localization and mapping for vision-aided inertial navigation
CN105953798A (zh) * 2016-04-19 2016-09-21 深圳市神州云海智能科技有限公司 移动机器人的位姿确定方法和设备
CN109431381A (zh) * 2018-10-29 2019-03-08 北京石头世纪科技有限公司 机器人的定位方法及装置、电子设备、存储介质
CN109947886A (zh) * 2019-03-19 2019-06-28 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及存储介质
CN109978931A (zh) * 2019-04-04 2019-07-05 北京悉见科技有限公司 三维场景重建方法及设备、存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140333741A1 (en) * 2013-05-08 2014-11-13 Regents Of The University Of Minnesota Constrained key frame localization and mapping for vision-aided inertial navigation
CN105953798A (zh) * 2016-04-19 2016-09-21 深圳市神州云海智能科技有限公司 移动机器人的位姿确定方法和设备
CN109431381A (zh) * 2018-10-29 2019-03-08 北京石头世纪科技有限公司 机器人的定位方法及装置、电子设备、存储介质
CN109947886A (zh) * 2019-03-19 2019-06-28 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及存储介质
CN109978931A (zh) * 2019-04-04 2019-07-05 北京悉见科技有限公司 三维场景重建方法及设备、存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116402826A (zh) * 2023-06-09 2023-07-07 深圳市天趣星空科技有限公司 视觉坐标系的修正方法、装置、设备及存储介质
CN116402826B (zh) * 2023-06-09 2023-09-26 深圳市天趣星空科技有限公司 视觉坐标系的修正方法、装置、设备及存储介质
CN116758157A (zh) * 2023-06-14 2023-09-15 深圳市华赛睿飞智能科技有限公司 一种无人机室内三维空间测绘方法、系统及存储介质
CN116758157B (zh) * 2023-06-14 2024-01-30 深圳市华赛睿飞智能科技有限公司 一种无人机室内三维空间测绘方法、系统及存储介质

Also Published As

Publication number Publication date
CN113436309A (zh) 2021-09-24

Similar Documents

Publication Publication Date Title
CN109084732B (zh) 定位与导航方法、装置及处理设备
US11747823B2 (en) Monocular modes for autonomous platform guidance systems with auxiliary sensors
KR101776622B1 (ko) 다이렉트 트래킹을 이용하여 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
KR101725060B1 (ko) 그래디언트 기반 특징점을 이용한 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
JP6198230B2 (ja) 深度カメラを使用した頭部姿勢トラッキング
CN109752003B (zh) 一种机器人视觉惯性点线特征定位方法及装置
AU2018211217B2 (en) Virtual reality parallax correction
KR101776620B1 (ko) 검색 기반 상관 매칭을 이용하여 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
JP6273352B2 (ja) 物体検出装置、物体検出方法、および移動ロボット
WO2021189784A1 (fr) Procédé, système et appareil de reconstruction de scénario, et robot de balayage
US11488354B2 (en) Information processing apparatus and information processing method
CN113137968B (zh) 基于多传感器融合的重定位方法、重定位装置和电子设备
WO2021189783A1 (fr) Procédé, système et dispositif de construction de scène, et robot automoteur
CN111220155A (zh) 基于双目视觉惯性里程计估计位姿的方法、装置与处理器
CN114255323A (zh) 机器人、地图构建方法、装置和可读存储介质
US20220351400A1 (en) Information processing apparatus, information processing method, and information processing program
EP3676801B1 (fr) Dispositifs électroniques, procédés et produits-programmes informatiques permettant de commander des opérations de modélisation 3d d'après des métriques de pose
CN111141274A (zh) 一种基于计算机视觉的机器人自动定位与导航方法
Wang et al. Self-supervised learning of depth and camera motion from 360 {\deg} videos
Nalpantidis et al. Obtaining reliable depth maps for robotic applications from a quad-camera system
CN117007037A (zh) 移动机器人的位姿估计方法、装置、移动机器人及介质
CN116051767A (zh) 一种三维地图构建方法以及相关设备
CN115619851A (zh) 基于锚点的vslam后端优化方法、装置、介质、设备和车辆
CN112288803A (zh) 一种针对计算设备的定位方法以及装置
CN118209101A (zh) 一种应用于动态环境的多传感器融合slam方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926567

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926567

Country of ref document: EP

Kind code of ref document: A1
