CN113436310A - Scene establishing method, system and device, and self-moving robot

Scene establishing method, system and device, and self-moving robot

Info

Publication number
CN113436310A
Authority
CN
China
Prior art keywords
pose
current
self-moving robot
current moment
Prior art date
Legal status
Pending
Application number
CN202010207847.2A
Other languages
Chinese (zh)
Inventor
梁振振
于元隆
黄志勇
张涵
Current Assignee
Nanjing Kewo Robot Technology Co Ltd
Original Assignee
Nanjing Kewo Robot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Kewo Robot Technology Co Ltd
Priority to CN202010207847.2A
Priority to PCT/CN2020/115920 (published as WO2021189783A1)
Publication of CN113436310A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a scene establishing method, system, device and self-moving robot, wherein the method comprises the following steps: acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot; obtaining a current motion state of the self-moving robot; selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or second pose; the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment. According to the technical scheme, the accuracy and robustness of scene establishment can be improved.

Description

Scene establishing method, system and device, and self-moving robot
Technical Field
The invention relates to the technical field of data processing, in particular to a scene establishing method, a scene establishing system, a scene establishing device and a self-moving robot.
Background
In current three-dimensional scene reconstruction technology, a camera can be used to acquire images of a scene; image features are extracted from the images and feature matching is performed on them to obtain matched image features. Then, sparse reconstruction can be performed based on the matched image features to obtain the camera pose of each image. Finally, dense reconstruction can be performed based on the camera poses to obtain a dense point cloud, which can be used to reconstruct the three-dimensional scene.
In the current three-dimensional reconstruction process, because indoor scenes are complex, reconstructing an indoor scene based only on images acquired by a camera may result in low accuracy and robustness.
Disclosure of Invention
The application aims to provide a scene establishing method, system, device and self-moving robot capable of improving the accuracy and robustness of scene establishment.
In order to achieve the above object, an aspect of the present application provides a scene creating method, where the method includes: acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot; obtaining a current motion state of the self-moving robot; selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or second pose; the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
In order to achieve the above object, another aspect of the present application further provides a scene creating system, including: the data acquisition unit is used for acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot; the motion state acquisition unit is used for acquiring the current motion state of the self-moving robot; the scene model establishing unit is used for selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot and establishing a scene model according to the selected first pose and/or the selected second pose; the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
To achieve the above object, another aspect of the present application further provides a scene creating apparatus, which includes a memory and a processor, wherein the memory is used for storing a computer program which, when executed by the processor, implements the following functions: acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot; obtaining a current motion state of the self-moving robot; selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or second pose; the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
To achieve the above object, another aspect of the present application further provides a self-moving robot, including a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the following functions: acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot; obtaining a current motion state of the self-moving robot; selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or second pose; the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
In order to achieve the above object, another aspect of the present application further provides a scene creating method, where the method includes: acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot; selecting a first pose or a second pose suitable for the current moment, and establishing a scene model according to the selected first pose and/or second pose; the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
In order to achieve the above object, another aspect of the present application further provides a scene creating system, including: the data acquisition unit is used for acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot; the scene model establishing unit is used for selecting a first pose and/or a second pose suitable for different moments and establishing a scene model according to the selected first pose and/or the selected second pose; the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
To achieve the above object, another aspect of the present application further provides a scene creating apparatus, which includes a memory and a processor, the memory is used for storing a computer program, and the computer program is used for realizing the following functions when being executed by the processor: acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot; selecting a first pose and/or a second pose suitable for each different moment, and establishing a scene model according to the selected first pose and/or second pose; the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
To achieve the above object, another aspect of the present application further provides a self-moving robot, including a memory and a processor, the memory storing a computer program, the computer program, when executed by the processor, implementing the following functions: acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot; selecting a first pose and/or a second pose suitable for each different moment, and establishing a scene model according to the selected first pose and/or second pose; the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
As can be seen from the above, according to the technical solutions provided by one or more embodiments of the present application, when a scene is established, image data and inertial measurement data may be acquired. The image data may be used to generate a first pose of the self-moving robot, and the inertial measurement data may be used to generate a second pose of the self-moving robot. The scene details contained in the image data are rich, and the first pose generated based on the image data is more accurate when enough high-quality matching information exists. However, at certain times, such as when the self-moving robot turns, rotates, or moves too fast, the camera field of view changes rapidly, resulting in the first pose being generated with less accuracy. At this time, a second pose generated based on inertial measurement data and capable of ensuring basic accuracy can be selected. Therefore, the corresponding first pose or second pose is selected according to the actual conditions at different moments, and finally the scene model can be established according to the selected first pose and/or second pose, so that the accuracy and robustness of scene establishment can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic illustration of the steps of a scene creation method in an embodiment of the invention;
FIG. 2 is a flow chart of scene creation based on speed information in an embodiment of the present invention;
FIG. 3 is a flowchart of scene establishment based on similarity in an embodiment of the present invention;
FIG. 4 is a flowchart of scene creation based on pose difference values in an embodiment of the invention;
FIG. 5 is a functional block diagram of a scene creation system in an embodiment of the invention;
fig. 6 is a schematic structural diagram of a scene creation apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, the technical solutions of the present application will be clearly and completely described below with reference to the detailed description of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present application are within the scope of protection of the present application.
In practical application, an image sensor with an image acquisition function can be installed on a device for reconstructing a three-dimensional scene. Through the image sensor, image data corresponding to an indoor scene can be collected while the device moves. In one specific application example, the image sensor may be an RGB-D sensor, with which RGB-D data comprising an RGB image and a depth image may be acquired.
In the prior art, a three-dimensional scene can be created by processing the image data acquired by the image sensor. For example, a dense Simultaneous Localization and Mapping (dense SLAM) algorithm may be employed to process the image data and generate a three-dimensional scene model. However, the operating frequency of existing dense SLAM algorithms is usually 10 Hz to 30 Hz, so the data acquisition frame rate of the image sensor installed on the device is also usually 10 Hz to 30 Hz. When the device turns or moves too fast, the change between adjacent frames increases greatly, the scene information that the image sensor can capture decreases sharply, the error rate of a frame-matching-based dense SLAM algorithm rises, and the accuracy of the established three-dimensional scene drops accordingly. Thus, high-precision scene creation may not be possible by relying solely on the image data acquired by the image sensor.
In view of this, an embodiment of the present application provides a scene creating method, which may be performed by a device performing three-dimensional scene reconstruction, or by a server specially responsible for data processing. The device for reconstructing the three-dimensional scene can be a sweeping robot, an autonomous vehicle, virtual reality glasses, and the like. For example, during its movement, the device may collect the various required data and then process the collected data with a built-in operating system, thereby completing the process of scene establishment. For another example, the device may communicate with a data processing device such as a Tmall Genie smart speaker or a cloud server, upload the collected data to the data processing device, and have the data processing device process it, thereby completing the process of establishing the scene.
Referring to fig. 1, a scene establishment method according to an embodiment of the present application may include the following steps.
S1: image data for generating a first pose of a self-moving robot and inertial measurement data for generating a second pose of the self-moving robot are acquired.
In the present embodiment, an image sensor and an Inertial Measurement Unit (IMU) may be mounted on the self-moving robot. Wherein the image sensor may be an RGB-D sensor. Of course, with the development of technology and the advent of alternatives, the image sensor may also be a sensor of other image formats. For example, a CMYK sensor, a CMY sensor, an HSL sensor, an HSV sensor, a YUV sensor, etc. The inertial measurement unit may include an accelerometer and a gyroscope, wherein the accelerometer may measure acceleration components of the self-moving robot in three orthogonal directions, and the gyroscope may measure angular velocity components of the self-moving robot in three orthogonal directions. The image sensor can collect image data of an indoor scene, and the IMU can generate corresponding inertial measurement data according to the motion state of the self-moving robot.
In this embodiment, the self-moving robot may read image data from the image sensor and inertial measurement data from the IMU. The image data and the inertial measurement data may be processed by a server or by a data processing module in the self-moving robot. Specifically, the image data may include a color image and a depth image. The format of the color image may be consistent with the image format supported by the image sensor; for example, for an RGB-D sensor, the color image may be an RGB image. The pixel points in the depth image represent the distance between the image sensor and each point in the scene. The inertial measurement data may include the acceleration and angular velocity of the self-moving robot at different times.
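By way of illustration only, the following minimal Python sketch shows one possible way to organize the two data streams just described; the class and field names are assumptions chosen for exposition and are not part of the disclosed method.

```python
# Illustrative sketch only: minimal containers for the two data streams.
# Class and field names are assumptions, not part of the disclosed method.
from dataclasses import dataclass
import numpy as np

@dataclass
class ImageFrame:
    timestamp: float
    color: np.ndarray  # H x W x 3 color image (e.g. RGB)
    depth: np.ndarray  # H x W depth image; each pixel is a distance to the scene

@dataclass
class ImuSample:
    timestamp: float
    acceleration: np.ndarray      # accelerometer reading along three orthogonal axes
    angular_velocity: np.ndarray  # gyroscope reading about three orthogonal axes
```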
In the present embodiment, when the self-moving robot travels smoothly, the image data collected by the image sensor may contain abundant environmental details, in which case the first pose generated based on the image data tends to have higher accuracy. When the self-moving robot turns or its speed changes suddenly, the low data acquisition frame rate of the image sensor causes large differences between adjacent image frames, feature matching cannot be performed well, and the accuracy of the generated first pose is low. By contrast, because the IMU acquires data at a rate of usually about 100 frames per second, the acquired inertial measurement data can accurately indicate the motion state of the self-moving robot, and the generated second pose is often more accurate. The image sensor and the IMU therefore complement each other well in generating poses, so the three-dimensional scene can be established by combining the image data and the inertial measurement data. As can be seen from the above, the first pose may be the pose corresponding to the image frame at the current moment, and the second pose may be the pose corresponding to the inertial measurement data at the current moment.
S3: obtaining a current motion state of the self-moving robot; selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or second pose.
In this embodiment, the pose of the self-moving robot can be calculated flexibly according to the motion state of the self-moving robot at the current moment. The motion state of the self-moving robot can be read from the inertial measurement data or calculated from it. For example, the motion state may be represented by speed information, such as the running speed or the angular velocity of the self-moving robot. The angular velocities at different moments can be read directly from the inertial measurement data, and the running speeds at different moments can be obtained by processing the inertial measurement data with inertial navigation techniques.
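As a hedged sketch of this step, the angular speed can be read directly from a gyroscope sample, while the running speed can be approximated by integrating accelerometer readings. The simple Euler integration below assumes a fixed sampling interval and acceleration readings with gravity and sensor bias already removed, which a real inertial navigation pipeline would handle explicitly.

```python
import numpy as np

def running_speeds(samples, dt):
    """Approximate the running speed at each IMU sample by Euler integration.

    Simplifying assumptions: fixed sampling interval dt; gravity and bias
    already removed from the acceleration readings.
    """
    velocity = np.zeros(3)
    speeds = []
    for s in samples:  # samples: sequence of ImuSample
        velocity = velocity + s.acceleration * dt
        speeds.append(float(np.linalg.norm(velocity)))
    return speeds

def angular_speed(sample):
    """The angular speed is the magnitude of the gyroscope reading."""
    return float(np.linalg.norm(sample.angular_velocity))
```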
Referring to fig. 2, in an embodiment, the speed information of the self-moving robot at the current moment may be determined from the inertial measurement data in the manner described above, and the motion state of the self-moving robot may then be determined according to the obtained speed information. Specifically, if the value represented by the speed information is greater than or equal to a specified speed threshold, the self-moving robot is running or steering too fast at the current moment and may be in an unstable motion state; the accuracy of the first pose may then not be high, so the current second pose may be taken as the pose applicable to the current moment. If the value represented by the speed information is smaller than the specified speed threshold, the self-moving robot is currently in a stable motion state, and the current first pose can be used as the pose suitable for the current moment. The type of the specified speed threshold matches the type of speed the speed information represents: an angular speed threshold if the speed information characterizes an angular speed, and a running speed threshold if it characterizes a running speed.
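The selection rule of this embodiment can be summarized by the following sketch; the threshold value itself is application-specific and is not fixed by the method.

```python
def select_pose_by_speed(speed, first_pose, second_pose, speed_threshold):
    """Speed-based selection rule sketched from the embodiment above."""
    if speed >= speed_threshold:
        return second_pose  # unstable motion: use the IMU-derived second pose
    return first_pose       # steady motion: use the image-derived first pose
```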
In another embodiment, the motion state of the self-moving robot may also be determined by the similarity between two adjacent image frames. Generally, when the self-moving robot runs smoothly, two adjacent image frames acquired by the image sensor are similar to each other; when it runs unsteadily, the image sensor is limited by its data acquisition frame rate, so the difference between two adjacent acquired image frames is large. In view of this, referring to fig. 3, in the present embodiment, the current image frame at the current moment may be read from the image data, together with the target image frame located before and adjacent to the current image frame. Then, the similarity between the current image frame and the target image frame may be calculated, and the motion state of the self-moving robot determined according to the similarity.
Specifically, both the current image frame and the target image frame may be represented by digitized feature vectors. The feature vector may be constructed from the pixel values of the pixel points in the image frame. A pixel value is a numerical value within a specified interval; for example, it may be any value from 0 to 255, with the magnitude of the value indicating the shade of the color. In this embodiment, the pixel values of each pixel point of the image frame may be obtained, and the feature vector of the image frame formed from them. For example, in an image frame having 9 × 9 = 81 pixels, the pixel values may be obtained sequentially and arranged in order from left to right and from top to bottom, forming an 81-dimensional vector that serves as the feature vector of the image frame. Alternatively, the feature vector may be constructed from CNN (Convolutional Neural Network) features of the image frame: the image frame is input into a convolutional neural network, which outputs the corresponding feature vector.
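The 9 × 9 example above corresponds to the following sketch, which flattens the pixel values row by row into an 81-dimensional feature vector; the CNN-based alternative would replace this function with a forward pass through a trained network.

```python
import numpy as np

def pixel_feature_vector(frame):
    """Flatten a 2-D frame of pixel values (0-255) into a feature vector,
    reading pixels left to right, top to bottom (row-major order)."""
    return np.asarray(frame, dtype=np.float64).reshape(-1)

frame = np.random.randint(0, 256, size=(9, 9))  # a 9 x 9 = 81-pixel frame
vec = pixel_feature_vector(frame)
assert vec.shape == (81,)
```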
In this embodiment, after the feature vectors of the current image frame and the target image frame are determined in the above manner, the similarity between the two frames can be obtained by calculating the vector angle or the Pearson correlation coefficient between the two feature vectors. If the calculated similarity is greater than or equal to a specified similarity threshold, the current image frame and the target image frame are similar; the self-moving robot is in a stable motion state, and the current first pose can be used as the pose suitable for the current moment. If the similarity is smaller than the specified similarity threshold, the difference between the two image frames is large; the self-moving robot may be in an unstable motion state caused by an excessively high running speed, the calculated first pose is usually not accurate enough, and the current second pose can be used as the pose suitable for the current moment.
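A minimal sketch of this similarity test follows, using the cosine of the vector angle between the two feature vectors (the Pearson correlation coefficient mentioned above could be substituted); the threshold value is an assumption for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_pose_by_similarity(cur_vec, prev_vec, first_pose, second_pose,
                              similarity_threshold=0.95):  # assumed value
    if cosine_similarity(cur_vec, prev_vec) >= similarity_threshold:
        return first_pose   # adjacent frames similar: steady motion
    return second_pose      # frames differ sharply: unstable motion
```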
In another embodiment, the image data and the inertial measurement data may be processed simultaneously to obtain the corresponding first and second poses, respectively. During the operation of the self-moving robot, the IMU generally has better stability, so the calculated second pose is also generally more stable. The first pose and the second pose at the current moment can therefore be compared. If the difference between them is small, the first pose at the current moment is accurate; since it is calculated from richer scene details, the first pose can be selected in this case. If the difference between the first pose and the second pose at the current moment is large, then, because the second pose is basically accurate, the first pose may not be accurate enough, and the second pose can be selected.
Specifically, referring to fig. 4, the first pose and the second pose at the current moment may be obtained, the difference between them calculated, and the motion state of the self-moving robot determined according to the calculated difference. A pose may be a parameter matrix associated with two adjacent image frames: a pixel point at a certain position in the first image frame can be mapped by the parameter matrix to another position in the second image frame, so that pixel points in two adjacent image frames can be mapped to each other through the pose. When calculating the difference between the first pose and the second pose at the current moment, both poses are matrices, and subtracting the two matrices yields the element differences at the same positions in the first pose and the second pose. It should be noted that, in practical applications, each element difference may always be a value greater than or equal to 0: if subtracting the second pose from the first pose produces a negative value at some position, the absolute value of that negative value may be taken as the element difference. In this way, the size of each element difference corresponds to the difference of the elements between the two poses; the element differences can then be added to obtain the pose difference between the first pose and the second pose, which can be used as the difference between the first pose and the second pose at the current moment.
In this embodiment, after the difference between the first pose and the second pose at the current time is obtained through calculation, the first pose or the second pose suitable for the current time may be selected according to the difference. Specifically, if the difference is greater than or equal to the specified difference threshold, it indicates that the difference between the first pose and the second pose is too large, and at this time, the motion state of the self-moving robot may change greatly, so that the current second pose may be taken as the pose applicable to the current time. And if the difference is smaller than the specified difference threshold, the difference between the first pose and the second pose is not large, and the more accurate current first pose can be taken as the pose applicable to the current moment.
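Taken together, the difference computation and the selection rule of this embodiment can be sketched as follows, with the poses treated as numeric matrices (for example 4 × 4 transforms):

```python
import numpy as np

def pose_difference(first_pose, second_pose):
    """Sum of element-wise absolute differences between the two pose matrices."""
    return float(np.abs(np.asarray(first_pose) - np.asarray(second_pose)).sum())

def select_pose_by_difference(first_pose, second_pose, difference_threshold):
    if pose_difference(first_pose, second_pose) >= difference_threshold:
        return second_pose  # large disagreement: fall back to the IMU pose
    return first_pose       # poses agree: keep the image-derived pose
```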
In one embodiment, when the first pose is generated from the image data, image features of the image frames may be extracted and used for feature matching, thereby obtaining matched image features. Then, sparse reconstruction can be performed based on the matched image features to obtain the first pose of each image frame. In practical applications, the first pose calculated in this manner may be further refined; specifically, it may be corrected with an ICP (Iterative Closest Point) algorithm to obtain the final first pose of any image frame.
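As a hedged sketch of the feature extraction and matching step, the fragment below uses ORB features from OpenCV as one concrete choice; the patent does not prescribe a particular feature type, and the subsequent sparse reconstruction and ICP refinement are omitted here.

```python
import cv2

def match_features(gray1, gray2):
    """Extract and match ORB features between two grayscale image frames."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    if des1 is None or des2 is None:
        return []  # no features detected in one of the frames
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # Best matches first; these would feed the sparse reconstruction step.
    return sorted(matches, key=lambda m: m.distance)
```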
In one embodiment, in generating the second pose from the inertial measurement data, conventional error cancellation and pre-integration processing may be performed on the inertial measurement data to generate a corresponding second pose based on the inertial measurement data. Specifically, the motion increment between two adjacent frames can be calculated through pre-integration according to the inertial measurement data, and then the second pose of the self-moving robot in the next frame can be calculated on the basis of the second pose of the current frame.
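Under strong simplifying assumptions (small rotations, with gravity and bias already removed), the pre-integration of the motion increment between two image frames can be sketched as:

```python
import numpy as np

def preintegrate(samples, dt):
    """Accumulate IMU samples between two image frames into a motion increment.

    Simplified sketch: small-angle rotation, gravity and bias already removed.
    Returns position, velocity and rotation increments.
    """
    delta_p = np.zeros(3)
    delta_v = np.zeros(3)
    delta_theta = np.zeros(3)
    for s in samples:
        delta_p += delta_v * dt + 0.5 * s.acceleration * dt * dt
        delta_v += s.acceleration * dt
        delta_theta += s.angular_velocity * dt
    return delta_p, delta_v, delta_theta
```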
In this embodiment, after the corresponding first pose or second pose is selected for the different moments, the selected poses can be processed according to the prior art to complete the process of establishing the scene. Specifically, dense reconstruction can be performed on the sparse feature point cloud according to the selected poses to obtain a dense point cloud, and the dense point cloud can then be used to establish the scene.
In one embodiment, in order to make the created scene more accurate, loop detection may be performed during the scene creation process. Specifically, after a current scene is established, the current scene may be encoded and stored, and whether a historical scene similar to the current scene exists may be identified from the historically established scenes. If so, this indicates that a loop is generated. Subsequently, the pose of the current scene can be directly calculated by the historical scene through a loop detection technology, so that the pose result is more accurate, and the accuracy of scene establishment is improved.
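One minimal way to realize this check, assuming each established scene has already been encoded as a fixed-length vector, is sketched below; the encoding scheme and the similarity threshold are illustrative assumptions.

```python
import numpy as np

class LoopDetector:
    """Compare the encoding of the current scene against stored history."""

    def __init__(self, threshold=0.9):  # assumed similarity threshold
        self.threshold = threshold
        self.history = []               # encodings of historically built scenes

    def check(self, encoding):
        for index, past in enumerate(self.history):
            similarity = float(np.dot(encoding, past) /
                               (np.linalg.norm(encoding) * np.linalg.norm(past)))
            if similarity >= self.threshold:
                return index            # loop closed with historical scene `index`
        self.history.append(np.asarray(encoding, dtype=np.float64))
        return None                     # no loop detected; remember this scene
```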
In a specific application scenario, after the sweeping robot completes network configuration, it can establish a communication connection with a cloud server. While cleaning the room, the robot can acquire indoor image data through its RGB-D camera and inertial measurement data through its inertial measurement unit, and can periodically upload both to the cloud server. The cloud server can then establish the indoor scene by combining the two kinds of data and send the established indoor model or indoor map back to the sweeping robot, so that the robot can better plan its cleaning path. In particular, when the sweeping robot collects image data, its motion state may change greatly, so that the first pose generated based on the image data may not be accurate enough. In view of this, the second pose generated from the inertial measurement data can be combined to correct the first pose generated from the image data, thereby ensuring the modeling precision of the indoor scene and allowing the sweeping robot to plan its cleaning path more accurately according to the generated map.
In another specific application scenario, the sweeping robot may directly process the acquired image data and inertial measurement data to establish the indoor scene, and may store the established indoor model or indoor map locally. Subsequently, the user can view the indoor map stored in the sweeping robot directly through an app and issue cleaning-area instructions to the robot.
In another specific application scenario, an autonomous vehicle may reconstruct the three-dimensional scene around its driving path by collecting image data along the path together with the vehicle's own inertial measurement data, and may perform path planning and navigation based on the reconstructed scene. While the vehicle is running, sudden changes in motion state are likely to occur; for example, the vehicle may suddenly accelerate or turn. In such cases the error of the first pose generated based on the image data may be large, and the second pose generated based on the inertial measurement data may be used to correct the pose, so that the reconstructed three-dimensional scene is more accurate, ensuring the accuracy of path planning and the safety of automatic driving.
In another specific application scenario, when a user wears virtual reality glasses to play a game, the glasses can simultaneously acquire image data of the environment where the user is located and the inertial measurement data generated when the user moves, and can reconstruct the user's environment based on the acquired data. While playing, the user is likely to turn or move suddenly and sharply; the difference between adjacent image frames then becomes large, and the first pose generated based on the image data may not be accurate enough. Therefore, the second pose generated from the inertial measurement data can be combined to correct the first pose generated from the image data, ensuring the modeling precision of the indoor scene.
Referring to fig. 5, the present application further provides a scene creating system, including:
the data acquisition unit is used for acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
the motion state acquisition unit is used for acquiring the current motion state of the self-moving robot;
the scene model establishing unit is used for selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot and establishing a scene model according to the selected first pose and/or the selected second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
In one embodiment, the motion state acquiring unit includes:
and the speed information determining module is used for determining the speed information of the self-moving robot at the current moment in the inertial measurement data and judging the motion state of the self-moving robot at the current moment according to the obtained speed information.
In one embodiment, the motion state acquiring unit includes:
the image frame reading module is used for reading a current image frame at the current moment in the image data and reading a target image frame which is positioned in front of the current image frame and is adjacent to the current image frame;
and the similarity determining module is used for calculating the similarity between the current image frame and the target image frame and determining the current motion state of the self-moving robot according to the similarity.
In one embodiment, the motion state acquiring unit includes:
the difference value calculation module is used for acquiring a first pose and a second pose at the current moment and calculating the difference value between the first pose at the current moment and the second pose at the current moment;
and the difference value judging module is used for determining the current motion state of the self-moving robot according to the calculated difference value.
Referring to fig. 6, the present application further provides a scene creating apparatus, which includes a memory and a processor, wherein the memory is used for storing a computer program which, when executed by the processor, implements the following functions:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
obtaining a current motion state of the self-moving robot;
selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or the selected second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
The present application further provides a self-moving robot, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the following functions:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
obtaining a current motion state of the self-moving robot;
selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or the selected second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
In other embodiments of the present application, for different time instants, the corresponding first pose or second pose may be selected according to a specific situation of the current time instant. For example, the first pose may be selected when the self-moving robot is in a steady motion state, or when there is no abrupt change in light in the acquired image data or the image features are relatively obvious. Conversely, if the self-moving robot has a large change in motion state, or the acquired image data has a sudden change of light or the image characteristics are not obvious enough, so that the first pose is not accurate enough, the second pose can be selected. In view of this, the present application may provide a scene establishing method, including:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
selecting a first pose or a second pose suitable for the current moment, and establishing a scene model according to the selected first pose and/or second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
In one embodiment, selecting the first or second pose applicable to the current time comprises:
and determining the speed information of the self-moving robot at the current moment in the inertial measurement data, and selecting a first pose or a second pose suitable for the current moment according to the speed information.
In one embodiment, selecting the first pose or the second pose applicable to the current time according to the speed information comprises:
if the numerical value represented by the speed information is larger than or equal to a specified speed threshold, taking the current second pose as the pose applicable to the current moment;
and if the numerical value represented by the speed information is smaller than the specified speed threshold, taking the current first pose as the pose applicable to the current moment.
In one embodiment, selecting the first pose and/or the second pose applicable to each of the different time instants comprises:
reading a current image frame at the current moment in the image data, and reading a target image frame which is positioned in front of the current image frame and is adjacent to the current image frame;
and calculating the similarity between the current image frame and the target image frame, and selecting a first pose or a second pose suitable for the current moment according to the similarity.
In one embodiment, selecting the first pose or the second pose applicable to the current time according to the similarity comprises:
if the similarity is larger than or equal to a specified similarity threshold, taking the current first pose as the pose applicable to the current moment;
and if the similarity is smaller than the specified similarity threshold, taking the current second pose as the pose applicable to the current moment.
In one embodiment, selecting the first pose and/or the second pose applicable to each of the different time instants comprises:
acquiring a first pose and a second pose at the current moment, and calculating a difference value between the first pose at the current moment and the second pose at the current moment;
and selecting the first pose or the second pose applicable to the current moment according to the calculated difference value.
In one embodiment, calculating the difference between the first pose at the current time and the second pose at the current time comprises:
calculating element difference values at the same position in the first pose at the current moment and the second pose at the current moment, adding the element difference values, and taking the added result as the difference value between the first pose at the current moment and the second pose at the current moment.
In one embodiment, selecting the first pose or the second pose applicable to the current time based on the calculated difference comprises:
if the difference is larger than or equal to a specified difference threshold, taking the current second pose as the pose applicable to the current moment;
and if the difference value is smaller than the specified difference value threshold value, taking the current first pose as the pose applicable to the current moment.
An embodiment of the present application may further provide a scene creating system, including:
the data acquisition unit is used for acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
the scene model establishing unit is used for selecting a first pose and/or a second pose suitable for different moments and establishing a scene model according to the selected first pose and/or the selected second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
An embodiment of the present application may further provide a scene creating apparatus, including a memory and a processor, wherein the memory is used for storing a computer program which, when executed by the processor, implements the following functions:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
selecting a first pose and/or a second pose suitable for each different moment, and establishing a scene model according to the selected first pose and/or second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
An embodiment of the present application may further provide a self-moving robot, including a memory and a processor, the memory storing a computer program, the computer program, when executed by the processor, implementing the following functions:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
selecting a first pose and/or a second pose suitable for each different moment, and establishing a scene model according to the selected first pose and/or second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
In this embodiment, the memory may include a physical device for storing information; typically, the information is digitized and then stored in a medium using an electrical, magnetic, or optical method. The memory may include: devices that store information using electrical energy, such as RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, tapes, core memories, bubble memories, and USB flash drives; and devices that store information optically, such as CDs or DVDs. Of course, the memory may also take other forms, such as quantum memory or graphene memory.
In this embodiment, the processor may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments.
As can be seen from the above, according to the technical solutions provided by one or more embodiments of the present application, when a scene is established, image data and inertial measurement data may be acquired. The image data may be used to generate a first pose of the self-moving robot, and the inertial measurement data may be used to generate a second pose of the self-moving robot. The scene details contained in the image data are rich, and the first pose generated based on the image data is more accurate when enough high-quality matching information exists. However, at certain times, such as when the self-moving robot turns, rotates, or moves too fast, the camera field of view changes rapidly, resulting in the first pose being generated with less accuracy. At this time, a second pose generated based on inertial measurement data and capable of ensuring basic accuracy can be selected. Therefore, the corresponding first pose or second pose is selected according to the actual conditions at different moments, and finally the scene model can be established according to the selected first pose and/or second pose, so that the accuracy and robustness of scene establishment can be improved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only an embodiment of the present application, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (25)

1. A method for scene creation, the method comprising:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
obtaining a current motion state of the self-moving robot;
selecting a first pose or a second pose suitable for the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
2. The method of claim 1, wherein the image data is acquired from an image sensor and the inertial measurement data is acquired from an inertial measurement unit.
3. The method of claim 1, wherein obtaining the current motion state of the self-moving robot comprises:
and determining the speed information of the self-moving robot at the current moment according to the inertia measurement data, and judging the motion state of the self-moving robot at the current moment according to the obtained speed information.
4. The method of claim 3, wherein selecting the first pose or the second pose applicable to the current time based on the current motion state of the self-moving robot comprises:
if the numerical value represented by the speed information is larger than or equal to a specified speed threshold, the self-moving robot is in an unstable motion state, and the current second pose is taken as the pose suitable for the current moment;
and if the numerical value represented by the speed information is smaller than the specified speed threshold, the self-moving robot is in a stable motion state, and the current first pose is taken as the pose suitable for the current moment.
5. A method according to claim 2 or 3, characterized in that the speed information is the running speed or the angular speed.
6. The method of claim 1, wherein obtaining the current motion state of the self-moving robot comprises:
reading a current image frame at the current moment in the image data, and reading a target image frame which is positioned in front of the current image frame and is adjacent to the current image frame;
and calculating the similarity between the current image frame and the target image frame, and determining the current motion state of the self-moving robot according to the similarity.
7. The method of claim 6, wherein selecting the first pose or the second pose applicable to the current moment based on the current motion state of the self-moving robot comprises:
if the similarity is greater than or equal to a specified similarity threshold, the self-moving robot is in a stable motion state, and the current first pose is taken as the pose applicable to the current moment;
and if the similarity is smaller than the specified similarity threshold, the self-moving robot is in an unstable motion state, and the current second pose is taken as the pose applicable to the current moment.
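A minimal sketch of the similarity check of claims 6 and 7, using zero-mean normalized cross-correlation as one plausible similarity measure; the patent does not specify which measure or threshold to use, so both are assumptions here.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # illustrative; the patent leaves the value unspecified

def frame_similarity(current: np.ndarray, previous: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two grayscale frames."""
    a = current.astype(np.float64) - current.mean()
    b = previous.astype(np.float64) - previous.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def motion_state_from_frames(current, previous):
    sim = frame_similarity(current, previous)
    return "stable" if sim >= SIMILARITY_THRESHOLD else "unstable"

rng = np.random.default_rng(0)
previous = rng.integers(0, 256, size=(48, 64))
print(motion_state_from_frames(previous.copy(), previous))  # identical frames: stable
```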
8. The method of claim 1, wherein obtaining the current motion state of the self-moving robot comprises:
acquiring a first pose and a second pose at the current moment, and calculating a difference value between the first pose at the current moment and the second pose at the current moment;
and determining the current motion state of the self-moving robot according to the calculated difference value.
9. The method of claim 8, wherein selecting the first pose or the second pose applicable to the current moment based on the current motion state of the self-moving robot comprises:
if the difference is greater than or equal to a specified difference threshold, the self-moving robot is in an unstable motion state, and the current second pose is taken as the pose applicable to the current moment;
and if the difference is smaller than the specified difference threshold, the self-moving robot is in a stable motion state, and the current first pose is taken as the pose applicable to the current moment.
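A minimal sketch of the pose-difference check of claims 8 and 9, assuming poses are (x, y, yaw) tuples and that translation and heading differences are combined into a single value; the combination and the threshold are illustrative choices, not taken from the patent.

```python
import math

DIFFERENCE_THRESHOLD = 0.2  # illustrative combined threshold, not set by the patent

def pose_difference(p, q):
    """Combined difference: translation distance plus wrapped heading difference.
    Poses are (x, y, yaw) tuples with yaw in radians."""
    d_translation = math.hypot(p[0] - q[0], p[1] - q[1])
    d_yaw = abs(math.atan2(math.sin(p[2] - q[2]), math.cos(p[2] - q[2])))
    return d_translation + d_yaw

def motion_state_from_poses(first_pose, second_pose):
    diff = pose_difference(first_pose, second_pose)
    return "unstable" if diff >= DIFFERENCE_THRESHOLD else "stable"

print(motion_state_from_poses((1.00, 0.50, 0.10), (1.01, 0.52, 0.11)))  # stable
print(motion_state_from_poses((1.00, 0.50, 0.10), (1.40, 0.50, 0.50)))  # unstable
```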
10. A scene creation system, the system comprising:
the data acquisition unit is used for acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
the motion state acquisition unit is used for acquiring the current motion state of the self-moving robot;
the scene model establishing unit is used for selecting a first pose or a second pose applicable to the current moment according to the current motion state of the self-moving robot and establishing a scene model according to the selected first pose and/or the selected second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
11. The system according to claim 10, wherein the motion state acquisition unit includes:
the speed information determining module is used for determining the speed information of the self-moving robot at the current moment according to the inertial measurement data and judging the current motion state of the self-moving robot according to the obtained speed information.
12. The system according to claim 10, wherein the motion state acquisition unit includes:
the image frame reading module is used for reading a current image frame at the current moment in the image data and reading a target image frame which is positioned in front of the current image frame and is adjacent to the current image frame;
and the similarity determining module is used for calculating the similarity between the current image frame and the target image frame and determining the current motion state of the self-moving robot according to the similarity.
13. The system according to claim 10, wherein the motion state acquisition unit includes:
the difference value calculation module is used for acquiring a first pose and a second pose at the current moment and calculating the difference value between the first pose at the current moment and the second pose at the current moment;
and the difference value judging module is used for determining the current motion state of the self-moving robot according to the calculated difference value.
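For illustration, the units of claims 10 through 13 could be wired together as in the following Python sketch; the class names mirror the claim language, while the internal logic reuses the illustrative speed-threshold check from above.

```python
class MotionStateAcquisitionUnit:
    """Stands in for the motion state acquisition unit of claims 10 and 11."""
    def __init__(self, speed_threshold=1.0):  # illustrative threshold
        self.speed_threshold = speed_threshold

    def current_state(self, angular_speed):
        return "unstable" if angular_speed >= self.speed_threshold else "stable"

class SceneModelEstablishingUnit:
    """Stands in for the scene model establishing unit of claim 10."""
    def __init__(self):
        self.model = []  # placeholder scene model: the selected pose per moment

    def update(self, first_pose, second_pose, motion_state):
        self.model.append(first_pose if motion_state == "stable" else second_pose)

# One time step wired together:
motion_unit = MotionStateAcquisitionUnit()
scene_unit = SceneModelEstablishingUnit()
state = motion_unit.current_state(angular_speed=0.3)
scene_unit.update((1.0, 0.5, 0.1), (1.0, 0.52, 0.12), state)
print(scene_unit.model)  # [(1.0, 0.5, 0.1)] -- the first pose, motion is stable
```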
14. A scene creation apparatus, comprising a memory for storing a computer program and a processor, wherein the computer program, when executed by the processor, implements the following functions:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
obtaining a current motion state of the self-moving robot;
selecting a first pose or a second pose applicable to the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or the selected second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
15. A self-moving robot, comprising a memory for storing a computer program and a processor, wherein the computer program, when executed by the processor, implements the following functions:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
obtaining a current motion state of the self-moving robot;
selecting a first pose or a second pose applicable to the current moment according to the current motion state of the self-moving robot, and establishing a scene model according to the selected first pose and/or the selected second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
16. A method for scene creation, the method comprising:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
selecting a first pose or a second pose applicable to the current moment, and establishing a scene model according to the selected first pose and/or second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
17. The method of claim 16, wherein selecting the first pose or the second pose applicable to the current moment comprises:
determining speed information of the self-moving robot at the current moment according to the inertial measurement data, and selecting a first pose or a second pose applicable to the current moment according to the speed information.
18. The method of claim 17, wherein selecting the first pose or the second pose applicable to the current moment based on the speed information comprises:
if the numerical value represented by the speed information is greater than or equal to a specified speed threshold, taking the current second pose as the pose applicable to the current moment;
and if the numerical value represented by the speed information is smaller than the specified speed threshold, taking the current first pose as the pose applicable to the current moment.
19. The method of claim 16, wherein selecting the first pose or the second pose applicable to the current moment comprises:
reading a current image frame at the current moment in the image data, and reading a target image frame which is positioned in front of the current image frame and is adjacent to the current image frame;
and calculating the similarity between the current image frame and the target image frame, and selecting a first pose or a second pose applicable to the current moment according to the similarity.
20. The method of claim 19, wherein selecting the first pose or the second pose applicable to the current moment according to the similarity comprises:
if the similarity is greater than or equal to a specified similarity threshold, taking the current first pose as the pose applicable to the current moment;
and if the similarity is smaller than the specified similarity threshold, taking the current second pose as the pose applicable to the current moment.
21. The method of claim 16, wherein selecting the first pose or the second pose applicable to the current moment comprises:
acquiring a first pose and a second pose at the current moment, and calculating a difference value between the first pose at the current moment and the second pose at the current moment;
and selecting the first pose or the second pose applicable to the current moment according to the calculated difference value.
22. The method of claim 21, wherein selecting the first pose or the second pose applicable to the current moment based on the calculated difference comprises:
if the difference is greater than or equal to a specified difference threshold, taking the current second pose as the pose applicable to the current moment;
and if the difference value is smaller than the specified difference value threshold value, taking the current first pose as the pose applicable to the current moment.
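To illustrate the final step shared by claims 16 through 22, establishing the scene model from the selected poses, the sketch below accumulates stand-in 2-D sensor points into one world-frame point set. Treating the scene model as an accumulated point set is an assumption; the claims do not prescribe a particular model representation.

```python
import math

def transform_points(points, pose):
    """Map 2-D points from the robot frame into the world frame using a pose."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in points]

def accumulate_scene(selected_poses, scans):
    """Fold per-moment scans into a single point set using the selected poses."""
    scene = []
    for pose, scan in zip(selected_poses, scans):
        scene.extend(transform_points(scan, pose))
    return scene

poses = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]  # selected first/second poses over time
scans = [[(1.0, 0.0)], [(1.0, 0.0)]]        # stand-in sensor points per moment
print(accumulate_scene(poses, scans))       # [(1.0, 0.0), (1.5, 0.0)]
```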
23. A scene creation system, the system comprising:
the data acquisition unit is used for acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
the scene model establishing unit is used for selecting a first pose and/or a second pose applicable to each different moment and establishing a scene model according to the selected first pose and/or the selected second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
24. A scene creation apparatus, comprising a memory for storing a computer program and a processor, wherein the computer program, when executed by the processor, implements the following functions:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
selecting a first pose and/or a second pose applicable to each different moment, and establishing a scene model according to the selected first pose and/or second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
25. A self-moving robot, comprising a memory for storing a computer program and a processor, wherein the computer program, when executed by the processor, implements the following functions:
acquiring image data and inertial measurement data, wherein the image data is used for generating a first pose of the self-moving robot, and the inertial measurement data is used for generating a second pose of the self-moving robot;
selecting a first pose and/or a second pose applicable to each different moment, and establishing a scene model according to the selected first pose and/or second pose;
the first pose is the pose corresponding to the image frame at the current moment, and the second pose is the pose corresponding to the inertial measurement data at the current moment.
CN202010207847.2A 2020-03-23 2020-03-23 Scene establishing method, system and device and self-moving robot Pending CN113436310A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010207847.2A CN113436310A (en) 2020-03-23 2020-03-23 Scene establishing method, system and device and self-moving robot
PCT/CN2020/115920 WO2021189783A1 (en) 2020-03-23 2020-09-17 Scene building method, system and device, and self-moving robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010207847.2A CN113436310A (en) 2020-03-23 2020-03-23 Scene establishing method, system and device and self-moving robot

Publications (1)

Publication Number Publication Date
CN113436310A true CN113436310A (en) 2021-09-24

Family

ID=77753293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010207847.2A Pending CN113436310A (en) 2020-03-23 2020-03-23 Scene establishing method, system and device and self-moving robot

Country Status (2)

Country Link
CN (1) CN113436310A (en)
WO (1) WO2021189783A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108447116A (en) * 2018-02-13 2018-08-24 中国传媒大学 The method for reconstructing three-dimensional scene and device of view-based access control model SLAM
US10726634B2 (en) * 2018-05-04 2020-07-28 Microsoft Technology Licensing, Llc Generating and providing platform agnostic scene files in an intermediate format
CN109141433A (en) * 2018-09-20 2019-01-04 江阴市雷奥机器人技术有限公司 A kind of robot indoor locating system and localization method
CN109583457A (en) * 2018-12-03 2019-04-05 荆门博谦信息科技有限公司 A kind of method and robot of robot localization and map structuring
CN110766785B (en) * 2019-09-17 2023-05-05 武汉大学 Real-time positioning and three-dimensional reconstruction device and method for underground pipeline

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113984064A (en) * 2021-10-27 2022-01-28 山东亚历山大智能科技有限公司 Positioning method and system for indoor mobile robot

Also Published As

Publication number Publication date
WO2021189783A1 (en) 2021-09-30

Similar Documents

Publication Publication Date Title
CN109084732B (en) Positioning and navigation method, device and processing equipment
CN108717710B (en) Positioning method, device and system in indoor environment
US11501527B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
US11748907B2 (en) Object pose estimation in visual data
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
US11466988B2 (en) Method and device for extracting key frames in simultaneous localization and mapping and smart device
US11948369B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous mapping
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
US10390003B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
KR101708659B1 (en) Apparatus for recognizing location mobile robot using search based correlative matching and method thereof
US10032276B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
KR101784183B1 (en) APPARATUS FOR RECOGNIZING LOCATION MOBILE ROBOT USING KEY POINT BASED ON ADoG AND METHOD THEREOF
CN105793730B (en) The classification based on laser radar of object motion
US11783443B2 (en) Extraction of standardized images from a single view or multi-view capture
CN104811683A (en) Method and apparatus for estimating position
US10950032B2 (en) Object capture coverage evaluation
WO2021189784A1 (en) Scenario reconstruction method, system and apparatus, and sweeping robot
JP2018205950A (en) Environment map generation apparatus for estimating self vehicle position, self vehicle position estimation device, environment map generation program for estimating self vehicle position, and self vehicle position estimation program
CN113436310A (en) Scene establishing method, system and device and self-moving robot
CN113902801A (en) Mobile robot repositioning method, device, equipment and storage medium
CN112904365B (en) Map updating method and device
US10977810B2 (en) Camera motion estimation
CN111829552A (en) Error correction method and device for visual inertial system
Cho et al. Rotation estimation from cloud tracking
CN112288803A (en) Positioning method and device for computing equipment

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 210038 west side of the 5th and 6th floor, building C2, Hongfeng Science Park, Nanjing Economic and Technological Development Zone, Nanjing City, Jiangsu Province

Applicant after: Krypton (Nanjing) Technology Co.,Ltd.

Address before: 210038 floor 6, west of building C2, Hongfeng science and Technology Park, Nanjing Economic and Technological Development Zone, Nanjing, Jiangsu Province

Applicant before: Nanjing Kewo Robot Technology Co.,Ltd.

SE01 Entry into force of request for substantive examination