WO2022205750A1 - Method and apparatus for generating point cloud data, electronic device and storage medium - Google Patents

Method and apparatus for generating point cloud data, electronic device and storage medium

Info

Publication number
WO2022205750A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional feature
feature points
dimensional
target device
cloud data
Prior art date
2021-03-31
Application number
PCT/CN2021/114435
Other languages
English (en)
Chinese (zh)
Inventor
谢卫健
钱权浩
章国锋
冯友计
王楠
Original Assignee
深圳市慧鲤科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2021-08-25
Publication date
2022-10-06
Application filed by 深圳市慧鲤科技有限公司
Publication of WO2022205750A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the embodiments of the present disclosure relate to the field of positioning technologies, and to a method, an apparatus, an electronic device, and a storage medium for generating point cloud data.
  • Simultaneous localization and mapping (SLAM) refers to a mobile device moving from an unknown location in an unknown environment, localizing itself according to pose estimation and the map built during the movement process, and constructing an incremental map on the basis of its own localization, thereby enabling autonomous positioning and navigation of the mobile device.
  • However, a SLAM system running on a mobile device accumulates large errors during long-distance tracking, which reduces the accuracy and stability of the SLAM system.
  • embodiments of the present disclosure provide at least a method, apparatus, electronic device, and storage medium for generating point cloud data.
  • An embodiment of the present disclosure provides a method for generating point cloud data, including:
  • acquiring a scene image corresponding to a current scene collected by a target device, and positioning pose information of the target device; extracting at least one two-dimensional feature point from the scene image using the feature point extraction method corresponding to the simultaneous localization and mapping (SLAM) system;
  • determining, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points; and
  • generating point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
  • In this way, the feature point extraction method corresponding to the SLAM system can be used to extract at least one two-dimensional feature point from the collected scene image, and, according to the obtained positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in the three-dimensional scene map, and the three-dimensional position information of those three-dimensional feature points, can be determined.
  • Based on the matched three-dimensional feature points and their three-dimensional position information, relatively accurate point cloud data corresponding to the current scene can then be generated.
  • Because the feature point extraction method corresponding to the SLAM system is used to extract the two-dimensional feature points from the scene image, the two-dimensional feature points match the SLAM system, so the generated point cloud data of the current scene also matches the SLAM system, and the accumulated error of the SLAM system can subsequently be corrected using the point cloud data corresponding to the current scene.
  • In some embodiments, the method further includes: determining semantic information of the three-dimensional feature points and/or a position confidence corresponding to the three-dimensional position information of the three-dimensional feature points.
  • The generating of point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points then includes:
  • generating the point cloud data corresponding to the current scene based on the three-dimensional feature points, the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or position confidence.
  • In this way, the semantic information of the three-dimensional feature points and/or the position confidence corresponding to their three-dimensional position information can be determined, and the point cloud data can be generated from the three-dimensional feature points, their three-dimensional position information, and the determined semantic information and/or position confidence.
  • As a result, the generated point cloud data contains rich information about the three-dimensional feature points.
  • In some embodiments, the method further includes: determining credible three-dimensional feature points among the at least one three-dimensional feature point based on the semantic information and/or the position confidence of the three-dimensional feature points.
  • The generating of point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points then includes:
  • generating the point cloud data corresponding to the current scene based on the credible three-dimensional feature points and the three-dimensional position information corresponding to the credible three-dimensional feature points.
  • In this way, credible three-dimensional feature points can be determined from the semantic information and/or the position confidence of the three-dimensional feature points, and unreliable three-dimensional feature points among the at least one three-dimensional feature point can be screened out.
  • Generating the point cloud data from the credible three-dimensional feature points and their three-dimensional position information yields more accurate point cloud data corresponding to the current scene and alleviates the adverse influence of unreliable three-dimensional feature points on the point cloud data.
  • In some embodiments, the method further includes: adjusting the current positioning result of the SLAM system based on the generated point cloud data corresponding to the current scene, to obtain an adjusted current positioning result.
  • At least one two-dimensional feature point is extracted from the scene image using the feature point extraction method corresponding to the SLAM system, so the obtained two-dimensional feature points are of the same type as the feature points extracted by the SLAM system. For example, if the feature point extraction method corresponding to the SLAM system is the FAST feature point extraction algorithm, the FAST algorithm is used to extract at least one two-dimensional feature point from the scene image; the obtained two-dimensional feature points are then FAST corner points, and the feature points in the SLAM system are also FAST corner points.
  • Because the extracted two-dimensional feature points are of the same type as the feature points extracted in the SLAM system, the generated point cloud data corresponding to the current scene can be used to adjust the current positioning result of the SLAM system more accurately. Compared with using acquired pose data of the target device to eliminate the cumulative error of the SLAM system, this also improves the stability of the SLAM system's positioning results.
  • In some embodiments, acquiring the positioning pose information of the target device includes: determining the positioning pose information of the target device based on the scene image; and/or acquiring detection data of a positioning sensor included on the target device, and determining the positioning pose information of the target device based on the detection data.
  • Providing multiple ways to obtain the positioning pose information of the target device improves the flexibility of determining the positioning pose information.
  • In some embodiments, the obtaining of the scene image corresponding to the current scene collected by the target device and the positioning pose information of the target device includes: acquiring the scene image and the positioning pose information when it is detected that the target device satisfies a set movement condition.
  • That the target device satisfies the set movement condition includes: the movement distance of the target device reaching a set distance threshold; or the movement time of the target device reaching a set time threshold.
  • an embodiment of the present disclosure provides an apparatus for generating point cloud data, including:
  • an acquisition part configured to acquire the scene image corresponding to the current scene collected by the target device, and the positioning pose information of the target device;
  • an extraction part configured to extract at least one two-dimensional feature point from the scene image using the feature point extraction method corresponding to the simultaneous localization and mapping (SLAM) system;
  • a first determining part configured to determine, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points; and
  • a generating part configured to generate point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
  • In some embodiments, the apparatus further includes a second determining part configured to determine semantic information of the three-dimensional feature points and/or a position confidence corresponding to the three-dimensional position information of the three-dimensional feature points.
  • The generating part is then further configured to generate the point cloud data corresponding to the current scene based on the three-dimensional feature points, the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or position confidence.
  • In some embodiments, the apparatus further includes a third determining part configured to determine credible three-dimensional feature points among the at least one three-dimensional feature point based on the semantic information and/or the position confidence of the three-dimensional feature points.
  • The generating part is then further configured to generate the point cloud data corresponding to the current scene based on the credible three-dimensional feature points and the three-dimensional position information corresponding to the credible three-dimensional feature points.
  • In some embodiments, the apparatus further includes an adjustment part configured to adjust the current positioning result of the SLAM system based on the generated point cloud data corresponding to the current scene, to obtain an adjusted current positioning result.
  • In some embodiments, the obtaining part is further configured to determine the positioning pose information of the target device based on the scene image, or to acquire detection data of a positioning sensor included on the target device and determine the positioning pose information of the target device based on the detection data.
  • In some embodiments, the obtaining part is further configured to acquire the scene image and the positioning pose information when it is detected that the target device satisfies a set movement condition, where satisfying the set movement condition includes: the movement distance of the target device reaching a set distance threshold; or the movement time of the target device reaching a set time threshold.
  • An embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the method for generating point cloud data according to the first aspect or any one of its embodiments.
  • An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the computer program, when run by a processor, performs the steps of the method for generating point cloud data according to the first aspect or any one of the above embodiments.
  • An embodiment of the present disclosure provides a computer program, including computer-readable code, where, when the computer-readable code is executed in an electronic device, a processor in the electronic device performs the steps of the method for generating point cloud data according to the first aspect or any one of the above embodiments.
  • FIG. 1 shows a schematic flowchart of a method for generating point cloud data provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic flowchart of a method for determining credible three-dimensional feature points in a method for generating point cloud data provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic flowchart of another method for generating point cloud data provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of the architecture of an apparatus for generating point cloud data provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Real-time localization and map construction means that a mobile device starts to move from an unknown position in an unknown environment, localizes itself according to position estimation and the map during the movement process, and builds an incremental map on the basis of its own localization, so as to realize autonomous positioning and navigation of mobile devices.
  • However, a SLAM system running on a mobile device accumulates large errors during long-distance tracking, which reduces the accuracy and stability of the SLAM system.
  • Offline maps generated by lidar or three-dimensional reconstruction have high accuracy and global consistency.
  • Therefore, high-precision offline map point information can be integrated into the tracking process of the SLAM system to effectively reduce the error of the SLAM system.
  • For example, a local image can be uploaded to the cloud for visual positioning, inliers can be screened out according to the positioning results of the current image and the offline map, and the inliers can be returned to the SLAM system.
  • However, this method usually yields only a limited number of inliers after screening, which are difficult to apply continuously to the SLAM system.
  • the embodiments of the present disclosure provide a method, apparatus, electronic device, and storage medium for generating point cloud data.
  • The method for generating point cloud data provided by the embodiments of the present disclosure can be applied to a mobile computer device with a certain computing capability; for example, the mobile computer device can be a mobile phone, a computer, a tablet, an augmented reality (AR) device, a robot, and the like.
  • the method for generating point cloud data may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • FIG. 1 is a schematic flowchart of a method for generating point cloud data provided by an embodiment of the present disclosure
  • the method includes S101-S104, wherein:
  • S101: Acquire a scene image corresponding to the current scene collected by a target device, and positioning pose information of the target device;
  • S102: Extract at least one two-dimensional feature point from the scene image using the feature point extraction method corresponding to the simultaneous localization and mapping (SLAM) system;
  • S103: Determine, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points;
  • S104: Generate point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
  • In this way, the feature point extraction method corresponding to the SLAM system can be used to extract at least one two-dimensional feature point from the collected scene image, and, according to the obtained positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in the three-dimensional scene map, and the three-dimensional position information of those three-dimensional feature points, can be determined.
  • Based on the matched three-dimensional feature points and their three-dimensional position information, relatively accurate point cloud data corresponding to the current scene can then be generated.
  • Because the feature point extraction method corresponding to the SLAM system is used to extract the two-dimensional feature points from the scene image, the two-dimensional feature points match the SLAM system, so the generated point cloud data of the current scene also matches the SLAM system, and the accumulated error of the SLAM system can subsequently be corrected using the point cloud data corresponding to the current scene.
  • the target device may be any movable device including an image acquisition device, for example, the target device may be a robot, an AR device, a mobile phone, a computer, and the like.
  • the scene image of the current scene collected by the image collection device set on the target device may be acquired; the image collection device may be a camera or the like.
  • a scene image corresponding to the current scene collected by the target device can be obtained, and the positioning pose information of the target device in the process of collecting the scene image can be obtained.
  • the positioning pose information may include position information and orientation information, for example, the position information may be three-dimensional position information; the orientation information may be represented by Euler angles.
  • obtaining the positioning pose information of the target device in the above S101 may be implemented by at least one of the following implementation manners:
  • Manner 1: Determine the positioning pose information of the target device based on the scene image.
  • Manner 2: Acquire detection data of a positioning sensor included on the target device, and determine the positioning pose information of the target device based on the detection data.
  • For Manner 1, a visual positioning algorithm may be used to determine the positioning pose information of the target device based on the scene image corresponding to the current scene. For example, feature point extraction can be performed on the scene image to obtain information on multiple feature points included in the scene image, and the positioning pose information of the target device can be determined using that feature point information together with a constructed offline map; a sketch of one such approach follows.
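  • For illustration only (the disclosure does not prescribe a specific algorithm), visual positioning from 2D feature points matched against an offline map is commonly posed as a Perspective-n-Point (PnP) problem. The following sketch assumes OpenCV, known camera intrinsics, and already-established 2D-3D correspondences:

```python
# Hypothetical sketch: estimate the target device's pose from 2D feature
# points matched to 3D points of an offline map (PnP with RANSAC).
# The inputs (intrinsics, correspondences) are illustrative assumptions.
import numpy as np
import cv2

def estimate_pose(pts_2d, pts_3d, camera_matrix):
    """Return the camera pose (R, t) expressed in the map frame."""
    ok, rvec, tvec, _inliers = cv2.solvePnPRansac(
        np.asarray(pts_3d, dtype=np.float64),   # (N, 3) offline-map points
        np.asarray(pts_2d, dtype=np.float64),   # (N, 2) pixel positions
        camera_matrix,
        np.zeros(4))                            # assume no lens distortion
    if not ok:
        raise RuntimeError("PnP failed: too few consistent correspondences")
    R, _ = cv2.Rodrigues(rvec)                  # world-to-camera rotation
    # Invert the world-to-camera transform to get the device pose in the map.
    return R.T, (-R.T @ tvec).ravel()
```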
  • The positioning sensor may include a radar device, an inertial measurement unit (IMU), a gyroscope, and other sensors capable of measuring the position and attitude of the device.
  • For example, the radar device can collect point cloud data of the current scene, and the collected point cloud data can then be matched against a high-precision map to determine the positioning pose information of the target device.
  • The method for determining the positioning pose information of the target device may also include other positioning methods, which are described here only by way of example.
  • For example, the positioning pose information of the target device can also be determined by positioning methods such as the Global Positioning System (GPS), WiFi positioning, and real-time kinematic (RTK) positioning.
  • setting up multiple ways to obtain the positioning pose information of the target device can improve the flexibility of determining the positioning pose information.
  • In some embodiments, acquiring the scene image corresponding to the current scene collected by the target device and the positioning pose information of the target device may include: acquiring the scene image and the positioning pose information when it is detected that the target device satisfies a set movement condition.
  • That the target device satisfies the set movement condition includes: the movement distance of the target device reaching a set distance threshold; or the movement time of the target device reaching a set time threshold.
  • The scene image corresponding to the current scene collected by the target device can be obtained when the movement distance of the target device reaches the set distance threshold, or when the movement time of the target device reaches the set time threshold.
  • the distance threshold and time threshold can be set as required, for example, the distance threshold can be 20 meters, 30 meters, 50 meters, etc.; the time threshold can be 30 seconds, 1 minute, and so on.
  • the scene image of the current scene collected by the target device and the positioning pose information of the target device may be acquired once every time the target device moves 20 meters (distance threshold).
  • Alternatively, every time the target device has been moving for 20 seconds (time threshold), a scene image of the current scene collected by the target device and the positioning pose information of the target device are acquired once.
  • the movement distance of the target device may be determined by using a displacement sensor provided on the target device for measuring the movement distance.
  • Alternatively, a set positioning algorithm can be used to detect the movement distance of the target device in real time.
  • the moving time of the target device can be determined using the clock set on the target device.
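  • A minimal sketch of such a capture trigger, assuming hypothetical odometry and clock inputs (the names and thresholds are illustrative, not taken from the disclosure):

```python
# Hypothetical sketch of the movement-condition trigger described above:
# capture a scene image and pose once the device has moved far enough,
# or once enough time has passed. Thresholds are illustrative.
import time

DISTANCE_THRESHOLD_M = 20.0   # e.g. 20 meters
TIME_THRESHOLD_S = 20.0       # e.g. 20 seconds

class CaptureTrigger:
    def __init__(self):
        self.last_distance = 0.0
        self.last_time = time.monotonic()

    def should_capture(self, odometry_distance_m: float) -> bool:
        """Fire when the device has moved far enough OR waited long enough."""
        now = time.monotonic()
        moved = odometry_distance_m - self.last_distance >= DISTANCE_THRESHOLD_M
        waited = now - self.last_time >= TIME_THRESHOLD_S
        if moved or waited:
            self.last_distance = odometry_distance_m
            self.last_time = now
            return True
        return False
```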
  • At least one two-dimensional feature point can be extracted from the acquired scene image of the current scene by using the feature point extraction method corresponding to the SLAM system.
  • the two-dimensional feature points may be feature points on the target object included in the scene image.
  • the feature point extraction method may be a feature point extraction algorithm deployed in the SLAM system.
  • The feature point extraction algorithm may include, but is not limited to, the Scale-Invariant Feature Transform (SIFT) algorithm, the SURF algorithm (an accelerated variant of SIFT), the FAST feature point extraction algorithm, and so on.
  • the FAST feature point extraction algorithm may be used to extract at least one two-dimensional feature point from the scene image.
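  • As a sketch of this step, assuming OpenCV's FAST detector stands in for the SLAM system's own extractor:

```python
# Hypothetical sketch: extract 2D feature points with the FAST detector,
# i.e. the same feature type the SLAM system is assumed to use.
import cv2

def extract_fast_keypoints(scene_image_path: str, threshold: int = 25):
    """Return a list of (x, y) pixel positions of FAST corner points."""
    image = cv2.imread(scene_image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(scene_image_path)
    fast = cv2.FastFeatureDetector_create(
        threshold=threshold, nonmaxSuppression=True)
    keypoints = fast.detect(image, None)
    return [kp.pt for kp in keypoints]
```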
  • The step of extracting at least one two-dimensional feature point from the scene image using the feature point extraction algorithm corresponding to the SLAM system can be executed on the mobile device, or it can be executed on the server.
  • When executed on the mobile device, at least one two-dimensional feature point can be extracted from the scene image using the feature point extraction algorithm corresponding to the SLAM system set on the movable device.
  • When executed on the server, the collected scene image can be sent to the server, so that at least one two-dimensional feature point is extracted from the scene image using the feature point extraction algorithm corresponding to the SLAM system set on the server.
  • The three-dimensional feature points matching the two-dimensional feature points, and the three-dimensional position information corresponding to those three-dimensional feature points, can be determined in the pre-built three-dimensional scene map according to the positioning pose information and the position information of the two-dimensional feature points in the scene image.
  • For example, a ray casting algorithm can be used to determine the three-dimensional feature points that match the two-dimensional feature points, and the three-dimensional position information corresponding to those three-dimensional feature points, according to the positioning pose information, the position information of the two-dimensional feature points in the scene image, and the pre-built three-dimensional scene map; a simplified sketch follows.
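  • One plausible simplified reading of this step, sketched under assumptions (a pinhole camera model, the map stored as an N x 3 point array, and nearest-point-to-ray matching standing in for full ray casting):

```python
# Hypothetical sketch: match a 2D feature point to a 3D map point by casting
# a ray from the camera center through the pixel and taking the map point
# closest to that ray. The model and tolerance are illustrative assumptions.
import numpy as np

def match_feature_to_map(pt_2d, K, R_wc, t_wc, map_points, max_dist=0.05):
    """pt_2d: (u, v) pixel; K: 3x3 intrinsics; R_wc, t_wc: camera pose in the
    map frame; map_points: (N, 3) array. Returns a 3D map point or None."""
    u, v = pt_2d
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # pixel -> camera ray
    d = R_wc @ ray_cam
    d /= np.linalg.norm(d)                               # ray direction (map frame)
    o = np.asarray(t_wc, dtype=np.float64)               # ray origin = camera center
    pts = np.asarray(map_points, dtype=np.float64)
    vecs = pts - o
    s = vecs @ d                                         # projection onto the ray
    perp = vecs - np.outer(s, d)
    dist = np.linalg.norm(perp, axis=1)                  # point-to-ray distance
    dist[s < 0] = np.inf                                 # ignore points behind camera
    best = int(np.argmin(dist))
    return pts[best] if dist[best] <= max_dist else None
```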
  • the point cloud data corresponding to the current scene can be generated by using the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
  • the method further includes: determining semantic information of the three-dimensional feature point, and/or position confidence corresponding to the three-dimensional position information of the three-dimensional feature point.
  • the generating the point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points includes: based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or the position confidence, to generate point cloud data corresponding to the current scene.
  • In this way, the semantic information of the three-dimensional feature points and/or the position confidence corresponding to their three-dimensional position information can be determined, and the point cloud data corresponding to the current scene can be generated based on the three-dimensional feature points, their three-dimensional position information, and the determined semantic information and/or position confidence.
  • As a result, the generated point cloud data contains rich information about the three-dimensional feature points.
  • the information corresponding to each three-dimensional feature point may include semantic information and/or position confidence information.
  • the semantic information corresponding to the three-dimensional feature point and/or the position confidence corresponding to the three-dimensional position information of the three-dimensional feature point are obtained from the three-dimensional scene map.
  • the position confidence can be used to represent the reliability of the three-dimensional position information.
  • The semantic information and the position confidence of the three-dimensional feature points may be determined in the pre-built three-dimensional scene map; that is, in the process of determining the semantic information or the position confidence, the semantic information and position confidence of the matched three-dimensional feature points are looked up in the pre-built three-dimensional scene map.
  • The three-dimensional scene map can be constructed according to the following steps: obtain a video corresponding to the scene and sample multiple frames of scene samples from the video, or obtain collected multi-frame scene samples corresponding to the scene; extract information on a plurality of three-dimensional feature points from the scene samples; and construct the three-dimensional scene map based on the extracted three-dimensional feature point information.
  • the trained semantic segmentation neural network can be used to detect the constructed 3D scene map, and determine the semantic information of each 3D feature point in the 3D scene map.
  • the semantic information of the three-dimensional feature point may be used to represent the type of the target object corresponding to the three-dimensional feature point.
  • the semantic information of the three-dimensional feature point may include walls, tables, cups, leaves, animals, and the like.
  • the semantic information of the three-dimensional feature points can be set as required.
  • the trained neural network can be used to detect the constructed 3D scene map to determine the position confidence of each 3D feature point in the 3D scene map.
  • The position confidence of each three-dimensional feature point in the three-dimensional scene map can also be determined according to the semantic information of the feature point. For example, if the semantic information of a three-dimensional feature point is a table, since a table is an object that does not move easily, the position confidence of that feature point can be set high; if the semantic information is a leaf, since a leaf moves easily, the position confidence can be set low. A sketch of such an assignment follows.
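  • A minimal sketch of such a semantics-driven confidence assignment; the class list and values are purely illustrative assumptions:

```python
# Hypothetical sketch: assign a position confidence to a 3D feature point
# from its semantic label. Classes and values are illustrative assumptions.
SEMANTIC_CONFIDENCE = {
    "wall": 0.95,    # static structure: high confidence
    "table": 0.85,   # rarely moved
    "cup": 0.30,     # easily moved
    "leaf": 0.10,    # moves constantly
    "animal": 0.05,  # dynamic object
}
DEFAULT_CONFIDENCE = 0.50    # unknown classes get a neutral value

def position_confidence(semantic_label: str) -> float:
    return SEMANTIC_CONFIDENCE.get(semantic_label, DEFAULT_CONFIDENCE)
```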
  • Point cloud data corresponding to the current scene may be generated based on the three-dimensional feature points, the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or position confidence. For example, when the three-dimensional feature points include semantic information, the generated point cloud data corresponding to the current scene includes the semantic information of each point cloud point.
  • In some embodiments, referring to FIG. 2, the method further includes:
  • S201: Determine the semantic information of the three-dimensional feature points and/or the position confidence corresponding to the three-dimensional position information of the three-dimensional feature points;
  • S202: Determine credible three-dimensional feature points among the at least one three-dimensional feature point based on the semantic information and/or the position confidence of the three-dimensional feature points.
  • The generating of point cloud data corresponding to the current scene based on the three-dimensional feature points and their three-dimensional position information then includes: generating the point cloud data corresponding to the current scene based on the credible three-dimensional feature points and the three-dimensional position information corresponding to the credible three-dimensional feature points.
  • In this way, credible three-dimensional feature points can be determined from the semantic information and/or the position confidence of the three-dimensional feature points, and unreliable three-dimensional feature points among the at least one three-dimensional feature point can be screened out.
  • Generating the point cloud data from the credible three-dimensional feature points and their three-dimensional position information yields more accurate point cloud data corresponding to the current scene and alleviates the adverse influence of unreliable three-dimensional feature points on the point cloud data.
  • the trusted three-dimensional feature points may be determined based on semantic information and/or position confidence of the three-dimensional feature points.
  • When determining credible three-dimensional feature points among the at least one three-dimensional feature point based on semantic information, it can be determined from the semantic information whether the object corresponding to a three-dimensional feature point belongs to a movable category; if so, the three-dimensional feature point is determined not to be a credible three-dimensional feature point; if not, it is determined to be a credible three-dimensional feature point.
  • For example, a mapping relationship table of movable categories and immovable categories can be preset; then, according to the semantic information of a three-dimensional feature point and the set mapping table, it can be determined whether the object corresponding to the feature point belongs to the movable category or the immovable category.
  • When determining credible three-dimensional feature points based on position confidence, a confidence threshold may be set: a three-dimensional feature point whose position confidence is greater than or equal to the threshold is determined to be a credible three-dimensional feature point, and one whose position confidence is less than the threshold is determined to be an unreliable three-dimensional feature point.
  • Alternatively, candidate credible three-dimensional feature points among the at least one three-dimensional feature point can first be determined based on semantic information, and the credible three-dimensional feature points among the candidates can then be determined based on position confidence.
  • Or, candidate credible three-dimensional feature points can first be determined based on position confidence, and the credible three-dimensional feature points among the candidates can then be determined based on semantic information.
  • point cloud data corresponding to the current scene may be generated based on the trusted three-dimensional feature points and the three-dimensional position information corresponding to the trusted three-dimensional feature points.
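  • Combining the screening criteria above, the filtering might be sketched as follows (the movable-category table, the 0.5 threshold, and all names are illustrative assumptions):

```python
# Hypothetical sketch: screen out unreliable 3D feature points using a preset
# movable-category table and a confidence threshold, then keep only credible
# points (with their semantics) as the point cloud of the current scene.
from dataclasses import dataclass

MOVABLE_CATEGORIES = {"cup", "leaf", "animal"}   # illustrative mapping table
CONFIDENCE_THRESHOLD = 0.5                       # illustrative threshold

@dataclass
class Feature3D:
    position: tuple            # (x, y, z) in the map frame
    semantic_label: str
    position_confidence: float

def credible_points(features):
    """Keep points that are immovable AND confidently localized."""
    return [
        f for f in features
        if f.semantic_label not in MOVABLE_CATEGORIES
        and f.position_confidence >= CONFIDENCE_THRESHOLD
    ]

def build_point_cloud(features):
    """Point cloud for the current scene from credible feature points only."""
    return [(f.position, f.semantic_label) for f in credible_points(features)]
```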
  • In some embodiments, referring to FIG. 3, the method further includes S301: adjusting the current positioning result of the SLAM system based on the generated point cloud data corresponding to the current scene, to obtain an adjusted current positioning result.
  • In this way, the point cloud data corresponding to the current scene can be used to correct the accumulated errors of the SLAM system, so that the adjusted current positioning result has higher accuracy; one possible realization is sketched below.
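  • One way such a correction could be realized, sketched under assumptions not stated in the disclosure: estimate a rigid corrective transform between the SLAM system's current estimates of the matched feature points and their map-consistent positions from the generated point cloud (Kabsch alignment), then apply it to the current pose:

```python
# Hypothetical sketch: compute a rigid transform aligning SLAM-frame points
# onto their map-consistent positions (Kabsch algorithm), and use it to
# correct the SLAM pose. Point correspondences are assumed given.
import numpy as np

def corrective_transform(slam_pts, map_pts):
    """Rigid (R, t) mapping slam_pts onto map_pts; both are (N, 3), N >= 3."""
    P = np.asarray(slam_pts, dtype=np.float64)
    Q = np.asarray(map_pts, dtype=np.float64)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, sign])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def adjust_pose(R_slam, t_slam, R_corr, t_corr):
    """Apply the corrective transform to the current SLAM pose estimate."""
    return R_corr @ R_slam, R_corr @ t_slam + t_corr
```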
  • At least one two-dimensional feature point is extracted from the scene image using the feature point extraction method corresponding to the SLAM system, so the obtained two-dimensional feature points are of the same type as the feature points extracted by the SLAM system. For example, if the feature point extraction method corresponding to the SLAM system is the FAST feature point extraction algorithm, the FAST algorithm is used to extract at least one two-dimensional feature point from the scene image; the obtained two-dimensional feature points are then FAST corner points, and the feature points in the SLAM system are also FAST corner points.
  • Because the extracted two-dimensional feature points are of the same type as the feature points extracted in the SLAM system, the generated point cloud data corresponding to the current scene can be used to adjust the current positioning result of the SLAM system more accurately.
  • At the same time, compared with using acquired pose data of the target device to eliminate the cumulative error of the SLAM system, this improves the stability of the SLAM system's positioning results.
  • The writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
  • an embodiment of the present disclosure also provides an apparatus for generating point cloud data.
  • FIG. 4 shows a schematic diagram of the architecture of the apparatus for generating point cloud data provided by an embodiment of the present disclosure, which includes an acquisition part 401, an extraction part 402, a first determining part 403, and a generating part 404, wherein:
  • the acquisition part 401 is configured to acquire the scene image corresponding to the current scene collected by the target device, and the positioning pose information of the target device;
  • the extraction part 402 is configured to extract at least one two-dimensional feature point from the scene image using the feature point extraction method corresponding to the simultaneous localization and mapping (SLAM) system;
  • the first determining part 403 is configured to determine, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in the pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points;
  • the generating part 404 is configured to generate point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
  • In some embodiments, the apparatus further includes a second determining part 405 configured to determine semantic information of the three-dimensional feature points and/or a position confidence corresponding to the three-dimensional position information of the three-dimensional feature points.
  • The generating part 404 is then further configured to generate the point cloud data corresponding to the current scene based on the three-dimensional feature points, the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or position confidence.
  • In some embodiments, the apparatus further includes a third determining part 406 configured to determine credible three-dimensional feature points among the at least one three-dimensional feature point based on the semantic information and/or the position confidence of the three-dimensional feature points.
  • The generating part 404 is then further configured to generate the point cloud data corresponding to the current scene based on the credible three-dimensional feature points and the three-dimensional position information corresponding to the credible three-dimensional feature points.
  • In some embodiments, the apparatus further includes an adjustment part 407 configured to adjust the current positioning result of the SLAM system based on the generated point cloud data corresponding to the current scene, to obtain an adjusted current positioning result.
  • In some embodiments, the obtaining part 401 is further configured to determine the positioning pose information of the target device based on the scene image, or to acquire detection data of a positioning sensor included on the target device and determine the positioning pose information of the target device based on the detection data.
  • In some embodiments, the obtaining part 401 is further configured to acquire the scene image and the positioning pose information when it is detected that the target device satisfies a set movement condition, where satisfying the set movement condition includes: the movement distance of the target device reaching a set distance threshold; or the movement time of the target device reaching a set time threshold.
  • The functions or parts included in the apparatus provided by the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments; for implementation details, reference may be made to the above method embodiments.
  • an embodiment of the present disclosure also provides an electronic device.
  • a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure includes a processor 401 , a memory 402 , and a bus 403 .
  • The memory 402 is configured to store execution instructions and includes an internal memory 4021 and an external memory 4022; the internal memory 4021, also called main memory, is configured to temporarily store operation data in the processor 401 and data exchanged with the external memory 4022, such as a hard disk.
  • the processor 401 exchanges data with the external memory 4022 through the memory 4021.
  • the processor 401 and the memory 402 communicate through the bus 403, so that the processor 401 executes the following instructions:
  • acquire a scene image corresponding to the current scene collected by a target device, and positioning pose information of the target device; extract at least one two-dimensional feature point from the scene image using the feature point extraction method corresponding to the SLAM system; determine, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in the pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points; and
  • generate point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
  • Embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the method for generating point cloud data described in the foregoing method embodiments are executed.
  • Embodiments of the present disclosure further provide a computer program product that carries program code, where the instructions included in the program code can be used to execute the steps of the point cloud data generation method described in the foregoing method embodiments.
  • the above-mentioned computer program product can be realized by means of hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • The technical solutions of the embodiments of the present disclosure, or the part thereof that contributes to the prior art, or parts of the technical solutions, may in essence be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The computer-readable storage medium may be a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, a CD-ROM, or other memories; it may also be any device including one or any combination of the above memories.
  • the present disclosure provides a point cloud data generation method, device, electronic device and storage medium.
  • The method includes: acquiring a scene image corresponding to a current scene collected by a target device, and positioning pose information of the target device; extracting at least one two-dimensional feature point from the scene image using the feature point extraction method corresponding to the simultaneous localization and mapping (SLAM) system; determining, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points; and generating point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
  • In this way, the embodiments of the present disclosure can more accurately determine the three-dimensional feature points matching the two-dimensional feature points and their three-dimensional position information, and can further generate more accurate corresponding point cloud data. Because the feature point extraction method corresponding to the SLAM system is used to extract the two-dimensional feature points from the scene image, the two-dimensional feature points match the SLAM system, so the generated point cloud data of the current scene also matches the SLAM system, and the accumulated error of the SLAM system can subsequently be corrected using the point cloud data corresponding to the current scene.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a method and apparatus for generating point cloud data, an electronic device, and a storage medium. The method comprises: acquiring a scene image corresponding to a current scene collected by a target device, and positioning pose information of the target device; extracting at least one two-dimensional feature point from the scene image using a feature point extraction means corresponding to a simultaneous localization and mapping (SLAM) system; determining, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, three-dimensional feature points in a pre-built three-dimensional scene map that match the two-dimensional feature points, and three-dimensional position information of the three-dimensional feature points; and generating point cloud data corresponding to the current scene on the basis of the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
PCT/CN2021/114435 2021-03-31 2021-08-25 Method and apparatus for generating point cloud data, electronic device and storage medium WO2022205750A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110348215.2A CN112907671B (zh) 2021-03-31 2021-03-31 点云数据生成方法、装置、电子设备及存储介质
CN202110348215.2 2021-03-31

Publications (1)

Publication Number Publication Date
WO2022205750A1 true WO2022205750A1 (fr) 2022-10-06

Family

ID=76109691

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114435 WO2022205750A1 (fr) 2021-08-25 Method and apparatus for generating point cloud data, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN112907671B (fr)
WO (1) WO2022205750A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907671B (zh) * 2021-03-31 2022-08-02 深圳市慧鲤科技有限公司 点云数据生成方法、装置、电子设备及存储介质
CN113741698B (zh) * 2021-09-09 2023-12-15 亮风台(上海)信息科技有限公司 一种确定和呈现目标标记信息的方法与设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170076491A1 (en) * 2015-09-14 2017-03-16 Fujitsu Limited Operation support method, operation support program, and operation support system
CN108734654A (zh) * 2018-05-28 2018-11-02 深圳市易成自动驾驶技术有限公司 绘图与定位方法、系统及计算机可读存储介质
CN110288710A (zh) * 2019-06-26 2019-09-27 Oppo广东移动通信有限公司 一种三维地图的处理方法、处理装置及终端设备
CN110487274A (zh) * 2019-07-30 2019-11-22 中国科学院空间应用工程与技术中心 用于弱纹理场景的slam方法、系统、导航车及存储介质
CN112269851A (zh) * 2020-11-16 2021-01-26 Oppo广东移动通信有限公司 地图数据更新方法、装置、存储介质与电子设备
CN112907671A (zh) * 2021-03-31 2021-06-04 深圳市慧鲤科技有限公司 点云数据生成方法、装置、电子设备及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6580821B1 (en) * 2000-03-30 2003-06-17 Nec Corporation Method for computing the location and orientation of an object in three dimensional space
US10839556B2 (en) * 2018-10-23 2020-11-17 Microsoft Technology Licensing, Llc Camera pose estimation using obfuscated features
CN111260538B (zh) * 2018-12-03 2023-10-03 北京魔门塔科技有限公司 基于长基线双目鱼眼相机的定位及车载终端
CN109887032B (zh) * 2019-02-22 2021-04-13 广州小鹏汽车科技有限公司 一种基于单目视觉slam的车辆定位方法及系统
CN110084272B (zh) * 2019-03-26 2021-01-08 哈尔滨工业大学(深圳) 一种聚类地图创建方法及基于聚类地图和位置描述子匹配的重定位方法
CN111586360B (zh) * 2020-05-14 2021-09-10 佳都科技集团股份有限公司 一种无人机投影方法、装置、设备及存储介质
CN111862180B (zh) * 2020-07-24 2023-11-17 盛景智能科技(嘉兴)有限公司 一种相机组位姿获取方法、装置、存储介质及电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170076491A1 (en) * 2015-09-14 2017-03-16 Fujitsu Limited Operation support method, operation support program, and operation support system
CN108734654A (zh) * 2018-05-28 2018-11-02 深圳市易成自动驾驶技术有限公司 绘图与定位方法、系统及计算机可读存储介质
CN110288710A (zh) * 2019-06-26 2019-09-27 Oppo广东移动通信有限公司 一种三维地图的处理方法、处理装置及终端设备
CN110487274A (zh) * 2019-07-30 2019-11-22 中国科学院空间应用工程与技术中心 用于弱纹理场景的slam方法、系统、导航车及存储介质
CN112269851A (zh) * 2020-11-16 2021-01-26 Oppo广东移动通信有限公司 地图数据更新方法、装置、存储介质与电子设备
CN112907671A (zh) * 2021-03-31 2021-06-04 深圳市慧鲤科技有限公司 点云数据生成方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN112907671B (zh) 2022-08-02
CN112907671A (zh) 2021-06-04

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21934393

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.01.2024)

122 Ep: pct application non-entry in european phase

Ref document number: 21934393

Country of ref document: EP

Kind code of ref document: A1