CN111709988A - Method and device for determining characteristic information of object, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111709988A
Authority
CN
China
Prior art keywords
coordinate system
determining
point cloud
coordinates
target object
Prior art date
Legal status
Granted
Application number
CN202010348299.5A
Other languages
Chinese (zh)
Other versions
CN111709988B (en)
Inventor
金伟
王健威
沈孝通
秦宝星
程昊天
Current Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd filed Critical Shanghai Gaussian Automation Technology Development Co Ltd
Priority to CN202010348299.5A priority Critical patent/CN111709988B/en
Publication of CN111709988A publication Critical patent/CN111709988A/en
Application granted granted Critical
Publication of CN111709988B publication Critical patent/CN111709988B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G01C 11/04: Photogrammetry or videogrammetry; interpretation of pictures
    • G01C 21/005: Navigation; correlation of navigation data from several sources, e.g. map or contour matching
    • G01S 7/4802: Details of lidar (G01S 17/00) systems; analysis of the echo signal for target characterisation; target signature; target cross-section
    • G01S 7/4808: Details of lidar (G01S 17/00) systems; evaluating distance, position or velocity data
    • G06F 18/23: Pattern recognition; analysing; clustering techniques
    • G06T 7/30: Image analysis; determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/10004: Image acquisition modality; still image; photographic image
    • G06T 2207/10028: Image acquisition modality; range image; depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application discloses a method and a device for determining characteristic information of an object, an electronic device and a storage medium. Visual data and a laser data set are acquired based on the same scene; category information of a target object in the visual data and position information of the target object in the visual data are determined; the laser data set is clustered to obtain a plurality of point clouds; a target point cloud matching the target object is determined from the plurality of point clouds based on the coordinates of each point cloud projected into the first coordinate system and the position information of the target object in the visual data; the category information of the target point cloud is then determined from the category information of the target object, and the spatial position of the target point cloud is determined. A matching relationship between the visual data and the laser data can thus be established for the same object, and, by reasonably allocating and using the information from the various sensors, complementary and redundant information is combined in space and time according to optimization criteria, providing more information for subsequent multi-sensor positioning applications.

Description

Method and device for determining characteristic information of object, electronic equipment and storage medium
Technical Field
The present application relates to the field of robots, and in particular, to a method and an apparatus for determining characteristic information of an object, an electronic device, and a storage medium.
Background
An intelligent mobile robot is a highly intelligent device that integrates multiple functions such as environment perception, dynamic decision-making and planning, and behavior control and execution; the rapidity and accuracy of its environment perception are inseparable from multi-sensor information fusion technology. In multi-sensor information fusion, a computer makes full use of the sensor resources and, by reasonably allocating and using the various measurements, combines complementary and redundant information in space and time according to certain optimization criteria to generate a consistent interpretation or description of the observed environment and a new fusion result. In the environment perception module, vision sensors and lidar are two commonly used sensors. In recent years, visual image analysis methods typified by deep learning have developed greatly and can accurately detect and classify pedestrians, vehicles, various obstacles, and the like. For a robot, a set of matched points between the camera pixel coordinate system and the lidar coordinate system needs to be acquired for such calculations.
However, in the prior art, objects are classified only directly from images of the objects photographed by the robot, or are detected only by the lidar; no existing technique establishes a matching connection between the image and the real object.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, an electronic device and a storage medium for determining characteristic information of an object. A matching relationship between the acquired visual data and the laser data is established, and, by reasonably allocating and using the information from the various sensors, complementary and redundant information is combined in space and time according to certain optimization criteria, so that a consistent interpretation or description of the observed environment and object is generated, a new fusion result is produced, and more information is provided for subsequent multi-sensor positioning applications.
In one aspect, an embodiment of the present application provides a method for determining characteristic information of an object, where the method includes:
acquiring visual data and a laser data set based on the same scene;
determining category information of a target object in the visual data and position information of the target object in the visual data;
clustering the laser data set to obtain a plurality of point clouds;
determining a target point cloud matched with the target object from the plurality of point clouds based on the coordinates of each point cloud in the plurality of point clouds projected under the first coordinate system and the position information of the target object in the visual data;
determining the category information of the target point cloud according to the category information of the target object; and determining a spatial location of the target point cloud.
Optionally, the determining, based on the coordinates of each point cloud projected in the first coordinate system in the plurality of point clouds and the location information of the target object in the visual data, a target point cloud matching the target object from the plurality of point clouds includes: determining coordinates of each point cloud projection in the plurality of point clouds under a first coordinate system; determining the Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected in the first coordinate system and the position information of the target object in the visual data; determining a target Euclidean distance which is less than or equal to a preset distance threshold value from the Euclidean distance corresponding to each point cloud; and determining the determined point cloud meeting the target Euclidean distance as the target point cloud matched with the target object.
Optionally, determining coordinates of each point cloud of the plurality of point clouds projected under the first coordinate system includes: determining coordinates of each point cloud in the plurality of point clouds under a second coordinate system; acquiring a conversion rule of a second coordinate system and a first coordinate system; and converting the coordinates of each point cloud in the plurality of point clouds under the second coordinate system into the coordinates of each point cloud in the plurality of point clouds under the first coordinate system based on the conversion rule.
Optionally, the coordinates in the second coordinate system are two-dimensional data or three-dimensional data; and/or the coordinates in the first coordinate system are two-dimensional data or three-dimensional data; and the position information of the target object in the visual data is two-dimensional data or three-dimensional data.
Optionally, obtaining the transformation rule between the second coordinate system and the first coordinate system includes: determining a first transformation matrix from the second coordinate system to an intermediate coordinate system; determining a second transformation matrix from the intermediate coordinate system to the first coordinate system; determining, based on the first transformation matrix and the second transformation matrix, a transformation rule between the second coordinate system and the first coordinate system containing at least one unknown parameter; and determining the at least one unknown parameter in the transformation rule based on N pairs of matched coordinates acquired by the first sensor and the second sensor at N preset positions, respectively, to obtain the transformation rule between the second coordinate system and the first coordinate system.
Optionally, the first transformation matrix includes at least one unknown parameter; and/or the second transformation matrix includes at least one unknown parameter; and the intermediate coordinate system includes at least one sub-intermediate coordinate system and a transformation rule between the at least one sub-intermediate coordinate system.
Optionally, determining at least one unknown parameter in the transformation rule based on N pairs of matching coordinates acquired by the first sensor and the second sensor at N preset positions, respectively, includes: acquiring a first coordinate set of the calibration code in a first coordinate system at N preset positions through a first sensor; acquiring a second coordinate set of the calibration code in a second coordinate system at the N preset positions through a second sensor; converting a first coordinate in the first coordinate set and a second coordinate in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set; determining N pairs of matching coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position; and substituting the N pairs of matched coordinates into a function where a conversion rule containing at least one unknown parameter is located to obtain at least one unknown parameter in the conversion rule.
Optionally, clustering the laser data set to obtain a plurality of point clouds, including: determining a characteristic value of each data in the laser data set; calculating a distance value between the characteristic value of each datum and a preset characteristic value; and dividing the laser data set into a plurality of point clouds according to the distance grade of the distance value.
In another aspect, an embodiment of the present application provides an apparatus for determining characteristic information of an object, where the apparatus includes:
the acquisition module is used for acquiring visual data and a laser data set based on the same scene;
the target determining module is used for determining the category information of the target object in the visual data and the position information of the target object in the visual data;
the point cloud determining module is used for clustering the laser data set to obtain a plurality of point clouds;
the target point cloud determining module is used for determining a target point cloud matched with the target object from the plurality of point clouds based on the coordinates of each point cloud in the plurality of point clouds projected under the first coordinate system and the position information of the target object in the visual data;
the characteristic information determining module is used for determining the category information of the target point cloud according to the category information of the target object; and determining a spatial location of the target point cloud.
Optionally, the target point cloud determining module is specifically configured to:
determining coordinates of each point cloud projection in the plurality of point clouds under a first coordinate system;
determining the Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected in the first coordinate system and the position information of the target object in the visual data;
determining a target Euclidean distance which is less than or equal to a preset distance threshold value from the Euclidean distance corresponding to each point cloud;
and determining the determined point cloud meeting the target Euclidean distance as the target point cloud matched with the target object.
Optionally, the target point cloud determining module is specifically configured to:
determining coordinates of each point cloud in the plurality of point clouds under a second coordinate system;
acquiring a conversion rule of a second coordinate system and a first coordinate system;
and converting the coordinates of each point cloud in the plurality of point clouds under the second coordinate system into the coordinates of each point cloud in the plurality of point clouds under the first coordinate system based on the conversion rule.
Optionally, the coordinates in the second coordinate system are two-dimensional data or three-dimensional data; and/or the coordinates in the first coordinate system are two-dimensional data or three-dimensional data; and the position information of the target object in the visual data is two-dimensional data or three-dimensional data.
Optionally, the target point cloud determining module is specifically configured to:
determining a first transformation matrix from the second coordinate system to the intermediate coordinate system; determining a second transformation matrix from the intermediate coordinate system to the first coordinate system;
determining a second coordinate system and a conversion rule containing at least one unknown parameter of the first coordinate system based on the first conversion matrix and the second conversion matrix;
and determining at least one unknown parameter in the transformation rule based on the N pairs of matched coordinates acquired by the first sensor and the second sensor at the N preset positions, respectively, to obtain the transformation rule between the second coordinate system and the first coordinate system.
Optionally, the target point cloud determining module is specifically configured to:
acquiring a first coordinate set of the calibration code in a first coordinate system at N preset positions through a first sensor;
acquiring a second coordinate set of the calibration code in a second coordinate system at the N preset positions through a second sensor;
converting a first coordinate in the first coordinate set and a second coordinate in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set;
determining N pairs of matching coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position;
and substituting the N pairs of matched coordinates into a function where a conversion rule containing at least one unknown parameter is located to obtain at least one unknown parameter in the conversion rule.
Optionally, the point cloud determining module is specifically configured to:
determining a characteristic value of each data in the laser data set;
calculating a distance value between the characteristic value of each datum and a preset characteristic value;
and dividing the laser data set into a plurality of point clouds according to the distance grade of the distance value.
Another aspect provides an electronic device, which includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to perform the method for determining the characteristic information of an object described above.
Another aspect provides a computer-readable storage medium having at least one instruction or at least one program stored therein, the at least one instruction or the at least one program being loaded by a processor and executed to perform a method for determining characteristic information of an object.
The method, the device, the electronic equipment and the storage medium for determining the characteristic information of the object provided by the embodiment of the application have the following technical effects:
Visual data and a laser data set are acquired based on the same scene; category information of a target object in the visual data and position information of the target object in the visual data are determined; the laser data set is clustered to obtain a plurality of point clouds; a target point cloud matching the target object is determined from the plurality of point clouds based on the coordinates of each point cloud projected into the first coordinate system and the position information of the target object in the visual data; the category information of the target point cloud is determined according to the category information of the target object; and the spatial position of the target point cloud is determined. In this way, the matching relationship between the acquired visual data and the laser data can be effectively established for the same object, and, by reasonably allocating and using the information from the various sensors, complementary and redundant information is combined in space and time according to certain optimization criteria, generating a consistent interpretation or description of the observed environment and object together with a new fusion result, and providing more information for subsequent multi-sensor positioning applications.
Drawings
In order to more clearly illustrate the technical solutions and advantages of the embodiments of the present application or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for determining characteristic information of an object according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for determining characteristic information of an object according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a method for determining characteristic information of an object according to an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart illustrating a process of determining coordinates of each point cloud of a plurality of point clouds projected in a first coordinate system according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a transformation between two coordinate systems provided by an embodiment of the present application;
fig. 7 is a schematic flowchart of a transformation rule for obtaining a second coordinate system and a first coordinate system according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a transformation rule for obtaining a second coordinate system and a first coordinate system according to an embodiment of the present application;
fig. 9 is a schematic diagram of a detection code according to an embodiment of the present application;
FIG. 10 is a schematic view of a measurement provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of an apparatus for determining characteristic information of an object according to an embodiment of the present application;
fig. 12 is a hardware block diagram of an electronic device of a method for determining characteristic information of an object according to an embodiment of the present application;
fig. 13 is a block diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic view of an application environment according to an embodiment of the present disclosure, the schematic view includes a mobile electronic device 101, the mobile electronic device 101 shown in the schematic view is a sweeping robot, and the mobile electronic device may be other robots such as a floor washing robot, a navigation cart, and the like. Therein, the mobile electronic device 101 comprises a first sensor 1011 and a second sensor 1012. For example, the first sensor 1011 may be a lidar and the second sensor 1012 may be a camera. Optionally, in this embodiment of the application, data acquired by the first sensor 1011 and data acquired by the second sensor 1012 may be two-dimensional data and/or three-dimensional data, and in this embodiment of the application, data acquired by the first sensor 1011 is taken as three-dimensional data, and data acquired by the second sensor 1012 is taken as two-dimensional data for explanation, and other manners may refer to the above-mentioned examples, and are not described again.
Specifically, the mobile electronic device 101 acquires the visual data and the laser data set based on the same scene, and determines the category information of the target object in the visual data and the position information of the target object in the visual data. Subsequently, the mobile electronic device may perform clustering processing on the laser data set to obtain a plurality of point clouds, and determine a target point cloud matching the target object from the plurality of point clouds based on coordinates of each point cloud in the plurality of point clouds projected in the first coordinate system and position information of the target object in the visual data. And finally, determining the category information of the target point cloud according to the category information of the target object, and determining the spatial position of the target point cloud. Therefore, the mobile electronic equipment can determine the characteristic information of the real object corresponding to the target object according to the category information of the target object determined by the visual data.
In the embodiment of the present application, all of the technical steps in the above paragraph may be implemented within the mobile electronic device 101. Optionally, some of the technical steps (for example, determining the category information of the target object in the visual data and the position information of the target object in the visual data, clustering the laser data set to obtain a plurality of point clouds, and determining the target point cloud matching the target object from the plurality of point clouds based on the coordinates of each point cloud projected into the first coordinate system and the position information of the target object in the visual data) may also be implemented by a device other than the mobile electronic device 101.
The following describes a specific embodiment of the method for determining characteristic information of an object according to the present application. Fig. 2 is a schematic flowchart of the method for determining characteristic information of an object according to the embodiment of the present application. The present specification provides the method operation steps as in the embodiments or the flowchart, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible orders of execution and does not represent the only order. In practice, the system or server product may execute the steps sequentially or in parallel (e.g., in a parallel-processor or multi-threaded environment) according to the embodiments or the methods shown in the figures. Specifically, as shown in fig. 2, the method may include:
s201: visual data and laser data sets are acquired based on the same scene.
In an embodiment of the application, the visual data is acquired by a camera on the mobile electronic device, and the laser data set is acquired by a lidar on the mobile electronic device. The camera and the lidar acquire the same scene: optionally their acquisition areas are identical, or optionally their acquisition areas differ but overlap. Only in this way can the target object in the visual data be made to correspond to a target point cloud in the laser data set.
S203: category information of the target object in the visual data and position information of the target object in the visual data are determined.
Optionally, the mobile electronic device may determine the target object in the visual data through a target detection algorithm, and determine the category information of the target object and the position information of the target object in the visual data. The target detection algorithm may be embodied as determining a target object in the visual data, category information of the target object, and position information of the target object in the visual data through a trained target detection model. The target detection model can be obtained by training based on the principle of a convolutional neural network model, or the target detection model can also be obtained by training based on the principle of a cyclic neural network model.
Optionally, the position information of the target object is two-dimensional data or three-dimensional data; the following description takes the position information of the target object as two-dimensional data (optionally, a coordinate system may be established and the position information represented by values on the X axis and the Y axis, or the position information may be represented by pixel row and column numbers). Optionally, the category information of the target object is the category to which the target object belongs, such as a car, a pedestrian, a building, and the like. A hedged sketch of this step is given below.
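As an illustrative sketch only (the application does not prescribe a particular detector), the category information and position information could be obtained from any trained detection model; the pretrained torchvision Faster R-CNN used below is an assumption made purely for illustration, not the model of this application.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Illustrative only: any trained detection model can supply the category and
# position information; the pretrained Faster R-CNN below is an assumption.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_target(image):
    """image: H x W x 3 uint8 array from the camera; returns the most confident
    detection as (category information, position information in the visual data)."""
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]
    if len(pred["boxes"]) == 0:
        return None
    i = int(pred["scores"].argmax())                           # keep the most confident detection
    return int(pred["labels"][i]), pred["boxes"][i].tolist()   # class id, [x_min, y_min, x_max, y_max]
```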
S205: and clustering the laser data set to obtain a plurality of point clouds.
In order to determine the data corresponding to the target object from the laser data set, in the embodiment of the application the mobile electronic device may perform clustering processing on the laser data set to obtain a plurality of point clouds, so that the point cloud corresponding to the target object can then be determined from among them. Each point cloud may contain a plurality of laser data points, each of which carries the three-dimensional coordinates of a point.
Fig. 3 is a schematic flowchart of a method for determining characteristic information of an object according to an embodiment of the present application, where step S205 in the schematic diagram may be represented as:
s2051: a characteristic value is determined for each data in the laser data set.
The feature value may be a feature value obtained by integrating the color, geometric characteristics, and the like of each data. Generally, the characteristic values of data contained in the same thing are relatively close.
S2052: and calculating a distance value between the characteristic value of each datum and a preset characteristic value.
The preset feature value may be set at will or according to an empirical value, and mainly determines a reference to determine a difference between each data and the reference, for example, the preset feature value may be set to 0.
S2053: and dividing the laser data set into a plurality of point clouds according to the distance grade of the distance value.
For example, if the distance between the feature value of a first portion of the data and the preset feature value is between 0 and 1, the distance for a second portion of the data is between 1 and 2, and the distance for a third portion of the data is between 2 and 3, and so on, the laser data set can accordingly be divided into a first point cloud, a second point cloud, a third point cloud, and so on, as sketched below.
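A minimal sketch of steps S2051 to S2053, assuming the feature value of each laser data point is supplied by some function feature_fn (how it combines color, geometric characteristics and the like is left abstract); the function names and the level width are illustrative assumptions:

```python
import numpy as np

def cluster_by_distance_level(laser_points, feature_fn, preset_feature=0.0, level_width=1.0):
    """Group laser data whose feature values fall into the same distance level
    relative to a preset feature value (S2051-S2053)."""
    features = np.array([feature_fn(p) for p in laser_points])      # S2051: feature value of each datum
    distances = np.abs(features - preset_feature)                    # S2052: distance to the preset feature value
    levels = np.floor(distances / level_width).astype(int)          # S2053: distance level, e.g. [0,1), [1,2), ...
    clusters = {}
    for point, level in zip(laser_points, levels):
        clusters.setdefault(level, []).append(point)
    return list(clusters.values())                                   # each entry is one point cloud
```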
S207: and determining a target point cloud matched with the target object from the plurality of point clouds based on the coordinates of each point cloud in the plurality of point clouds projected under the first coordinate system and the position information of the target object in the visual data.
An optional way of determining the target point cloud matching the target object from the plurality of point clouds is described below. Fig. 4 is a schematic flowchart of a method for determining characteristic information of an object according to an embodiment of the present application, and step S207 in the flowchart may be represented as:
s2071: coordinates of each point cloud of the plurality of point clouds projected under the first coordinate system are determined.
Fig. 5 is a schematic flowchart of determining coordinates of each point cloud of a plurality of point clouds projected in a first coordinate system according to an embodiment of the present application, where the flowchart includes:
s501: and determining the coordinates of each point cloud in the plurality of point clouds under the second coordinate system.
In the embodiment of the present application, the coordinates in the second coordinate system are three-dimensional coordinates, and the coordinates in the first coordinate system are two-dimensional coordinates.
Since the dataset in each point cloud is acquired by lidar, the mobile electronic device may directly determine the coordinates of each point cloud in the second coordinate system.
S503: and acquiring a conversion rule of the second coordinate system and the first coordinate system.
Specifically, a transformation rule from the first coordinate system to the second coordinate system may be obtained, or a transformation rule from the second coordinate system to the first coordinate system may be obtained.
In an alternative embodiment, the transformation rule may be embodied in a transformation matrix:
fig. 6 is a schematic diagram of a conversion between two coordinate systems provided in the embodiment of the present application, in which the second coordinate system corresponding to the lidar is converted into the first coordinate system of the camera.
In this embodiment, the first coordinate system may be a coordinate system corresponding to the first sensor, and the second coordinate system may be a coordinate system corresponding to the second sensor. For example, the first coordinate system may be a pixel coordinate system corresponding to the camera, and the second coordinate system may be a lidar coordinate system corresponding to the lidar.
In an alternative embodiment of obtaining the transformation rule of the second coordinate system and the first coordinate system, the mobile electronic device may determine the transformation rule through the schematic diagram shown in fig. 7, where fig. 7 is a schematic flowchart of a process for obtaining the transformation rule from the second coordinate system to the first coordinate system according to an embodiment of the present application, and the process includes:
s5031: determining a first transformation matrix from the second coordinate system to the intermediate coordinate system;
wherein the first conversion matrix comprises at least one unknown parameter; the intermediate coordinate system comprises at least one sub-intermediate coordinate system and a transformation rule between the at least one sub-intermediate coordinate system.
Optionally, the intermediate coordinate system may include one sub-intermediate coordinate system or a plurality of sub-intermediate coordinate systems. Alternatively, the intermediate coordinate system has only one sub-coordinate system. Optionally, if there are multiple sub-intermediate coordinate systems in the intermediate coordinate system, there may also be a transformation matrix between the multiple sub-intermediate coordinate systems in the intermediate coordinate system.
The following description considers the case of a plurality of sub-intermediate coordinate systems within the intermediate coordinate system, and the present application is further described with the second coordinate system taken as the lidar coordinate system, assuming, for example, that the intermediate coordinate system includes a camera coordinate system and an image coordinate system.
Optionally, the first transformation matrix is the transformation rule from the lidar coordinate system to the camera coordinate system, and may be expressed as

$$\begin{bmatrix} R & T \\ 0^{t} & 1 \end{bmatrix}$$

Let the coordinates of a lidar data point in the lidar coordinate system be $P_{lidar} = [x_{lidar}\ y_{lidar}\ z_{lidar}]^{t}$ and its coordinates in the camera coordinate system be $P_c = [x_c\ y_c\ z_c]^{t}$, where the superscript $t$ denotes the transpose. The formula from the lidar coordinate system to the camera coordinate system is:

$$P_c = R\,P_{lidar} + T \qquad (1)$$

where $T$ is the translation matrix, $T = [t_x\ t_y\ t_z]^{t}$, and $R$ is an orthogonal rotation matrix:

$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}$$
The transformation between the camera coordinate system and the image coordinate system is described next. The coordinates in the camera coordinate system are $P_c = [x_c\ y_c\ z_c]^{t}$ as above. Assuming that the homogeneous coordinate in the image coordinate system is $m = [x_p\ y_p\ 1]^{t}$, and that the coordinate of the principal point (the position of the origin of the pixel coordinate system in the image coordinate system) is $p_c = [x_0\ y_0\ 1]^{t}$, the formula from the camera coordinate system to the image coordinate system is:

$$z_c\, m = \begin{bmatrix} f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{bmatrix} P_c \qquad (2)$$

where $f$ is the focal length of the camera.
s5032: and determining a second transformation matrix from the intermediate coordinate system to the first coordinate system.
Wherein the second transformation matrix may comprise at least one unknown parameter.
Based on the above, the first coordinate system is the pixel coordinate system, and the second transformation matrix is the transformation rule from the image coordinate system to the pixel coordinate system. Assuming that the length and width of a pixel are $d_x$ and $d_y$, and letting the coordinate in the pixel coordinate system be $p = [u\ v\ 1]^{t}$ (here $p$ is lower case, to distinguish it from $P_c$ above), the formula from the image coordinate system to the pixel coordinate system is:

$$p = \begin{bmatrix} 1/d_x & 0 & 0 \\ 0 & 1/d_y & 0 \\ 0 & 0 & 1 \end{bmatrix} m \qquad (3)$$
S5033: a transformation rule from the second coordinate system to the first coordinate system is determined, which contains at least one unknown variable, on the basis of the first transformation matrix and the second transformation matrix.
In this way, the mobile electronic device determines a conversion rule of the lidar coordinate system to the pixel coordinate system, which includes at least one unknown parameter, based on the first conversion matrix and the second conversion matrix, that is, the lidar coordinate system to the camera coordinate system conversion, the camera coordinate system to the image coordinate system conversion, and the image coordinate system to the pixel coordinate system conversion.
The transformation matrix K from the camera coordinate system to the pixel coordinate system is therefore:

$$K = \begin{bmatrix} f/d_x & 0 & x_0/d_x \\ 0 & f/d_y & y_0/d_y \\ 0 & 0 & 1 \end{bmatrix}$$

and the overall transformation of a data point from the lidar coordinate system to the pixel coordinate system is represented as:

$$z_c\, p = K\,(R\,P_{lidar} + T) \qquad (4)$$
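As a minimal numerical sketch (not part of the patent text), formula (4) can be applied as follows, assuming R, T and K have already been determined; the function name and array shapes are illustrative assumptions:

```python
import numpy as np

def project_lidar_to_pixel(P_lidar, R, T, K):
    """Apply z_c * p = K (R * P_lidar + T) to every point.
    P_lidar: (N, 3) lidar points; R: (3, 3) rotation; T: (3,) translation;
    K: (3, 3) camera-to-pixel matrix. Returns (N, 2) pixel coordinates (u, v)."""
    P_c = (R @ P_lidar.T).T + T            # lidar coordinate system -> camera coordinate system
    p_hom = (K @ P_c.T).T                  # camera coordinate system -> pixel coordinates (homogeneous)
    return p_hom[:, :2] / p_hom[:, 2:3]    # divide by z_c to obtain (u, v)
```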
s5034: and determining at least one unknown parameter in the conversion rule based on the N pairs of matching coordinates acquired by the first sensor and the second sensor at the N preset positions respectively to obtain the conversion rule from the laser radar coordinate system to the pixel coordinate system.
Optionally, N is an integer greater than or equal to 4.
Thus, with the above formula, once the real coordinates of N data points in the laser coordinate system and their corresponding projections in the pixel coordinate system are known, the transformation relations R and T can be solved as a PnP (Perspective-n-Point) problem. The above formula is first rewritten as:
$$f = P(R\,P_{lidar} + T) - p \qquad (5)$$
Since $P_{lidar}$ and $p$ are known, solving for the corresponding R and T, i.e., making $f \to 0$, can be transformed into the following optimization problem:

$$\min_{R,\,T}\ \sum_{i=1}^{N} \left\| P\!\left(R\,P_{lidar}^{(i)} + T\right) - p^{(i)} \right\|^{2} \qquad (6)$$

wherein $P(\cdot)$ denotes the projection into the pixel coordinate system given by formula (4), i.e. $P(P_c) = \frac{1}{z_c} K\,P_c$, and $\left(P_{lidar}^{(i)},\ p^{(i)}\right)$ is the $i$-th matched pair of coordinates.
With this formula, the corresponding R and T can be obtained. The specific solving procedure for R and T is as follows: the markers in the visual data are detected in the data acquisition manner described below to obtain the corresponding feature points in the visual data, and the corresponding abrupt-change points are then found in the 2-dimensional laser data to complete the data collection. The height of each marker is measured so that each 2-dimensional laser point becomes a 3-dimensional laser point, completing the acquisition of the three-dimensional laser points. The matched camera feature points and three-dimensional laser points are substituted into the objective function (6), which is solved with the Ceres library to obtain the final R and T. Ceres is an open-source C++ library for solving optimization problems over models and functions; it can solve nonlinear least-squares problems with bound constraints as well as general unconstrained optimization problems.
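The application solves the objective function with the Ceres C++ library; purely as an illustrative sketch, the same least-squares problem can be set up with SciPy as a stand-in, assuming N ≥ 4 non-collinear matched pairs. The function names, the rotation-vector parameterization and the zero initial guess below are assumptions for illustration, not the patent's implementation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_R_T(P_lidar, p_pixel, K):
    """Solve R and T from N matched pairs, minimizing formula (6).
    P_lidar: (N, 3) 3-D laser points; p_pixel: (N, 2) matched pixel points; K: (3, 3)."""
    def residuals(x):                          # x = [rotation vector (3), translation (3)]
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        T = x[3:]
        P_c = (R @ P_lidar.T).T + T
        proj = (K @ P_c.T).T
        proj = proj[:, :2] / proj[:, 2:3]      # projection into the pixel coordinate system
        return (proj - p_pixel).ravel()        # f = P(R * P_lidar + T) - p
    x0 = np.zeros(6)                           # initial guess (illustrative)
    sol = least_squares(residuals, x0)
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```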
Fig. 8 is a schematic flowchart of obtaining the conversion rule from the lidar coordinate system to the pixel coordinate system according to an embodiment of the present application; it shows how the real coordinates of N data points in the laser coordinate system and their corresponding projections in the pixel coordinate system are obtained, and includes:
s801: and acquiring a first coordinate set of the calibration code in a first coordinate system at the N preset positions through a first sensor.
As shown in fig. 9, the calibration code is an ArUco marker, which can be detected with the open-source library provided by OpenCV. An ArUco marker is a binary square marker that consists of a wide black border and an internal binary matrix that determines its id. The black border facilitates rapid detection in the visual data, and the binary code can be used to verify the identification information. In an alternative embodiment, the calibration code may be located at any detectable location, such as the middle of a wall or a wall corner, but it is typically placed at a wall corner because it is more easily detected there.
Based on the above, if the first sensor is a lidar, the first coordinate system is the lidar coordinate system, and the first coordinates in the first coordinate set can be regarded as feature points in the lidar coordinate system, obtained by the lidar detecting the wall corner. Since the detection is performed at the N preset positions, the mobile electronic device can acquire feature points in the lidar coordinate system for all N positions. The detection method for the feature point in the lidar coordinate system is: detect the wall corner, i.e., traverse the lidar point cloud in the range in front of the robot and find the point at which the point-cloud distance value changes abruptly; the coordinates of that point are the coordinates of the feature point (the wall corner) in the lidar coordinate system.
Optionally, the distances between the N preset positions and the calibration code differ from one another. For example, as shown in fig. 10, if N equals 4, the lidar of the mobile electronic device may detect the calibration code at positions 5 meters, 4 meters, 3 meters and 2 meters away from the wall corner, and during the detection the mobile electronic device may rotate left and right in place so as to detect the calibration code comprehensively.
S802: acquiring a second coordinate set of the calibration code in a second coordinate system at the N preset positions through a second sensor;
Assuming that the second sensor is a camera, the second coordinate system is the camera pixel coordinate system, and the second coordinates in the second coordinate set can be regarded as feature points in the camera pixel coordinate system, obtained by the camera detecting the calibration code. Since the detection is performed at the N preset positions, the mobile electronic device can obtain feature points in the camera pixel coordinate system for all N positions. The detection method for the feature point in the camera pixel coordinate system is: detect the calibration code; the ArUco marker is attached to the wall corner, so the detected coordinates of its corner are the coordinates of the feature point (the wall corner) in the camera pixel coordinate system.
Similarly, the camera of the mobile electronic device can detect the calibration code at positions 5 m, 4 m, 3 m and 2 m away from the corner of the wall, respectively.
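A hedged sketch of the two feature-point detections described in S801 and S802, assuming OpenCV's ArUco module (pre-4.7 interface; newer versions use the ArucoDetector class) for the camera side and a simple range-jump search for the lidar side; the dictionary choice and the jump threshold are assumptions, not values from the application:

```python
import cv2
import numpy as np

def detect_marker_pixel(image):
    """Detect the ArUco calibration code in the camera image (second sensor) and
    return the centre of the first detected marker in pixel coordinates."""
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
    corners, ids, _ = cv2.aruco.detectMarkers(image, aruco_dict)
    if ids is None:
        return None
    return corners[0][0].mean(axis=0)               # (u, v) of the marker centre

def detect_corner_in_scan(ranges, angles, jump_threshold=0.3):
    """Traverse the lidar scan in front of the robot and return the point whose
    range value changes abruptly (the wall corner), in the lidar coordinate system."""
    jumps = np.abs(np.diff(ranges))
    i = int(np.argmax(jumps))
    if jumps[i] < jump_threshold:
        return None
    return np.array([ranges[i] * np.cos(angles[i]),
                     ranges[i] * np.sin(angles[i])])
```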
S803: converting a first coordinate in the first coordinate set and a second coordinate in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set;
In the embodiment of the present application, homogeneous coordinates are defined as follows: given a point (x, y) on the Euclidean plane, for any non-zero real number Z the triple (xZ, yZ, Z) is called a homogeneous coordinate of the point. By definition, another set of homogeneous coordinates of the same point can be obtained by multiplying the values in the homogeneous coordinates by the same non-zero real number; for example, the point (1, 2) in Cartesian coordinates may be written as (1, 2, 1) or (2, 4, 2) in homogeneous coordinates. The original Cartesian coordinates are recovered by dividing the first two values by the third value. This step converts the original two-element first coordinates and second coordinates into three-element first homogeneous coordinates and second homogeneous coordinates for the subsequent operations.
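For illustration only, the homogeneous-coordinate conversion described above can be written as (function names are assumptions):

```python
import numpy as np

def to_homogeneous(x, y, z=1.0):
    """(x, y) -> (x*Z, y*Z, Z); with Z = 1, e.g. (1, 2) -> (1, 2, 1)."""
    return np.array([x * z, y * z, z])

def from_homogeneous(h):
    """Recover the Cartesian coordinates by dividing the first two values by the third."""
    return h[:2] / h[2]
```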
S804: and determining N pairs of matching coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position.
In this application, the matching coordinates may be referred to as matching points, which are a pair of coordinates, herein coordinates of the calibration code in the lidar coordinate system and coordinates of the calibration code in the camera pixel coordinate system. Thus, the distances 5 meters, 4 meters, 3 meters, 2 meters of the mobile electronic device to the calibration code referred to above correspond to 4 pairs of matching coordinates, respectively, wherein the 4 pairs of matching coordinates are not collinear.
S805: and substituting the N pairs of matched coordinates into a function where a conversion rule containing at least one unknown parameter is located to obtain at least one unknown parameter in the conversion rule.
In summary, the mobile electronic device may obtain the conversion rule from the laser radar coordinate system to the pixel coordinate system by solving all the unknowns, that is, solve the conversion matrix from the laser radar coordinate system to the pixel coordinate system.
In another optional implementation manner, at N preset positions, a first coordinate set of a calibration code in a second coordinate system is acquired through a first sensor; acquiring a second coordinate set of the calibration code in the first coordinate system at the N preset positions through a second sensor; acquiring an Mth coordinate set of the calibration code in the first coordinate system at the N preset positions through an Mth sensor; converting a first coordinate in the first coordinate set, a second coordinate in the second coordinate set and an Mth coordinate in the Mth coordinate set to obtain a first homogeneous coordinate set, a second homogeneous coordinate set and an Mth homogeneous coordinate set; determining a group of N pairs of matched coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position; or, determining a group of N pairs of matched coordinates from the first homogeneous coordinate set and the Mth homogeneous coordinate set based on the same preset position, or determining a group of N pairs of matched coordinates from the second homogeneous coordinate set and the Mth homogeneous coordinate set based on the same preset position; and substituting the at least one group of N pairs of matched coordinates into a function of a conversion rule containing at least one unknown parameter to obtain at least one unknown parameter in the conversion rule.
In another alternative embodiment, the coordinate relationship may be embodied as a coordinate transformation model, which may be a recurrent neural network model or a convolutional neural network model.
How to train the coordinate transformation model is described below:
acquiring a sample data set, wherein the sample data set comprises a first sample coordinate and a second sample coordinate corresponding to each sample position in a plurality of sample positions; the first sample coordinate is in a first coordinate system; the second sample coordinate is in a second coordinate system; the corresponding first and second sample coordinates are both obtained by the mobile electronic device based on the same thing (e.g., a calibration code) at the same sample location.
And constructing a preset machine learning model, and determining the preset machine learning model as the current machine learning model.
And performing coordinate conversion operation on the first sample coordinate based on the current machine learning model, and determining a second predicted coordinate corresponding to the first sample coordinate.
And determining a loss value based on the second predicted coordinate and the second sample coordinate corresponding to the first sample coordinate.
When the loss value is larger than the preset threshold value, performing back propagation based on the loss value, updating the current machine learning model to obtain an updated machine learning model, and re-determining the updated machine learning model as the current machine learning model; repeating the steps: and performing coordinate conversion operation on the first sample coordinate based on the current machine learning model, and determining a second predicted coordinate corresponding to the first sample coordinate.
And when the loss value is less than or equal to a preset threshold value, determining the current machine learning model as a coordinate conversion model.
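As an illustrative sketch of this alternative embodiment only (the application mentions recurrent or convolutional models and does not fix the architecture, optimizer or threshold), the training loop might look like the following, here using a small fully connected network in PyTorch; the input and output dimensions follow the 2-D/3-D example used in this application and are assumptions:

```python
import torch
import torch.nn as nn

# Illustrative only: architecture, optimizer and threshold value are assumptions.
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 3))          # first (2-D) coordinates in, second (3-D) coordinates out
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
loss_threshold = 1e-3                            # the "preset threshold" (value assumed)

def train_coordinate_model(first_sample_coords, second_sample_coords, max_iters=10000):
    """first_sample_coords: (N, 2) coordinates in the first coordinate system;
    second_sample_coords: (N, 3) matching coordinates in the second coordinate system."""
    x = torch.as_tensor(first_sample_coords, dtype=torch.float32)
    y = torch.as_tensor(second_sample_coords, dtype=torch.float32)
    for _ in range(max_iters):
        pred = model(x)                          # second predicted coordinates
        loss = loss_fn(pred, y)                  # loss between predicted and sample coordinates
        if loss.item() <= loss_threshold:        # loss <= preset threshold: training is done
            break
        optimizer.zero_grad()
        loss.backward()                          # back propagation
        optimizer.step()                         # update the current machine learning model
    return model                                 # the coordinate transformation model
```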
In the first embodiment (the transformation-matrix approach), however, only a small number of first and second coordinates need to be acquired (for example, 4 pairs of first and second coordinates), and the conversion rule from the lidar coordinate system to the pixel coordinate system is then obtained by chaining the transformations between the coordinate systems; the accuracy is high and the requirements on computer hardware and software are low. The coordinate-transformation-model approach, by contrast, not only requires a lot of time to collect large amounts of first and second sample data for training, but the platform on which the training model is built also places very high requirements on the hardware and software supporting the computation, so considerable manpower and material resources may be needed to support it.
S505: and converting the coordinates of each point cloud in the plurality of point clouds under the second coordinate system into the coordinates of each point cloud in the plurality of point clouds under the first coordinate system based on the conversion rule.
In this way, the mobile electronic device can obtain the coordinates of each point cloud in the plurality of point clouds under the pixel coordinate system.
S2072: and determining the Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected in the first coordinate system and the position information of the target object in the visual data.
In an alternative embodiment, the Euclidean distance between the coordinates of each point (i.e., each laser data point) of a point cloud in the first coordinate system and the corresponding point of the target object in the visual data may be determined, yielding a set of Euclidean distances; the Euclidean distance between the point cloud and the target object is then determined from the Euclidean distances in this set.
In another alternative embodiment, each point cloud may be regarded as a whole, and the target object may also be regarded as a whole, and then a center point a may be determined from each point cloud, and a center point B may be determined from the data of the target object, where the center point a and the center point B are corresponding. And determining the Euclidean distance between each point cloud and the target object according to the central point A and the central point B.
S2073: and determining a target Euclidean distance which is less than or equal to a preset distance threshold value from the Euclidean distance corresponding to each point cloud.
Optionally, the preset distance threshold may be set in advance based on empirical values. The mobile electronic device determines the target Euclidean distance according to the preset distance threshold: if a plurality of target Euclidean distances are determined, the one with the smallest difference from the distance threshold is selected; if no Euclidean distance falls below the threshold, the Euclidean distance with the smallest difference from the distance threshold among all the Euclidean distances is likewise taken as the target Euclidean distance.
S2074: and determining the determined point cloud meeting the target Euclidean distance as the target point cloud matched with the target object.
S209: and determining the category information of the target point cloud according to the category information of the target object, and determining the spatial position of the target point cloud.
In the embodiment of the application, the spatial position of the target point cloud can be determined directly through the obtained laser data set.
In summary, the conversion between the first coordinate system and the second coordinate system establishes the matching relationship between the point cloud corresponding to a real object and that object in the visual data, so that the characteristic information of the real object in actual space is determined. This lays a basis for subsequent applications, for example, reminding a visually impaired person of obstacles around them, or warning a driving vehicle of obstacles in areas it cannot see.
An embodiment of the present application further provides a device for determining characteristic information of an object, and fig. 11 is a schematic structural diagram of the device for determining characteristic information of an object provided in the embodiment of the present application, as shown in fig. 11, the device includes:
the obtaining module 1101 is configured to obtain visual data and a laser data set based on the same scene;
the target determining module 1102 is configured to determine category information of a target object in the visual data and location information of the target object in the visual data;
the point cloud determining module 1103 is configured to perform clustering on the laser data set to obtain a plurality of point clouds;
the target point cloud determining module 1104 is configured to determine a target point cloud matching the target object from the plurality of point clouds based on the coordinates of each point cloud in the plurality of point clouds projected in the first coordinate system and the position information of the target object in the visual data;
the characteristic information determining module 1105 is configured to determine category information of the target point cloud according to the category information of the target object; and determining a spatial location of the target point cloud.
In an alternative embodiment:
the target point cloud determining module 1104 is further configured to: determine the coordinates of each point cloud of the plurality of point clouds projected in the first coordinate system; determine the Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected in the first coordinate system and the position information of the target object in the visual data; determine a target Euclidean distance that is less than or equal to a preset distance threshold from the Euclidean distances corresponding to the point clouds; and determine the point cloud corresponding to the target Euclidean distance as the target point cloud matched with the target object.
In an alternative embodiment:
the target point cloud determining module 1104 is further configured to: determine the coordinates of each point cloud of the plurality of point clouds in the second coordinate system; acquire a conversion rule between the second coordinate system and the first coordinate system; and convert, based on the conversion rule, the coordinates of each point cloud of the plurality of point clouds in the second coordinate system into the coordinates of each point cloud in the first coordinate system.
In an alternative embodiment:
the target point cloud determining module 1104 is further configured to: determine a first transformation matrix from the second coordinate system to an intermediate coordinate system; determine a second transformation matrix from the intermediate coordinate system to the first coordinate system; determine, based on the first transformation matrix and the second transformation matrix, a conversion rule between the second coordinate system and the first coordinate system that contains at least one unknown parameter; and determine the at least one unknown parameter in the conversion rule based on N pairs of matched coordinates acquired by the first sensor and the second sensor at N preset positions, respectively, so as to obtain the conversion rule between the second coordinate system and the first coordinate system.
In an alternative embodiment:
the target point cloud determining module 1104 is further configured to: acquire, through the first sensor, a first coordinate set of the calibration code in the first coordinate system at the N preset positions; acquire, through the second sensor, a second coordinate set of the calibration code in the second coordinate system at the N preset positions; convert the first coordinates in the first coordinate set and the second coordinates in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set; determine the N pairs of matched coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset positions; and substitute the N pairs of matched coordinates into the function in which the conversion rule containing the at least one unknown parameter is expressed, so as to obtain the at least one unknown parameter in the conversion rule.
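For illustration, if the chained transformation matrices collapse into a single 3x3 planar conversion matrix (consistent with the example of roughly 4 pairs of matched coordinates mentioned earlier), the unknown parameters can be recovered from the N pairs of homogeneous matched coordinates by a direct linear transform style least-squares fit, as sketched below. The planar assumption, the function name, and the use of the singular value decomposition are illustrative; other parameterizations of the conversion rule are equally possible.

```python
import numpy as np

def solve_conversion_rule(pts_lidar: np.ndarray, pts_pixel: np.ndarray) -> np.ndarray:
    """Estimate a 3x3 planar conversion matrix from N >= 4 matched pairs.

    pts_lidar: (N, 2) calibration-code coordinates in the second (laser
               radar) coordinate system, one pair per preset position.
    pts_pixel: (N, 2) matched coordinates in the first (pixel) coordinate system.
    Returns a matrix H such that [u, v, 1]^T is proportional to H @ [x, y, 1]^T.
    """
    rows = []
    for (x, y), (u, v) in zip(pts_lidar, pts_pixel):
        # Each matched pair of homogeneous coordinates contributes two
        # linear equations in the nine unknown parameters of H.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)              # (2N, 9)
    # Least-squares solution: the right singular vector of A associated
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                             # fix the overall scale
```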
In an alternative embodiment:
the point cloud determining module 1103 is further configured to: determine a characteristic value of each datum in the laser data set; calculate a distance value between the characteristic value of each datum and a preset characteristic value; and divide the laser data set into the plurality of point clouds according to the distance grade of the distance value.
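One possible, non-limiting reading of this clustering step takes the characteristic value of each laser datum to be its two-dimensional position and the distance grade to be the band into which its distance value falls; the sketch below implements that reading, and the preset characteristic value and the grade width are assumed inputs.

```python
import numpy as np

def cluster_laser_data(laser_data: np.ndarray,
                       preset_value: np.ndarray,
                       grade_width: float = 0.5) -> dict:
    """Divide a laser data set into point clouds by distance grade.

    laser_data:   (N, 2) laser returns; the characteristic value of each
                  datum is taken here to be its 2D position (an assumption).
    preset_value: (2,) preset characteristic value to compare against.
    grade_width:  width of one distance grade, in the same units.
    Returns a dict mapping each distance grade to its point cloud.
    """
    # Distance value between each datum's characteristic value and the preset one
    dists = np.linalg.norm(laser_data - preset_value, axis=1)
    # Distance grade: the band into which each distance value falls
    grades = (dists // grade_width).astype(int)
    return {int(g): laser_data[grades == g] for g in np.unique(grades)}
```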
The device embodiments and the method embodiments of the present application are based on the same inventive concept.
The method provided by the embodiment of the application can be executed in a computer terminal, an electronic device, or a similar computing device. Taking execution on an electronic device as an example, fig. 12 is a block diagram of the hardware structure of an electronic device for the method for determining characteristic information of an object provided in the embodiment of the present application. As shown in fig. 12, the electronic device 1200 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 1210 (the processor 1210 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 1230 for storing data, and one or more storage media 1220 (e.g., one or more mass storage devices) for storing applications 1223 or data 1222. The memory 1230 and the storage medium 1220 may be transient storage or persistent storage. The program stored in the storage medium 1220 may include one or more modules, and each module may include a series of instruction operations for the electronic device. Further, the central processing unit 1210 may be configured to communicate with the storage medium 1220 so as to execute, on the electronic device 1200, the series of instruction operations in the storage medium 1220. The electronic device 1200 may also include one or more power supplies 1260, one or more wired or wireless network interfaces 1250, one or more input/output interfaces 1240, and/or one or more operating systems 1221, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, and the like.
The input/output interface 1240 may be used to receive or transmit data via a network. A specific example of the network may include a wireless network provided by a communication provider of the electronic device 1200. In one example, the input/output interface 1240 includes a network interface controller (NIC), which may be connected to other network devices through a base station so as to communicate with the Internet. In another example, the input/output interface 1240 may be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
It will be understood by those skilled in the art that the structure shown in fig. 12 is only an illustration and is not intended to limit the structure of the electronic device. For example, electronic device 1200 may also include more or fewer components than shown in FIG. 12, or have a different configuration than shown in FIG. 12.
As shown in fig. 13, an embodiment of the present application further provides a computer-readable storage medium 1310, which may be disposed in a server and stores at least one instruction, at least one program, a code set, or an instruction set 1311 for implementing the method for determining characteristic information of an object in the method embodiments; the at least one instruction, at least one program, code set, or instruction set 1311 is loaded and executed by the processor 1320 to perform the method for determining characteristic information of an object.
Alternatively, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
As can be seen from the above embodiments of the method, device, and storage medium for determining characteristic information of an object provided by the present application, the method includes: acquiring visual data and a laser data set based on the same scene; determining category information of a target object in the visual data and position information of the target object in the visual data; clustering the laser data set to obtain a plurality of point clouds; determining a target point cloud matched with the target object from the plurality of point clouds based on the coordinates of each point cloud in the plurality of point clouds projected in the first coordinate system and the position information of the target object in the visual data; determining the category information of the target point cloud according to the category information of the target object; and determining the spatial position of the target point cloud. In this way, the matching relationship between the acquired visual data and laser data belonging to the same object can be established effectively; by reasonably coordinating and using the information from the various sensors, complementary and redundant information is combined in space and time according to an optimization criterion, producing a consistent interpretation or description of the observed environment and object, generating a new fusion result, and providing more information for subsequent multi-sensor positioning applications.
It should be noted that the sequence of the embodiments of the present application is for description only and does not represent the relative merits of the embodiments. Specific embodiments have been described above, and other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A method for determining characteristic information of an object, the method comprising:
acquiring visual data and a laser data set based on the same scene;
determining category information of a target object in the visual data and position information of the target object in the visual data;
clustering the laser data set to obtain a plurality of point clouds;
determining a target point cloud matched with the target object from the plurality of point clouds based on the coordinates of each point cloud in the plurality of point clouds projected in the first coordinate system and the position information of the target object in the visual data;
determining the category information of the target point cloud according to the category information of the target object; and determining a spatial location of the target point cloud.
2. The method of claim 1, wherein determining a target point cloud from the plurality of point clouds that matches the target object based on the coordinates of each point cloud of the plurality of point clouds projected in the first coordinate system and the position information of the target object in the visual data comprises:
determining coordinates of each point cloud projection in the plurality of point clouds under a first coordinate system;
determining Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected in a first coordinate system and the position information of the target object in the visual data;
determining a target Euclidean distance which is less than or equal to a preset distance threshold value from the Euclidean distance corresponding to each point cloud;
and determining the determined point cloud meeting the target Euclidean distance as the target point cloud matched with the target object.
3. The method of claim 2, wherein the determining coordinates of each of the plurality of point clouds projected under a first coordinate system comprises:
determining coordinates of each point cloud in the plurality of point clouds under a second coordinate system;
acquiring a conversion rule of the second coordinate system and the first coordinate system;
and converting the coordinates of each point cloud in the plurality of point clouds under the second coordinate system into the coordinates of each point cloud in the plurality of point clouds under the first coordinate system based on the conversion rule.
4. The method of claim 3, wherein:
coordinates in the first coordinate system are two-dimensional data or three-dimensional data; and/or the coordinates in the second coordinate system are two-dimensional data or three-dimensional data; the position information of the target object in the visual data is two-dimensional data or three-dimensional data.
5. The method of claim 3, wherein acquiring the conversion rule of the second coordinate system and the first coordinate system comprises:
determining a first transformation matrix from the second coordinate system to an intermediate coordinate system;
determining a second transformation matrix from the intermediate coordinate system to the first coordinate system;
determining, based on the first transformation matrix and the second transformation matrix, a conversion rule of the second coordinate system and the first coordinate system containing at least one unknown parameter;
and determining the at least one unknown parameter in the conversion rule based on N pairs of matched coordinates acquired by the first sensor and the second sensor at N preset positions respectively, to obtain the conversion rule of the second coordinate system and the first coordinate system.
6. The method of claim 5, wherein:
the first transformation matrix comprises at least one unknown parameter; and/or the second transformation matrix comprises at least one unknown parameter;
the intermediate coordinate system comprises at least one sub-intermediate coordinate system and a transformation rule between the at least one sub-intermediate coordinate system.
7. The method of claim 5, wherein determining the at least one unknown parameter in the conversion rule based on N pairs of matching coordinates obtained by the first sensor and the second sensor at N preset positions respectively comprises:
acquiring a first coordinate set of a calibration code in the first coordinate system through the first sensor at N preset positions;
acquiring a second coordinate set of the calibration code in the second coordinate system at the N preset positions through the second sensor;
converting a first coordinate in the first coordinate set and a second coordinate in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set;
determining N pairs of matching coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position;
and substituting the N pairs of matched coordinates into a function of a conversion rule containing the at least one unknown parameter to obtain the at least one unknown parameter in the conversion rule.
8. The method of claim 1, wherein clustering the laser data set to obtain a plurality of point clouds comprises:
determining a characteristic value for each data in the laser data set;
calculating a distance value between the characteristic value of each datum and a preset characteristic value;
and dividing the laser data set into the plurality of point clouds according to the distance grade of the distance value.
9. An apparatus for determining characteristic information of an object, the apparatus comprising:
the acquisition module is used for acquiring visual data and a laser data set based on the same scene;
a target determination module for determining category information of a target object in the visual data and position information of the target object in the visual data;
the point cloud determining module is used for clustering the laser data set to obtain a plurality of point clouds;
a target point cloud determining module, configured to determine a target point cloud matching the target object from the plurality of point clouds based on coordinates of each point cloud in the plurality of point clouds projected in a first coordinate system and position information of the target object in the visual data;
the characteristic information determining module is used for determining the category information of the target point cloud according to the category information of the target object; and determining a spatial location of the target point cloud.
10. An electronic device, characterized in that the electronic device comprises a processor and a memory, in which at least one instruction or at least one program is stored, which is loaded by the processor and executes the method for determining characteristic information of an object according to any one of claims 1-8.
11. A computer storage medium, wherein at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the method for determining characteristic information of an object according to any one of claims 1 to 8.
CN202010348299.5A 2020-04-28 2020-04-28 Method and device for determining characteristic information of object, electronic equipment and storage medium Active CN111709988B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010348299.5A CN111709988B (en) 2020-04-28 2020-04-28 Method and device for determining characteristic information of object, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111709988A true CN111709988A (en) 2020-09-25
CN111709988B CN111709988B (en) 2024-01-23

Family

ID=72536378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010348299.5A Active CN111709988B (en) 2020-04-28 2020-04-28 Method and device for determining characteristic information of object, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111709988B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113673388A (en) * 2021-08-09 2021-11-19 北京三快在线科技有限公司 Method and device for determining position of target object, storage medium and equipment
CN114440922A (en) * 2020-10-30 2022-05-06 阿里巴巴集团控股有限公司 Method and device for evaluating laser calibration, related equipment and storage medium
WO2023217047A1 (en) * 2022-05-07 2023-11-16 节卡机器人股份有限公司 Positioning method and apparatus, and electronic device and readable storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485737A (en) * 2015-08-25 2017-03-08 南京理工大学 Cloud data based on line feature and the autoregistration fusion method of optical image
US20190096086A1 (en) * 2017-09-22 2019-03-28 Zoox, Inc. Three-Dimensional Bounding Box From Two-Dimensional Image and Point Cloud Data
EP3506203A1 (en) * 2017-12-29 2019-07-03 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for fusing point cloud data technical field
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method
CN108509918A (en) * 2018-04-03 2018-09-07 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image
CN108872991A (en) * 2018-05-04 2018-11-23 上海西井信息科技有限公司 Target analyte detection and recognition methods, device, electronic equipment, storage medium
CN109283538A (en) * 2018-07-13 2019-01-29 上海大学 A kind of naval target size detection method of view-based access control model and laser sensor data fusion
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN110456363A (en) * 2019-06-17 2019-11-15 北京理工大学 The target detection and localization method of three-dimensional laser radar point cloud and infrared image fusion
CN110264468A (en) * 2019-08-14 2019-09-20 长沙智能驾驶研究院有限公司 Point cloud data mark, parted pattern determination, object detection method and relevant device
CN110675431A (en) * 2019-10-08 2020-01-10 中国人民解放军军事科学院国防科技创新研究院 Three-dimensional multi-target tracking method fusing image and laser point cloud
CN110942449A (en) * 2019-10-30 2020-03-31 华南理工大学 Vehicle detection method based on laser and vision fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU JUNSHENG: "Research on Vehicle Detection Methods Based on the Fusion of Laser Point Clouds and Images", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 8, 15 August 2019 (2019-08-15), pages 035-184 *
YE GANG: "Multi-feature Classification Method for LiDAR Point Clouds Fused with Image Information", Geospatial Information, no. 06 *
ZHU JIANXIN; SHEN DONGYU; WU KANG: "Target Recognition for Intelligent Excavators Based on Laser Point Clouds", Computer Engineering, no. 01 *


Also Published As

Publication number Publication date
CN111709988B (en) 2024-01-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant