CN111709988B - Method and device for determining characteristic information of object, electronic equipment and storage medium

Method and device for determining characteristic information of object, electronic equipment and storage medium

Info

Publication number
CN111709988B
Authority
CN
China
Prior art keywords
coordinate system
determining
point cloud
target
target object
Prior art date
Legal status
Active
Application number
CN202010348299.5A
Other languages
Chinese (zh)
Other versions
CN111709988A
Inventor
金伟
王健威
沈孝通
秦宝星
程昊天
Current Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd filed Critical Shanghai Gaussian Automation Technology Development Co Ltd
Priority to CN202010348299.5A
Publication of CN111709988A
Application granted
Publication of CN111709988B
Legal status: Active


Classifications

    • G06T 7/70 Image analysis: determining position or orientation of objects or cameras
    • G01C 11/04 Photogrammetry or videogrammetry: interpretation of pictures
    • G01C 21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01S 7/4802 Analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S 7/4808 Evaluating distance, position or velocity data
    • G06F 18/23 Pattern recognition; analysing: clustering techniques
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/10004 Image acquisition modality: still image; photographic image
    • G06T 2207/10028 Image acquisition modality: range image; depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application discloses a method, a device, electronic equipment and a storage medium for determining characteristic information of an object. Visual data and a laser data set are acquired based on the same scene; category information of a target object in the visual data and position information of the target object in the visual data are determined; the laser data set is clustered to obtain a plurality of point clouds; a target point cloud matching the target object is determined from the plurality of point clouds based on the coordinates of each point cloud projected under a first coordinate system and the position information of the target object in the visual data; the category information of the target point cloud is then determined according to the category information of the target object, and its spatial position is determined. In this way a matching relation between the visual data and the laser data is established based on the same object, and through reasonable control and use of the information from multiple sensors, complementary and redundant information is combined in space and time according to an optimization criterion, providing more information for subsequent multi-sensor positioning applications.

Description

Method and device for determining characteristic information of object, electronic equipment and storage medium
Technical Field
The present invention relates to the field of robots, and in particular, to a method and apparatus for determining feature information of an object, an electronic device, and a storage medium.
Background
An intelligent mobile robot is a device that integrates functions such as environment sensing, dynamic decision-making and planning, and behavior control and execution. Its degree of intelligence depends heavily on the speed and accuracy of environment sensing and on multi-sensor information fusion technology. In multi-sensor information fusion, a computer makes full use of the available sensor resources and, through reasonable control and use of the various measurements, combines complementary and redundant information in space and time according to an optimization criterion to produce a consistent interpretation or description of the observed environment and a new fused result. In the environment sensing module, vision sensors and lidar are two commonly used sensors. In recent years, visual image analysis methods represented by deep learning have developed rapidly and can accurately detect and classify pedestrians, vehicles, various obstacles, and the like. For a robot, a set of matching points between the camera pixel coordinate system and the lidar coordinate system needs to be acquired.
However, the prior art only classifies objects directly from images captured by the robot, or detects objects with the lidar alone, and does not address the step of matching the image data with the laser data.
Disclosure of Invention
The embodiments of the present application provide a method, a device, electronic equipment and a storage medium for determining characteristic information of an object, which can establish a matching connection between visual data and laser data. Through reasonable control and use of the information from multiple sensors, complementary and redundant information is combined in space and time according to an optimization criterion to produce a consistent interpretation or description of the observed environment and objects, while generating a new fused result that provides more information for subsequent multi-sensor positioning applications.
In one aspect, an embodiment of the present application provides a method for determining feature information of an object, where the method includes:
acquiring visual data and a laser data set based on the same scene;
determining category information of a target object in the visual data and position information of the target object in the visual data;
clustering the laser data sets to obtain a plurality of point clouds;
determining a target point cloud matched with the target object from the plurality of point clouds based on the coordinates projected by each point cloud in the plurality of point clouds under the first coordinate system and the position information of the target object in the visual data;
Determining class information of the target point cloud according to the class information of the target object; and determining a spatial location of the target point cloud.
Optionally, determining, from the plurality of point clouds, a target point cloud matching the target object based on the coordinates projected by each of the plurality of point clouds in the first coordinate system and the position information of the target object in the visual data, includes: determining coordinates of each point cloud projection in the plurality of point clouds under a first coordinate system; determining Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected under the first coordinate system and the position information of the target object in the visual data; determining a target Euclidean distance smaller than or equal to a preset distance threshold value from Euclidean distances corresponding to each point cloud; and determining the determined point cloud meeting the target Euclidean distance as a target point cloud matched with the target object.
Optionally, determining coordinates of each point cloud projection of the plurality of point clouds in the first coordinate system includes: determining coordinates of each point cloud in the plurality of point clouds under a second coordinate system; acquiring a conversion rule of the second coordinate system and the first coordinate system; and converting the coordinates of each of the plurality of point clouds in the second coordinate system into the coordinates of each of the plurality of point clouds in the first coordinate system based on the conversion rule.
Optionally, the coordinates in the second coordinate system are two-dimensional data or three-dimensional data; and/or the coordinates in the first coordinate system are two-dimensional data or three-dimensional data; the position information of the target object in the visual data is two-dimensional data or three-dimensional data.
Optionally, acquiring a conversion rule of the second coordinate system and the first coordinate system includes: determining a first transformation matrix from the second coordinate system to an intermediate coordinate system; determining a second transformation matrix from the intermediate coordinate system to the first coordinate system; determining, based on the first transformation matrix and the second transformation matrix, a conversion rule of the second coordinate system and the first coordinate system containing at least one unknown parameter; and determining the at least one unknown parameter in the conversion rule based on N pairs of matching coordinates acquired by a first sensor and a second sensor at N preset positions, respectively, to obtain the conversion rule of the second coordinate system and the first coordinate system.
Optionally, the first transformation matrix includes at least one unknown parameter; and/or the second transformation matrix includes at least one unknown parameter; the intermediate coordinate system includes at least one sub-intermediate coordinate system and a conversion rule between the at least one sub-intermediate coordinate system.
Optionally, determining at least one unknown parameter in the conversion rule based on N pairs of matching coordinates acquired by the first sensor and the second sensor at N preset positions respectively includes: acquiring a first coordinate set of the calibration code under a first coordinate system at N preset positions through a first sensor; acquiring a second coordinate set of the calibration code under a second coordinate system at N preset positions through a second sensor; converting a first coordinate in the first coordinate set and a second coordinate in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set; determining N pairs of matching coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position; substituting the N pairs of matching coordinates into a function of a conversion rule containing at least one unknown parameter to obtain at least one unknown parameter in the conversion rule.
Optionally, clustering the laser data set to obtain a plurality of point clouds, including: determining a characteristic value of each data in the laser data set; calculating a distance value between the characteristic value of each datum and a preset characteristic value; and dividing the laser data set into a plurality of point clouds according to the distance level of the distance value.
In another aspect, an apparatus for determining feature information of an object is provided, where the apparatus includes:
the acquisition module is used for acquiring visual data and a laser data set based on the same scene;
the target determining module is used for determining category information of a target object in the visual data and position information of the target object in the visual data;
the point cloud determining module is used for carrying out clustering processing on the laser data set to obtain a plurality of point clouds;
the target point cloud determining module is used for determining target point clouds matched with the target object from the plurality of point clouds based on the coordinates projected by each point cloud in the plurality of point clouds under the first coordinate system and the position information of the target object in the visual data;
the characteristic information determining module is used for determining the category information of the target point cloud according to the category information of the target object; and determining a spatial location of the target point cloud.
Optionally, the target point cloud determining module is specifically configured to:
determining coordinates of each point cloud projection in the plurality of point clouds under a first coordinate system;
determining Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected under the first coordinate system and the position information of the target object in the visual data;
determining a target Euclidean distance smaller than or equal to a preset distance threshold value from Euclidean distances corresponding to each point cloud;
and determining the determined point cloud meeting the target Euclidean distance as a target point cloud matched with the target object.
Optionally, the target point cloud determining module is specifically configured to:
determining coordinates of each point cloud in the plurality of point clouds under a second coordinate system;
acquiring a conversion rule of the second coordinate system and the first coordinate system;
and converting the coordinates of each of the plurality of point clouds in the second coordinate system into the coordinates of each of the plurality of point clouds in the first coordinate system based on the conversion rule.
Optionally, the coordinates in the second coordinate system are two-dimensional data or three-dimensional data; and/or the coordinates in the first coordinate system are two-dimensional data or three-dimensional data; the position information of the target object in the visual data is two-dimensional data or three-dimensional data.
Optionally, the target point cloud determining module is specifically configured to:
determining a first transformation matrix from the second coordinate system to the intermediate coordinate system; determining a second transformation matrix from the intermediate coordinate system to the first coordinate system;
determining a conversion rule of the second coordinate system and the first coordinate system containing at least one unknown parameter based on the first conversion matrix and the second conversion matrix;
and determining at least one unknown parameter in the conversion rule based on N pairs of matching coordinates acquired by the first sensor and the second sensor at N preset positions, respectively, to obtain the conversion rule of the second coordinate system and the first coordinate system.
Optionally, the target point cloud determining module is specifically configured to:
acquiring a first coordinate set of the calibration code under a first coordinate system at N preset positions through a first sensor;
acquiring a second coordinate set of the calibration code under a second coordinate system at N preset positions through a second sensor;
converting a first coordinate in the first coordinate set and a second coordinate in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set;
determining N pairs of matching coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position;
Substituting the N pairs of matching coordinates into a function of a conversion rule containing at least one unknown parameter to obtain at least one unknown parameter in the conversion rule.
Optionally, the point cloud determining module is specifically configured to:
determining a characteristic value of each data in the laser data set;
calculating a distance value between the characteristic value of each datum and a preset characteristic value;
and dividing the laser data set into a plurality of point clouds according to the distance level of the distance value.
In another aspect, an electronic device is provided. The electronic device includes a processor and a memory, the memory storing at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to perform the method of determining characteristic information of an object.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to perform the method of determining characteristic information of an object.
The method, the device, the electronic equipment and the storage medium for determining the characteristic information of the object have the following technical effects:
acquiring visual data and a laser data set based on the same scene; determining category information of a target object in the visual data and position information of the target object in the visual data; clustering the laser data set to obtain a plurality of point clouds; determining a target point cloud matching the target object from the plurality of point clouds based on the coordinates of each point cloud projected under the first coordinate system and the position information of the target object in the visual data; determining the category information of the target point cloud according to the category information of the target object; and determining the spatial position of the target point cloud. Based on the same object, this effectively establishes the matching relation between the acquired visual data and the laser data; complementary and redundant information is combined in space and time according to an optimization criterion to generate a consistent interpretation or description of the observed environment and objects, together with a new fusion result, providing more information for subsequent multi-sensor positioning applications.
Drawings
In order to more clearly illustrate the technical solutions and advantages of the embodiments of the present application or of the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present application;
fig. 2 is a flow chart of a method for determining feature information of an object according to an embodiment of the present application;
fig. 3 is a flow chart of a method for determining feature information of an object according to an embodiment of the present application;
fig. 4 is a flow chart of a method for determining feature information of an object according to an embodiment of the present application;
fig. 5 is a schematic flow chart of determining coordinates of each of a plurality of point clouds projected in a first coordinate system according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a transformation between two coordinate systems provided in an embodiment of the present application;
fig. 7 is a schematic flow chart of a conversion rule for obtaining a second coordinate system and a first coordinate system according to an embodiment of the present application;
Fig. 8 is a schematic flow chart of a conversion rule for obtaining a second coordinate system and a first coordinate system according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a calibration code according to an embodiment of the present application;
FIG. 10 is a schematic illustration of a measurement provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of a device for determining feature information of an object according to an embodiment of the present application;
fig. 12 is a block diagram of a hardware structure of an electronic device according to a method for determining feature information of an object according to an embodiment of the present application;
fig. 13 is a block diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment provided in an embodiment of the present application. The schematic diagram includes a mobile electronic device 101; the mobile electronic device 101 shown here is a sweeping robot, but it may be another type of mobile electronic device, such as a floor cleaning robot, a navigation cart, and so on. The mobile electronic device 101 includes a first sensor 1011 and a second sensor 1012. For example, the first sensor 1011 may be a lidar and the second sensor 1012 may be a camera. Optionally, the data acquired by the first sensor 1011 and the second sensor 1012 may be two-dimensional data and/or three-dimensional data; in the following, the data acquired by the first sensor 1011 is taken as three-dimensional data and the data acquired by the second sensor 1012 is taken as two-dimensional data by way of example, and other cases may refer to this example and are not repeated.
Specifically, the mobile electronic device 101 acquires the visual data and the laser data set based on the same scene, and determines the category information of the target object in the visual data and the position information of the target object in the visual data. Then, the mobile electronic device can perform clustering processing on the laser data set to obtain a plurality of point clouds, and determine a target point cloud matched with the target object from the plurality of point clouds based on the coordinates of each point cloud projected under the first coordinate system in the plurality of point clouds and the position information of the target object in the visual data. And finally, determining the category information of the target point cloud according to the category information of the target object, and determining the spatial position of the target point cloud. Thus, the mobile electronic equipment can determine the characteristic information of the real object corresponding to the target object through the category information of the target object determined by the visual data.
In the embodiment of the present application, all of the technical steps in the preceding paragraph may be implemented in the mobile electronic device 101. Optionally, some of the technical steps (e.g., determining the category information of the target object in the visual data and the position information of the target object in the visual data, clustering the laser data set to obtain a plurality of point clouds, and determining, from the plurality of point clouds, the target point cloud matching the target object based on the coordinates of each point cloud projected under the first coordinate system and the position information of the target object in the visual data) may also be implemented outside the mobile electronic device 101.
A specific embodiment of a method for determining feature information of an object according to the present application is described below. Fig. 2 is a schematic flow chart of a method for determining feature information of an object according to an embodiment of the present application. The present application provides the method operation steps shown in the example or flowchart, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one possible execution order and does not represent the only execution order. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be performed sequentially or in parallel (e.g., in a parallel-processor or multithreaded environment). As shown in fig. 2, the method may include:
S201: visual data and a laser data set are acquired based on the same scene.
In the embodiment of the present application, the visual data is acquired by a camera on the mobile electronic device and the laser data set is acquired by a lidar on the mobile electronic device. The camera and the lidar observe the same scene: optionally they cover the same area, or they cover different areas with an overlapping region. Only then can the target object in the visual data correspond to a target point cloud in the laser data set.
S203: category information of the target object in the visual data and location information of the target object in the visual data are determined.
Optionally, the mobile electronic device may determine the target object in the visual data through a target detection algorithm, and determine the category information of the target object and the position information of the target object in the visual data. In particular, the target detection algorithm may use a trained target detection model to determine the target object in the visual data, the category information of the target object and the position information of the target object in the visual data. The target detection model may be trained based on a convolutional neural network model, or it may be trained based on a recurrent neural network model.
Optionally, the position information of the target object is two-dimensional data or three-dimensional data; in the following it is treated as two-dimensional data (for example, a coordinate system may be established so that the position information is represented by coordinates on the X-axis and the Y-axis, or the position information may be represented by the row and column indices of pixels). Optionally, the category information of the target object is the category to which the target object belongs, such as automobile, pedestrian, building, and the like.
S205: and clustering the laser data set to obtain a plurality of point clouds.
In order to determine the data corresponding to the target object from the laser data set, in the embodiment of the present application the mobile electronic device may cluster the laser data set to obtain a plurality of point clouds, which facilitates the subsequent determination of the point cloud corresponding to the target object from the plurality of point clouds. Each point cloud may contain a plurality of laser data points, each of which carries the three-dimensional coordinates of the point.
Fig. 3 is a flowchart of a method for determining feature information of an object according to an embodiment of the present application, where step S205 in the schematic diagram may be expressed as:
S2051: A characteristic value for each data point in the laser data set is determined.
The feature value may be a value obtained by combining the color, geometric characteristics, and the like of each data point. In general, the feature values of the data points belonging to the same object are relatively close.
S2052: and calculating a distance value between the characteristic value of each datum and a preset characteristic value.
The preset feature value may be set arbitrarily or according to an empirical value; it mainly serves as a reference against which the difference of each data point is measured. For example, the preset feature value may be set to 0.
S2053: and dividing the laser data set into a plurality of point clouds according to the distance level of the distance value.
For example, when the distance between the feature value of a first part of the data and the preset feature value lies between 0 and 1, the distance for a second part of the data lies between 1 and 2, the distance for a third part of the data lies between 2 and 3, and so on, the mobile electronic device can divide the laser data set into a first point cloud, a second point cloud, a third point cloud, and so on.
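A minimal sketch of this clustering step, assuming each laser data point has already been reduced to a scalar feature value and using the distance levels 0-1, 1-2, 2-3, ... from the example above; the function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def cluster_by_feature_distance(points: np.ndarray, feature_values: np.ndarray,
                                preset_value: float = 0.0, bin_width: float = 1.0):
    """Split a laser data set into point clouds by the distance between each point's
    feature value and a preset feature value (steps S2051 to S2053).

    points:         (N, 3) array of laser data points
    feature_values: (N,) scalar feature value of each point
    """
    distances = np.abs(feature_values - preset_value)     # S2052: distance to the preset feature value
    levels = np.floor(distances / bin_width).astype(int)  # distance level of each point
    clouds = {}
    for level in np.unique(levels):                       # S2053: one point cloud per distance level
        clouds[int(level)] = points[levels == level]
    return clouds
```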
S207: and determining a target point cloud matched with the target object from the plurality of point clouds based on the coordinates of each point cloud projected in the first coordinate system and the position information of the target object in the visual data.
Referring to the following description of an optional determination of a target point cloud matching a target object from a plurality of point clouds, fig. 4 is a flowchart of a method for determining feature information of an object according to an embodiment of the present application, where step S207 in the schematic may be expressed as:
S2071: Coordinates of each point cloud projection in the first coordinate system are determined.
Fig. 5 is a flowchart of determining coordinates of each point cloud projection in a first coordinate system in a plurality of point clouds according to an embodiment of the present application, including:
S501: Coordinates of each of the plurality of point clouds in the second coordinate system are determined.
The embodiment of the application will be described by taking the coordinates in the second coordinate system as three-dimensional coordinates and the coordinates in the first coordinate system as two-dimensional coordinates as an example.
Since the data set in each point cloud is acquired by the lidar, the mobile electronic device may directly determine the coordinates of each point cloud in the second coordinate system.
S503: and acquiring a conversion rule of the second coordinate system and the first coordinate system.
Specifically, the conversion rule from the first coordinate system to the second coordinate system may be obtained, or the conversion rule from the second coordinate system to the first coordinate system may be obtained.
In an alternative embodiment, the transformation rules may be embodied in a transformation matrix:
Fig. 6 is a schematic diagram of the conversion between two coordinate systems according to an embodiment of the present application, in which the second coordinate system corresponding to the lidar is converted into the first coordinate system corresponding to the camera.
In this embodiment of the present application, the first coordinate system may be a coordinate system corresponding to the first sensor, and the second coordinate system may be a coordinate system corresponding to the second sensor. For example, the first coordinate system may be a pixel coordinate system corresponding to the camera, and the second coordinate system may be a laser radar coordinate system corresponding to the laser radar.
In an alternative implementation manner of obtaining the conversion rule of the second coordinate system and the first coordinate system, the mobile electronic device may determine the conversion rule through a schematic diagram shown in fig. 7, and fig. 7 is a schematic flow diagram of obtaining the conversion rule from the second coordinate system to the first coordinate system provided in the embodiment of the present application, where the method includes:
S5031: determining a first transformation matrix from the second coordinate system to the intermediate coordinate system;
wherein the first transformation matrix contains at least one unknown parameter; the intermediate coordinate system includes at least one sub-intermediate coordinate system and a conversion rule between the at least one sub-intermediate coordinate system.
Alternatively, the intermediate coordinate system may include one sub-intermediate coordinate system or a plurality of sub-intermediate coordinate systems. Alternatively, the intermediate coordinate system has only one sub-coordinate system. Alternatively, if there are a plurality of sub-intermediate coordinate systems in the intermediate coordinate system, there may also be a transformation matrix between the plurality of sub-intermediate coordinate systems in the intermediate coordinate system.
In the following, the case in which the intermediate coordinate system contains a plurality of sub-intermediate coordinate systems is described, with the second coordinate system taken as the lidar coordinate system; for example, the intermediate coordinate system may include a camera coordinate system and an image coordinate system.
Optionally, the first transformation matrix is the transformation rule from the lidar coordinate system to the camera coordinate system. Let the coordinate of a lidar data point in the lidar coordinate system be $P_{lidar} = [x_{lidar}\ y_{lidar}\ z_{lidar}]^t$ and its coordinate in the camera coordinate system be $P_c = [x_c\ y_c\ z_c]^t$, where $t$ denotes the transpose. The formula between the lidar coordinate system and the camera coordinate system is:

$$P_c = R\,P_{lidar} + T \qquad (1)$$

where $T$ is a translation matrix, $T = [t_x\ t_y\ t_z]^t$, and $R$ is an orthogonal rotation matrix.

The camera coordinate system, the image coordinate system, and the conversion rule between them are described below. The coordinate in the camera coordinate system is $P_c = [x_c\ y_c\ z_c]^t$ as above; assume that the homogeneous coordinate in the image coordinate system is $m = [x_p\ y_p\ 1]^t$ and that the principal point (the position of the origin of the pixel coordinate system in the image coordinate system) has the coordinate $p_c = [x_0\ y_0\ 1]^t$ in the image coordinate system. With $f$ denoting the focal length, the formula between the camera coordinate system and the image coordinate system is:

$$x_p = f\,\frac{x_c}{z_c} + x_0, \qquad y_p = f\,\frac{y_c}{z_c} + y_0 \qquad (2)$$
S5032: determining a second transformation matrix from the intermediate coordinate system to the first coordinate system.
Wherein the second transformation matrix may comprise at least one unknown parameter.
Continuing from the above, the first coordinate system is the pixel coordinate system, and the second transformation matrix is the transformation rule from the image coordinate system to the pixel coordinate system. Assume that the length and width of one pixel are $d_x$ and $d_y$, and let the homogeneous coordinate in the pixel coordinate system be $p = [u\ v\ 1]^t$ (lowercase $p$, to distinguish it from the camera coordinate $P_c$). The formula between the image coordinate system and the pixel coordinate system is:

$$u = \frac{x_p}{d_x}, \qquad v = \frac{y_p}{d_y} \qquad (3)$$
S5033: a transformation rule comprising at least one unknown parameter from the second coordinate system to the first coordinate system is determined based on the first transformation matrix and the second transformation matrix.
In this way, the mobile electronic device determines a conversion rule of the lidar coordinate system to the pixel coordinate system, that is, a transformation of the lidar coordinate system to the camera coordinate system, a transformation of the camera coordinate system to the image coordinate system, and a transformation of the image coordinate system to the pixel coordinate system, based on the first conversion matrix and the second conversion matrix.
The transformation matrix $K$ from the camera coordinate system to the pixel coordinate system is therefore:

$$K = \begin{bmatrix} f/d_x & 0 & x_0/d_x \\ 0 & f/d_y & y_0/d_y \\ 0 & 0 & 1 \end{bmatrix} \qquad (4)$$

Thus, the overall transformation of a data point in the lidar coordinate system from the lidar coordinate system to the pixel coordinate system is represented as:

$$z_c\,p = K\,(R\,P_{lidar} + T)$$
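A minimal sketch of this projection chain under the reconstruction above (formulas (1) to (4)); the intrinsic and extrinsic values in the usage example are placeholders, not values from the patent.

```python
import numpy as np

def project_lidar_to_pixel(P_lidar: np.ndarray, R: np.ndarray, T: np.ndarray,
                           K: np.ndarray) -> np.ndarray:
    """Project (N, 3) lidar points into the pixel coordinate system:
    P_c = R @ P_lidar + T (formula (1)), then z_c * p = K @ P_c (formula (4))."""
    P_c = (R @ P_lidar.T).T + T          # lidar coordinate system -> camera coordinate system
    uvw = (K @ P_c.T).T                  # camera coordinate system -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]      # divide by z_c to obtain (u, v)

# Usage with placeholder calibration values:
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 0.1])
pixels = project_lidar_to_pixel(np.array([[1.0, 0.2, 5.0]]), R, T, K)
```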
S5034: At least one unknown parameter in the conversion rule is determined based on N pairs of matching coordinates acquired by the first sensor and the second sensor at N preset positions, respectively, to obtain the conversion rule from the lidar coordinate system to the pixel coordinate system.
Optionally, N is an integer greater than or equal to 4.
Thus, when the true coordinates of N data points in the laser coordinate system and their corresponding projections in the pixel coordinate system are known, solving for the transformation relations R and T is a PnP (Perspective-n-Point) problem, and the above formula is transformed into:

$$f = P(R\,P_{lidar} + T) - p \qquad (5)$$

where $P(\cdot)$ denotes the projection from camera coordinates to pixel coordinates using $K$.

Since $P_{lidar}$ and $p$ are known, solving for the corresponding R and T, i.e. driving $f \to 0$, can be converted into the optimization formula:

$$\{R^*, T^*\} = \arg\min_{R,T} \sum_{i=1}^{N} \left\| P(R\,P_{lidar,i} + T) - p_i \right\|^2 \qquad (6)$$

Through this formula the corresponding R and T can be obtained. The specific R and T solving flow is as follows: through the data acquisition procedure described herein, the corresponding feature points in the visual data are obtained by detecting the markers in the visual data, and the corresponding mutation points are then found in the 2-dimensional laser data, which completes the data collection. The height of each marker is measured so that the 2-dimensional laser points become 3-dimensional laser points, completing the acquisition of the three-dimensional laser points. The matched camera feature points and three-dimensional laser points are substituted into the objective function and solved with the ceres library to obtain the final R and T. ceres is an open-source C++ library for modeling and solving optimization problems; it can solve nonlinear least-squares problems with boundary constraints as well as general unconstrained optimization problems.
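The patent solves formula (6) with the ceres C++ library. As an alternative illustration of the same problem setup, the matched three-dimensional laser points and two-dimensional pixel points can also be passed to OpenCV's PnP solver in Python; this is a sketch under that assumption, not the patented solving flow.

```python
import numpy as np
import cv2

def solve_extrinsics(laser_points_3d: np.ndarray, pixel_points_2d: np.ndarray,
                     K: np.ndarray):
    """Estimate R and T from N >= 4 matched 3-D laser points and 2-D pixel points by
    solving the Perspective-n-Point problem (an alternative to the ceres-based
    nonlinear least-squares solve described above)."""
    ok, rvec, tvec = cv2.solvePnP(laser_points_3d.astype(np.float64),
                                  pixel_points_2d.astype(np.float64),
                                  K.astype(np.float64),
                                  distCoeffs=None)
    if not ok:
        raise RuntimeError("PnP solution failed")
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> orthogonal rotation matrix
    return R, tvec.reshape(3)
```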
Fig. 8 is a schematic flow chart of obtaining a conversion rule from a laser radar coordinate system to a pixel coordinate system according to an embodiment of the present application, where the flow chart includes:
S801: A first coordinate set of the calibration code under the first coordinate system is acquired at N preset positions through the first sensor.
As shown in fig. 9, the calibration code is an ArUco marker, an open-source module in OpenCV. An ArUco marker is a binary square marker consisting of a wide black border and an inner binary matrix that determines its id. The black border facilitates rapid detection in the visual data, and the binary code allows the identification information to be verified. In an alternative embodiment, the calibration code may be placed at any position where it can be detected, such as in the middle of a wall or at a wall corner; however, because a calibration code at a corner is more easily detected, the calibration code is typically placed at a corner.
Continuing from the above, if the first sensor is the lidar, then the first coordinate system is the lidar coordinate system, and a first coordinate in the first coordinate set may be regarded as a feature point in the lidar coordinate system obtained by the lidar detecting the corner. Because the detection is performed at N preset positions, the mobile electronic device can acquire feature points in the lidar coordinate system at the N positions. The method for detecting the feature point in the lidar coordinate system is corner detection: the lidar point cloud in the range in front of the robot is traversed to find the point where the point cloud distance value changes abruptly, and the coordinate of that point is the coordinate of the corner feature point in the lidar coordinate system.
Optionally, the distances between the N preset positions and the calibration code are different. For example, as shown in fig. 10, if N equals 4, the lidar of the mobile electronic device may detect the calibration code at positions 5 meters, 4 meters, 3 meters and 2 meters from the corner; during the detection the mobile electronic device may rotate left and right in place so as to detect the calibration code comprehensively.
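A minimal sketch of the mutation-point (corner feature point) search described above, assuming a single 2-dimensional lidar scan given as arrays of range and angle values for the sector in front of the robot; the jump threshold is an assumed parameter.

```python
import numpy as np

def find_mutation_points(ranges: np.ndarray, angles: np.ndarray,
                         jump_threshold: float = 0.3) -> np.ndarray:
    """Traverse a 2-D lidar scan and return the (x, y) coordinates of points where the
    point cloud distance value changes abruptly, i.e. candidate corner feature points."""
    jumps = np.abs(np.diff(ranges)) > jump_threshold   # abrupt change between neighbouring beams
    idx = np.where(jumps)[0]
    x = ranges[idx] * np.cos(angles[idx])
    y = ranges[idx] * np.sin(angles[idx])
    return np.stack([x, y], axis=1)
```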
S802: acquiring a second coordinate set of the calibration code under a second coordinate system at N preset positions through a second sensor;
Assuming that the second sensor is a camera, the second coordinate system is the camera pixel coordinate system, and a second coordinate in the second coordinate set may be regarded as a feature point in the camera pixel coordinate system obtained by the camera detecting the calibration code. Because the detection is performed at the N preset positions, the mobile electronic device can obtain feature points in the camera pixel coordinate system at the N positions. The method for detecting the feature point in the camera pixel coordinate system is calibration-code detection: an ArUco marker is attached to the corner, and detecting it yields the coordinates of the corner, i.e. the coordinates of the corner feature point in the camera pixel coordinate system.
Likewise, the camera of the mobile electronic device may detect the calibration code at positions 5 meters, 4 meters, 3 meters, and 2 meters from the corner, respectively.
S803: converting a first coordinate in the first coordinate set and a second coordinate in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set;
In the embodiment of the present application, homogeneous coordinates are used as follows: given a point (x, y) on the Euclidean plane, for any non-zero real number Z, the triple (xZ, yZ, Z) is called a homogeneous coordinate of that point. By definition, multiplying the values in a homogeneous coordinate by the same non-zero real number gives another homogeneous coordinate of the same point. For example, the Cartesian point (1, 2) may be written in homogeneous coordinates as (1, 2, 1) or (2, 4, 2). The original Cartesian coordinates can be recovered by dividing the first two values by the third value. This step converts the original two-element first coordinates and second coordinates into three-element first homogeneous coordinates and second homogeneous coordinates for the subsequent operations.
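A small sketch of this conversion between Cartesian and homogeneous coordinates (Z = 1 is used when converting forward):

```python
import numpy as np

def to_homogeneous(points_2d: np.ndarray) -> np.ndarray:
    """Convert (N, 2) Cartesian coordinates into (N, 3) homogeneous coordinates
    by appending a third component equal to 1, e.g. (1, 2) -> (1, 2, 1)."""
    return np.hstack([points_2d, np.ones((points_2d.shape[0], 1))])

def from_homogeneous(points_h: np.ndarray) -> np.ndarray:
    """Recover the Cartesian coordinates by dividing the first two values by the third."""
    return points_h[:, :2] / points_h[:, 2:3]
```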
S804: and determining N pairs of matching coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position.
In this application, the matching coordinates may be referred to as a matching point, which is a pair of coordinates, herein the coordinates of the calibration code in the lidar coordinate system and the coordinates of the calibration code in the camera pixel coordinate system. Thus, the distances 5 meters, 4 meters, 3 meters, 2 meters from the mobile electronic device to the calibration code referred to above correspond to 4 pairs of matching coordinates, respectively, wherein the 4 pairs of matching coordinates are non-collinear.
S805: substituting the N pairs of matching coordinates into a function of a conversion rule containing at least one unknown parameter to obtain at least one unknown parameter in the conversion rule.
In summary, the mobile electronic device may obtain the rule of conversion from the lidar coordinate system to the pixel coordinate system by solving all the unknowns, that is, solving the matrix of conversion from the lidar coordinate system to the pixel coordinate system.
In another optional implementation, at N preset positions, a first coordinate set of the calibration code under the second coordinate system is acquired through the first sensor; at the N preset positions, a second coordinate set of the calibration code under the first coordinate system is acquired through the second sensor; at the N preset positions, an Mth coordinate set of the calibration code under the first coordinate system is acquired through an Mth sensor. The first coordinates in the first coordinate set, the second coordinates in the second coordinate set and the Mth coordinates in the Mth coordinate set are converted to obtain a first homogeneous coordinate set, a second homogeneous coordinate set and an Mth homogeneous coordinate set. A group of N pairs of matching coordinates is determined from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position; or a group of N pairs of matching coordinates is determined from the first homogeneous coordinate set and the Mth homogeneous coordinate set based on the same preset position; or a group of N pairs of matching coordinates is determined from the second homogeneous coordinate set and the Mth homogeneous coordinate set based on the same preset position. At least one group of N pairs of matching coordinates is substituted into the function of the conversion rule containing at least one unknown parameter to obtain the at least one unknown parameter in the conversion rule.
In another alternative embodiment, the coordinate relationship may be embodied as a coordinate transformation model, which may be a recurrent neural network model or a convolutional neural network model.
How to train the coordinate transformation model is described as follows:
acquiring a sample data set, wherein the sample data set comprises a first sample coordinate and a second sample coordinate corresponding to each sample position in a plurality of sample positions; the first sample coordinates are in a first coordinate system; the second sample coordinates are in a second coordinate system; the corresponding first sample coordinates and second sample coordinates are obtained by the mobile electronic device based on the same thing (e.g., calibration code) at the same sample location.
And constructing a preset machine learning model, and determining the preset machine learning model as a current machine learning model.
And performing coordinate conversion operation on the first sample coordinates based on the current machine learning model, and determining second predicted coordinates corresponding to the first sample coordinates.
And determining a loss value based on the second predicted coordinate and the second sample coordinate corresponding to the first sample coordinate.
When the loss value is greater than a preset threshold value, back propagation is performed based on the loss value, the current machine learning model is updated to obtain an updated machine learning model, and the updated machine learning model is re-determined as the current machine learning model; the following step is then repeated: performing the coordinate conversion operation on the first sample coordinates based on the current machine learning model and determining the second predicted coordinates corresponding to the first sample coordinates.
And when the loss value is smaller than or equal to a preset threshold value, determining the current machine learning model as a coordinate conversion model.
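A minimal sketch of such a training loop, assuming PyTorch and a small fully connected network as the coordinate conversion model; the architecture, learning rate and loss threshold are illustrative assumptions, not specified by the patent.

```python
import torch
import torch.nn as nn

def train_coordinate_model(first_sample_coords: torch.Tensor,
                           second_sample_coords: torch.Tensor,
                           preset_threshold: float = 1e-3,
                           max_iters: int = 10000) -> nn.Module:
    """Train a model that maps first sample coordinates to predicted second coordinates;
    the input/output dimensions are taken from the sample tensors themselves."""
    in_dim = first_sample_coords.shape[1]
    out_dim = second_sample_coords.shape[1]
    model = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(max_iters):
        predicted = model(first_sample_coords)             # coordinate conversion operation
        loss = loss_fn(predicted, second_sample_coords)    # loss between predicted and sample coordinates
        if loss.item() <= preset_threshold:                # current model becomes the conversion model
            break
        optimizer.zero_grad()
        loss.backward()                                    # back propagation based on the loss value
        optimizer.step()                                   # update the current machine learning model
    return model
```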
Either of the above two methods can determine the conversion rule from the lidar coordinate system to the pixel coordinate system. However, the first implementation (the transformation matrix) only needs a small number of first and second coordinates to be collected (for example, 4 pairs), after which the conversion rule from the lidar coordinate system to the pixel coordinate system is obtained through the chain of coordinate-system transformations; it is accurate and places low demands on computer hardware and software. The coordinate transformation model approach, by contrast, not only requires considerable time to collect a large amount of first and second sample data for training, but also requires substantial hardware and software resources to build and train the model.
S505: and converting the coordinates of each of the plurality of point clouds in the second coordinate system into the coordinates of each of the plurality of point clouds in the first coordinate system based on the conversion rule.
In this way, the mobile electronic device can obtain the coordinates of each point cloud in the plurality of point clouds under the pixel coordinate system.
S2072: and determining the Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected under the first coordinate system and the position information of the target object in the visual data.
In an alternative embodiment, the Euclidean distance between the coordinates of each point (i.e., each laser data point) of a point cloud projected under the first coordinate system and the corresponding position of the target object in the visual data may be determined, yielding a set of Euclidean distances; the Euclidean distance between the point cloud and the target object is then determined from the Euclidean distances in this set.
In another alternative embodiment, each point cloud may be regarded as a whole and the target object may be regarded as a whole; a center point A is then determined from each point cloud and a center point B is determined from the data of the target object, where the center point A and the center point B correspond to each other. The Euclidean distance between each point cloud and the target object is then determined from the center point A and the center point B.
S2073: and determining the target Euclidean distance smaller than or equal to a preset distance threshold value from the Euclidean distance corresponding to each point cloud.
Optionally, the preset distance threshold may be set in advance according to an empirical value. The mobile electronic device may determine the target Euclidean distance according to the preset distance threshold; if a plurality of target Euclidean distances are determined, the one with the smallest difference from the distance threshold is selected, and if no target Euclidean distance exists, the Euclidean distance with the smallest difference from the distance threshold among all Euclidean distances is determined as the target Euclidean distance.
S2074: and determining the determined point cloud meeting the target Euclidean distance as a target point cloud matched with the target object.
S209: and determining the category information of the target point cloud according to the category information of the target object, and determining the spatial position of the target point cloud.
In the embodiment of the application, the spatial position of the target point cloud in the system can be determined directly through the obtained laser data set.
In summary, through the conversion between the first coordinate system and the second coordinate system, the matching connection between the point cloud corresponding to a real object and the object in the visual data is realized, so that the characteristic information of the real object in the actual space is determined. This lays a foundation for subsequent applications; for example, a blind person can be reminded of which obstacles exist nearby, or a driver can be warned of obstacles in areas that are not visible, and so on.
The embodiment of the application also provides a device for determining the characteristic information of the object, and fig. 11 is a schematic structural diagram of the device for determining the characteristic information of the object, as shown in fig. 11, where the device includes:
the acquisition module 1101 is configured to acquire visual data and a laser data set based on the same scene;
the target determining module 1102 is configured to determine category information of a target object in the visual data and location information of the target object in the visual data;
The point cloud determining module 1103 is configured to perform clustering processing on the laser data set to obtain a plurality of point clouds;
the target point cloud determining module 1104 is configured to determine a target point cloud matching the target object from the plurality of point clouds based on the coordinates projected by each of the plurality of point clouds in the first coordinate system and the position information of the target object in the visual data;
the feature information determining module 1105 is configured to determine category information of the target point cloud according to category information of the target object, and to determine a spatial location of the target point cloud.
In an alternative embodiment, the apparatus further comprises:
the target point cloud determining module 1104 is configured to determine coordinates of each of the plurality of point clouds projected under a first coordinate system; determining Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected under the first coordinate system and the position information of the target object in the visual data; determining a target Euclidean distance smaller than or equal to a preset distance threshold value from Euclidean distances corresponding to each point cloud; and determining the determined point cloud meeting the target Euclidean distance as a target point cloud matched with the target object.
In an alternative embodiment, the apparatus further comprises:
The target point cloud determining module 1104 is configured to determine coordinates of each of the plurality of point clouds in the second coordinate system; acquiring a conversion rule of the second coordinate system and the first coordinate system; and converting the coordinates of each of the plurality of point clouds in the second coordinate system into the coordinates of each of the plurality of point clouds in the first coordinate system based on the conversion rule.
In an alternative embodiment, the apparatus further comprises:
the target point cloud determining module 1104 is configured to determine a first transformation matrix from the second coordinate system to the intermediate coordinate system; determining a second transformation matrix from the intermediate coordinate system to the first coordinate system; determining a conversion rule of the second coordinate system and the first coordinate system containing at least one unknown parameter based on the first conversion matrix and the second conversion matrix; and determining at least one unknown parameter in the conversion rule based on N pairs of matching coordinates acquired by the first sensor and the second sensor at N preset positions respectively to obtain the conversion rule of the second coordinate system and the first coordinate system.
In an alternative embodiment, the apparatus further comprises:
the target point cloud determining module 1104 is configured to obtain, at N preset positions, a first coordinate set of the calibration code in a first coordinate system through a first sensor; acquiring a second coordinate set of the calibration code under a second coordinate system at N preset positions through a second sensor; converting a first coordinate in the first coordinate set and a second coordinate in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set; determining N pairs of matching coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position; substituting the N pairs of matching coordinates into a function of a conversion rule containing at least one unknown parameter to obtain at least one unknown parameter in the conversion rule.
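As an illustration of how the at least one unknown parameter might be solved from the N pairs of matching coordinates, the sketch below fits an affine approximation of the laser-to-pixel conversion rule by least squares. The choice of an affine model and of NumPy's lstsq solver are assumptions of this example; a full projective (DLT-style) solution would proceed analogously from the same matched homogeneous coordinates.

    import numpy as np

    def fit_conversion_rule(laser_pts, pixel_pts):
        # laser_pts: N x 3 coordinates of the calibration code in the second coordinate system
        # pixel_pts: N x 2 coordinates of the calibration code in the first coordinate system,
        #            one matched pair per preset position
        n = laser_pts.shape[0]
        homogeneous = np.hstack([laser_pts, np.ones((n, 1))])       # N x 4 homogeneous coords
        A, _, _, _ = np.linalg.lstsq(homogeneous, pixel_pts, rcond=None)
        return A.T                                                  # 2 x 4 matrix: pixel ≈ A.T @ [x, y, z, 1]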
In an alternative embodiment, the apparatus further comprises:
the point cloud determining module 1103 is configured to determine a feature value of each data in the laser data set; calculating a distance value between the characteristic value of each datum and a preset characteristic value; and dividing the laser data set into a plurality of point clouds according to the distance level of the distance value.
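The clustering performed by this module can be pictured with the following sketch. The embodiment does not fix what the characteristic value of each datum is, so the example assumes it is the point's range from the sensor, and the distance levels are given as hypothetical band edges; both are assumptions for illustration.

    import numpy as np

    def cluster_by_distance_level(scan, preset_value, band_edges):
        # scan        : N x 3 laser data set
        # preset_value: preset characteristic value to compare against
        # band_edges  : ascending boundaries defining the distance levels
        feature = np.linalg.norm(scan, axis=1)            # characteristic value of each datum (assumed: range)
        distance = np.abs(feature - preset_value)         # distance value to the preset characteristic value
        levels = np.digitize(distance, band_edges)        # distance level of each distance value
        return [scan[levels == lvl] for lvl in np.unique(levels)]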
The apparatus and method embodiments in the embodiments of the present application are based on the same application concept.
The method embodiments provided in the embodiments of the present application may be executed in a computer terminal, an electronic device, or a similar computing device. Taking execution on an electronic device as an example, fig. 12 is a block diagram of a hardware structure of an electronic device for the method for determining feature information of an object provided in the embodiments of the present application. As shown in fig. 12, the electronic device 1200 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 1210 (the processor 1210 may include, but is not limited to, a microprocessor such as an MCU or a programmable logic device such as an FPGA), a memory 1230 for storing data, and one or more storage media 1220 (e.g., one or more mass storage devices) for storing applications 1223 or data 1222. The memory 1230 and the storage medium 1220 may be transitory or persistent storage. The program stored on the storage medium 1220 may include one or more modules, each of which may include a series of instruction operations for the electronic device. Further, the central processing unit 1210 may be configured to communicate with the storage medium 1220 and execute the series of instruction operations in the storage medium 1220 on the electronic device 1200. The electronic device 1200 may also include one or more power supplies 1260, one or more wired or wireless network interfaces 1250, one or more input/output interfaces 1240, and/or one or more operating systems 1221, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The input-output interface 1240 may be used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the electronic device 1200. In one example, the input-output interface 1240 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the input/output interface 1240 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 12 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the electronic device 1200 may also include more or fewer components than shown in fig. 12, or have a different configuration than shown in fig. 12.
As shown in fig. 13, embodiments of the present application further provide a computer-readable storage medium 1310, which may be disposed in a server to store at least one instruction, at least one program, a code set, or an instruction set 1311 for implementing the method for determining the characteristic information of an object in the method embodiments, where the at least one instruction, the at least one program, the code set, or the instruction set 1311 is loaded by the processor 1320 to perform the method for determining the characteristic information of the object.
Alternatively, in this embodiment, the storage medium may be located in at least one of a plurality of network servers of a computer network. Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing program code.
As can be seen from the above embodiments of the method, apparatus, or storage medium for determining feature information of an object provided by the present application, the scheme includes: acquiring visual data and a laser data set based on the same scene; determining category information of a target object in the visual data and position information of the target object in the visual data; clustering the laser data set to obtain a plurality of point clouds; determining a target point cloud matched with the target object from the plurality of point clouds based on the coordinates projected by each point cloud under the first coordinate system and the position information of the target object in the visual data; determining the category information of the target point cloud according to the category information of the target object; and determining the spatial position of the target point cloud. Because both kinds of data are anchored to the same object, the matching relationship between the acquired visual data and the laser data is effectively strengthened, and complementary and redundant information in space and time can be combined according to an optimization criterion to generate a consistent interpretation or description of the observed environment and object, producing a new fusion result and providing more information for subsequent multi-sensor positioning applications.
It should be noted that the sequence of the above embodiments of the present application is for description only and does not represent the relative merits of the embodiments. The foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description is merely of preferred embodiments of the present application and is not intended to limit the present application; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A method of determining characteristic information of an object, the method comprising:
acquiring visual data and a laser data set based on the same scene;
determining category information of a target object in the visual data and position information of the target object in the visual data, wherein the target object in the visual data is determined through a target detection algorithm; the category information of the target object is the category to which the target object belongs;
clustering the laser data sets to obtain a plurality of point clouds;
determining coordinates of each point cloud projection in the plurality of point clouds under a first coordinate system; determining Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected under a first coordinate system and the position information of the target object in the visual data; determining a target Euclidean distance smaller than or equal to a preset distance threshold value from Euclidean distances corresponding to each point cloud; determining the determined point cloud meeting the target Euclidean distance as the target point cloud matched with the target object;
Determining the category information of the target point cloud according to the category information of the target object; and determining a spatial location of the target point cloud, the spatial location of the target point cloud being determined by the laser dataset.
2. The method of claim 1, wherein the determining coordinates of each of the plurality of point clouds projected under the first coordinate system comprises:
determining coordinates of each point cloud in the plurality of point clouds in a second coordinate system;
acquiring conversion rules of the second coordinate system and the first coordinate system;
and converting the coordinate of each point cloud in the plurality of point clouds under the second coordinate system into the coordinate of each point cloud in the plurality of point clouds under the first coordinate system based on the conversion rule.
3. The method of claim 2, wherein:
the coordinates in the first coordinate system are two-dimensional data or three-dimensional data; and/or, the coordinates in the second coordinate system are two-dimensional data or three-dimensional data; the position information of the target object in the visual data is two-dimensional data or three-dimensional data.
4. The method of claim 2, wherein the obtaining the transformation rules for the second coordinate system and the first coordinate system comprises:
Determining a first transformation matrix from the second coordinate system to an intermediate coordinate system;
determining a second transformation matrix from the intermediate coordinate system to the first coordinate system;
determining the transformation rules of the second coordinate system and the first coordinate system containing at least one unknown parameter based on the first transformation matrix and the second transformation matrix;
and determining the at least one unknown parameter in the conversion rule based on N pairs of matching coordinates acquired by the first sensor and the second sensor at N preset positions respectively, so as to obtain the conversion rule of the second coordinate system and the first coordinate system.
5. The method according to claim 4, wherein:
the first conversion matrix comprises at least one unknown parameter; and/or the second conversion matrix comprises at least one unknown parameter;
the intermediate coordinate system includes at least one sub-intermediate coordinate system and a conversion rule between the at least one sub-intermediate coordinate system.
6. The method of claim 4, wherein determining the at least one unknown parameter in the conversion rule based on N pairs of matching coordinates acquired at N preset locations by the first sensor and the second sensor, respectively, comprises:
Acquiring a first coordinate set of the calibration code under the first coordinate system at N preset positions through the first sensor;
acquiring a second coordinate set of the calibration code under the second coordinate system at the N preset positions through the second sensor;
converting a first coordinate in the first coordinate set and a second coordinate in the second coordinate set to obtain a first homogeneous coordinate set and a second homogeneous coordinate set;
determining N pairs of matching coordinates from the first homogeneous coordinate set and the second homogeneous coordinate set based on the same preset position;
substituting the N pairs of matching coordinates into a function of a conversion rule containing the at least one unknown parameter to obtain the at least one unknown parameter in the conversion rule.
7. The method of claim 1, wherein clustering the laser data sets to obtain a plurality of point clouds comprises:
determining a characteristic value of each data in the laser dataset;
calculating a distance value between the characteristic value of each datum and a preset characteristic value;
and dividing the laser data set into a plurality of point clouds according to the distance level of the distance value.
8. A device for determining characteristic information of an object, the device comprising:
the acquisition module is used for acquiring visual data and a laser data set based on the same scene;
the target determining module is used for determining category information of a target object in the visual data and position information of the target object in the visual data, wherein the target object in the visual data is determined through a target detection algorithm; the category information of the target object is the category to which the target object belongs;
the point cloud determining module is used for carrying out clustering processing on the laser data set to obtain a plurality of point clouds;
the target point cloud determining module is used for determining the coordinates of each point cloud projection in the plurality of point clouds under a first coordinate system; determining Euclidean distance between each point cloud and the target object based on the coordinates of each point cloud projected under a first coordinate system and the position information of the target object in the visual data; determining a target Euclidean distance smaller than or equal to a preset distance threshold value from Euclidean distances corresponding to each point cloud; determining the determined point cloud meeting the target Euclidean distance as the target point cloud matched with the target object;
The characteristic information determining module is used for determining the category information of the target point cloud according to the category information of the target object; and determining a spatial location of the target point cloud, the spatial location of the target point cloud being determined by the laser dataset.
9. An electronic device, characterized in that it comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or at least one program is loaded and executed by the processor to perform the method of determining the characteristic information of an object according to any one of claims 1-7.
10. A computer storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement a method of determining characteristic information of an object as claimed in any one of claims 1 to 7.