CN111986232B - Target object detection method, target object detection device, robot and storage medium - Google Patents
Target object detection method, target object detection device, robot and storage medium
- Publication number
- CN111986232B (application number CN202010813988.9A)
- Authority
- CN
- China
- Prior art keywords
- current
- position information
- target
- target object
- information
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
- G01S13/06—Systems determining position data of a target
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/66—Radar-tracking systems; Analogous systems
- G01S13/72—Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
- G01S13/723—Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar by using numerical data
- G01S13/726—Multiple target tracking
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S13/865—Combination of radar systems with lidar systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/66—Tracking systems using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Manipulator (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses a target object detection method and device, a robot, and a storage medium. The method comprises: determining current first position information of a target object in the vision acquired by a vision acquisition device at the current sampling moment; determining current second position information of the target object in the point cloud data acquired by a radar at the current sampling moment; determining, from these, current target first position information and corresponding current target second position information that satisfy a pairing condition; updating an association parameter of target tracking information if the current target first position information and the target tracking information in a tracking unit satisfy a matching condition; and, if the updated association parameter of the target tracking information is greater than a preset association parameter threshold, determining the current target second position information corresponding to the current target first position information as the final position information of the target object. The target object detection method improves the accuracy and robustness of target object detection.
Description
Technical Field
The embodiment of the invention relates to the technical field of robot detection, in particular to a method and a device for detecting a target object, a robot and a storage medium.
Background
With the continuous development of computer technology, the maturing of robot sensor equipment, and the popularization of robot operating systems, robots are being deployed ever more widely. Detection during robot movement analyzes and processes a series of sensor data to obtain the position of a target object, so that the robot can react in time and take avoiding action in advance. Robust and accurate detection is one of the core technologies of a robot and a basic requirement for realizing autonomous robot navigation.
At present, when a robot performs detection while moving, the point cloud data acquired by a laser radar and the image acquired by an image acquisition device can be fused; specifically, the fusion adopts Bayesian decision, and target detection and tracking are carried out according to the fusion result.
However, because Bayesian decision fusion relies on prior probabilities, and these prior probabilities carry errors, the detection accuracy of this approach is low.
Disclosure of Invention
The invention provides a target object detection method, a target object detection device, a robot and a storage medium, and aims to solve the technical problem of low accuracy of the conventional detection method.
In a first aspect, an embodiment of the present invention provides a method for detecting a target object, including:
determining current first position information of a target object in vision, which is acquired by a vision acquisition device at the current sampling moment;
determining current second position information of a target object in point cloud data acquired by a radar at the current sampling moment;
determining current target first position information and corresponding current target second position information which meet pairing conditions according to the current first position information and the current second position information;
if the current target first position information and the target tracking information in the tracking unit meet the matching condition, updating the associated parameters of the target tracking information; the tracking unit is determined according to historical target first position information at historical sampling time;
and if the updated associated parameter of the target tracking information is larger than a preset associated parameter threshold, determining the current target second position information corresponding to the current target first position information as the final position information of the target object.
In the method as described above, the current first position information includes a first distance between a target object and a robot, and the current second position information includes a second distance between the target object and the robot;
the determining, according to the current first location information and the current second location information, current target first location information and corresponding current target second location information that satisfy a pairing condition includes:
determining current effective first position information in the current first position information and current effective second position information corresponding to the current effective first position information according to the current first position information and the current second position information;
and if the difference value of the first distance in the current effective first position information and the second distance in the corresponding current effective second position information meets the pairing condition, determining the current effective first position information as the current target first position information, and determining the corresponding current effective second position information as the corresponding current target second position information.
In this implementation, the current effective first position information and the current effective second position information corresponding to it are screened out first, and the current target first position information and the corresponding current target second position information are then determined on that basis, so that the current target first position information and the corresponding current target second position information are determined efficiently.
In the method shown above, the pairing condition is: the difference obtained by subtracting the target distance compensation value and the second distance in the corresponding current effective second position information from the first distance in the current effective first position information is greater than zero and smaller than a preset target distance threshold, wherein the target distance compensation value includes: a first distance compensation value and a second distance compensation value, and the preset target distance threshold includes: a first distance threshold and a second distance threshold, the first distance compensation value being less than the second distance compensation value, and the first distance threshold being less than the second distance threshold;
the method further comprises the following steps:
if the position corresponding to the current effective first position information is determined to be in a preset safety area, determining the target distance compensation value to be the first distance compensation value, and determining the preset target distance threshold value to be the first distance threshold value;
and if the position corresponding to the current effective first position information is determined to be outside a preset safety area, determining that the target distance compensation value is the second distance compensation value, and determining that the preset target distance threshold is the second distance threshold.
In this implementation manner, different distance thresholds and distance compensation values are set for whether the current effective first position information is in the safety region, so that the adaptability of the pairing conditions can be improved, the current effective first position information at different positions corresponds to different pairing conditions, and the accuracy of the determined current target first position information and the determined current second position information is improved.
In the method shown above, the determining, according to the current first location information and the current second location information, current valid first location information in the current first location information and current valid second location information corresponding to the current valid first location information includes:
determining an area where a position corresponding to the current first position information is located according to the current first position information;
determining an effectiveness judgment condition according to the area of the position corresponding to the current first position information;
if the current second position information which meets the validity judgment condition with the current first position information exists in the current second position information, the current first position information is determined as the current valid first position information, and the current second position information which meets the validity judgment condition with the current first position information is determined as the current valid second position information corresponding to the current valid first position information.
In this implementation manner, validity determination conditions are different depending on the area where the position corresponding to the current first position information is located. The processing mode can improve the accuracy of the determined current effective first position information and the corresponding current effective second position information.
In the method as shown above, the current first location information further includes: the maximum angle between the target object and the robot and the minimum angle between the target object and the robot, and the current second position information further includes: the angle of the target object to the robot;
determining an effectiveness judgment condition according to the area where the position corresponding to the current first position information is located, wherein the determining includes:
if the area where the position corresponding to the current first position information is located is determined to be the middle area, determining that the validity judgment condition is as follows: the sum of the maximum angle of the target object and the robot and the angle compensation value is larger than the angle of the target object and the robot, and the difference value between the minimum angle of the target object and the robot and the angle compensation value is smaller than the angle of the target object and the robot;
if the area where the position corresponding to the current first position information is located is determined to be the left area, determining that the validity judgment condition is as follows: the maximum angle between the target object and the robot is larger than the angle between the target object and the robot, and the difference value between the minimum angle between the target object and the robot and the angle compensation value of the preset first multiple is smaller than the angle between the target object and the robot;
if the area where the position corresponding to the current first position information is located is determined to be the right area, determining that the validity judgment condition is as follows: the sum of the maximum angle of the target object and the robot and the preset second-multiple angle compensation value is larger than the angle of the target object and the robot, and the minimum angle of the target object and the robot is smaller than the angle of the target object and the robot.
In this implementation manner, how to determine the validity judgment condition according to the area where the position corresponding to the current first position information is located is defined. The processing mode can further improve the accuracy of the determined current effective first position information and the corresponding current effective second position information.
In the method as shown above, the tracking unit includes a plurality of tracking information, each tracking information including a plurality of historical target first location information;
if the current target first position information and the target tracking information in the tracking unit meet the matching condition, updating the associated parameters of the target tracking information, including:
and if the quotient of the distance between the position corresponding to the current target first position information and the position corresponding to the latest historical target first position information in the target tracking information and the width of the target object corresponding to the current target first position information is smaller than a preset matching threshold value, updating the associated parameters of the target tracking information.
In the implementation mode, the matching condition met by the current target first position information and the target tracking information in the tracking unit is limited, and the accuracy of the determined target tracking information is improved.
In the method as shown above, after determining that a quotient of a distance between a location corresponding to the current target first location information and a location corresponding to the latest historical target first location information in the target tracking information and a width of a target object corresponding to the current target first location information is smaller than a preset matching threshold, the method further includes:
and updating the current target first position information into the target tracking information.
The method for updating the target tracking information can facilitate the target object detection at the subsequent sampling moment, and improve the robustness of the target object detection.
As in the method above, the method further comprises:
and if the current target first position information and any tracking information in the tracking unit do not meet the matching condition, taking the current target first position information as new tracking information and adding the new tracking information into the tracking unit.
This way of updating the tracking unit facilitates target object detection at subsequent sampling moments and improves the robustness of target object detection.
In the method, the determining the current first position information of the target object in the vision, which is acquired by the vision acquisition device at the current sampling time, includes:
inputting the vision into a pre-trained deep learning model, and acquiring the position information of a circumscribed polygon of the target object identified by the deep learning model;
performing homography transformation on the position information of the circumscribed polygon of the target object to obtain the position information of the circumscribed polygon of the target object on a laser plane;
performing coordinate transformation on the position information of the circumscribed polygon of the target object on a laser plane, and determining the position information of the circumscribed polygon of the target object in a world coordinate system;
and determining the position information of the circumscribed polygon of the target object in a world coordinate system as the current first position information of the target object.
In this implementation, the position information of the target object is detected with the deep learning model and then subjected to coordinate transformation, so the obtained current first position information of the target object has high accuracy; and because the coordinate transformation has already been performed, subsequent processing is simplified and the target object detection efficiency is improved.
In the method as shown above, the determining the current second position information of the target object in the point cloud data acquired by the radar at the current sampling time includes:
clustering the point cloud data to obtain a plurality of clustered target object classes;
taking the position average value of the point clouds in each target object class as the position information of the target object in the robot coordinate system;
performing coordinate transformation on the position information of the target object in a robot coordinate system, and determining the position information of the target object in a world coordinate system;
and determining the position information of the target object in the world coordinate system as the current second position information of the target object.
In the implementation mode, the target object class is obtained in a clustering mode, the position average value of point clouds in the target object class is used as the position information of the target object in the robot coordinate system, and after coordinate transformation, the current second position information of the target object is determined.
In the method as described above, the target object is a road obstacle.
The implementation mode can realize the detection of the road barrier and avoid in advance, and improves the safety of the robot.
In a second aspect, an embodiment of the present invention provides a target object detection apparatus, including:
the first determination module is used for determining current first position information of the target object in the vision, which is acquired by the vision acquisition device at the current sampling moment;
the second determining module is used for determining current second position information of the target object in the point cloud data acquired by the radar at the current sampling moment;
a third determining module, configured to determine, according to the current first location information and the current second location information, current target first location information and corresponding current target second location information that meet a pairing condition;
the first updating module is used for updating the associated parameters of the target tracking information if the current target first position information and the target tracking information in the tracking unit meet the matching condition; the tracking unit is determined according to historical target first position information at historical sampling time;
a fourth determining module, configured to determine, if the updated associated parameter of the target tracking information is greater than a preset associated parameter threshold, the current target second position information corresponding to the current target first position information as final position information of the target object.
In a third aspect, an embodiment of the present invention further provides a robot, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for detecting a target object as provided in the first aspect.
In a fourth aspect, the embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method for detecting a target object as provided in the first aspect.
The embodiment of the invention provides a target object detection method and device, a robot, and a storage medium. The method comprises: determining current first position information of a target object in the vision acquired by a vision acquisition device at the current sampling moment; determining current second position information of the target object in the point cloud data acquired by a radar at the current sampling moment; determining, according to the current first position information and the current second position information, current target first position information and corresponding current target second position information that satisfy a pairing condition; if the current target first position information and the target tracking information in the tracking unit satisfy a matching condition, updating the association parameter of the target tracking information, the tracking unit being determined according to historical target first position information at historical sampling moments; and, if the updated association parameter of the target tracking information is greater than a preset association parameter threshold, determining the current target second position information corresponding to the current target first position information as the final position information of the target object. In this target object detection method, the current first position information of the target object in the vision acquired by the vision acquisition device and the current second position information of the target object in the point cloud data acquired by the radar are fused, and the target object is detected with a time-domain tracking method, which improves the accuracy and robustness of target object detection.
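For illustration only, the flow described above can be summarized in the following Python sketch. All names (VisionDet, RadarDet, Track, detect_targets) and the simplified pairing rule, matching rule, and threshold values are assumptions made for this sketch, not the disclosed implementation.

```python
# Illustrative sketch of the fused detection-and-tracking flow; the pairing and
# matching rules are simplified and the threshold values are placeholders.
import math
from dataclasses import dataclass, field

@dataclass
class VisionDet:        # current first position information (from the vision device)
    center: tuple       # (x, y) center point in the world frame
    range: float        # first distance between target object and robot
    width: float        # width of the detected target object

@dataclass
class RadarDet:         # current second position information (from the radar)
    center: tuple       # (x, y) in the world frame
    range: float        # second distance between target object and robot

@dataclass
class Track:            # one piece of tracking information in the tracking unit
    history: list = field(default_factory=list)  # historical target first position information
    assoc: int = 0                                # association parameter

def detect_targets(vision_dets, radar_dets, tracks,
                   min_dist=0.4, match_thresh=1.0, assoc_thresh=3):
    """Returns the final positions of target objects at the current sampling moment."""
    finals = []
    for w in vision_dets:
        # Pairing condition (simplified): first distance minus second distance in (0, min_dist).
        paired = next((n for n in radar_dets
                       if 0.0 < w.range - n.range < min_dist), None)
        if paired is None:
            continue
        # Matching condition: distance to the newest history entry of a track,
        # divided by the object width, below the matching threshold.
        matched = None
        for t in tracks:
            d = math.dist(w.center, t.history[-1].center)
            if d / w.width < match_thresh:
                matched = t
                break
        if matched is None:
            tracks.append(Track(history=[w]))     # no match: start new tracking information
            continue
        matched.history.append(w)                 # update target tracking information
        matched.assoc += 1                        # update the association parameter
        if matched.assoc > assoc_thresh:          # stable track: report the radar position
            finals.append(paired.center)
    return finals
```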
Drawings
Fig. 1 is a schematic view of an application scenario of a method for detecting a target object according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of a method for detecting a target object according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of determining current first position information of a target object according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of one implementation of current first position information of a target object;
Fig. 5 is a diagram illustrating one implementation of determining current second position information of a target object according to an embodiment of the invention;
Fig. 6 is a schematic process diagram illustrating a process of determining current target first position information and corresponding current target second position information that satisfy a pairing condition in the target object detection method according to an embodiment of the present invention;
Fig. 7 is a diagram of a safety area according to an embodiment of the present invention;
Fig. 8 is a schematic process diagram of updating the tracking unit in the target object detection method according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a target object detection apparatus according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a robot according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a schematic view of an application scenario of a target object detection method according to an embodiment of the present invention. As shown in fig. 1, when the robot 11 is moving, for example navigating autonomously on a known map, it needs to detect target objects in the surrounding environment so that it can react in time and avoid them in advance. The target object in this embodiment may be a road obstacle 12 in the surrounding environment, such as a one-meter fence, a partition, a baffle, and the like. The one-meter fence in the present embodiment refers to a pillar, cuboid, or similar structure of a preset height, for example 1 meter, used to guide pedestrians. In the target object detection method, the current first position information of the target object in the vision acquired by the vision acquisition device and the current second position information of the target object in the point cloud data acquired by the radar are fused, and the target object is detected with a time-domain tracking method, which improves the accuracy and robustness of detection.
Fig. 2 is a schematic flow chart of a method for detecting a target object according to an embodiment of the present invention. The embodiment is suitable for a scene for detecting the target object in the surrounding environment in the moving process of the robot. The target object detection method may be performed by a target object detection device, which may be implemented by software and/or hardware, and may be integrated in a robot. As shown in fig. 2, the method for detecting a target object provided in this embodiment includes the following steps:
step 201: and determining current first position information of the target object in the vision, which is acquired by the vision acquisition device at the current sampling moment.
Specifically, the visual acquisition device in this embodiment may be an image acquisition device or other devices capable of realizing visual acquisition. The vision acquisition device is arranged in a robot and acquires an object entering a Field of View (FOV) of the vision acquisition device at a preset frequency. The visual in this embodiment may be a visual image.
The target object in this embodiment may be a road obstacle. The road obstacle here may be a one-meter fence, a partition, a baffle, and the like. The one-meter fence in this embodiment refers to a device, such as a pillar or a cuboid, that is fixedly or movably arranged in a road, has a preset height, and is used for guiding pedestrians.
In one implementation, the current first position information of the target object in the vision acquired by the vision acquisition device at the current sampling time may be determined through an image recognition technology.
The following describes in detail a process of determining the current first position information of the target object. Fig. 3 is a schematic flowchart of determining current first position information of a target object according to an embodiment of the present invention. As shown in fig. 3, a process of determining current first position information of a target object includes the steps of:
step 2011: and inputting the vision into a pre-trained deep learning model, and acquiring the position information of the circumscribed polygon of the target object identified by the deep learning model.
The deep learning model in the present embodiment is a model for detecting a target object that is trained in advance from a training image containing the target object. Illustratively, the deep learning model may be a Convolutional Neural Network (CNN), a Recursive Convolutional Neural Network (RCNN), or the like. The present implementation is not limited thereto.
After the deep learning model is trained, the vision collected by the vision collection device at the current sampling moment is input into the deep learning model, and the position information of the circumscribed polygon of the target object identified by the deep learning model can be obtained. Illustratively, the circumscribed polygon herein may be a minimum circumscribed rectangle of the target object. Of course, the circumscribed polygon may also be a minimum circumscribed pentagon, hexagon, etc. of the target object.
The position information of the circumscribed polygon of the target object acquired in step 2011 may be coordinate information of the circumscribed polygon of the target object in the visual image.
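As an illustration of step 2011, assuming the deep learning model outputs the pixel points (for example, a mask contour) belonging to each detected target object, the minimum circumscribed rectangle in the visual image could be obtained as follows; the function name and input format are hypothetical.

```python
# Sketch only: obtain the minimum circumscribed rectangle of one detected object
# in image (pixel) coordinates from the points the detector assigns to it.
import cv2
import numpy as np

def circumscribed_rectangle(object_points: np.ndarray) -> np.ndarray:
    """object_points: (N, 2) pixel coordinates belonging to one detected object.
    Returns the 4 vertex coordinates of its minimum circumscribed rectangle."""
    rect = cv2.minAreaRect(object_points.astype(np.float32))
    return cv2.boxPoints(rect)                    # shape (4, 2), still in pixels

# Example with made-up points roughly outlining a tilted obstacle in the image
pts = np.array([[100, 200], [160, 190], [170, 260], [110, 270]])
print(circumscribed_rectangle(pts))
```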
Step 2012: and carrying out homography transformation on the position information of the circumscribed polygon of the target object to obtain the position information of the circumscribed polygon of the target object on the laser plane.
In step 2012, the position information of the circumscribed polygon of the target object is homography-transformed to project the circumscribed polygon of the target object on the plane of the radar emitting the electromagnetic wave. The plane of the electromagnetic wave herein refers to a plane having a preset height from the ground, where the preset height may be determined according to the height of the radar.
Specifically, the homography transformation can be realized with a homography matrix calibrated in advance. In the case where the circumscribed polygon is the minimum circumscribed rectangle, the vertex coordinates of the minimum circumscribed rectangle of the target object are mapped to the corresponding vertex coordinates of the rectangle in the laser plane.
Step 2013: and carrying out coordinate transformation on the position information of the circumscribed polygon of the target object in the laser plane, and determining the position information of the circumscribed polygon of the target object in a world coordinate system.
Step 2014: and determining the position information of the circumscribed polygon of the target object in the world coordinate system as the current first position information of the target object.
In step 2013, the position information of the circumscribed polygon of the target object determined in step 2012 on the laser plane is subjected to coordinate transformation, and the position information of the circumscribed polygon of the target object in the world coordinate system is obtained. In step 2014, the position information of the circumscribed polygon of the target object in the world coordinate system is used as the current first position information of the target object.
In the process of acquiring the current first position information of the target object based on steps 2011 to 2014, the deep learning model is used to detect the position information of the target object, and the detected position information is subjected to coordinate transformation; the accuracy of the obtained current first position information of the target object is therefore high, and because the coordinate transformation has already been performed, subsequent processing is simplified and the target object detection efficiency is improved.
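A minimal sketch of steps 2012 to 2014 follows, assuming a pre-calibrated homography matrix H (image plane to laser plane) and the robot pose (x, y, yaw) in the world frame are available; the function name and pose representation are assumptions of this sketch.

```python
# Sketch only: project the rectangle vertices onto the laser plane with a
# pre-calibrated homography, then transform them into the world coordinate system.
import numpy as np
import cv2

def image_rect_to_world(rect_px: np.ndarray, H: np.ndarray,
                        robot_pose: tuple) -> np.ndarray:
    """rect_px: (4, 2) vertex coordinates of the circumscribed rectangle in the image.
    H: 3x3 homography (image plane -> laser plane). robot_pose: (x, y, yaw) in the world frame.
    Returns the (4, 2) vertex coordinates in the world coordinate system."""
    # Homography transformation onto the laser plane (step 2012)
    laser_pts = cv2.perspectiveTransform(
        rect_px.reshape(-1, 1, 2).astype(np.float64), H).reshape(-1, 2)
    # Coordinate transformation from the robot/laser frame to the world frame (step 2013)
    x, y, yaw = robot_pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    # The result is used as the current first position information (step 2014)
    return laser_pts @ R.T + np.array([x, y])
```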
Illustratively, the current first position information may be denoted w_i, i = 0, ..., m_1, where m_1 is the number of target objects detected in one frame of visual image. That is, there may be multiple pieces of current first position information, each corresponding to one target object.
In the case where the circumscribed polygon is the minimum circumscribed rectangle, the attributes of w_i may include at least one of: the coordinates of the four vertices, the coordinates of the center point, and the distance from the center point to the center point of the robot. Fig. 4 is a schematic diagram of one implementation of current first position information of a target object; as shown in fig. 4, several pieces of current first position information are shown. Since the current first position information is obtained by performing homography transformation and coordinate transformation on the minimum circumscribed rectangle, the shape formed by the current first position information may not be a rectangle but another type of quadrangle.
Step 202: and determining current second position information of the target object in the point cloud data acquired by the radar at the current sampling moment.
Specifically, in this embodiment, the robot may collect point cloud data through a radar disposed thereon. The point cloud data in this embodiment may be two-dimensional or three-dimensional point cloud data. The radar in this embodiment collects point cloud data of objects entering its FOV at a preset frequency. The radar in this embodiment may be a laser radar, an over-the-horizon radar, a microwave radar, a millimeter wave radar, or the like.
In this embodiment, the current second position information of the target object may be determined by a method of clustering point cloud data. The specific process can be as follows: clustering the point cloud data to obtain a plurality of clustered target object classes; taking the position average value of the point clouds in each target object class as the position information of the target object in the robot coordinate system; carrying out coordinate transformation on the position information of the target object in the robot coordinate system, and determining the position information of the target object in the world coordinate system; and determining the position information of the target object in the world coordinate system as the current second position information of the target object.
Fig. 5 is a schematic diagram of an implementation manner of determining current second position information of the target object according to an embodiment of the present invention. As shown in fig. 5, first, a plurality of point cloud data are acquired by the radar. The coordinate values of the point cloud data may be the coordinate values of the collected points in the robot coordinate system. For two-dimensional point cloud data, the robot coordinate system is a coordinate system constructed by using one point on the robot as the origin, the robot forward direction as the X-axis direction, and the robot left direction as the Y-axis direction. For three-dimensional point cloud data, the robot coordinate system is constructed by taking one point on the robot as the origin, the forward direction of the robot as the X-axis direction, the left direction of the robot as the Y-axis direction, and the direction perpendicular to the X-Y plane as the Z-axis direction. Here, the point on the robot may be any point on the robot, for example, the midpoint of the line connecting the drive wheels.
With reference to fig. 5, the plurality of point cloud data are clustered. For example, they may be clustered by a Euclidean clustering method; a cluster whose clustering radius is within a preset distance, for example in the range of 3 cm to 5 cm, is identified as a target object class, and a plurality of clustered target object classes are obtained. Each target object class here may represent one target object. The position average of the point clouds in each target object class is taken as the position information of that target object in the robot coordinate system. Then coordinate transformation is performed to acquire the position information of the target object in the world coordinate system, which is taken as the current second position information of the target object.
According to the process for determining the current second position information of the target object, the target object class is obtained in a clustering mode, the position average value of the point cloud in the target object class is used as the position information of the target object in the robot coordinate system, and the current second position information of the target object is determined after coordinate transformation.
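A possible sketch of the clustering step is given below; DBSCAN is used here as a stand-in for the Euclidean clustering described above, and the eps and min_samples values are illustrative only.

```python
# Sketch only: cluster the point cloud in the robot frame and take the position
# average of each cluster as the position of one target object.
import numpy as np
from sklearn.cluster import DBSCAN

def radar_targets_in_robot_frame(points: np.ndarray, eps: float = 0.05) -> list:
    """points: (N, 2) point cloud in the robot coordinate system.
    Returns one (x, y) centroid per clustered target object class."""
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(points)
    centroids = []
    for lbl in set(labels):
        if lbl == -1:                       # noise points do not form a target object class
            continue
        centroids.append(points[labels == lbl].mean(axis=0))
    # Each centroid is then transformed into the world coordinate system to give
    # the current second position information.
    return centroids
```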
Alternatively, the current second position information may be denoted by the point n_j, j = 0, ..., m_2, where m_2 is the number of target objects detected in one frame of point cloud data. That is, there may be multiple pieces of current second position information, each corresponding to one target object.
The attributes of the current second position information include the coordinates (n_jx, n_jy, 0) of point n_j in the world coordinate system and, relative to the center point (r_x, r_y, 0) of the robot, the angle n_j.theta and the distance n_j.range. n_j.range and n_j.theta are calculated as shown in the following formulas 1.1 and 1.2:
x_ = n_jx - r_x
y_ = n_jy - r_y
n_j.range = sqrt(x_^2 + y_^2)    (formula 1.1)
n_j.theta = arctan(y_ / x_), with a branch on the sign of x_ selecting the correct quadrant    (formula 1.2)
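For illustration, the computation of n_j.range and n_j.theta can be written as follows; math.atan2 is used as a compact equivalent of the sign branch on x_ in formula 1.2.

```python
# Range and angle of a radar point n_j relative to the robot center (r_x, r_y).
import math

def range_and_theta(n_jx, n_jy, r_x, r_y):
    x_ = n_jx - r_x
    y_ = n_jy - r_y
    rng = math.hypot(x_, y_)        # formula 1.1: Euclidean distance to the robot center
    theta = math.atan2(y_, x_)      # formula 1.2: angle, quadrant-correct
    return rng, theta
```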
it should be noted that there is no timing relationship between step 201 and step 202. These two steps may be performed in any order.
Step 203: and determining the current target first position information and the corresponding current target second position information which meet the pairing condition according to the current first position information and the current second position information.
Specifically, the current first position information of the target object is determined in step 201, and the current second position information of the target object is determined in step 202. The vision collected by the vision collection device is easily influenced by illumination, so that the identification performance is reduced, the point cloud data only has geometric information, and the identification degree of a target object is not enough. Therefore, in the embodiment, the current first position information of the target object and the current second position information of the target object are fused to improve the accuracy of the determined final position information of the target object.
In step 203, the current first location information and the current second location information are paired, and the current target first location information and the corresponding current target second location information meeting the pairing condition are determined.
In one implementation, a pairing condition may be preset, and according to the pairing condition, the current target first location information and the corresponding current target second location information that satisfy the pairing condition are determined.
In another implementation manner, current valid first location information in the current first location information and current valid second location information corresponding to the current valid first location information may be determined, and then the current target first location information and the corresponding current target second location information may be determined according to whether the current valid first location information and the current valid second location information corresponding to the current valid first location information satisfy the pairing condition.
This implementation is described in detail below. Fig. 6 is a schematic process diagram of determining current target first position information and corresponding current target second position information that satisfy a pairing condition in the target object detection method according to an embodiment of the present invention. In this implementation, the current first position information includes a first distance w_i.range between the target object and the robot, and the current second position information includes a second distance n_j.range between the target object and the robot. More specifically, the first distance between the target object and the robot in the current first position information may be the distance between the center point of the shape constituted by the current first position information and the center point of the robot. As shown in fig. 6, determining the current target first position information and the corresponding current target second position information satisfying the pairing condition includes the following steps:
step 2031: and determining current effective first position information in the current first position information and current effective second position information corresponding to the current effective first position information according to the current first position information and the current second position information.
Since there is a possibility of false detection, some of the determined current first position information are invalid data. In step 2031, currently valid first location information in the current first location information and currently valid second location information corresponding to the currently valid first location information may be screened out according to the current first location information and the current second location information. The current effective first position information and the current effective second position information corresponding to the current effective first position information are screened out, so that the efficiency of subsequently determining the current target first position information and the corresponding current target second position information can be improved conveniently.
One possible implementation manner is that according to the current first position information, an area where a position corresponding to the current first position information is located is determined; determining an effectiveness judgment condition according to the area of the position corresponding to the current first position information; if the current second position information which meets the validity judgment condition with the current first position information exists in the current second position information, the current first position information is determined as the current valid first position information, and the current second position information which meets the validity judgment condition with the current first position information is determined as the current valid second position information corresponding to the current valid first position information.
In this embodiment, the visual image may be divided into three regions in advance, for example, a left region, a right region, and a middle region. The range of the divided region can be transformed into a world coordinate system through homography transformation and coordinate transformation. The area where the position corresponding to the current first position information is located can be determined according to the current first position information, and then the validity judgment condition is determined according to the area where the position corresponding to the current first position information is located. And the validity judgment conditions are different according to different regions where the position corresponding to the current first position information is located. The processing mode can improve the accuracy of the determined current effective first position information and the corresponding current effective second position information.
More specifically, the position corresponding to the current first position information may be a position of a center point of the shape constituted by the current first position information.
After the validity judgment condition is determined, the points n_j are traversed to determine whether there exists current second position information that satisfies the validity judgment condition together with the current first position information. If such current second position information exists, the current first position information is determined as current valid first position information, and the current second position information satisfying the validity judgment condition with it is determined as the current valid second position information corresponding to that current valid first position information. If no current second position information satisfying the validity judgment condition with the current first position information exists, the current first position information is not current valid first position information.
Alternatively, in the case where the circumscribed polygon is the minimum circumscribed rectangle: when the minimum x-axis coordinate among the four vertex coordinates of the current first position information w_i is smaller than a first preset region threshold, w_i is determined to be located in the left region; when the maximum x-axis coordinate among the four vertex coordinates of w_i is larger than a second preset region threshold, w_i is determined to be located in the right region; when neither of the above conditions is satisfied, w_i is determined to be located in the middle region.
The following describes in detail how the validity judgment condition is determined according to the region in which the position corresponding to the current first position information is located. Further, the current first position information also includes the maximum angle between the target object and the robot and the minimum angle between the target object and the robot, and the current second position information also includes the angle n_j.theta between the target object and the robot. The maximum angle and the minimum angle between the target object and the robot may be determined as follows: first, the angles between the four vertices of w_i and the center point of the robot are determined in the manner of formula 1.2, and then the maximum angle w_i.max_theta and the minimum angle w_i.min_theta between the target object and the robot are obtained.
In practical implementation, a position flag may be set for w_i. After the region in which w_i is located has been determined, the position flag is set to the corresponding value. For example, if the region in which the position corresponding to the current first position information is located is determined to be the middle region, the position flag of the corresponding w_i is set to middle; if it is determined to be the left region, the position flag of the corresponding w_i is set to left; and if it is determined to be the right region, the position flag of the corresponding w_i is set to right.
In the first case, if it is determined that the area where the position corresponding to the current first position information is located is the middle area, the validity determination condition is determined as follows: the sum of the maximum angle of the target object to the robot and the angle compensation value is greater than the angle of the target object to the robot, and the difference between the minimum angle of the target object to the robot and the angle compensation value is less than the angle of the target object to the robot.
In other words, if the position flag of w_i is middle, the validity judgment condition is determined as:
n_j.theta < w_i.max_theta + angle_offset and n_j.theta > w_i.min_theta - angle_offset, where angle_offset represents the angle compensation value. It is then determined whether there exists an n_j that satisfies the validity judgment condition.
In the second case, if it is determined that the area where the position corresponding to the current first position information is located is the left area, it is determined that the validity determination condition is: the maximum angle between the target object and the robot is larger than the angle between the target object and the robot, and the difference value between the minimum angle between the target object and the robot and the angle compensation value of the preset first multiple is smaller than the angle between the target object and the robot.
In other words, if the position flag of w_i is left, the validity judgment condition is determined as:
n_j.theta < w_i.max_theta and n_j.theta > w_i.min_theta - 3 * angle_offset. The preset first multiple here is 3. It is then determined whether there exists an n_j that satisfies the validity judgment condition.
In a third case, if it is determined that the area where the position corresponding to the current first position information is located is the right area, determining that the validity judgment condition is: the sum of the maximum angle of the target object and the robot and the preset second-multiple angle compensation value is larger than the angle of the target object and the robot, and the minimum angle of the target object and the robot is smaller than the angle of the target object and the robot.
In other words, if the position flag of w_i is right, the validity judgment condition is determined as:
n_j.theta < w_i.max_theta + 3 * angle_offset and n_j.theta > w_i.min_theta. It is then determined whether there exists an n_j that satisfies the validity judgment condition. The preset second multiple here is 3.
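The region determination and the three region-dependent validity judgment conditions above can be sketched as follows; the dictionary keys, the region thresholds, and angle_offset are illustrative assumptions.

```python
# Sketch only: decide the region of w_i from its vertex x-coordinates, then apply
# the validity judgment condition that corresponds to that region.
def region_of(w, first_region_thresh, second_region_thresh):
    xs = [v[0] for v in w["vertices"]]            # x-coordinates of the four vertices
    if min(xs) < first_region_thresh:
        return "left"
    if max(xs) > second_region_thresh:
        return "right"
    return "middle"

def satisfies_validity(w, n, region, angle_offset):
    """w: current first position info with max_theta / min_theta;
    n: current second position info with theta."""
    if region == "middle":
        return (n["theta"] < w["max_theta"] + angle_offset and
                n["theta"] > w["min_theta"] - angle_offset)
    if region == "left":
        return (n["theta"] < w["max_theta"] and
                n["theta"] > w["min_theta"] - 3 * angle_offset)
    # right region
    return (n["theta"] < w["max_theta"] + 3 * angle_offset and
            n["theta"] > w["min_theta"])
```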
Step 2032: and if the difference value of the first distance in the current effective first position information and the second distance in the corresponding current effective second position information meets the pairing condition, determining the current effective first position information as the current target first position information, and determining the corresponding current effective second position information as the corresponding current target second position information.
Optionally, the pairing condition is: the difference obtained by subtracting the target distance compensation value and the second distance in the corresponding current effective second position information from the first distance in the current effective first position information is greater than zero and smaller than a preset target distance threshold. The target distance compensation value includes: a first distance compensation value and a second distance compensation value. The preset target distance threshold includes: a first distance threshold and a second distance threshold. The first distance compensation value is smaller than the second distance compensation value, and the first distance threshold is smaller than the second distance threshold.
With the target distance threshold denoted min_dist and the target distance compensation value denoted move_offset, the pairing condition is as follows: first calculate dist = w_i.range - move_offset - n_j.range, then compare dist with 0 and min_dist; dist must be greater than zero and smaller than the preset target distance threshold. If dist > 0 and dist < min_dist are satisfied, the current effective first position information is determined as current target first position information, and the corresponding current effective second position information is determined as the corresponding current target second position information.
Based on the implementation manner of the pairing condition, the method for detecting the target object provided by this embodiment further includes the following steps: if the position corresponding to the current effective first position information is determined to be in the preset safety area, determining the target distance compensation value as a first distance compensation value, and determining the preset target distance threshold value as a first distance threshold value; and if the position corresponding to the current effective first position information is determined to be outside the preset safety area, determining the target distance compensation value as a second distance compensation value, and setting the preset target distance threshold as a second distance threshold.
Different distance thresholds and distance compensation values are set according to whether the current effective first position information is in the safety area. This improves the adaptability of the pairing condition, so that current effective first position information at different positions corresponds to different pairing conditions, which improves the accuracy of the determined current target first position information and current target second position information.
Fig. 7 is a schematic diagram of a secure area according to an embodiment of the present invention. The safety area is a part of the area in the visual image after projection and coordinate transformation. Optionally, the safety region is an area smaller than the area of the projected and coordinate-transformed visual image. Alternatively, the first distance compensation value may be 0, the second distance compensation value is 0.3, the first distance threshold value is 0.4, and the second distance threshold value is 0.8.
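A minimal sketch of the pairing check of step 2032, including the safety-area-dependent choice of compensation value and threshold, is given below. The numeric values are the illustrative ones given above (0 and 0.3 for the compensation values, 0.4 and 0.8 for the thresholds); the in_safe_area helper and the range field name are assumptions made for this sketch.

    from types import SimpleNamespace

    def satisfies_pairing(wi, nj, in_safe_area):
        # Choose the target distance compensation value and target distance threshold
        # according to whether the position of wi lies in the preset safety area (Fig. 7).
        if in_safe_area(wi):
            move_offset, min_dist = 0.0, 0.4   # first compensation value / first threshold
        else:
            move_offset, min_dist = 0.3, 0.8   # second compensation value / second threshold
        dist = wi.range - move_offset - nj.range
        return 0.0 < dist < min_dist

    # Example: a vision detection 2.0 away and a radar detection 1.4 away, outside the
    # safety area: dist = 2.0 - 0.3 - 1.4 = 0.3, which lies in (0, 0.8), so they pair.
    wi = SimpleNamespace(range=2.0)
    nj = SimpleNamespace(range=1.4)
    print(satisfies_pairing(wi, nj, in_safe_area=lambda w: False))   # True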
Step 204: if the current target first position information and the target tracking information in the tracking unit satisfy the matching condition, update the associated parameter of the target tracking information.
The tracking unit is determined according to historical target first position information at the historical sampling time.
In this embodiment, after determining the first position information of the current target and the corresponding second position information of the current target, a time domain tracking method may be adopted to determine the final position information of the target object.
In order to realize time domain tracking, the present embodiment provides a tracking unit determined based on the historical target first position information at the historical sampling time. The historical target first position information in this embodiment refers to target first position information that is determined at a time before the current time and satisfies a pairing condition with the second position information.
Optionally, the tracking unit includes a plurality of pieces of tracking information, each piece of tracking information including a plurality of historical target first position information. Step 204 may specifically be: if the quotient of the distance between the position corresponding to the current target first position information and the position corresponding to the latest historical target first position information in the target tracking information, and the width of the target object corresponding to the current target first position information, is smaller than a preset matching threshold, the associated parameter of the target tracking information is updated. Limiting the matching condition that the current target first position information and the target tracking information in the tracking unit must satisfy improves the accuracy of the determined target tracking information.
Suppose sq denotes the latest historical target first position information in a piece of tracking information. All tracking information is traversed, and for each piece the quotient of the distance between the position corresponding to the current target first position information and the position corresponding to sq, and the width of the target object corresponding to the current target first position information, is calculated; if the quotient for a certain piece of tracking information is smaller than the matching threshold, that piece of tracking information is determined as the target tracking information.
It should be noted that the position corresponding to the current target first position information may be a position of a center point of the target first position information, and the position corresponding to the historical target first position information may be a position of a center point of the historical target first position information.
The tracking information in this embodiment corresponds to the associated parameters. The correlation parameter may be used to indicate the number of target first location information successfully matched with the tracking information. The initial value of the associated parameter may be 0.
After it is determined that the current target first position information and the target tracking information satisfy the matching condition, the associated parameter of the target tracking information is updated. Specifically, the associated parameter may be incremented by 1 to obtain the updated associated parameter.
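As a sketch, the matching condition of step 204 and the update of the associated parameter can be written as follows. The center-point fields cx/cy, the width field and the history/assoc_count attributes of a piece of tracking information are assumed names; the description itself does not fix them.

    import math
    from types import SimpleNamespace

    def try_match(w, track, match_threshold):
        # w: current target first position information; track.history holds the
        # historical target first position information of one piece of tracking information.
        s_q = track.history[-1]                           # latest historical entry
        d = math.hypot(w.cx - s_q.cx, w.cy - s_q.cy)      # distance between center points
        if d / w.width < match_threshold:                 # quotient of distance and object width
            track.assoc_count += 1                        # update the associated parameter
            return True
        return False

    # Example: the current detection lies 0.2 from the latest entry of a track and the
    # object is 0.5 wide, so the quotient 0.4 is below a matching threshold of 1.0.
    track = SimpleNamespace(history=[SimpleNamespace(cx=1.0, cy=2.0)], assoc_count=0)
    w = SimpleNamespace(cx=1.2, cy=2.0, width=0.5)
    print(try_match(w, track, match_threshold=1.0), track.assoc_count)   # True 1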
Step 205: and if the updated associated parameter of the target tracking information is larger than the preset associated parameter threshold, determining the current target second position information corresponding to the current target first position information as the final position information of the target object.
In step 205, after the final position information of the target object is determined, the final position information may be sent to the global map for marking, so that path planning can be performed subsequently.
In one implementation, after step 204, the current target first position information may be updated into target tracking information. When the next sampling time comes, the current target first position information may be used as the latest historical target first position information in the target tracking information. The method for updating the target tracking information can facilitate the target object detection at the subsequent sampling moment, and improve the robustness of the target object detection.
Optionally, if the current target first position information does not satisfy the matching condition with any tracking information in the tracking unit, the current target first position information is used as a new tracking information and added into the tracking unit. The method for updating the target tracking information can facilitate the target object detection at the subsequent sampling moment, and improve the robustness of the target object detection.
Optionally, to ensure timeliness, the historical target first position information in the tracking unit is time-limited. A survival time threshold is set, and a piece of historical target first position information is deleted once its survival time exceeds the survival time threshold. Illustratively, the survival time threshold may be 7 seconds.
Further, the state of the trace information may be set. There are two states for the trace information: unstable (transient) and stable (constant). The initial state of the trace information is an unstable state. And when the associated parameter of the tracking information is greater than a preset associated parameter threshold value, determining that the tracking information is in a stable state. In order to improve the matching speed between the current target first position information and the target tracking information, it may be determined whether the current target first position information and the stable-state tracking information satisfy the matching condition, and if not, it may be determined whether the current target first position information and the unstable-state tracking information satisfy the matching condition.
Fig. 8 is a schematic process diagram of updating the tracking unit in the target object detection method according to an embodiment of the present invention. As shown in fig. 8, the tracking unit includes a plurality of pieces of tracking information, assumed to be 4 pieces: t1, t2, t3 and t4. t1 includes two historical target first position information w1 and w2, t2 includes two historical target first position information w3 and w4, t3 includes three historical target first position information w5, w6 and w7, and t4 includes two historical target first position information w8 and w9. Assume that the current target first position information is w28.
In one case, assuming t2 is the target tracking information, after step 204, w28 may be updated into t2. The updated t2 includes: w3, w4 and w28.
In another case, assuming that w28 does not satisfy the matching condition with any tracking information in the tracking unit, w28 is added to the tracking unit as a new piece of tracking information, assumed to be t5. The updated tracking unit includes: t1, t2, t3, t4 and t5. In this case t1, t2, t3 and t4 are not updated, and t5 includes w28.
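Putting the pieces above together, one update of the tracking unit for a matched pair (current target first position information w, current target second position information nj_world) might look like the sketch below. It assumes the try_match logic of the earlier sketch, a timestamp field on each entry, and that tracks whose history becomes empty are discarded; none of these details is fixed by the description.

    import math
    import time
    from types import SimpleNamespace

    def update_tracking_unit(tracks, w, nj_world, match_threshold,
                             assoc_threshold, ttl=7.0, now=None):
        now = time.time() if now is None else now
        # Delete historical entries whose survival time exceeds the threshold (7 s here);
        # dropping tracks left with an empty history is an extra assumption of this sketch.
        for t in tracks:
            t.history = [h for h in t.history if now - h.timestamp <= ttl]
        tracks[:] = [t for t in tracks if t.history]
        # Try tracking information in the stable state first, then the unstable ones.
        ordered = [t for t in tracks if t.stable] + [t for t in tracks if not t.stable]
        for t in ordered:
            s_q = t.history[-1]                              # latest historical entry
            if math.hypot(w.cx - s_q.cx, w.cy - s_q.cy) / w.width < match_threshold:
                t.assoc_count += 1                           # update the associated parameter
                t.history.append(w)                          # w becomes the latest historical entry
                if t.assoc_count > assoc_threshold:
                    t.stable = True                          # the track enters the stable state
                    return nj_world                          # report the final position information
                return None
        # No match: w is added to the tracking unit as a new piece of tracking information.
        tracks.append(SimpleNamespace(history=[w], assoc_count=0, stable=False))
        return None

In this sketch the paired radar position nj_world plays the role of the current target second position information, and the returned value would be marked on the global map for subsequent path planning.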
The method for detecting a target object provided by this embodiment is applicable to detecting road obstacles and planning a route when a cleaning robot executes a cleaning task, so as to improve the safety of the cleaning robot.
The method for detecting a target object provided by this embodiment includes the following steps: determining current first position information of a target object in the vision acquired by the vision acquisition device at the current sampling time; determining current second position information of the target object in the point cloud data acquired by the radar at the current sampling time; determining, according to the current first position information and the current second position information, current target first position information satisfying the pairing condition and corresponding current target second position information; if the current target first position information and the target tracking information in the tracking unit satisfy the matching condition, updating the associated parameter of the target tracking information, wherein the tracking unit is determined according to historical target first position information at historical sampling times; and if the updated associated parameter of the target tracking information is greater than the preset associated parameter threshold, determining the current target second position information corresponding to the current target first position information as the final position information of the target object. In this method, the current first position information of the target object in the vision acquired by the vision acquisition device and the current second position information of the target object in the point cloud data acquired by the radar are fused, and a time domain tracking method is adopted to detect the target object, thereby improving the accuracy and robustness of the detection of the target object.
Fig. 9 is a schematic structural diagram of a detection apparatus for a target object according to an embodiment of the present invention. As shown in fig. 9, the detection apparatus for a target object provided in this embodiment includes the following modules: a first determination module 91, a second determination module 92, a third determination module 93, a first update module 94, and a fourth determination module 95.
The first determining module 91 is configured to determine current first position information of the target object in the vision, which is acquired by the vision acquisition apparatus at the current sampling time.
Optionally, the first determining module 91 is specifically configured to: inputting vision into a pre-trained deep learning model, and acquiring position information of a circumscribed polygon of a target object identified by the deep learning model; carrying out homography transformation on the position information of the circumscribed polygon of the target object to obtain the position information of the circumscribed polygon of the target object on a laser plane; carrying out coordinate transformation on the position information of the circumscribed polygon of the target object on the laser plane, and determining the position information of the circumscribed polygon of the target object in a world coordinate system; and determining the position information of the circumscribed polygon of the target object in the world coordinate system as the current first position information of the target object.
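A minimal sketch of that transformation chain is given below, assuming the homography H from the image plane to the laser plane and the robot pose (x, y, yaw) in the world frame are known from calibration and localization, and that laser-plane coordinates are expressed in the robot frame; these assumptions are not spelled out in the description.

    import numpy as np

    def polygon_to_world(corners_px, H, robot_pose):
        # corners_px: (N, 2) array of pixel coordinates of the circumscribed polygon corners
        # H:          3x3 homography mapping the image plane onto the laser plane
        # robot_pose: (x, y, yaw) of the robot in the world coordinate system
        pts = np.hstack([corners_px, np.ones((len(corners_px), 1))])   # homogeneous pixels
        laser = (H @ pts.T).T
        laser = laser[:, :2] / laser[:, 2:3]                           # points on the laser plane
        x, y, yaw = robot_pose
        R = np.array([[np.cos(yaw), -np.sin(yaw)],
                      [np.sin(yaw),  np.cos(yaw)]])
        return laser @ R.T + np.array([x, y])                          # polygon in the world frame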
Optionally, the target object in this embodiment is a road obstacle.
And the second determining module 92 is configured to determine current second position information of the target object in the point cloud data acquired by the radar at the current sampling time.
Optionally, the second determining module 92 is specifically configured to: clustering the point cloud data to obtain a plurality of clustered target object classes; taking the position average value of the point clouds in each target object class as the position information of the target object in the robot coordinate system; carrying out coordinate transformation on the position information of the target object in the robot coordinate system, and determining the position information of the target object in the world coordinate system; and determining the position information of the target object in the world coordinate system as the current second position information of the target object.
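The sketch below illustrates this module for a 2D point cloud, using DBSCAN purely as a stand-in clustering step (the description does not name a clustering algorithm) and the same assumed robot-pose convention as in the previous sketch.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def point_cloud_to_world(points_robot, robot_pose, eps=0.1, min_samples=3):
        # points_robot: (N, 2) point cloud in the robot coordinate system
        # robot_pose:   (x, y, yaw) of the robot in the world coordinate system
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_robot)
        x, y, yaw = robot_pose
        R = np.array([[np.cos(yaw), -np.sin(yaw)],
                      [np.sin(yaw),  np.cos(yaw)]])
        centers = []
        for lbl in set(labels) - {-1}:                              # -1 marks noise points
            mean_robot = points_robot[labels == lbl].mean(axis=0)   # position average of the class
            centers.append(R @ mean_robot + np.array([x, y]))       # transform to the world frame
        return centers   # one current-second-position candidate per clustered target object class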
The third determining module 93 is configured to determine, according to the current first location information and the current second location information, current target first location information and corresponding current target second location information that meet the pairing condition.
In one implementation, the current first position information includes a first distance of the target object from the robot, and the current second position information includes a second distance of the target object from the robot. Correspondingly, the third determining module 93 includes: a first determination sub-module 931 and a second determination sub-module 932.
The first determining submodule 931 is specifically configured to: and determining current effective first position information in the current first position information and current effective second position information corresponding to the current effective first position information according to the current first position information and the current second position information.
The second determination submodule 932 is specifically configured to: and if the difference value of the first distance in the current effective first position information and the second distance in the corresponding current effective second position information meets the pairing condition, determining the current effective first position information as the current target first position information, and determining the corresponding current effective second position information as the corresponding current target second position information.
Alternatively, the pairing condition may be: and subtracting the difference value of the target distance compensation value from the first distance in the current effective first position information, and then subtracting the difference value of the second distance in the corresponding current effective second position information, wherein the difference value is larger than zero and smaller than a preset target distance threshold value. The target distance compensation value includes: a first distance compensation value and a second distance compensation value. The preset target distance threshold includes: a first distance threshold and a second distance threshold. The first distance compensation value is smaller than the second distance compensation value, and the first distance threshold value is smaller than the second distance threshold value. The apparatus may further include: a fifth determination module and a sixth determination module.
And the fifth determining module is used for determining that the target distance compensation value is the first distance compensation value and the preset target distance threshold value is the first distance threshold value if the position corresponding to the current effective first position information is determined to be in the preset safety area.
And the sixth determining module is used for determining that the target distance compensation value is the second distance compensation value and the preset target distance threshold value is the second distance threshold value if the position corresponding to the current effective first position information is determined to be outside the preset safety area.
More specifically, the first determination sub-module 931 is specifically configured to: determining an area where a position corresponding to the current first position information is located according to the current first position information; determining an effectiveness judgment condition according to the area of the position corresponding to the current first position information; if the current second position information which meets the validity judgment condition with the current first position information exists in the current second position information, the current first position information is determined as the current valid first position information, and the current second position information which meets the validity judgment condition with the current first position information is determined as the current valid second position information corresponding to the current valid first position information.
The current first location information further includes: the maximum angle between the target object and the robot and the minimum angle between the target object and the robot, and the current second position information further includes: angle of the target object to the robot. In terms of determining the validity determination condition according to the area where the position corresponding to the current first position information is located, the first determining sub-module 931 is specifically configured to: if the area where the position corresponding to the current first position information is located is determined to be the middle area, determining that the validity judgment condition is as follows: the sum of the maximum angle of the target object and the robot and the angle compensation value is larger than the angle of the target object and the robot, and the difference value between the minimum angle of the target object and the robot and the angle compensation value is smaller than the angle of the target object and the robot; if the area where the position corresponding to the current first position information is located is determined to be the left area, determining that the validity judgment condition is as follows: the maximum angle between the target object and the robot is larger than the angle between the target object and the robot, and the difference value between the minimum angle between the target object and the robot and the angle compensation value of the preset first multiple is smaller than the angle between the target object and the robot; if the area where the position corresponding to the current first position information is located is determined to be the right area, determining that the validity judgment condition is as follows: the sum of the maximum angle of the target object and the robot and the preset second-multiple angle compensation value is larger than the angle of the target object and the robot, and the minimum angle of the target object and the robot is smaller than the angle of the target object and the robot.
And a first updating module 94, configured to update the associated parameter of the target tracking information if the current target first position information and the target tracking information in the tracking unit satisfy the matching condition.
The tracking unit is determined according to historical target first position information at the historical sampling time.
In one implementation, the tracking unit includes a plurality of tracking information, each tracking information including a plurality of historical target first location information. The first updating module 94 is specifically configured to: and if the quotient of the distance between the position corresponding to the current target first position information and the position corresponding to the latest historical target first position information in the target tracking information and the width of the target object corresponding to the current target first position information is smaller than a preset matching threshold value, updating the associated parameters of the target tracking information.
A fourth determining module 95, configured to determine, if the updated association parameter of the target tracking information is greater than the preset association parameter threshold, the current target second position information corresponding to the current target first position information as final position information of the target object.
Optionally, the apparatus further comprises a second update module. The second update module is specifically configured to: and updating the current target first position information into the target tracking information.
Optionally, the second updating module is further specifically configured to: and if the current target first position information and any tracking information in the tracking unit do not meet the matching condition, taking the current target first position information as new tracking information and adding the new tracking information into the tracking unit.
The target object detection apparatus provided by the embodiment of the present invention can execute the target object detection method provided by any embodiment and optional implementation of the present invention, and has the corresponding functional modules and the beneficial effects of the executed method.
Fig. 10 is a schematic structural diagram of a robot according to an embodiment of the present invention. As shown in fig. 10, the robot includes a processor 70 and a memory 71. The number of the processors 70 in the robot can be one or more, and one processor 70 is taken as an example in fig. 10; the processor 70 and the memory 71 of the robot may be connected by a bus or other means, as exemplified by the bus connection in fig. 10.
The memory 71 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions and modules corresponding to the target object detection method in the embodiment of the present invention (for example, the first determination module 91, the second determination module 92, the third determination module 93, the first update module 94, and the fourth determination module 95 in the target object detection apparatus). The processor 70 executes various functional applications and data processing of the robot by running software programs, instructions and modules stored in the memory 71, that is, implements the above-described target object detection method.
The memory 71 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the robot, and the like. Further, the memory 71 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 71 may further include memory remotely located from the processor 70, which may be connected to the robot through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Optionally, the robot may further include: a power component 72, an audio component 73, a communication component 74, and a sensor component 75. The power component 72, audio component 73, communication component 74, and sensor component 75 may all be connected to the processor 70 via a bus.
The power supply assembly 72 provides power to the various components of the robot. The power components 72 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the robot.
The audio component 73 is configured to output and/or input audio signals. For example, the audio component 73 comprises a microphone configured to receive external audio signals when the robot is in an operation mode, such as a recording mode and a speech recognition mode. The received audio signal may further be stored in the memory 71 or transmitted via the communication component 74. In some embodiments, audio assembly 73 also includes a speaker for outputting audio signals.
The communication component 74 is configured to facilitate wired or wireless communication between the robot and other devices. The robot may access a wireless network based on a communication standard. In an exemplary embodiment, the communication component 74 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the Communication component 74 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association technology, ultra wideband technology, bluetooth technology, and other technologies.
The sensor assembly 75 includes one or more sensors for providing various aspects of status assessment for the robot. The sensor assembly 75 may include a laser sensor for collecting point cloud data. In some embodiments, the sensor assembly 75 may also include an acceleration sensor, a magnetic sensor, a pressure sensor, a temperature sensor, or the like.
Fig. 11 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention. As shown in fig. 11, the present invention also provides a computer-readable storage medium 82 containing computer-executable instructions 81; the computer-executable instructions 81, when executed by a processor 83, are used for performing a method of detecting a target object, the method comprising:
determining current first position information of a target object in vision, which is acquired by a vision acquisition device at the current sampling moment;
determining current second position information of a target object in point cloud data acquired by a radar at the current sampling moment;
determining current target first position information and corresponding current target second position information which meet pairing conditions according to the current first position information and the current second position information;
if the current target first position information and the target tracking information in the tracking unit meet the matching condition, updating the associated parameters of the target tracking information; the tracking unit is determined according to historical target first position information at historical sampling time;
and if the updated associated parameter of the target tracking information is larger than a preset associated parameter threshold, determining the current target second position information corresponding to the current target first position information as the final position information of the target object.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the target object detection method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, and includes instructions for enabling a robot (which may be a personal computer, a vehicle, or a network device) to perform the target object detection method according to the embodiments of the present invention.
It should be noted that, in the embodiment of the detection apparatus for a target object, the included units and modules are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (14)
1. A method of detecting a target object, comprising:
determining current first position information of a target object in vision, which is acquired by a vision acquisition device at the current sampling moment;
determining current second position information of a target object in point cloud data acquired by a radar at the current sampling moment;
determining current target first position information and corresponding current target second position information which meet pairing conditions according to the current first position information and the current second position information;
if the current target first position information and the target tracking information in the tracking unit meet the matching condition, updating the associated parameters of the target tracking information, wherein the associated parameters are used for indicating the number of the target first position information successfully matched with the target tracking information; the tracking unit is determined according to historical target first position information at historical sampling time;
and if the updated associated parameter of the target tracking information is larger than a preset associated parameter threshold, determining the current target second position information corresponding to the current target first position information as the final position information of the target object.
2. The method of claim 1, wherein the current first location information comprises a first distance of a target object from a robot, and the current second location information comprises a second distance of the target object from the robot;
the determining, according to the current first location information and the current second location information, current target first location information and corresponding current target second location information that satisfy a pairing condition includes:
determining current effective first position information in the current first position information and current effective second position information corresponding to the current effective first position information according to the current first position information and the current second position information;
and if the difference value of the first distance in the current effective first position information and the second distance in the corresponding current effective second position information meets the pairing condition, determining the current effective first position information as the current target first position information, and determining the corresponding current effective second position information as the corresponding current target second position information.
3. The method according to claim 2, wherein the pairing condition is: a difference value obtained by subtracting a target distance compensation value from a first distance in the current effective first position information, and a difference value obtained by subtracting a second distance in the corresponding current effective second position information are greater than zero and smaller than a preset target distance threshold, wherein the target distance compensation value includes: a first distance compensation value and a second distance compensation value, wherein the preset target distance threshold comprises: a first distance threshold and a second distance threshold, the first distance compensation value being less than the second distance compensation value, the first distance threshold being less than the second distance threshold;
the method further comprises the following steps:
if the position corresponding to the current effective first position information is determined to be in a preset safety area, determining the target distance compensation value to be the first distance compensation value, and determining the preset target distance threshold value to be the first distance threshold value;
and if the position corresponding to the current effective first position information is determined to be outside a preset safety area, determining that the target distance compensation value is the second distance compensation value, and determining that the preset target distance threshold is the second distance threshold.
4. The method of claim 2, wherein the determining, according to the current first location information and the current second location information, current valid first location information in the current first location information and current valid second location information corresponding to the current valid first location information comprises:
determining an area where a position corresponding to the current first position information is located according to the current first position information;
determining an effectiveness judgment condition according to the area of the position corresponding to the current first position information;
if the current second position information which meets the validity judgment condition with the current first position information exists in the current second position information, the current first position information is determined as the current valid first position information, and the current second position information which meets the validity judgment condition with the current first position information is determined as the current valid second position information corresponding to the current valid first position information.
5. The method of claim 4, wherein the current first location information further comprises: the maximum angle between the target object and the robot and the minimum angle between the target object and the robot, and the current second position information further includes: the angle of the target object to the robot;
determining an effectiveness judgment condition according to the area where the position corresponding to the current first position information is located, wherein the determining includes:
if the area where the position corresponding to the current first position information is located is determined to be the middle area, determining that the validity judgment condition is as follows: the sum of the maximum angle of the target object and the robot and the angle compensation value is larger than the angle of the target object and the robot, and the difference value between the minimum angle of the target object and the robot and the angle compensation value is smaller than the angle of the target object and the robot;
if the area where the position corresponding to the current first position information is located is determined to be the left area, determining that the validity judgment condition is as follows: the maximum angle between the target object and the robot is larger than the angle between the target object and the robot, and the difference value between the minimum angle between the target object and the robot and the angle compensation value of the preset first multiple is smaller than the angle between the target object and the robot;
if the area where the position corresponding to the current first position information is located is determined to be the right area, determining that the validity judgment condition is as follows: the sum of the maximum angle of the target object and the robot and the preset second-multiple angle compensation value is larger than the angle of the target object and the robot, and the minimum angle of the target object and the robot is smaller than the angle of the target object and the robot.
6. The method according to any one of claims 1-5, wherein the tracking unit comprises a plurality of tracking information, each tracking information comprising a plurality of historical target first location information;
if the current target first position information and the target tracking information in the tracking unit meet the matching condition, updating the associated parameters of the target tracking information, including:
and if the quotient of the distance between the position corresponding to the current target first position information and the position corresponding to the latest historical target first position information in the target tracking information and the width of the target object corresponding to the current target first position information is smaller than a preset matching threshold value, updating the associated parameters of the target tracking information.
7. The method of claim 6, wherein after determining that a quotient of a distance between a location corresponding to the current target first location information and a location corresponding to the latest historical target first location information in the target tracking information and a width of a target object corresponding to the current target first location information is less than a preset matching threshold, the method further comprises:
and updating the current target first position information into the target tracking information.
8. The method of claim 6, further comprising:
and if the current target first position information and any tracking information in the tracking unit do not meet the matching condition, taking the current target first position information as new tracking information and adding the new tracking information into the tracking unit.
9. The method according to any one of claims 1-5, wherein determining the current first position information of the target object in vision acquired by the vision acquisition device at the current sampling time comprises:
inputting the vision into a pre-trained deep learning model, and acquiring the position information of a circumscribed polygon of the target object identified by the deep learning model;
performing homography transformation on the position information of the circumscribed polygon of the target object to obtain the position information of the circumscribed polygon of the target object on a laser plane;
performing coordinate transformation on the position information of the circumscribed polygon of the target object on a laser plane, and determining the position information of the circumscribed polygon of the target object in a world coordinate system;
and determining the position information of the circumscribed polygon of the target object in a world coordinate system as the current first position information of the target object.
10. The method of any one of claims 1-5, wherein determining current second position information of the target object in the point cloud data acquired by the radar at the current sampling time comprises:
clustering the point cloud data to obtain a plurality of clustered target object classes;
taking the position average value of the point clouds in each target object class as the position information of the target object in the robot coordinate system;
performing coordinate transformation on the position information of the target object in a robot coordinate system, and determining the position information of the target object in a world coordinate system;
and determining the position information of the target object in the world coordinate system as the current second position information of the target object.
11. The method according to any one of claims 1-5, wherein the target object is a road obstacle.
12. A target object detection apparatus, comprising:
the first determination module is used for determining current first position information of the target object in the vision, which is acquired by the vision acquisition device at the current sampling moment;
the second determining module is used for determining current second position information of the target object in the point cloud data acquired by the radar at the current sampling moment;
a third determining module, configured to determine, according to the current first location information and the current second location information, current target first location information and corresponding current target second location information that meet a pairing condition;
the first updating module is used for updating the associated parameters of the target tracking information if the current target first position information and the target tracking information in the tracking unit meet the matching condition, wherein the associated parameters are used for indicating the number of the target first position information which is successfully matched with the target tracking information; the tracking unit is determined according to historical target first position information at historical sampling time;
a fourth determining module, configured to determine, if the updated associated parameter of the target tracking information is greater than a preset associated parameter threshold, the current target second position information corresponding to the current target first position information as final position information of the target object.
13. A robot, characterized in that the robot comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of detecting a target object according to any one of claims 1-11.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a method of detecting a target object according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010813988.9A CN111986232B (en) | 2020-08-13 | 2020-08-13 | Target object detection method, target object detection device, robot and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010813988.9A CN111986232B (en) | 2020-08-13 | 2020-08-13 | Target object detection method, target object detection device, robot and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111986232A CN111986232A (en) | 2020-11-24 |
CN111986232B true CN111986232B (en) | 2021-09-14 |
Family
ID=73434292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010813988.9A Active CN111986232B (en) | 2020-08-13 | 2020-08-13 | Target object detection method, target object detection device, robot and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986232B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112270707A (en) * | 2020-11-25 | 2021-01-26 | 广州极飞科技有限公司 | Crop position detection method and device, mobile platform and storage medium |
CN113223091B (en) * | 2021-04-29 | 2023-01-24 | 达闼机器人股份有限公司 | Three-dimensional target detection method, three-dimensional target capture device and electronic equipment |
CN113253735B (en) * | 2021-06-15 | 2021-10-08 | 同方威视技术股份有限公司 | Method, device, robot and computer readable storage medium for following target |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8717545B2 (en) * | 2009-02-20 | 2014-05-06 | Digital Signal Corporation | System and method for generating three dimensional images using lidar and video measurements |
US8232872B2 (en) * | 2009-12-03 | 2012-07-31 | GM Global Technology Operations LLC | Cross traffic collision alert system |
US9329269B2 (en) * | 2012-03-15 | 2016-05-03 | GM Global Technology Operations LLC | Method for registration of range images from multiple LiDARS |
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102508246A (en) * | 2011-10-13 | 2012-06-20 | 吉林大学 | Method for detecting and tracking obstacles in front of vehicle |
CN106447680A (en) * | 2016-11-23 | 2017-02-22 | 湖南华诺星空电子技术有限公司 | Method for radar and vision fused target detecting and tracking in dynamic background environment |
CN107991671A (en) * | 2017-11-23 | 2018-05-04 | 浙江东车智能科技有限公司 | A kind of method based on radar data and vision signal fusion recognition risk object |
CN109102702A (en) * | 2018-08-24 | 2018-12-28 | 南京理工大学 | Vehicle speed measuring method based on video encoder server and Radar Signal Fusion |
CN109444911A (en) * | 2018-10-18 | 2019-03-08 | 哈尔滨工程大学 | A kind of unmanned boat waterborne target detection identification and the localization method of monocular camera and laser radar information fusion |
CN109581345A (en) * | 2018-11-28 | 2019-04-05 | 深圳大学 | Object detecting and tracking method and system based on millimetre-wave radar |
CN109490890A (en) * | 2018-11-29 | 2019-03-19 | 重庆邮电大学 | A kind of millimetre-wave radar towards intelligent vehicle and monocular camera information fusion method |
CN110208793A (en) * | 2019-04-26 | 2019-09-06 | 纵目科技(上海)股份有限公司 | DAS (Driver Assistant System), method, terminal and medium based on millimetre-wave radar |
CN110246159A (en) * | 2019-06-14 | 2019-09-17 | 湖南大学 | The 3D target motion analysis method of view-based access control model and radar information fusion |
CN110414396A (en) * | 2019-07-19 | 2019-11-05 | 中国人民解放军海军工程大学 | A kind of unmanned boat perception blending algorithm based on deep learning |
CN110794396A (en) * | 2019-08-05 | 2020-02-14 | 上海埃威航空电子有限公司 | Multi-target identification method and system based on laser radar and navigation radar |
CN110532896A (en) * | 2019-08-06 | 2019-12-03 | 北京航空航天大学 | A kind of road vehicle detection method merged based on trackside millimetre-wave radar and machine vision |
CN110942449A (en) * | 2019-10-30 | 2020-03-31 | 华南理工大学 | Vehicle detection method based on laser and vision fusion |
CN111045017A (en) * | 2019-12-20 | 2020-04-21 | 成都理工大学 | Method for constructing transformer substation map of inspection robot by fusing laser and vision |
CN111168685A (en) * | 2020-02-17 | 2020-05-19 | 上海高仙自动化科技发展有限公司 | Robot control method, robot, and readable storage medium |
Non-Patent Citations (2)
Title |
---|
A Multi-sensor Fusion System for Moving Object Detection and Tracking in Urban Driving Environments; Cho H et al.; IEEE International Conference on Robotics and Automation (ICRA); 2014-12-31; full text *
Front vehicle detection and tracking by fusing millimeter-wave radar and monocular vision (融合毫米波雷达与单目视觉的前车检测与跟踪); Zhao Wangyu et al.; Geomatics and Information Science of Wuhan University (武汉大学学报·信息科学版); 2019-12-31; full text *
Also Published As
Publication number | Publication date |
---|---|
CN111986232A (en) | 2020-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111986232B (en) | Target object detection method, target object detection device, robot and storage medium | |
Zhang et al. | Road-segmentation-based curb detection method for self-driving via a 3D-LiDAR sensor | |
US11530924B2 (en) | Apparatus and method for updating high definition map for autonomous driving | |
KR20150058679A (en) | Apparatus and method for localization of autonomous vehicle in a complex | |
KR102166512B1 (en) | Method, device, map management device and system for precise location tracking of automobiles in the surrounding environment | |
EP3939863A1 (en) | Overhead-view image generation device, overhead-view image generation system, and automatic parking device | |
US20180268692A1 (en) | Moving object and driving support system for moving object | |
CN112363494A (en) | Method and device for planning advancing path of robot and storage medium | |
CN109840454B (en) | Target positioning method, device, storage medium and equipment | |
US11987245B2 (en) | Method for controlling vehicle and vehicle control device | |
EP3217376A2 (en) | Object detecting device, object detecting method, and computer-readable medium | |
CN113743171A (en) | Target detection method and device | |
JP2020193954A (en) | Position correction server, position management device, moving object position management system and method, position information correction method, computer program, onboard device, and vehicle | |
CN113469045B (en) | Visual positioning method and system for unmanned integrated card, electronic equipment and storage medium | |
CN114051628A (en) | Method and device for determining target object point cloud set | |
Houben et al. | Park marking-based vehicle self-localization with a fisheye topview system | |
CN115339453B (en) | Vehicle lane change decision information generation method, device, equipment and computer medium | |
Cardarelli et al. | Multisensor data fusion for obstacle detection in automated factory logistics | |
CN110390252B (en) | Obstacle detection method and device based on prior map information and storage medium | |
CN114740867A (en) | Intelligent obstacle avoidance method and device based on binocular vision, robot and medium | |
KR102630991B1 (en) | Method for determining driving posision of vehicle, apparatus thereof and driving control system | |
CN114489050A (en) | Obstacle avoidance route control method, device, equipment and storage medium for straight line driving | |
CN113112478B (en) | Pose recognition method and terminal equipment | |
Lee et al. | Infrastructure node-based vehicle localization for autonomous driving | |
CN110909569B (en) | Road condition information identification method and terminal equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |