CN110674724A - Robot target identification method and system based on active strategy and image sensor - Google Patents

Robot target identification method and system based on active strategy and image sensor

Info

Publication number
CN110674724A
CN110674724A (application CN201910891152.8A)
Authority
CN
China
Prior art keywords
robot
target
image data
suspected target
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910891152.8A
Other languages
Chinese (zh)
Other versions
CN110674724B (en)
Inventor
牛小骥
张乐翔
张隽赓
蒋郡祥
张提升
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910891152.8A priority Critical patent/CN110674724B/en
Publication of CN110674724A publication Critical patent/CN110674724A/en
Application granted granted Critical
Publication of CN110674724B publication Critical patent/CN110674724B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot target identification method and system based on an active strategy and an image sensor, and belongs to the field of target identification. Compared with the prior art, the invention markedly improves quality at the source of the image information, realizes target identification more effectively, flexibly and reliably, reduces the amount of computation to a certain extent, and achieves a better identification effect especially for target objects that need to be observed and identified from specific angles and distances, such as two-dimensional codes and house numbers.

Description

Robot target identification method and system based on active strategy and image sensor
Technical Field
The invention belongs to the technical field of target identification, and particularly relates to a robot target identification method and system based on an active strategy and an image sensor.
Background
With the development of science and technology and the national economy, more and more unmanned systems are appearing, including robots performing automatic inspection and unmanned aerial vehicles cruising autonomously, which at the same time pose new challenges for the technical development of unmanned systems. Perception and understanding of complex dynamic scenes are the basis and key of robot intelligence, and identifying targets in the environment is the key technology of scene perception and understanding.
At present, most robot target recognition is passive recognition based on a given image: the influence of imaging quality on recognition is not considered at the imaging stage, and recognition is carried out only on the existing image data without actively improving the imaging quality, which creates many difficulties for target recognition. In the current information age most data are acquired by sensors, and the acquisition equipment is often carried by mobile robots. Therefore, reasonably exploiting the controllability of the mobile robot carrier to assist the image sensor in actively identifying targets can greatly improve the accuracy of target identification. The two main target recognition sensors used on robots at present are the camera and the laser radar, and these sensors and their corresponding target recognition methods have the following characteristics:
Image-based target identification methods are mature at present and divide into traditional methods and machine learning methods. Traditional target recognition is mainly based on image processing and comprises three parts: region selection, feature extraction and classification. The image is first traversed with sliding windows of different aspect ratios, features (such as HOG and SIFT) are then extracted from each region, and finally a trained classifier (such as an SVM) performs the classification. Image target recognition based on the traditional method is strongly affected by the environment, such as illumination, viewing angle and background, and is therefore quite limited. In the last decade, target recognition based on machine learning has developed rapidly: the image is traversed with sliding windows of different aspect ratios, a convolutional network is run, and the model's confidence scores are compared to output object positions and categories. Machine learning methods are relatively less affected by the environment, but training the network requires large data sets. Image-based target recognition in general is affected by the distance to the target, image resolution, ambient illumination, shooting angle and so on; the recognition effect is still not robust enough in some complex environments, and real-time image recognition based on deep learning is computationally expensive and strongly dependent on hardware.
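The region-selection stage of the traditional pipeline described above can be sketched as follows; the window shapes, stride and toy image are illustrative assumptions, and a real pipeline would feed each crop to a HOG/SIFT feature extractor and a trained SVM classifier:

```python
import numpy as np

def sliding_windows(image, window_shapes, stride):
    """Yield (row, col, crop) for sliding windows of each aspect ratio."""
    h, w = image.shape[:2]
    for wh, ww in window_shapes:
        for r in range(0, h - wh + 1, stride):
            for c in range(0, w - ww + 1, stride):
                yield r, c, image[r:r + wh, c:c + ww]

# toy 8x8 "image" and two window shapes with different aspect ratios;
# in the traditional pipeline each crop would go to a feature extractor
# (HOG, SIFT, ...) and then to a trained classifier (e.g. an SVM)
img = np.arange(64).reshape(8, 8)
crops = list(sliding_windows(img, window_shapes=[(4, 4), (4, 8)], stride=4))
```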
Target identification based on lidar point clouds is less mature; the main approaches are bird's-eye-view-based and voxel-based methods. Bird's-eye-view methods are mostly used in automatic driving: the point cloud data are mapped onto a two-dimensional plane and targets are identified in the bird's-eye view, which works poorly for small objects such as pedestrians and bicycles. Voxel-based methods voxelize the point cloud into a volumetric grid and generalize image CNNs to 3D CNNs, but the recognition effect is not yet ideal.
Reference patent application CN110084168A proposes an active target recognition method and device that uses a learning method to actively adjust imaging parameters to recognize targets. However, this method has the following disadvantages: 1) by actively adjusting only the imaging parameters of the sensor, the imaging angle and imaging distance cannot be adjusted, so targets such as two-dimensional codes, house numbers and safety exits, which require clear images taken from near-frontal angles and at suitable distances, cannot be identified well; 2) fusion recognition with multiple sensors is not involved, so the advantages of different sensors cannot be exploited to recognize targets better.
Disclosure of Invention
Aiming at the defects or improvement requirements of the prior art, the invention provides a robot target identification method and system based on an active strategy and an image sensor, so as to solve the technical problem that image-based target identification in the prior art is limited by influencing factors such as target distance, shooting angle and illumination.
To achieve the above object, according to one aspect of the present invention, there is provided a robot target recognition method based on an active strategy and an image sensor, including:
(1) acquiring image data on a robot cruising route, and processing the acquired image data to obtain a relative pose relation between a suspected target and a robot carrier;
(2) actively adjusting the cruising route of the robot based on the relative pose relation between the suspected target and the robot carrier so that the robot can acquire the image data of the suspected target according to the requirement;
(3) performing target identification according to the image data of the suspected target obtained after the cruising route of the robot is adjusted; if the quality of the image data of the suspected target obtained after adjustment cannot meet the identification requirement, returning to step (2) until the robot cannot be actively adjusted or the quality of the image data of the suspected target meets the requirement, and outputting the final identification result.
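As a rough sketch (not the patent's implementation), steps (1) to (3) amount to a closed loop in which the robot keeps adjusting until recognition succeeds or no further adjustment is possible; the helper callables and the fixed confidence gain per adjustment below are invented for illustration:

```python
def active_recognition_loop(detect, adjust, can_adjust, threshold=0.8, max_steps=10):
    """Steps (1)-(3) as a loop: re-run recognition after each active
    adjustment until the confidence meets the threshold or the robot
    can no longer be adjusted."""
    confidence, label = detect()                  # step (1): initial observation
    steps = 0
    while confidence < threshold and can_adjust() and steps < max_steps:
        adjust()                                  # step (2): adjust cruising route/pose
        confidence, label = detect()              # step (3): recognize again
        steps += 1
    return label, confidence

# toy simulation: each adjustment improves the viewing geometry,
# modeled here as a fixed +0.2 confidence gain
state = {"conf": 0.3}

def detect():
    return state["conf"], "qr_code"

def adjust():
    state["conf"] += 0.2

label, conf = active_recognition_loop(detect, adjust, can_adjust=lambda: True)
```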
Preferably, step (1) comprises:
(1.1) collecting point cloud data on the robot cruising route, fitting the ground plane by random sample consensus after removing null data from the point cloud, and removing the ground points to obtain the target point cloud data;
(1.2) performing voxel filtering on the target point cloud data to convert the point cloud into three-dimensional voxel units, merging the points within the same voxel, organizing the merged voxels into an octree to find the core points, constructing a KD tree from the core points to divide the point cloud into different point cloud clusters, and finally finding the boundary points according to their relation with the core points and assigning each boundary point to the cluster of its corresponding core point;
and (1.3) judging whether the three-dimensional geometric features of the point cloud cluster accord with the target features, and if so, outputting the direction of the suspected target point cloud cluster relative to the robot.
Preferably, the core points satisfy the following relationship: the ε-neighborhood set N(x_i) of a core point contains at least minPts sample points, where N(x_i) = {x_j ∈ D | ‖x_i − x_j‖ < ε}, D is the sample set, x_j and x_i are samples in the sample set D, ‖x_i − x_j‖ is the distance from x_i to x_j, and minPts is the minimum number of clustering points in the sample set.
Preferably, the boundary points satisfy the following relationship: the ε-neighborhood set N(x_i) of a boundary point contains at least one core point but fewer than minPts points.
Preferably, step (2) comprises:
and actively adjusting the cruising route of the robot based on the position of the suspected target point cloud cluster relative to the robot so as to enable the suspected target point cloud cluster to be projected at the central position of the acquired image.
Preferably, the projection relationship of the point cloud onto the image is: P_camera = K·(R·P_radar + t), where P_camera is the coordinate in the pixel coordinate system, P_radar is the point cloud coordinate in the lidar coordinate system, K is the camera intrinsic matrix, R is the rotation matrix of the lidar relative to the camera, and t is the translation vector of the lidar relative to the camera.
Preferably, step (3) comprises:
(3.1) inputting the image data of the suspected target acquired after the cruising route of the robot is adjusted into a trained image target recognition neural network built with TensorFlow for target recognition;
and (3.2) judging whether the quality of the image data of the suspected target obtained after adjustment meets the identification requirement or not according to the confidence of the output result, if the quality of the image data of the suspected target does not meet the identification requirement, continuing to adjust the cruising route of the robot until the robot cannot actively adjust or the quality of the image data of the suspected target meets the requirement, and outputting a final identification result.
Preferably, in step (3.2), the following decision function is used to judge whether the quality of the image data of the suspected target obtained after adjustment meets the identification requirement:

m = 1 if x < q and z = 0, otherwise m = 0

where m indicates whether the cruising route of the robot is actively adjusted, with m = 1 representing that the cruising route of the robot is adjusted and m = 0 representing that it is not, x is the confidence of target recognition output by the image target recognition neural network, q is a preset threshold, and z indicates whether the robot can move to the next position as judged from the laser radar point cloud, with z = 0 representing that the robot can move to the next position and z ≠ 0 representing that it cannot.
According to another aspect of the present invention, there is provided a robot target recognition system based on an active strategy and an image sensor, comprising: an image sensor and a processor;
the image sensor is used for acquiring image data on the cruising route of the robot;
the processor is used for processing the acquired image data to obtain the relative pose relation between the suspected target and the robot carrier; actively adjusting the cruising route of the robot based on the relative pose relation between the suspected target and the robot carrier so that the robot can acquire the image data of the suspected target according to the requirement; and performing target identification according to the image data of the suspected target obtained after the cruise route of the robot is adjusted, if the quality of the image data of the suspected target obtained after adjustment cannot meet the identification requirement, continuously adjusting the cruise route of the robot until the robot cannot actively adjust or the quality of the image data of the suspected target meets the requirement, and outputting a final identification result.
Preferably, the system further comprises: the robot comprises a robot chassis, a display screen and a battery;
the robot chassis is used for bearing all parts in the system and can move;
the display screen is used for outputting the identification result;
the battery is used for supplying power to all parts of the system.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) according to the method, the image sensor is used for acquiring the image data on the cruising route of the robot, the image data is processed to obtain the suspected target, the robot is intelligently adjusted by adopting an active strategy aiming at the suspected target, the sensor can better acquire the image of the suspected target, whether the suspected target is the target needing to be identified is finally confirmed, and an accurate identification result is given. Compared with the prior art, the quality is obviously improved from the source of the image information, the target identification can be realized more effectively, flexibly and reliably, the calculated amount can be reduced to a certain extent, and the identification effect is better especially for some target objects needing to be observed and identified from specific angles and distances, such as two-dimensional codes, house numbers and the like. Meanwhile, the problems that the robustness is insufficient, the precision is low, the intelligent degree is low and the like in the process of autonomous cruising of the mobile robot for recognizing the specific target are solved.
(2) The relevance of sensor imaging, robot carrier motion and image target identification is considered, the mobile robot and the sensor are adjusted through an active strategy aiming at improving the target identification performance, the robustness and the precision of target identification are obviously improved, and the mobile robot can identify the target more intelligently.
Drawings
FIG. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a target identification method according to an embodiment of the present invention;
fig. 3 is a schematic view of an active angle adjustment of a robot according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a clustering method according to an embodiment of the present invention;
FIG. 5 is a flow chart of coarse identification according to an embodiment of the present invention;
FIG. 6 is a flow chart of fine identification according to an embodiment of the present invention;
the system comprises a robot chassis, a laser radar, a camera, a processor, a display screen and a battery, wherein the robot chassis is 1, the laser radar is 2, the camera is 3, the processor is 4, and the battery is 6.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The robot according to the embodiment of the present invention is assembled as shown in fig. 1, and includes a robot chassis 1, an image sensor, a processor 4, a display screen 5, and a battery 6. An autonomous cruising path of the robot is set through the host computer, and target objects to be identified exist along the path.
The image sensor is used for acquiring image data on the cruising route of the robot;
the processor is used for processing the acquired image data to obtain the relative pose relation between the suspected target and the robot carrier; actively adjusting the cruising route of the robot based on the relative pose relation between the suspected target and the robot carrier so that the robot can acquire the image data of the suspected target according to the requirement; performing target identification according to the image data of the suspected target obtained after the cruise route of the robot is adjusted, if the quality of the image data of the suspected target obtained after adjustment cannot meet the identification requirement, continuously adjusting the cruise route of the robot until the robot cannot actively adjust or the quality of the image data of the suspected target meets the requirement, and outputting a final identification result;
the robot chassis is used for bearing all parts in the system and can move; the display screen is used for outputting the identification result; the battery is used for supplying power to all parts of the system.
In the embodiment of the present invention, the image sensor includes a laser radar 2, a camera 3, a multispectral sensor, and other sensors capable of acquiring image information, and the embodiment of the present invention is not limited uniquely.
Fig. 2 is a schematic flow chart of a target identification method according to an embodiment of the present invention, which includes the following steps:
s1: acquiring image data on a robot cruising route, and processing the acquired image data to obtain the relative pose relation between a suspected target and a robot carrier;
in the embodiment of the present invention, the relative pose relationship between the suspected target and the robot carrier includes a relative distance relationship, a relative angle relationship, an illumination difference, and the like, and the embodiment of the present invention is not limited uniquely.
When the robot moves along the planned path, the point cloud data collected by the laser radar 2 are transmitted to the processor 4 in real time for processing. Validity detection is first performed on the point cloud, removing points whose data value is NaN, and the ground plane is fitted by random sample consensus, which avoids false detections in some special conditions such as small-angle slopes and uneven pavements. To meet the real-time requirements of the embedded robot platform, voxel filtering is applied to the point cloud with the ground removed: the point cloud is converted into three-dimensional voxel units and the points within the same voxel are merged, preventing the computational redundancy caused by overly dense nearby points.
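A minimal NumPy sketch of this preprocessing stage (random-sample-consensus ground fitting followed by voxel merging) is given below; the thresholds, voxel size and synthetic cloud are illustrative assumptions, not values from the patent:

```python
import numpy as np

def ransac_ground(points, n_iter=100, thresh=0.05, rng=np.random.default_rng(0)):
    """Fit a ground plane by random sample consensus; return the inlier mask."""
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)   # point-to-plane distances
        mask = dist < thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

def voxel_filter(points, voxel=0.2):
    """Merge all points that fall in the same voxel into their centroid."""
    keys = np.floor(points / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv)
    out = np.zeros((inv.max() + 1, 3))
    for d in range(3):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

# toy cloud: a flat ground patch plus one elevated object
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(0, 5, (200, 2)), rng.normal(0, 0.01, 200)]
obj = rng.normal([2.0, 2.0, 1.0], 0.05, (30, 3))
cloud = np.vstack([ground, obj])
mask = ransac_ground(cloud)
objects = voxel_filter(cloud[~mask])   # non-ground points, voxel-merged
```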
For point cloud clustering, define a sample set D = {x_1, x_2, …, x_i, …, x_m}, a neighborhood radius ε and a minimum cluster size minPts, where ‖x_i − x_j‖ is the distance from x_i to x_j and m is the number of samples. The following definitions hold:
1. ε-neighborhood: for any sample point x_i ∈ D, its ε-neighborhood is the subset of samples in D whose distance to x_i is less than ε, i.e. N(x_i) = {x_j ∈ D | ‖x_i − x_j‖ < ε};
2. Core point: any sample x_i ∈ D whose ε-neighborhood N(x_i) contains at least minPts sample points, i.e. size(N(x_i)) ≥ minPts, such as point A in fig. 4;
3. Boundary point: any sample x_i ∈ D whose ε-neighborhood N(x_i) contains at least one core point but fewer than minPts points, i.e. there exists a core point x_j with ‖x_i − x_j‖ < ε while size(N(x_i)) < minPts, such as points B and C in fig. 4;
4. Noise point: any sample x_i ∈ D whose ε-neighborhood N(x_i) contains no core point and fewer than minPts points, i.e. ‖x_i − x_j‖ ≥ ε for every core point x_j and size(N(x_i)) < minPts, such as point N in fig. 4.
When clustering the point cloud, to reduce the amount of computation, the voxels are first organized into an octree to find the core points, then a KD tree is built from the core points to divide them into different clusters, and finally the boundary points are found according to their relation with the core points and assigned to the clusters of their corresponding core points.
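The definitions above are those of DBSCAN-style density clustering; a compact reference implementation (using a brute-force distance matrix for clarity instead of the octree/KD-tree acceleration the text describes) might look like this:

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=4):
    """Density clustering per the definitions above: core points have at
    least min_pts neighbours within eps; boundary points fall inside a
    core point's neighbourhood; everything else is noise (label -1)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    neighbors = [np.nonzero(dist[i] < eps)[0] for i in range(n)]
    core = np.array([len(nb) >= min_pts for nb in neighbors])
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if not core[i] or labels[i] != -1:
            continue
        labels[i] = cluster
        stack = list(neighbors[i])
        while stack:                       # expand the cluster from this core point
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster        # core or boundary point joins the cluster
                if core[j]:
                    stack.extend(neighbors[j])
        cluster += 1
    return labels

# two well-separated blobs plus one far-away noise point
pts = np.vstack([np.random.default_rng(2).normal(0, 0.1, (20, 3)),
                 np.random.default_rng(3).normal(5, 0.1, (20, 3)),
                 [[50.0, 50.0, 50.0]]])
labels = dbscan(pts, eps=0.5, min_pts=4)
```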
Judgment is then made according to the three-dimensional geometric features of each point cloud cluster; if a cluster conforms to the target features, the direction of the suspected target point cloud cluster relative to the robot is output. The coarse identification process is shown in fig. 5.
S2: actively adjusting the cruising route of the robot based on the relative pose relation between the suspected target and the robot carrier so that the robot can acquire the image data of the suspected target according to the requirement;
In an embodiment of the present invention, actively adjusting the cruising route of the robot includes: autonomously driving the robot to the vicinity of the target when the distance is too far; adjusting the heading of the robot so that the image sensor acquires target images from different angles; adjusting the pose of the robot so that the target is imaged at the center of the image; actively using an illumination light source for supplementary lighting; and deliberately keeping the robot still for a long exposure. Which strategy is specifically adopted is not uniquely limited by the embodiment of the present invention.
The robot adjusts the pre-planned trajectory according to the orientation information of the suspected target point cloud cluster, so that it actively drives to the vicinity of the suspected target. The point cloud is projected onto the image according to the pre-calibrated extrinsic parameters of the laser radar 2 and the camera 3; the projection relationship is:

P_camera = K·(R·P_radar + t)    (1)

where P_camera is the coordinate in the pixel coordinate system, P_radar is the point cloud coordinate in the lidar coordinate system, and K is the camera intrinsic matrix. The extrinsic matrix of the laser radar 2 relative to the camera 3 (i.e. the rotation matrix R and the translation vector t) is obtained by jointly calibrating the camera 3 and the laser radar 2; it converts the front-left-up point cloud coordinate system with the lidar as origin into the right-down-front camera coordinate system with the optical center of the camera 3 as origin, making the two origins coincide. The pixel coordinates corresponding to the point cloud on the image are obtained through PnP computation.
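Under the stated projection relationship, a NumPy sketch with made-up intrinsics and an identity extrinsic (purely for illustration, not the calibrated values) is:

```python
import numpy as np

def project_lidar_to_pixel(P_radar, K, R, t):
    """Project a lidar point into pixel coordinates: p = K (R P + t),
    then divide by depth to get (u, v)."""
    P_cam = R @ P_radar + t          # lidar frame -> camera frame
    p = K @ P_cam                    # camera frame -> homogeneous pixel coords
    return p[:2] / p[2]              # perspective division

# illustrative intrinsics (fx = fy = 500, principal point at 320, 240) and
# an identity extrinsic, i.e. lidar and camera frames assumed aligned
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
uv = project_lidar_to_pixel(np.array([0.0, 0.0, 2.0]), K, R, t)
# a point on the optical axis projects to the principal point (320, 240)
```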
In order to obtain better imaging, an active strategy is adopted to continuously adjust the position and the posture of the robot, the suspected target point cloud cluster is projected at the central position of the image as far as possible on the premise of not colliding with an obstacle, and the image is acquired by a camera.
S3: performing target identification according to the image data of the suspected target obtained after the cruise route of the robot is adjusted; if the quality of the image data of the suspected target obtained after adjustment cannot meet the identification requirement, continuing to adjust the cruise route of the robot until the robot cannot be actively adjusted or the quality of the image data of the suspected target meets the requirement, and outputting the final identification result.
In the embodiment of the present invention, the method for identifying the target includes feature-based identification, machine learning-based identification, and the like, and specifically, what identification method is used is not limited uniquely in the embodiment of the present invention.
In the embodiment of the invention, the output recognition result comprises information of target recognition and confidence of recognition.
The collected image is input into an image target recognition neural network built with TensorFlow and trained on the corresponding target features, preferably a 106-layer image target recognition neural network, and the next step is decided according to the confidence of the output result using the following function:

m = 1 if x < q and z = 0, otherwise m = 0

where m indicates whether the robot is adjusted using the active strategy, x is the confidence of target recognition output by the neural network, q is the set threshold, and z indicates whether the robot can move to the next position as judged from the laser radar point cloud: z = 0 means the robot can move to the next position, and z ≠ 0 means it cannot, i.e. there is an obstacle. When m = 1, that is, the confidence is below the threshold and the laser radar finds no obstacle near the target, the active strategy is adopted to adjust the robot: the robot is automatically controlled to move along an arc of several degrees centered on the target, preferably 10 degrees in this embodiment as shown in fig. 3, and its pose is adjusted so that the target point cloud cluster is projected at the center of the image, i.e. the target lies in the center of the captured image. When m = 0, that is, the confidence is above the set recognition threshold, or the confidence is below the threshold but the laser radar finds an obstacle near the target so that the robot cannot move to the next angle for recognition, the recognition result is output and the robot is controlled to return to the preset route and continue cruising. The specific flow is shown in fig. 6.
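The decision function described in the text (adjust only while the confidence is below the threshold and the lidar reports the next position reachable) can be expressed directly in code; the variable names follow the text:

```python
def adjust_decision(x, q, z):
    """m = 1 if x < q and z = 0, else m = 0: keep adjusting only while the
    recognition confidence x is below threshold q and the lidar reports the
    next position reachable (z = 0 clear, z != 0 obstacle)."""
    return 1 if (x < q and z == 0) else 0

# low confidence and a clear path: adjust again (m = 1)
assert adjust_decision(x=0.4, q=0.8, z=0) == 1
# confident enough: stop and output the result (m = 0)
assert adjust_decision(x=0.9, q=0.8, z=0) == 0
# obstacle blocks the next pose: stop and output the result (m = 0)
assert adjust_decision(x=0.4, q=0.8, z=1) == 0
```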
After the above steps are completed, the recognition result and the recognition confidence of the target are output.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A robot target identification method based on an active strategy and an image sensor is characterized by comprising the following steps:
(1) acquiring image data on a robot cruising route, and processing the acquired image data to obtain a relative pose relation between a suspected target and a robot carrier;
(2) actively adjusting the cruising route of the robot based on the relative pose relation between the suspected target and the robot carrier so that the robot can acquire the image data of the suspected target according to the requirement;
(3) performing target identification according to the image data of the suspected target obtained after the cruising route of the robot is adjusted; if the quality of the image data of the suspected target obtained after adjustment cannot meet the identification requirement, returning to step (2) until the robot cannot be actively adjusted or the quality of the image data of the suspected target meets the requirement, and outputting the final identification result.
2. The method of claim 1, wherein step (1) comprises:
(1.1) collecting point cloud data on the robot cruising route, fitting the ground plane by random sample consensus after removing null data from the point cloud, and removing the ground points to obtain the target point cloud data;
(1.2) performing voxel filtering on the target point cloud data to convert the point cloud into three-dimensional voxel units, merging the points within the same voxel, organizing the merged voxels into an octree to find the core points, constructing a KD tree from the core points to divide the point cloud into different point cloud clusters, and finally finding the boundary points according to their relation with the core points and assigning each boundary point to the cluster of its corresponding core point;
(1.3) judging whether the three-dimensional geometric features of a point cloud cluster accord with the target features, and if so, outputting the direction of the suspected target point cloud cluster relative to the robot.
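A minimal illustration of the ground-removal part of step (1.1), assuming a RANSAC plane fit over raw lidar points; the iteration count, distance threshold, and the synthetic scene are all invented for the demo:

```python
import numpy as np

def remove_ground(points, n_iters=200, dist_thresh=0.05, seed=0):
    """Fit the dominant plane (assumed to be the ground) by RANSAC and
    return the remaining, non-ground points.

    points: (N, 3) array of lidar points with invalid rows already removed.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Sample 3 points; their cross product gives the plane normal.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal /= norm
        # Point-to-plane distance for every point in the cloud.
        dists = np.abs((points - p0) @ normal)
        inliers = dists < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]

# Synthetic scene: a flat ground patch plus a box-like target above it.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 10, 500),
                          rng.uniform(0, 10, 500),
                          np.zeros(500)])
target = np.column_stack([1 + rng.uniform(0, 1, 50),
                          1 + rng.uniform(0, 1, 50),
                          0.5 + rng.uniform(0, 1, 50)])
cloud = np.vstack([ground, target])
obstacles = remove_ground(cloud)        # only the 50 target points survive
```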
3. The method of claim 2, wherein the core points satisfy the following relation: the ε-neighborhood N(x_i) of a core point contains at least minPts sample points, where N(x_i) = {x_j ∈ D | ‖x_i − x_j‖ < ε}, D is the sample set, x_j and x_i are samples in the sample set D, ‖x_i − x_j‖ is the distance from x_i to x_j, and minPts is the minimum number of clustering points in the sample set.
4. The method of claim 3, wherein the boundary points satisfy the following relationship: the ε-neighborhood N(x_i) of a boundary point contains at least one core point, and N(x_i) contains fewer than minPts points.
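The core-point and boundary-point conditions of claims 3 and 4 amount to DBSCAN-style neighborhood tests. A small sketch, with the sample points and the `eps`/`min_pts` values invented for the example:

```python
import numpy as np

def classify_points(X, eps, min_pts):
    """Label each sample as 'core', 'border', or 'noise' according to the
    neighborhood definitions in claims 3 and 4 (DBSCAN-style)."""
    # Pairwise Euclidean distances; N(x_i) is the eps-neighborhood of x_i.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neigh = d < eps                        # includes the point itself
    is_core = neigh.sum(axis=1) >= min_pts
    labels = []
    for i in range(len(X)):
        if is_core[i]:
            labels.append("core")
        elif (neigh[i] & is_core).any():   # >= 1 core point in N(x_i)
            labels.append("border")
        else:
            labels.append("noise")
    return labels

# A tight 5-point cluster, one point on its fringe, and one far outlier.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [0.05, 0.05], [0.35, 0.0], [5.0, 5.0]])
labels = classify_points(X, eps=0.3, min_pts=4)
```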
5. The method of any one of claims 2 to 4, wherein step (2) comprises:
actively adjusting the cruising route of the robot based on the position of the suspected target point cloud cluster relative to the robot, so that the suspected target point cloud cluster is projected at the central position of the acquired image.
6. The method of claim 5, wherein the projection relationship of a point cloud onto the image is: P_camera = K·(R·P_lidar + t), where P_camera is the coordinate in the pixel coordinate system, P_lidar is the point cloud coordinate in the lidar coordinate system, K is the camera intrinsic matrix, R is the rotation matrix of the lidar relative to the camera, and t is the translation vector of the lidar relative to the camera.
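The claim-6 projection in code. Note that the claim's formula yields homogeneous pixel coordinates, so a division by depth is added here to obtain (u, v); the calibration values are placeholders, not from the patent:

```python
import numpy as np

def project_to_pixel(P_lidar, K, R, t):
    """Project a lidar-frame point into pixel coordinates via
    P_camera = K (R P_lidar + t), then normalize by depth."""
    p_cam = K @ (R @ P_lidar + t)   # homogeneous pixel coordinates
    return p_cam[:2] / p_cam[2]     # (u, v)

# Hypothetical calibration: identity extrinsics, simple pinhole intrinsics
# with 500 px focal length and a 640x480 image (principal point 320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

# A point 10 m straight ahead on the optical axis lands at the image center,
# which is exactly the condition claim 5 steers the robot toward.
uv = project_to_pixel(np.array([0.0, 0.0, 10.0]), K, R, t)
```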
7. The method of claim 1, wherein step (3) comprises:
(3.1) inputting the image data of the suspected target acquired after the cruising route of the robot is adjusted into a trained image target recognition neural network built with TensorFlow for target recognition;
(3.2) judging from the confidence of the output result whether the quality of the image data of the suspected target obtained after adjustment meets the identification requirement; if not, continuing to adjust the cruising route of the robot until the robot can no longer be actively adjusted or the quality of the image data of the suspected target meets the requirement, and outputting a final identification result.
8. The method of claim 7, wherein in step (3.2) the decision function shown in Figure FDA0002208790200000021 is used to judge whether the quality of the image data of the suspected target obtained after adjustment meets the identification requirement, where m indicates whether the cruising route of the robot has been actively adjusted (m = 1: the route has been actively adjusted; m = 0: the route has not been adjusted), x is the target-identification confidence output by the image target recognition neural network, q is a preset threshold, and z indicates whether the robot can move to the next position as judged from the lidar point cloud (z = 0: the robot can move to the next position; z ≠ 0: it cannot).
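The exact claim-8 formula exists only as an image (FDA0002208790200000021) not reproduced in this text, so the following is one plausible reading of the described logic assembled from the variable definitions, not the patent's actual function:

```python
def decide(m, x, q, z):
    """Hypothetical reconstruction of the claim-8 decision: keep adjusting
    while the route was actively adjusted (m == 1), the recognition
    confidence x is below the preset threshold q, and the lidar point
    cloud says the robot can still move (z == 0).
    Returns 'adjust' or 'output'."""
    if m == 1 and x < q and z == 0:
        return "adjust"   # quality insufficient and motion still possible
    return "output"       # confident enough, or cannot adjust further
```

For example, a low-confidence result with room to maneuver triggers another adjustment, while a blocked robot outputs whatever it has.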
9. A robot target recognition system based on an active strategy and an image sensor, comprising: an image sensor and a processor;
the image sensor is used for acquiring image data on the cruising route of the robot;
the processor is used for processing the acquired image data to obtain the relative pose relation between the suspected target and the robot carrier; actively adjusting the cruising route of the robot based on that relative pose relation so that the robot can acquire image data of the suspected target as required; and performing target identification according to the image data of the suspected target obtained after the cruising route of the robot is adjusted, continuing to adjust the cruising route if the quality of the adjusted image data cannot meet the identification requirement, until the robot can no longer be actively adjusted or the quality of the image data of the suspected target meets the requirement, and outputting a final identification result.
10. The system of claim 9, further comprising: the robot comprises a robot chassis, a display screen and a battery;
the robot chassis is used for carrying all components of the system and is movable;
the display screen is used for outputting the identification result;
the battery is used for supplying power to all parts of the system.
CN201910891152.8A 2019-09-20 2019-09-20 Robot target identification method and system based on active strategy and image sensor Active CN110674724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910891152.8A CN110674724B (en) 2019-09-20 2019-09-20 Robot target identification method and system based on active strategy and image sensor


Publications (2)

Publication Number Publication Date
CN110674724A true CN110674724A (en) 2020-01-10
CN110674724B CN110674724B (en) 2022-07-15

Family

ID=69078463


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107042511A (en) * 2017-03-27 2017-08-15 国机智能科技有限公司 The inspecting robot head method of adjustment of view-based access control model feedback
CN108839016A (en) * 2018-06-11 2018-11-20 深圳市百创网络科技有限公司 Robot method for inspecting, storage medium, computer equipment and crusing robot
CN109444911A (en) * 2018-10-18 2019-03-08 哈尔滨工程大学 A kind of unmanned boat waterborne target detection identification and the localization method of monocular camera and laser radar information fusion
CN109753081A (en) * 2018-12-14 2019-05-14 中国矿业大学 A kind of patrol unmanned machine system in tunnel based on machine vision and air navigation aid
CN110110802A (en) * 2019-05-14 2019-08-09 南京林业大学 Airborne laser point cloud classification method based on high-order condition random field
CN110142785A (en) * 2019-06-25 2019-08-20 山东沐点智能科技有限公司 A kind of crusing robot visual servo method based on target detection


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SHUAI ZHANG et al.: "An integrated UAV navigation system based on geo-registered 3D point cloud", 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), 11 December 2017, pages 650-655 *
XIAOLING et al.: "Dual-arm cooperation and implementing for robotic harvesting tomato using binocular vision", Robotics and Autonomous Systems, vol. 114, 30 April 2019, pages 134-143 *
WANG ZICHENG: "Design and Implementation of an Indoor Autonomous Robot Cruising System Based on ROS", China Masters' Theses Full-text Database, Information Science and Technology Series, 15 May 2019, pages 140-421 *
HUANG HAO: "Development of an Autonomous Cruising Operation Robot for Orchards", China Masters' Theses Full-text Database, Agricultural Science and Technology Series, 15 January 2018, pages 048-184 *


Similar Documents

Publication Publication Date Title
Caraffi et al. Off-road path and obstacle detection using decision networks and stereo vision
US8913783B2 (en) 3-D model based method for detecting and classifying vehicles in aerial imagery
CN111369541B (en) Vehicle detection method for intelligent automobile under severe weather condition
Zhou et al. Self‐supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain
CN113156421A (en) Obstacle detection method based on information fusion of millimeter wave radar and camera
CN108805906A (en) A kind of moving obstacle detection and localization method based on depth map
CN113506318B (en) Three-dimensional target perception method under vehicle-mounted edge scene
CN110717445B (en) Front vehicle distance tracking system and method for automatic driving
Wang et al. An overview of 3d object detection
CN115049700A (en) Target detection method and device
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN113902860A (en) Multi-scale static map construction method based on multi-line laser radar point cloud
CN110579771A (en) Airplane berth guiding method based on laser point cloud
CN110992378B (en) Dynamic updating vision tracking aerial photographing method and system based on rotor flying robot
CN111998862B (en) BNN-based dense binocular SLAM method
CN114399675A (en) Target detection method and device based on machine vision and laser radar fusion
CN113537047A (en) Obstacle detection method, obstacle detection device, vehicle and storage medium
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
Hayton et al. CNN-based Human Detection Using a 3D LiDAR onboard a UAV
Zhang et al. Vessel detection and classification fusing radar and vision data
Saif et al. Crowd density estimation from autonomous drones using deep learning: challenges and applications
CN113792593A (en) Underwater close-range target identification and tracking method and system based on depth fusion
CN107274477B (en) Background modeling method based on three-dimensional space surface layer
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
CN110674724B (en) Robot target identification method and system based on active strategy and image sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant