CN111797929B - Binocular robot obstacle feature detection method based on CNN and PSO - Google Patents


Info

Publication number
CN111797929B
CN111797929B (application CN202010646906.6A)
Authority
CN
China
Prior art keywords
cnn
image
camera
binocular
particle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010646906.6A
Other languages
Chinese (zh)
Other versions
CN111797929A (en)
Inventor
周洪成
李刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinling Institute of Technology
Original Assignee
Jinling Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinling Institute of Technology filed Critical Jinling Institute of Technology
Priority to CN202010646906.6A
Publication of CN111797929A
Application granted
Publication of CN111797929B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a binocular robot obstacle feature detection method based on CNN and PSO. The method exploits the robot's multi-source image data (an ordinary binocular camera and a depth camera). First, each data source is preprocessed separately, including calibration and resizing. Then a CNN is applied to the data from the binocular camera and from the depth camera respectively to obtain obstacle features such as size and distance. Because the individual recognitions contain errors, the recognition results of the two data sets are weighted and the PSO algorithm is used to find the optimal weights, improving the accuracy of obstacle feature detection. By detecting obstacle features accurately, the robot can respond correctly and in time.

Description

Binocular robot obstacle feature detection method based on CNN and PSO
Technical Field
The invention relates to the field of machine vision, in particular to a binocular robot obstacle feature detection method based on CNN and PSO.
Background
With the rapid development of neural network technology in recent years, machine vision based on image processing has advanced quickly. Machine vision is especially important for robots: while moving, a robot must detect obstacles in its surroundings in real time so that it can plan a path through the space, which improves its working efficiency.
A binocular camera localizes a point by imaging the same object with two cameras fixed at different positions and obtaining the coordinates of that point on the two image planes; information about the scene can then be computed from the pair of images. Convolutional neural networks are among the most representative neural networks in deep learning and have achieved a series of breakthroughs in image analysis and processing.
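For reference, the standard triangulation relation that underlies this binocular ranging (stated here from general stereo-vision knowledge, it is not written out in the patent) links the depth Z of a point to the disparity d between its left and right image coordinates:

Z = \frac{f \cdot B}{d}

where f is the focal length and B is the baseline between the two cameras; a larger disparity therefore corresponds to a closer obstacle.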
Therefore, the invention provides a binocular robot obstacle feature detection method based on CNN and PSO, which identifies the feature signals of obstacles so that the robot can accurately detect obstacle features in its workspace and build an accurate spatial path.
Disclosure of Invention
In order to solve the above problems, the invention provides a binocular robot obstacle feature detection method based on CNN and PSO, which identifies the feature signals of obstacles so that the robot can accurately detect obstacle features in its workspace and build an accurate spatial path. To achieve this object:
the invention provides a binocular robot obstacle characteristic detection method based on CNN and PSO, which comprises the following specific steps:
step 1: carrying out image correction on the binocular camera image;
step 2: compressing and dimension-reducing the binocular camera and the camera, and reducing the data volume of the image;
step 3: establishing two CNN models, and respectively carrying out model training along with the binocular camera correction result and the depth image;
step 4: initializing weights W of two CNN training model recognition results 1 ,W 2
Step 5: searching for optimal weight W using particle swarm algorithm 1 ,W 2
Step 6: and carrying out weighted average on the two CNN recognition results by using the optimal weight to obtain a final recognition result.
As a further improvement of the present invention, in step 1 each of the left and right cameras is rotated during image rectification by half of the angle between the two optical axes, so that the imaging planes of the left and right cameras become parallel and the following formula is satisfied:
where a_l and a_r are the rotation matrices of the left and right cameras, respectively.
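The formula itself appears only as an image in the original; the standard half-rotation split used in Bouguet-style rectification, stated here as an assumption rather than a reproduction of the patent drawing, is:

a_l = R^{1/2}, \qquad a_r = R^{-1/2}, \qquad \text{so that } a_r \, R \, a_l^{\top} = I

where R is the rotation from the left camera frame to the right camera frame; each camera thus absorbs half of the relative rotation.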
As a further improvement of the present invention, the translation vector T obtained during image rectification in step 1 is used to construct a transformation matrix a_rect = [e_1 e_2 e_3] that row-aligns the left and right images to be matched;
the expressions for e_1, e_2, e_3 are, respectively:
where T is the translation vector along the line connecting the two principal points, expressed relative to the X axis of the camera coordinate system.
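The expressions are likewise given only as images in the original; the standard construction used in Bouguet-style rectification, stated here as an assumption consistent with the surrounding text, is:

e_1 = \frac{T}{\|T\|}, \qquad e_2 = \frac{[-T_y,\ T_x,\ 0]^{\top}}{\sqrt{T_x^{2} + T_y^{2}}}, \qquad e_3 = e_1 \times e_2

so that e_1 points along the baseline, e_2 is orthogonal to it in the image plane, and e_3 completes the orthonormal basis forming a_rect.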
As a further improvement of the invention, the dimension-reduction method in step 2 is a Gaussian pyramid; the invention uses mean filtering to obtain a low-resolution image, and the sampling operator is expressed as follows:
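A minimal Python sketch of this mean-filter downsampling, assuming the 2x2 averaging operator stated in the claims (the operator's expression itself is given only as an image in the original); one call halves each image dimension, and repeated calls build the pyramid levels:

import numpy as np

def downsample_mean(img):
    # Average non-overlapping 2x2 blocks of a grayscale image (assumption: 2D array).
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float32)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def build_pyramid(img, levels=3):
    # Return [original, half, quarter, ...] resolution images.
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(downsample_mean(pyramid[-1]))
    return pyramid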
as a further improvement of the invention, the specific flow of the PSO algorithm in the step 5 is as follows:
1) Initializing the speed and position of each particle in the population of particles;
2) Calculating the fitness function of each particle;
3) Determining the particle with the optimal fitness in the particle swarm;
4) Detecting whether the stopping condition of the optimization has been reached; if yes, end; otherwise, execute step 5);
5) Updating the velocity and position of the particle swarm.
As a further improvement of the present invention, the update formulas for the particle velocity and position in step 5 are:
V_i = w*V_{i-1} + c*rand()*(gbest_i - X_i)   (4)
X_i = X_{i-1} + V_i   (5)
where V_i is the current velocity of the particle, w is the inertia factor, c is the learning factor, rand() is a random number in (0, 1), and gbest_i is the optimal position of the particle swarm.
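A minimal Python sketch of this weight search is given below. It follows equations (4) and (5) literally (a simplified PSO that uses only the global best, without a personal-best term). The fitness function is an assumption, since the patent does not specify it: each particle is a candidate weight pair (w1, w2), scored by the accuracy of the weighted fusion on hypothetical validation outputs s1, s2 with ground-truth labels:

import numpy as np

def fused_accuracy(weights, s1, s2, labels):
    # Assumed fitness: accuracy of the weighted fusion of the two CNN score arrays.
    w1, w2 = weights
    fused = (w1 * s1 + w2 * s2) / (w1 + w2 + 1e-12)
    return np.mean(np.argmax(fused, axis=1) == labels)

def pso_weights(s1, s2, labels, n_particles=20, n_iter=50, w=0.7, c=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=(n_particles, 2))   # positions: candidate (w1, w2)
    v = np.zeros_like(x)                                # velocities
    fitness = np.array([fused_accuracy(p, s1, s2, labels) for p in x])
    gbest = x[np.argmax(fitness)].copy()                # global best position
    gbest_fit = fitness.max()
    for _ in range(n_iter):
        # Equation (4): V_i = w*V_{i-1} + c*rand()*(gbest_i - X_i), rand() drawn per particle here
        v = w * v + c * rng.random((n_particles, 1)) * (gbest - x)
        # Equation (5): X_i = X_{i-1} + V_i; clipping to [0, 1] is an added assumption
        x = np.clip(x + v, 0.0, 1.0)
        fitness = np.array([fused_accuracy(p, s1, s2, labels) for p in x])
        if fitness.max() > gbest_fit:
            gbest_fit = fitness.max()
            gbest = x[np.argmax(fitness)].copy()
    return gbest  # optimal (w1, w2)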
As a further improvement of the present invention, the final recognition result formula in step 6 is:
where S_1 and S_2 are the recognition results of the two CNN models, and w_1 and w_2 are their optimal weights.
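The formula itself appears only as an image in the original; a plausible form consistent with the "weighted average" described in step 6 (an assumption, not a reproduction of the patent drawing) is:

S = \frac{w_1 S_1 + w_2 S_2}{w_1 + w_2}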
The binocular robot obstacle feature detection method based on CNN and PSO has the following beneficial effects:
1. By using multi-source image data, the invention obtains more information about an obstacle and extracts its features more accurately.
2. The invention uses PSO optimization to improve the accuracy of obstacle feature detection.
3. The method uses the pyramid algorithm to reduce the image dimension, which improves the real-time performance of the algorithm.
4. The algorithm of the invention is simple to implement and low in cost.
Drawings
FIG. 1 is a system block diagram;
FIG. 2 is a system flow diagram;
FIG. 3 is a schematic diagram of a pyramid algorithm;
Detailed Description
The invention is described in further detail below with reference to the attached drawings and detailed description:
the invention provides a binocular robot obstacle characteristic detection method based on CNN and PSO, which is used for identifying characteristic signals of obstacles of a robot so that the robot can accurately detect the characteristics of the obstacles in a space and establish an accurate space path. Fig. 1 is a system block diagram, and fig. 2 is a system flow diagram.
The binocular robot obstacle feature detection method based on CNN and PSO provided by the invention comprises the following specific steps:
First, image rectification is performed on the binocular camera images, and the binocular camera and depth camera images are compressed and dimension-reduced to reduce the data volume. Fig. 3 is a schematic diagram of the pyramid algorithm.
During image rectification, each of the left and right cameras is rotated by half of the angle between the two optical axes, so that the imaging planes of the left and right cameras become parallel and the following formula is satisfied:
where a_l and a_r are the rotation matrices of the left and right cameras, respectively.
The translation vector T obtained during image rectification is used to construct a transformation matrix a_rect = [e_1 e_2 e_3] that row-aligns the left and right images to be matched;
the expressions for e_1, e_2, e_3 are, respectively:
where T is the translation vector along the line connecting the two principal points, expressed relative to the X axis of the camera coordinate system.
The dimension-reduction method is a Gaussian pyramid; the invention uses mean filtering to obtain a low-resolution image, and the sampling operator is expressed as follows:
then, two CNN models are established, model training is carried out along with the binocular camera correction result and the depth image respectively, weights W1 and W2 of the two CNN training model recognition results are initialized, and an optimal weight W1 and W2 are found by using a particle swarm algorithm.
The specific flow of the PSO algorithm is as follows:
1) Initializing the speed and position of each particle in the population of particles;
2) Calculating the fitness function of each particle;
3) Determining the particle with the optimal fitness in the particle swarm;
4) Detecting whether the stopping condition of the optimization has been reached; if yes, end; otherwise, execute step 5);
5) Updating the velocity and position of the particle swarm.
The update formulas for the particle velocity and position are:
V_i = w*V_{i-1} + c*rand()*(gbest_i - X_i)   (4)
X_i = X_{i-1} + V_i   (5)
where V_i is the current velocity of the particle, w is the inertia factor, c is the learning factor, rand() is a random number in (0, 1), and gbest_i is the optimal position of the particle swarm.
Finally, a weighted average of the two CNN recognition results is computed with the optimal weights to obtain the final recognition result.
The final recognition result formula is:
where S_1 and S_2 are the recognition results of the two CNN models, and w_1 and w_2 are their optimal weights.
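A short usage sketch of this fusion step in Python, assuming S1 and S2 are the class-score vectors produced by the two CNNs and (w1, w2) are the PSO-optimised weights found above; the concrete numbers are illustrative only, and the normalised weighted average mirrors the fusion formula assumed earlier:

import numpy as np

def fuse(S1, S2, w1, w2):
    # Weighted average of the two recognition results, normalised by w1 + w2.
    return (w1 * np.asarray(S1) + w2 * np.asarray(S2)) / (w1 + w2)

S1 = np.array([0.2, 0.7, 0.1])   # hypothetical class scores from the binocular-image CNN
S2 = np.array([0.1, 0.6, 0.3])   # hypothetical class scores from the depth-image CNN
w1, w2 = 0.6, 0.4                # illustrative optimal weights from the PSO search
fused = fuse(S1, S2, w1, w2)
print(fused, fused.argmax())     # fused scores and the predicted obstacle class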
The above description is only of the preferred embodiment of the present invention, and is not intended to limit the present invention in any other way, but is intended to cover any modifications or equivalent variations according to the technical spirit of the present invention, which fall within the scope of the present invention as defined by the appended claims.

Claims (1)

1. A binocular robot obstacle feature detection method based on CNN and PSO, comprising the following specific steps:
Step 1: performing image rectification on the binocular camera images;
in step 1, each of the left and right cameras is rotated during image rectification by half of the angle between the two optical axes, so that the imaging planes of the left and right cameras become parallel and the following formula is satisfied:
where a_l and a_r are the rotation matrices of the left and right cameras, respectively;
the translation vector T obtained during image rectification in step 1 is used to construct a transformation matrix a_rect = [e_1 e_2 e_3] that row-aligns the left and right images to be matched;
the expressions for e_1, e_2, e_3 are, respectively:
where T is the translation vector along the line connecting the two principal points, expressed relative to the X axis of the camera coordinate system;
Step 2: compressing and dimension-reducing the binocular camera and depth camera images to reduce the data volume;
the dimension-reduction method in step 2 is a Gaussian pyramid: a low-resolution image is obtained by mean filtering, and the sampling operator is expressed as follows:
where the size of the sampling operator is 2×2;
Step 3: establishing two CNN models and training them on the binocular camera rectification results and the depth images, respectively;
Step 4: initializing the weights W_1, W_2 of the recognition results of the two trained CNN models;
Step 5: searching for the optimal weights w_1, w_2 with the particle swarm optimization algorithm;
the specific flow of the particle swarm algorithm in step 5 is as follows:
1) Initializing the velocity and position of each particle in the particle swarm;
2) Calculating the fitness function of each particle;
3) Determining the particle with the optimal fitness in the particle swarm;
4) Detecting whether the stopping condition of the optimization has been reached; if yes, end; otherwise, execute step 5);
5) Updating the velocity and position of the particle swarm;
the update formulas for the particle velocity and position in step 5 are:
V_i = w*V_{i-1} + c*rand()*(gbest_i - X_i)   (4)
X_i = X_{i-1} + V_i   (5)
where V_i is the current velocity of the particle, w is the inertia factor, c is the learning factor, rand() is a random number in (0, 1), and gbest_i is the optimal position of the particle swarm;
Step 6: computing a weighted average of the two CNN recognition results with the optimal weights to obtain the final recognition result;
the final recognition result formula in step 6 is:
where S_1 and S_2 are the recognition results of the two CNN models, and w_1 and w_2 are their optimal weights.
CN202010646906.6A 2020-07-07 2020-07-07 Binocular robot obstacle feature detection method based on CNN and PSO Active CN111797929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010646906.6A CN111797929B (en) 2020-07-07 2020-07-07 Binocular robot obstacle feature detection method based on CNN and PSO


Publications (2)

Publication Number Publication Date
CN111797929A CN111797929A (en) 2020-10-20
CN111797929B (en) 2023-08-22

Family

ID=72809668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010646906.6A Active CN111797929B (en) 2020-07-07 2020-07-07 Binocular robot obstacle feature detection method based on CNN and PSO

Country Status (1)

Country Link
CN (1) CN111797929B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955920A (en) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN105869166A (en) * 2016-03-29 2016-08-17 北方工业大学 Human body action identification method and system based on binocular vision
CN108081266A (en) * 2017-11-21 2018-05-29 山东科技大学 A kind of method of the mechanical arm hand crawl object based on deep learning
CN108205658A (en) * 2017-11-30 2018-06-26 中原智慧城市设计研究院有限公司 Detection of obstacles early warning system based on the fusion of single binocular vision
CN108247637A (en) * 2018-01-24 2018-07-06 中南大学 A kind of industrial machine human arm vision anticollision control method
CN109858415A (en) * 2019-01-21 2019-06-07 东南大学 The nuclear phase followed suitable for mobile robot pedestrian closes filtered target tracking
CN110110793A (en) * 2019-05-10 2019-08-09 中山大学 Binocular image fast target detection method based on double-current convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on human behavior analysis technology based on binocular vision; Deng Jun; China Master's Theses Full-text Database, Information Science and Technology; I138-274 *

Also Published As

Publication number Publication date
CN111797929A (en) 2020-10-20

Similar Documents

Publication Publication Date Title
CN110533722B (en) Robot rapid repositioning method and system based on visual dictionary
CN109800689B (en) Target tracking method based on space-time feature fusion learning
CN110692082B (en) Learning device, learning method, learning model, estimating device, and clamping system
CN109035204B (en) Real-time detection method for weld joint target
Maggio et al. Loc-nerf: Monte carlo localization using neural radiance fields
CN108247637B (en) Industrial robot arm vision anti-collision control method
CN108196453B (en) Intelligent calculation method for mechanical arm motion planning group
CN109947097B (en) Robot positioning method based on vision and laser fusion and navigation application
CN103198477B (en) Apple fruitlet bagging robot visual positioning method
CN113276106B (en) Climbing robot space positioning method and space positioning system
CN110378325B (en) Target pose identification method in robot grabbing process
CN111998862B (en) BNN-based dense binocular SLAM method
CN110610130A (en) Multi-sensor information fusion power transmission line robot navigation method and system
Setyawan et al. Object detection of omnidirectional vision using PSO-neural network for soccer robot
CN111753696A (en) Method for sensing scene information, simulation device and robot
WO2022228391A1 (en) Terminal device positioning method and related device therefor
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN113034526B (en) Grabbing method, grabbing device and robot
CN111797929B (en) Binocular robot obstacle feature detection method based on CNN and PSO
CN111553954B (en) Online luminosity calibration method based on direct method monocular SLAM
CN117315025A (en) Mechanical arm 6D pose grabbing method based on neural network
CN112950787B (en) Target object three-dimensional point cloud generation method based on image sequence
CN113305848B (en) Real-time capture detection method based on YOLO v2 network
CN108534797A (en) A kind of real-time high-precision visual odometry method
Suzui et al. Toward 6 dof object pose estimation with minimum dataset

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant