CN111230857A - Target positioning and grabbing based on machine vision and deep learning


Info

Publication number
CN111230857A
Authority
CN
China
Prior art keywords
camera
barrel
machine vision
supplementary
pyramid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910165083.2A
Other languages
Chinese (zh)
Inventor
张绍泉
吴朝明
李璠
田伟
徐晨光
王军
张俊
汪胜前
邓承志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Institute of Technology
Original Assignee
Nanchang Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Institute of Technology filed Critical Nanchang Institute of Technology
Priority to CN201910165083.2A priority Critical patent/CN111230857A/en
Publication of CN111230857A publication Critical patent/CN111230857A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/087Controls for manipulators by means of sensing devices, e.g. viewing or touching devices for sensing other physical parameters, e.g. electrical or chemical properties
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/74Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses target positioning and grabbing based on machine vision and deep learning, belonging to the field of machine vision. The system comprises a camera set, a mechanical arm and a mounting frame; the camera set comprises a left camera and a right camera, both connected to the upper end of the mounting frame. A mounting cylinder is connected at the centre of the upper end of the mounting frame, and an auxiliary pyramid arranged inside the mounting cylinder cooperates with an auxiliary level measuring instrument. Using the optical auto-collimation imaging principle, with an LED light-emitting element and linear-array CCD imaging, the scheme assists the horizontal adjustment of the initial state of the camera set. This guarantees the initial levelness of the camera set so as to provide a reference for the accurate angle adjustment of the cameras afterwards, essentially eliminates the influence of initial levelness error on calibration precision, and further improves the effectiveness of the related algorithms.

Description

Target positioning and grabbing based on machine vision and deep learning
Technical Field
The invention relates to the field of machine vision, in particular to target positioning and grabbing based on machine vision and deep learning.
Background
With the development of industrial automation, the number of robots keeps growing, and industrial robots are widely applied in automobile manufacturing, machining, electronics, smart home service, and many other areas of life. Robot technology reflects, to some extent, a nation's level of automation; with social and economic development, expanding production scale, and increasingly complex production environments, developing automation systems that are more intelligent, informatized and precise has become especially important. In the field of machine vision, target identification and positioning are key technologies: they can guide a robot to complete tasks such as industrial part machining, sorting and handling, and they are also important in complex vision fields such as visual scene understanding, map creation and AR. Research on machine vision technology is a major force driving the development of robotics.
The traditional mechanical arm grabs under a teaching system; when the position or shape of the object to be grabbed, or the environment, changes, an arm under a teaching system cannot adjust to the change, so the grabbing task fails.
The usual computer-vision approach to the mechanical-arm grabbing task is to sample the scene with a camera, obtain the target position and spatial attitude with an image processing algorithm, and finally let the mechanical arm complete the grab. In the traditional image processing of the recognition stage, hand-crafted feature extraction is used to process the image information; the extraction process is easily affected by external factors such as illumination, target shape and target size, so generalization capability and robustness are poor.
The concept of deep learning was first proposed by Hinton in 2006, and Krizhevsky's outstanding performance with a deep learning method in the 2012 ImageNet competition attracted the attention of researchers worldwide. Compared with traditional vision algorithms, deep learning does not require the user to choose in advance which features to extract; instead, it finds the target's features from large amounts of data by learning.
A recently published master's thesis, "Research on Target Identification and Grasp Positioning Based on Machine Vision and Deep Learning", carried out experimental research on target identification and grasp positioning based on machine vision and deep learning; the algorithm obtained there shows a clear effect, and the results carry both theoretical significance and application value.
In the paper mentioned above, before the camera calibration experiment can be performed, the camera set must be installed, fixed, and then leveled, so that it starts in a horizontal state and provides an adjustment reference for the subsequent angle adjustment of the cameras. Existing camera sets, however, have difficulty guaranteeing this horizontal initial state, which easily degrades the calibration accuracy and reduces the effectiveness of the related algorithms.
Disclosure of Invention
1. Technical problem to be solved
Aiming at the problems in the prior art, the invention provides target positioning and grabbing based on machine vision and deep learning. Using the optical auto-collimation imaging principle, with an LED light-emitting element and linear-array CCD imaging, it assists the horizontal adjustment of the initial state of the camera set, guarantees the initial levelness of the camera set so as to provide an accurate reference for the subsequent angle adjustment, essentially eliminates the influence of initial levelness error on calibration precision, and further improves the effectiveness of the related algorithms.
2. Technical scheme
In order to solve the above problems, the present invention adopts the following technical solutions.
Target positioning and grabbing based on machine vision and deep learning comprises a camera set, a mechanical arm and a mounting frame. The camera set comprises a left camera and a right camera, both connected to the upper end of the mounting frame; the two cameras lie on the same horizontal plane and are used for camera calibration. The mechanical arm is located below the camera set and connected to the mounting frame. A mounting cylinder is connected at the centre of the upper end of the mounting frame; an auxiliary pyramid is arranged inside the mounting cylinder and cooperates with an auxiliary level measuring instrument. Using the optical auto-collimation imaging principle, with an LED light-emitting element and linear-array CCD imaging, the horizontal adjustment of the initial state of the camera set is assisted: the initial levelness of the camera set is guaranteed, an accurate reference is provided for the subsequent angle adjustment of the cameras, the influence of initial levelness error on calibration precision is essentially eliminated, and the effectiveness of the related algorithms is further improved.
Further, the auxiliary level measuring instrument comprises a fixing frame and an auxiliary measuring member. The auxiliary measuring member comprises a barrel and an auxiliary lens group; the barrel is fixedly connected to the upper end of the fixing frame, and the lens group is mounted inside it. The barrel is of a four-way (cross) type. The auxiliary lens group comprises a first pyramid, a second pyramid, a cemented doublet lens, a prism, an imaging CCD and a light-transmitting lens: the prism sits in the middle of the barrel interior, the imaging CCD at the left through hole of the barrel, the cemented doublet lens between the prism and the imaging CCD, the second pyramid at the right through hole, the first pyramid at the lower through hole, and the light-transmitting lens at the upper through hole.
Further, a movable cylinder is threaded into the lower through hole of the barrel and carries the second pyramid. The operator removes the movable cylinder from the lower end of the auto-collimated auxiliary level measuring instrument, aims the lower face of the prism at the upper end face of the auxiliary pyramid, and adjusts the position of the auxiliary pyramid below the instrument until the imaging points of the two beams converge at the same position on the imaging CCD. The auxiliary pyramid is then taken to be perpendicular to the second pyramid of the instrument, the horizontal position of the camera set is fixed, and the leveling of the camera set is complete.
Further, a warning light is connected to the upper end of the barrel, with a first auxiliary controller electrically connected between the warning light and the auxiliary level measuring instrument. Before the camera set is level, the controller keeps the warning light flashing as an alarm; once the camera set is level, it turns the light off, strengthening the prompt to the operator.
Furthermore, the left camera and the right camera are identically configured gigabit-Ethernet industrial structured-light cameras, so color and depth information in the field of view can be obtained accurately, with small depth error and noise. Both cameras connect to a computer over network cables, making it convenient to adjust parameters such as the image sampling frame rate and image scale.
Furthermore, light supplement lamps are connected to the upper ends of the left and right cameras; under poor lighting they supplement the light and enhance brightness, so that calibration can proceed smoothly.
Furthermore, a miniature light sensor is connected to the side of each light supplement lamp, with a second auxiliary controller electrically connected between lamp and sensor. When the sensor detects ambient light crossing its manually set threshold, the controller switches the lamp on or off automatically, saving electric energy to some extent.
Furthermore, the mechanical arm is a multi-axis mechanical arm that can reach a wide range of spatial positions and assume various postures.
Furthermore, the left camera and the right camera can be calibrated with the Zhang Zhengyou planar calibration method; the algorithm sits between traditional camera calibration and self-calibration, is simple to operate, achieves high calibration precision, and is robust.
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
(1) The scheme uses the optical auto-collimation imaging principle, with an LED light-emitting element and linear-array CCD imaging, to assist the horizontal adjustment of the initial state of the camera set. This guarantees the initial levelness of the camera set, provides an accurate reference for the subsequent angle adjustment, essentially eliminates the influence of initial levelness error on calibration precision, and further improves the effectiveness of the related algorithms.
(2) The auxiliary level measuring instrument comprises a fixing frame and an auxiliary measuring member. The auxiliary measuring member comprises a barrel and an auxiliary lens group; the barrel is fixedly connected to the upper end of the fixing frame, and the lens group is mounted inside it. The barrel is of a four-way (cross) type. The auxiliary lens group comprises a first pyramid, a second pyramid, a cemented doublet lens, a prism, an imaging CCD and a light-transmitting lens: the prism sits in the middle of the barrel interior, the imaging CCD at the left through hole, the cemented doublet lens between the prism and the imaging CCD, the second pyramid at the right through hole, the first pyramid at the lower through hole, and the light-transmitting lens at the upper through hole.
(3) A movable cylinder is threaded into the lower through hole of the barrel and carries the second pyramid. The operator removes the movable cylinder from the lower end of the auto-collimated auxiliary level measuring instrument, aims the lower face of the prism at the upper end face of the auxiliary pyramid, and adjusts the position of the auxiliary pyramid below the instrument until the imaging points of the two beams converge at the same position on the imaging CCD. The auxiliary pyramid is then taken to be perpendicular to the second pyramid of the instrument, the horizontal position of the camera set is fixed, and the leveling of the camera set is complete.
(4) A warning light is connected to the upper end of the barrel, with a first auxiliary controller electrically connected between the warning light and the auxiliary level measuring instrument. Before the camera set is level, the controller keeps the warning light flashing as an alarm; once the camera set is level, it turns the light off, strengthening the prompt to the operator.
(5) The left camera and the right camera are identically configured gigabit-Ethernet industrial structured-light cameras; color and depth information in the field of view can be obtained accurately, with small depth error and noise. Both cameras connect to a computer over network cables, making it convenient to adjust parameters such as the image sampling frame rate and image scale.
(6) Light supplement lamps are connected to the upper ends of the left and right cameras; under poor lighting they supplement the light and enhance brightness, so that calibration can proceed smoothly.
(7) A miniature light sensor is connected to the side of each light supplement lamp, with a second auxiliary controller electrically connected between lamp and sensor. When the sensor detects ambient light crossing its manually set threshold, the controller switches the lamp on or off automatically, saving electric energy to some extent.
(8) The mechanical arm is a multi-axis mechanical arm that can reach a wide range of spatial positions and assume various postures.
(9) The left and right cameras can be calibrated with the Zhang Zhengyou planar calibration method; the algorithm sits between traditional camera calibration and self-calibration, is simple to operate, achieves high calibration precision, and is robust.
Drawings
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a schematic view of the construction of the auxiliary level measuring device of the present invention;
FIG. 3 is a schematic side view of the present invention;
FIG. 4 is a calibration flow chart of the present invention;
FIG. 5 is a flow chart of target detection according to the present invention;
FIG. 6 is a table of identification measurement coordinates of the present invention;
FIG. 7 is a table of measurement error analysis samples according to the present invention;
fig. 8 is a sample table of calibration results of left and right cameras according to the present invention.
The reference numbers in the figures illustrate:
1 barrel, 2 imaging CCD, 3 cemented doublet lens, 4 light-transmitting lens, 5 second pyramid, 6 first pyramid, 7 auxiliary pyramid, 8 warning light, 9 left camera, 10 right camera, 11 mechanical arm, 12 light supplement lamp.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those skilled in the art without inventive work fall within the scope of the present invention.
In the description of the present invention, it should be noted that the terms "upper", "lower", "inner", "outer", "top/bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplification of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "disposed," "sleeved," "connected," and the like are to be construed broadly; for example, "connected" may mean fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; connected directly or indirectly through an intermediate medium; or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
Example 1:
Referring to fig. 3, the target positioning and grabbing device based on machine vision and deep learning comprises a camera set, a mechanical arm 11 and a mounting frame 8. The camera set comprises a left camera 9 and a right camera 10, both connected to the upper end of the mounting frame 8; the two cameras lie on the same horizontal plane and are used for camera calibration. The mechanical arm 11 is located below the camera set and connected to the mounting frame 8. A mounting cylinder is connected at the centre of the upper end of the mounting frame 8; an auxiliary pyramid 7 is arranged inside the mounting cylinder and cooperates with an auxiliary level measuring instrument. Using the optical auto-collimation imaging principle, with an LED light-emitting element and linear-array CCD imaging, the horizontal adjustment of the initial state of the camera set is assisted: the initial levelness of the camera set is guaranteed, an accurate reference is provided for the subsequent angle adjustment, the influence of initial levelness error on calibration precision is essentially eliminated, and the effectiveness of the related algorithms is further improved.
Referring to figs. 1 and 2, the auxiliary level measuring instrument comprises a fixing frame and an auxiliary measuring member. The auxiliary measuring member comprises a barrel 1 and an auxiliary lens group; the barrel 1 is fixedly connected to the upper end of the fixing frame, and the lens group is mounted inside it. The barrel 1 is of a four-way (cross) type. The auxiliary lens group comprises a first pyramid 6, a second pyramid 5, a cemented doublet lens 3, a prism, an imaging CCD 2 and a light-transmitting lens 4: the prism sits in the middle of the barrel interior, the imaging CCD 2 at the left through hole of the barrel 1, the cemented doublet lens 3 between the prism and the imaging CCD 2, the second pyramid 5 at the right through hole, the first pyramid 6 at the lower through hole, and the light-transmitting lens 4 at the upper through hole.
A movable cylinder is threaded into the lower through hole of the barrel 1 and carries the second pyramid 5. The operator removes the movable cylinder from the lower end of the auto-collimated auxiliary level measuring instrument, aims the lower face of the prism at the upper end face of the auxiliary pyramid 7, and adjusts the position of the auxiliary pyramid 7 below the instrument until the imaging points of the two beams converge at the same position on the imaging CCD 2. Specifically: the laser emitted by the laser source is split into two beams by the prism, one reflected and one transmitted straight down through the prism. The reflected beam returns to the prism after being retroreflected by the second pyramid 5; part of it is reflected out of the device, and part passes through the prism and is focused onto the photosensitive surface of the imaging CCD 2 by the cemented doublet lens 3. The transmitted beam is retroreflected by the auxiliary pyramid 7 and returns to the prism; part of it passes through the prism and leaves the device, and part is reflected by the prism and focused onto the photosensitive surface of the imaging CCD 2 by the cemented doublet lens 3. When the two spots on the photosensitive surface of the imaging CCD 2 coincide, the auxiliary pyramid 7 is taken to be perpendicular to the second pyramid 5 of the auxiliary level measuring instrument; the horizontal position of the camera set is then fixed, and the leveling of the camera set is complete.
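As an aside that is not stated in the patent but follows from general autocollimator practice, the sensitivity of such a leveling scheme can be estimated with the standard auto-collimation relation: if the element under adjustment tilts the returning beam by an angle θ relative to the reference beam, the two focused spots on the photosensitive surface of the imaging CCD 2 separate by approximately

d = 2·f·θ, i.e. θ = d/(2·f),

where f is the focal length of the cemented doublet lens 3. Leveling amounts to driving the spot separation d to zero. With illustrative values of f = 100 mm and a CCD pixel pitch of 7 µm (assumptions, not figures from the source), one pixel of spot separation corresponds to a tilt of about 35 µrad.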
Referring to figs. 1 and 2, a warning light 8 is connected to the upper end of the barrel 1, with a first auxiliary controller electrically connected between the warning light 8 and the auxiliary level measuring instrument. Before the camera set is level, the controller keeps the warning light 8 flashing as an alarm; once the camera set is level, it turns the light off, strengthening the prompt to the operator.
The left camera 9 and the right camera 10 are identically configured gigabit-Ethernet industrial structured-light cameras; color and depth information in the field of view can be obtained accurately, with small depth error and noise. Both cameras connect to a computer over network cables, making it convenient to adjust parameters such as the image sampling frame rate and image scale.
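The patent states only that both cameras connect to a computer over network cables. As an illustrative sketch of adjusting the sampling frame rate and image scale, the snippet below uses OpenCV; the device indices and property values are assumptions, and a real gigabit-Ethernet industrial camera would typically be driven through its vendor's GigE Vision SDK rather than plain OpenCV.

import cv2

def open_camera(index, width=1280, height=1024, fps=30):
    # Open one camera and set the image scale and sampling frame rate.
    # The properties are standard OpenCV; index and values are placeholders.
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    return cap

left_cam = open_camera(0)   # left camera 9 (index assumed)
right_cam = open_camera(1)  # right camera 10 (index assumed)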
Referring to fig. 3, the upper ends of the left camera 9 and the right camera 10 are each connected to a light supplement lamp 12; under poor lighting, the lamps 12 supplement the light and enhance brightness, so that calibration can proceed smoothly.
A miniature light sensor is connected to the side end of the light supplement lamp 12, with a second auxiliary controller electrically connected between the lamp 12 and the sensor. When the sensor detects ambient light crossing its manually set threshold, the second auxiliary controller switches the lamp 12 on or off automatically, saving electric energy to some extent.
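The patent describes the second auxiliary controller only as switching the lamp around a manually set threshold. A minimal sketch of that decision logic follows; the hysteresis band is an added assumption (not from the source) to keep the lamp from flickering when the ambient light hovers near the threshold.

def fill_light_state(ambient_lux, threshold_lux, on_now, hysteresis=0.1):
    # Decide whether the light supplement lamp 12 should be on.
    low = threshold_lux * (1.0 - hysteresis)
    high = threshold_lux * (1.0 + hysteresis)
    if ambient_lux < low:
        return True      # too dark: switch the lamp on
    if ambient_lux > high:
        return False     # bright enough: switch it off and save energy
    return on_now        # inside the band: keep the current state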
The mechanical arm 11 is a multi-axis mechanical arm that can reach a wide range of spatial positions and assume various postures.
The left camera 9 and the right camera 10 can be calibrated with the Zhang Zhengyou planar calibration method; the algorithm sits between traditional camera calibration and self-calibration, is simple to operate, achieves high calibration precision, and is robust.
An operator first installs the left camera 9 and the right camera 10 on the left and right sides of the upper end of the mounting frame 8, keeping an appropriate distance between them and adjusting their mounting angles as required. The operator then removes the movable cylinder from the lower end of the auto-collimated auxiliary level measuring instrument, aims the lower face of the prism at the upper end face of the auxiliary pyramid 7, and adjusts the position of the auxiliary pyramid 7 below the instrument until the imaging points of the two beams converge at the same position on the imaging CCD 2. The auxiliary pyramid 7 is then taken to be perpendicular to the second pyramid 5 of the instrument, the horizontal position of the camera set is fixed, the leveling of the camera set is complete, and the calibration experiment can begin.
Referring to fig. 4, the specific process of camera calibration is as follows (a code sketch follows the steps):
step one, opening a left camera 9 and a right camera 10;
step two, collecting images;
step three, binarization, edge detection, contour extraction and ellipse fitting;
step four, identifying a target;
step five, repeating steps one to four until the required number of target extractions is complete;
step six, calibrating the camera;
and seventhly, calibrating the binocular.
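The patent does not disclose an implementation of these steps. The sketch below is a minimal stand-in under stated assumptions: OpenCV's built-in symmetric circle-grid detector replaces the patent's own binarization, edge detection, contour extraction and ellipse fitting, and the grid layout and unit spacing are placeholders.

import cv2
import numpy as np

PATTERN = (4, 5)  # circle-grid layout: an assumption, not from the patent

def find_targets(images):
    # Steps two to four: detect the circular identification points per image.
    # Reference 3-D grid coordinates (unit spacing, z = 0 on the target plane).
    grid = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    grid[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)
    object_points, image_points = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, centers = cv2.findCirclesGrid(
            gray, PATTERN, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
        if found:
            object_points.append(grid)
            image_points.append(centers)
    return object_points, image_points

def calibrate(images):
    # Step six: monocular calibration yields the intrinsic matrix K
    # and the distortion coefficients for one camera.
    objp, imgp = find_targets(images)
    h, w = images[0].shape[:2]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        objp, imgp, (w, h), None, None)
    return K, dist

Running calibrate() once on the left image set and once on the right yields the per-camera intrinsics used in the binocular step.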
Referring to fig. 6, the coordinates of each identification point in the calibration images are recorded; the left camera 9 and the right camera 10 are monocularly calibrated using their target images, and, referring to fig. 8, the intrinsic parameter matrices and distortion coefficients of the left camera 9 and the right camera 10 are calculated.
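Step seven, the binocular calibration, builds on these monocular results. A hedged continuation of the sketch above, with placeholder variable names, recovers the pose of the right camera 10 relative to the left camera 9:

import cv2

def stereo_calibrate(objp, imgp_l, imgp_r, K_l, d_l, K_r, d_r, image_size):
    # Binocular calibration with the monocular intrinsics held fixed.
    rms, K_l, d_l, K_r, d_r, R, T, E, F = cv2.stereoCalibrate(
        objp, imgp_l, imgp_r, K_l, d_l, K_r, d_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T  # rotation and translation of the right camera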
In the matching test, a target with many feature points is selected as the feature-extraction object and placed in the field of view; the left camera 9 and the right camera 10 capture images simultaneously. A target recognition algorithm (prior art, so its details are not repeated here) identifies the target in the images from each camera, and the coordinates of each feature point on the target are recorded. The feature-point matching algorithm proposed in the master's thesis cited above is then used to match the feature points between the images from the two cameras; the higher the matching precision, the better the validity of the algorithm is verified. Several identification points in the successfully matched target are selected as measured feature points, the circle-centre coordinates of these points on the left and right images are recorded, and the three-dimensional coordinates of the feature points are recorded as well.
Referring to figs. 6 and 7, five of the feature points are labeled A, B, C, D and E; the distances of the groups AB, BE, CD, BC, BD, CE and DE are calculated, each is compared with its true distance, the group with the largest error is identified, and the error rate is computed.
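The tables in figs. 6 and 7 reduce to pairwise Euclidean distances between the measured three-dimensional feature points, compared against the known true distances. A small sketch of that computation (all input values are placeholders) is:

import numpy as np

PAIRS = ("AB", "BE", "CD", "BC", "BD", "CE", "DE")

def distance_errors(points, true_dist):
    # points:    {"A": (x, y, z), ...} measured by the stereo rig
    # true_dist: {"AB": ..., ...} ground-truth distances, same units
    report = {}
    for pair in PAIRS:
        p = np.asarray(points[pair[0]], dtype=float)
        q = np.asarray(points[pair[1]], dtype=float)
        measured = float(np.linalg.norm(p - q))
        report[pair] = abs(measured - true_dist[pair]) / true_dist[pair]
    worst = max(report, key=report.get)  # distance group with the largest error
    return report, worst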
In the later object identification and positioning stage, the work divides into two tasks: finding an object in the scene and giving the region where it lies, and identifying the category of the object that was found. This process is generally called target detection. Referring to fig. 5, target detection can be divided into the following six stages:
stage one, inputting an image;
stage two, pretreatment;
stage three, region selection;
stage four, feature extraction;
stage five, classification;
and stage six, outputting the prediction.
First, the input image or video frame is preprocessed. In the preprocessing stage, the best data are sought from a large amount of data; during learning, the data are normalized into a standard format; and the images are typically denoised, mean-subtracted and scaled. Features are then extracted for each candidate region and classified, and the prediction is output.
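As an illustrative sketch of the preprocessing just described (the kernel size, input scale and mean-subtraction scheme are assumptions, not values from the patent):

import cv2
import numpy as np

def preprocess(frame, size=(224, 224)):
    # Stage two: denoise, rescale to a fixed input size, subtract the mean.
    denoised = cv2.GaussianBlur(frame, (5, 5), 0)  # simple denoising
    resized = cv2.resize(denoised, size)           # fixed image scale
    x = resized.astype(np.float32)
    x -= x.mean(axis=(0, 1))                       # per-channel mean reduction
    return x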
The foregoing is only a preferred embodiment of the present invention, and the scope of the invention is not limited thereto. Any equivalent substitution or modification that a person skilled in the art can readily conceive within the technical scope disclosed herein shall fall within the protection scope of the present invention.

Claims (10)

1. Target positioning and grabbing based on machine vision and deep learning, characterized in that: it comprises a camera set, a mechanical arm (11) and a mounting frame (8); the camera set comprises a left camera (9) and a right camera (10), both connected to the upper end of the mounting frame (8); the left camera (9) and the right camera (10) lie on the same horizontal plane and are used for camera calibration; the mechanical arm (11) is located below the camera set and is connected to the mounting frame (8); a mounting cylinder is connected at the centre of the upper end of the mounting frame (8); an auxiliary pyramid (7) is arranged inside the mounting cylinder, and the auxiliary pyramid (7) cooperates with an auxiliary level measuring instrument.
2. The target positioning and grabbing based on machine vision and deep learning of claim 1, characterized in that: the auxiliary level measuring instrument comprises a fixing frame and an auxiliary measuring member; the auxiliary measuring member comprises a barrel (1) and an auxiliary lens group; the barrel (1) is fixedly connected to the upper end of the fixing frame, and the auxiliary lens group is connected inside the barrel (1).
3. The target positioning and grabbing based on machine vision and deep learning of claim 2, characterized in that: the barrel (1) is of a four-way (cross) type; the auxiliary lens group comprises a first pyramid (6), a second pyramid (5), a cemented doublet lens (3), a prism, an imaging CCD (2) and a light-transmitting lens (4); the prism is connected in the middle of the inside of the barrel (1); the imaging CCD (2) is connected at the left through hole of the barrel (1); the cemented doublet lens (3) is located between the prism and the imaging CCD (2); the second pyramid (5) is connected at the right through hole of the barrel (1); the first pyramid (6) is connected at the lower through hole of the barrel (1); and the light-transmitting lens (4) is connected at the upper through hole of the barrel (1).
4. The target positioning and grabbing based on machine vision and deep learning of claim 3, characterized in that: a movable cylinder is threadedly connected at the lower through hole of the barrel (1), and the second pyramid (5) is connected inside the movable cylinder.
5. The target positioning and grabbing based on machine vision and deep learning of claim 3, characterized in that: a warning light (8) is connected to the upper end of the barrel (1), and a first auxiliary controller is electrically connected between the warning light (8) and the auxiliary level measuring instrument.
6. The target positioning and grabbing based on machine vision and deep learning of claim 1, characterized in that: the left camera (9) and the right camera (10) are identically configured gigabit-Ethernet industrial structured-light cameras, and the left camera (9) and the right camera (10) are both connected to a computer through network cables.
7. The target positioning and grabbing based on machine vision and deep learning of claim 6, characterized in that: the upper ends of the left camera (9) and the right camera (10) are each connected to a light supplement lamp (12).
8. The target positioning and grabbing based on machine vision and deep learning of claim 1, characterized in that: a miniature light sensor is connected to the side end of the light supplement lamp (12), and a second auxiliary controller is electrically connected between the light supplement lamp (12) and the miniature light sensor.
9. The target positioning and grabbing based on machine vision and deep learning of claim 1, characterized in that: the mechanical arm (11) is a multi-axis mechanical arm.
10. The target positioning and grabbing based on machine vision and deep learning of claim 1, characterized in that: the left camera (9) and the right camera (10) can be calibrated using the Zhang Zhengyou planar calibration method.
CN201910165083.2A 2019-03-05 2019-03-05 Target positioning and grabbing based on machine vision and deep learning Pending CN111230857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910165083.2A CN111230857A (en) 2019-03-05 2019-03-05 Target positioning and grabbing based on machine vision and deep learning

Publications (1)

Publication Number Publication Date
CN111230857A true CN111230857A (en) 2020-06-05

Family

ID=70877772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910165083.2A Pending CN111230857A (en) 2019-03-05 2019-03-05 Target positioning and grabbing based on machine vision and deep learning

Country Status (1)

Country Link
CN (1) CN111230857A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204360026U * 2014-12-20 2015-05-27 中国科学院西安光学精密机械研究所 Fast light-beam alignment device for a mirror-surface position finder
CN108136600A * 2015-09-28 2018-06-08 株式会社理光 System
CN205310238U * 2015-12-31 2016-06-15 上海灿星文化传播有限公司 Multipurpose intelligent camera robot
CN106886225A * 2017-03-16 2017-06-23 山东大学 Multifunctional UAV intelligent landing station system
CN109425474A * 2017-08-22 2019-03-05 中国科学院长春光学精密机械与物理研究所 Optical alignment method, apparatus and system
CN207213537U * 2017-09-19 2018-04-10 北京京东尚科信息技术有限公司 Camera mounting device and goods shelf
CN109150302A * 2018-08-20 2019-01-04 中国科学院上海技术物理研究所 Optical-axis self-calibration device and method for an optical communication system
CN109407335A * 2018-12-14 2019-03-01 珠海博明视觉科技有限公司 Adjusting device and method for lens-group adjustment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李传朋: "Research on target identification and grasp positioning based on machine vision and deep learning", China Excellent Master's Theses Full-text Database, Information Science and Technology series *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200605)