CN110142785A - A visual servoing method for inspection robots based on object detection - Google Patents

A visual servoing method for inspection robots based on object detection

Info

Publication number
CN110142785A
Authority
CN
China
Prior art keywords
image
robot
target
equipment
inspection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910552462.7A
Other languages
Chinese (zh)
Inventor
房桦
马青岷
张世伟
朱孟鹏
孙自虎
李现奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Mudian Intelligent Technology Co Ltd
Original Assignee
Shandong Mudian Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Mudian Intelligent Technology Co Ltd filed Critical Shandong Mudian Intelligent Technology Co Ltd
Priority to CN201910552462.7A
Publication of CN110142785A
Pending legal-status Critical Current


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A visual servoing method for inspection robots based on object detection, comprising the following steps: S1, collecting and preparing sample images of each equipment type; performing sample breeding (augmentation) on the equipment images according to the viewing angle, illumination, scale and other conditions observed during robot inspection; labeling each sample image with its equipment class; and completing an annotated sample data set covering all classes. The present invention deploys a deep-learning algorithm in the vision module of the inspection robot so that the target device can be located in the panoramic image acquired in real time; on this basis the angular deviation from the center of the field of view is computed and the pan-tilt angle is adjusted precisely. This effectively solves the limitation of robot image acquisition in previous methods and the patrol-task failures caused by errors in the visual servoing function, frees a large amount of manual configuration work, and improves the efficiency and quality of robot inspection.

Description

A visual servoing method for inspection robots based on object detection
Technical field
An equipment inspection robot is an all-weather indoor and outdoor mobile platform based on autonomous navigation, precise positioning and automatic charging, integrating visible-light, infrared, acoustic and other sensors. Based on a laser-scanning navigation system, it performs optimal path planning and bidirectional travel for the inspection robot and transmits video and image data of the inspected target devices to the monitoring room over a wireless network. A back-end inspection platform system then applies image processing, pattern recognition and the equipment image template library to the devices under inspection to distinguish equipment defects and appearance anomalies and to recognize switch open/closed states, meter readings and oil-level gauge positions; in cooperation with a data service system, it outputs inspection result reports and abnormal-state analysis reports.
In daily robot patrol tasks, the images of the equipment under inspection acquired in real time often exhibit a field-of-view deviation: the equipment region to be inspected does not occupy the center of the image, or even partly or entirely drifts out of the image. As a result the robot cannot properly locate the target device and recognize its operating state, which creates a safety risk for the equipment at the inspection site. The cause of this deviation error in robot image acquisition is described in detail in Reference 1.
Reference 1: invention patent ZL201610457745.X, "Substation inspection robot pan-tilt control method based on visual servoing", inventors: Fang Hua, Sui Ji, et al.
The target device positioning mode of current inspection robots mostly matches the real-time image against an equipment template image, as described in Chinese invention 201510229248.X, "A power equipment appearance anomaly detection method based on image comparison", in which the image acquired by the inspection robot is matched against an original (template) image to locate the target device. The equipment template images are captured by an operator manually controlling the robot's pan-tilt and camera, normally configuring the equipment region to lie in the central part of the image, and the corresponding parameters such as pan-tilt angle and camera focal length are saved. When executing a patrol task without operator intervention, the robot travels to the preset inspection position of each device, recalls the acquisition parameters of that device's template image, performs pan-tilt rotation and camera zooming, and autonomously acquires a real-time image of the equipment state, without verifying whether the equipment region has drifted out of the field of view.
For this reason, the inventor of Reference 1 proposed a visual servoing solution: an image feature matching algorithm computes the field-of-view angle difference between the real-time inspection image and the equipment template image, this angle is used as the pan-tilt compensation value, and the pan-tilt is rotated to the appropriate position to acquire an accurate image of the target device. When the illumination is good, the image acquired by the robot is rich in features, and the target device has not completely drifted out of the image boundary, this method is feasible: with the visual servoing function enabled, a complete image of the target device can be acquired accurately and an effective recognition result can be output. However, the method still fails in the following situations:
(1) Image feature matching algorithms are very sensitive to illumination intensity. In robot patrol tasks, images are often acquired under backlight conditions; the lighting difference between the real-time image and the template image is then significant, feature matching fails, and the view-angle offset cannot be computed;
(2) When the equipment surface is smooth, without obvious edges or corner points, and the background is fairly uniform (for example a wall or the sky), the image features available for matching are sparse, so the real-time image cannot be matched against the template image;
(3) When there is an error in the inspection robot's parking position, the recalled acquisition parameters cannot produce an image that is consistent with and as clear as the template image; horizontal deflection of the shooting angle, defocus, scale changes and similar phenomena occur, and images of this quality cannot guarantee the accuracy of image feature matching.
In recent years deep learning has been widely adopted in computer vision, with rich achievements in face recognition, intelligent driving, scene classification and other tasks. In object detection (target localization) in particular, deep-learning detection models have gradually replaced image matching algorithms that use a target image as a template. Deep learning is the main avenue of current artificial-intelligence research; the concept originates from research on artificial neural networks, and a multilayer perceptron with several hidden layers is one kind of deep-learning structure [Reference 2]. For a computer, the input image in object detection is merely an array of values between 0 and 255, so it is difficult to obtain directly the high-level semantic concept that a certain object is present in the image, and the region in which the target appears is unknown. The target may appear at any position in the image, its form may vary in many ways, and image backgrounds are diverse. Deep-learning-based object detection is mainly realized with convolutional neural networks (CNN) and region proposal algorithms. Object detection algorithms based on deep learning are little affected by illumination conditions and are highly robust to scale changes, angular deflection and blur noise.
Reference 2: Sun Zhijun, Xue Lei, Xu Yangming, Wang Zheng. "A review of deep learning research." Computer Application, 2012(08).
In this field, Chinese invention 201510785505.8, "Visual servoing control method in UAV maneuvering target localization", computes from an imaging sequence of the target the attitude-angle setpoints for target localization and tracking and for route tracking, completing visual servoing control; Chinese invention 201110216396.X, "Circuit breaker state template-matching identification method based on a substation inspection robot", manually marks the target equipment region in the template image and maps the equipment region into the acquired image by feature registration between the acquired inspection image and the template image to complete target localization; Chinese invention 201610874173.5, "A pedestrian crosswalk signal lamp system and method with red-light-running evidence collection", uses deep learning for object detection, extracting pedestrian and face regions for target-region and identity recognition so that violations can be penalized.
As described in Reference 1, the visual servoing process based on image feature matching is as follows: before executing a patrol task, an equipment image shot by the robot at each preset position in the inspection scene must be saved in the template library; the equipment images in the template library correspond one-to-one with the equipment in the inspection scene. When a patrol task is assigned to the inspection robot, the parameters used to shoot the template image at each preset parking position (e.g. pan-tilt angle, camera focal length) are explicitly specified. After the inspection robot travels to a preset parking position, it adjusts its pose according to the acquisition mode of that device's template image, shoots a real-time equipment image, computes the angular deviation by feature matching between the real-time image and the template image, and then autonomously controls the pan-tilt to compensate the angle so that the target device is placed at the center of the camera's field of view.
It can be seen that the acquisition quality of the equipment image and the accuracy of feature matching between the real-time image and the template image determine the result of the whole visual servoing operation. The equipment template images are acquired by manually operating the robot, and images with soft lighting and good clarity are usually selected as templates. During actual robot inspection, however, several factors, including direct sunlight, reflections from the equipment, bright sky backgrounds, glare and errors in the robot's parking position, limit the precision of image feature matching.
A significantly more effective improvement is to take a panoramic (or wide-angle) image containing the equipment, use deep-learning object detection to find the region in the image that corresponds to the example equipment in the sample set, compute the angle between the center of the detected region and the center of the image, use this angle as the pan-tilt rotation compensation, rotate the pan-tilt to place the target device at the center of the camera's field of view, and increase the focal length to a position suitable for image recognition.
The current visual servoing solutions based on image matching mainly have the following problems:
1. A clear template image must be acquired for every target device to be inspected in order to perform image matching, which requires a large amount of manual work;
2. The robot parameters used to acquire the template image are fixed and must be recalled during the patrol task. If the target device is far from the robot's observation point, the telephoto end of the camera is needed for acquisition; a slight error in the robot's observation pose then easily causes the target to drift out of the field of view, so that matching against the template image fails;
3. Image feature matching algorithms are affected by illumination intensity and by the sparsity of feature points, and matching results are also prone to error under scale changes, noise and rotation. All these factors limit the accuracy of the final visual servoing result; a more robust algorithm should be used to detect the target in the image.
For this purpose, we propose a visual servoing method for inspection robots based on object detection to solve the above problems.
Summary of the invention
The purpose of the present invention is to overcome the disadvantages of the prior art by proposing a visual servoing method for inspection robots based on object detection.
To achieve the above goal, the present invention adopts the following technical solution:
A visual servoing method for inspection robots based on object detection, comprising the following steps:
S1. Collect and prepare sample images of each equipment type; perform sample breeding (augmentation) on the equipment images according to the viewing angle, illumination, scale and other conditions observed during robot inspection; label each sample image with its equipment class; complete an annotated sample data set covering all classes; and feed the data into a deep-learning neural network framework to train an object detection network model for the equipment;
S2. Configure the robot's equipment inspection preset positions: for each target device, configure the observation point coordinates in the map; template images are no longer configured for the target devices;
S3. While executing the patrol task, the robot stops according to the preset map coordinates, reads the pose and camera parameters of the observation point, and acquires a wide-angle image containing the target device;
S4. Feed the acquired inspection image into the object detection network model and detect the target region in the image according to the device class of that point. If the target device is present in the image, compute the minimum enclosing rectangle of the target region contour, completing target localization; if the target device is not detected, feed the result back to the robot pose control module, which first corrects the robot's parking-position error, reloads the pose parameters and re-acquires a panoramic image to ensure that the target device is contained in it, then re-execute step S4;
S5. Compute the coordinates of the center of the enclosing rectangle of the detected target region in the image, and compute the horizontal and vertical pixel offsets from this point to the image center;
S6. Convert the image pixel offset into a pan-tilt rotation angle compensation according to the tangent relation between image distance and focal length in the camera's imaging model, convert this angle into pan-tilt rotation control parameters, and command the pan-tilt to rotate so that the target device is placed at the center of the image field of view;
S7. From the resolution of the inspection image and the width and height in pixels of the equipment-region enclosing rectangle detected in step S4, obtain a suitable magnification, and command the camera to adjust its focal length within the range of its zoom ratio so that the target device occupies the center of the inspection image and fills a suitable area, allowing details of the equipment's operating state to be recognized;
S8. The visual servoing function call at this observation point is complete; the equipment image is acquired and the operating state recognized, completing the inspection task for one device. A high-level sketch of this loop is given below.
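The following pseudocode summarizes the flow of steps S3-S8 for a single observation point. It is an illustrative sketch only: the helper names (move_to, set_pose, detect, correct_parking_error, pixel_offset, offset_to_angles, choose_zoom, recognise_state) are hypothetical placeholders for the robot's navigation, pan-tilt, camera and recognition interfaces, which are not specified here.

def inspect_observation_point(robot, camera, pantilt, detector, point):
    # S3: stop at the preset map coordinate and load the stored observation pose
    robot.move_to(point.map_coord)
    pantilt.set_pose(point.pose)
    camera.set_params(point.camera_params)

    while True:
        image = camera.capture_wide_angle()                 # S3: wide-angle frame
        box = detector.detect(image, point.device_class)    # S4: object detection
        if box is not None:
            break
        robot.correct_parking_error()                       # S4: target missing, correct pose
        pantilt.set_pose(point.pose)                        # and re-acquire the panorama

    dx, dy = pixel_offset(box, image)                       # S5: offset from the image centre
    pan, tilt = offset_to_angles(dx, dy, camera)            # S6: pixel offset to pan-tilt angles
    pantilt.rotate_by(pan, tilt)                            # S6: centre the target

    camera.set_focal_length(choose_zoom(box, image, camera))  # S7: zoom in on the target
    final_image = camera.capture()                          # S8: image for state recognition
    return recognise_state(final_image, point.device_class)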
Preferably, step S1 is universal: the generated object detection network model can be shared and loaded by robots at multiple different inspection scenes, and is not dedicated to a single inspection scene.
Preferably, in S1 the sample images are not limited to the robot's inspection scene; similar image resources from the Internet can also be used.
Preferably, the sample breeding method in S1 includes: generating sample images with three-dimensional affine transformations at different angles within a limited deflection-angle range; generating sample images with different brightness within a limited image-brightness range; generating sample images with scale differences within a limited scale range; and generating sample images superimposed with various kinds of noise within the range allowed by image noise.
Preferably, in S1 the sample data include the image to be processed, the target position and the class information.
Preferably, in S2 the observation point coordinates are the robot pan-tilt rotation angle parameters and the camera parameters applicable to image acquisition.
The present invention deploys a deep-learning algorithm in the vision module of the inspection robot so that the target device can be located in the panoramic image acquired in real time; on this basis the angular deviation from the center of the field of view is computed and the pan-tilt angle is adjusted precisely. This effectively solves the limitation of robot image acquisition in previous methods and the patrol-task failures caused by errors in the visual servoing function, frees a large amount of manual configuration work, and improves the efficiency and quality of robot inspection.
The present invention applies deep learning to the inspection robot and makes it more intelligent: object detection and localization are performed on the acquired images, improving the accuracy with which the inspection robot acquires equipment images. By using an advanced deep-learning object detection algorithm to locate the target device in wide-angle or panoramic images, the success rate of target detection in acquired images is significantly higher than before; the image acquisition accuracy after the visual servoing function executes can reach 99.5% or more, and the robot can observe details of the equipment's operating state at long range with a large focal length, achieving full coverage of diverse environments and devices during inspection.
The present invention gives the robot the ability to make decisions autonomously both in the image-configuration phase and in the patrol-task execution phase, raises its level of intelligence, frees operators from tedious configuration work and saves manual labor. The robot can carry visible-light and infrared thermal cameras and use the visual servoing function described in the present invention; once implemented, the robot can execute 24-hour, all-weather, multi-scene equipment inspection tasks and safeguard the safe operation of equipment.
Description of the drawings
Fig. 1 is a diagram of the deep-learning network model trained for target equipment detection in the present invention;
Fig. 2 is a flow chart of the visual servoing function applied by the inspection robot when executing a patrol task in the present invention;
Fig. 3 is a schematic diagram of the calculation of the angle by which the target device deviates from the image center, involved in step S6 of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings; obviously, the described embodiments are only some of the embodiments of the present invention, not all of them.
Referring to Figs. 1-3, a visual servoing method for inspection robots based on object detection comprises the following steps:
S1. Collect and prepare sample images of each equipment type; the sample images are not limited to the robot's inspection scene, and similar image resources from the Internet can also be used. Since the same type of equipment may exist at multiple positions in the scene, and the viewing angle, illumination, scale and other conditions observed during robot inspection differ, sample breeding (augmentation) of the equipment images is required. The breeding method may include: generating sample images with three-dimensional affine transformations at different angles within a limited deflection-angle range; generating sample images with different brightness within a limited image-brightness range; generating sample images with scale differences within a limited scale range; and generating sample images superimposed with various kinds of noise within the range allowed by image noise. Each sample image is labeled with its equipment class, which completes an annotated sample data set covering all classes; the sample data include the image to be processed, the target position and the class information. The data are fed into a deep-learning neural network framework to train an object detection network model for the equipment (a code sketch of these breeding operations is given below);
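By way of illustration, the breeding operations described above can be realized with standard image operations. The sketch below uses OpenCV and NumPy; the parameter ranges are example values, an in-plane rotation stands in for the three-dimensional affine transformations, and in practice the annotated target boxes must be transformed with the same geometry as the image.

import cv2
import numpy as np

def breed_sample(image, max_angle=15.0, brightness_range=(0.7, 1.3),
                 scale_range=(0.8, 1.2), noise_sigma=8.0):
    """Generate one augmented variant of an equipment sample image."""
    h, w = image.shape[:2]

    # Affine (rotation) transform within the limited deflection-angle range
    angle = np.random.uniform(-max_angle, max_angle)
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    out = cv2.warpAffine(image, m, (w, h), borderMode=cv2.BORDER_REPLICATE)

    # Brightness variation within the limited range
    gain = np.random.uniform(*brightness_range)
    out = cv2.convertScaleAbs(out, alpha=gain, beta=0)

    # Scale variation within the limited range
    scale = np.random.uniform(*scale_range)
    out = cv2.resize(out, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)

    # Additive Gaussian noise within the allowed noise level
    noise = np.random.normal(0.0, noise_sigma, out.shape)
    return np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)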
S2. Configure the robot's equipment inspection preset positions: for each target device, configure in the map the observation point coordinates, the robot pan-tilt rotation angle parameters, the applicable camera parameters for image acquisition, and so on; template images are no longer configured for the target devices;
S3. While executing the patrol task, the robot stops according to the preset map coordinates, reads the pose and camera parameters of the observation point, and acquires a wide-angle image containing the target device;
S4. Feed the acquired inspection image into the object detection network model and detect the target region in the image according to the device class of that point. If the target device is present in the image, compute the minimum enclosing rectangle of the target region contour, completing target localization; if the target device is not detected, feed the result back to the robot pose control module, which first corrects the robot's parking-position error, reloads the pose parameters and re-acquires a panoramic image to ensure that the target device is contained in it, then re-execute step S4 (a sketch of this localization step is given below);
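The localization logic of step S4 can be sketched as follows. The detector interface is an assumption (any trained detection or segmentation model returning class, confidence and contour points could fill it); the enclosing rectangle is computed with OpenCV, and a None result is what is fed back to the pose control module.

import cv2
import numpy as np

def locate_target(detector, image, expected_class, score_threshold=0.5):
    """Return the enclosing rectangle (x, y, w, h) of the expected device, or None."""
    detections = detector.detect(image)   # assumed: list of (class_name, score, contour)
    candidates = [d for d in detections
                  if d[0] == expected_class and d[1] >= score_threshold]
    if not candidates:
        return None                       # target not found: trigger pose correction

    # Keep the highest-scoring candidate and compute the minimum axis-aligned
    # rectangle enclosing its contour.
    _, _, contour = max(candidates, key=lambda d: d[1])
    return cv2.boundingRect(np.asarray(contour, dtype=np.int32))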
S5. Compute the coordinates of the center of the enclosing rectangle of the detected target region in the image, and compute the horizontal and vertical pixel offsets from this point to the image center;
S6. Convert the image pixel offset into a pan-tilt rotation angle compensation according to the tangent relation between image distance and focal length in the camera's imaging model, convert this angle into pan-tilt rotation control parameters, and command the pan-tilt to rotate so that the target device is placed at the center of the image field of view;
S7. From the resolution of the inspection image and the width and height in pixels of the equipment-region enclosing rectangle detected in step S4, obtain a suitable magnification, and command the camera to adjust its focal length within the range of its zoom ratio so that the target device occupies the center of the inspection image and fills a suitable area, allowing details of the equipment's operating state to be recognized;
S8. The visual servoing function call at this observation point is complete; the equipment image is acquired and the operating state recognized, completing the inspection task for one device.
The pixel displacement deviation between the center coordinates of the target device's minimum enclosing rectangle and the image center in step S5 is computed as:
Pix_ofs(H, V) = C_dev - C_img        Formula (1)
wherein Pix_ofs (pixel offset) denotes the offset distance in pixels, with (H) the horizontal direction and (V) the vertical direction; C_dev (center of device) denotes the image coordinates of the center of the minimum enclosing rectangle of the equipment region located by the object detection algorithm; and C_img (center of image) denotes the coordinates of the center of the acquired image.
In step S6, the angle by which the center of the target equipment region deviates from the image center is computed from the tangent relation between image distance and focal length in the camera's imaging model:
Ang_ofs(H, V) = arctan(Sol_ofs(H, V) / f_now)        Formula (2)
wherein
Sol_ofs(H, V) = Pix_ofs(H, V) × solution        Formula (3)
Ang_ofs (angle offset) denotes the angle between the center of the target equipment region and the image center, decomposed into a horizontal (H) angle and a vertical (V) angle; Sol_ofs (solution offset) denotes the physical offset on the imaging sensor corresponding to the pixel offset; solution denotes the actual physical size occupied by one pixel on the imaging sensor; and f_now is the camera focal length set when the image was acquired.
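Formulas (1)-(3) translate directly into a short routine. In the sketch below, pixel_pitch_mm plays the role of the "solution" term (the physical size of one pixel on the imaging sensor) and focal_mm the role of f_now; both values would come from the camera's specification or SDK and are assumptions of this example.

import math

def pantilt_compensation(dev_center, img_center, pixel_pitch_mm, focal_mm):
    """Horizontal and vertical compensation angles (degrees) from formulas (1)-(3)."""
    # Formula (1): pixel offsets in the horizontal (H) and vertical (V) directions
    pix_ofs_h = dev_center[0] - img_center[0]
    pix_ofs_v = dev_center[1] - img_center[1]

    # Formula (3): physical offsets on the imaging sensor
    sol_ofs_h = pix_ofs_h * pixel_pitch_mm
    sol_ofs_v = pix_ofs_v * pixel_pitch_mm

    # Formula (2): offset angles from the tangent relation with the current focal length
    ang_h = math.degrees(math.atan(sol_ofs_h / focal_mm))
    ang_v = math.degrees(math.atan(sol_ofs_v / focal_mm))
    return ang_h, ang_v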
In step S7, after the pan-tilt has corrected the offset angle computed in step S6, the target device occupies the center of the field of view. Because the image used by the visual servoing function to correct the pan-tilt angle is shot at the wide-angle end of the camera's focal range, the region occupied by the target device in the image is too small to be used directly for state recognition. A suitable magnification must therefore be computed in this step so that the target device occupies an appropriate proportion of the image, which is more favorable to the pattern recognition algorithm. The calculation formula is as follows:
R_t = min(H_img / H_dev, W_img / W_dev, f_max / f_now)        Formula (4)
wherein R_t denotes the adjustable camera magnification, taken as the minimum of three reference values: the ratio of the image height H_img (height of image) to the height H_dev (height of device) of the minimum enclosing rectangle of the target equipment region; the ratio of the image width W_img (width of image) to the width W_dev (width of device) of the minimum enclosing rectangle of the target equipment region; and the ratio of the camera's maximum focal length f_max to the current focal length. Once the allowable magnification is obtained, the camera focal length is adjusted according to the target magnification with the following formula:
f_next = f_now × R_t        Formula (5)
wherein f_next denotes the camera focal length that will be set.
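Formulas (4) and (5) can likewise be written as a short function. The numeric example in the comment assumes a hypothetical 1920x1080 frame, a detected rectangle of 120x90 pixels, and a 6-60 mm zoom lens.

def zoom_for_target(img_size, box_size, f_now, f_max):
    """Magnification R_t and next focal length f_next from formulas (4) and (5)."""
    w_img, h_img = img_size          # image resolution in pixels
    w_dev, h_dev = box_size          # enclosing rectangle of the device in pixels

    # Formula (4): R_t = min(H_img / H_dev, W_img / W_dev, f_max / f_now)
    r_t = min(h_img / h_dev, w_img / w_dev, f_max / f_now)

    # Formula (5): focal length that will be set
    f_next = f_now * r_t
    return r_t, f_next

# Example: zoom_for_target((1920, 1080), (120, 90), 6.0, 60.0)
# min(1080/90, 1920/120, 60/6) = min(12, 16, 10) = 10, so f_next = 60 mm.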
By increasing the focal length, the target device can be imaged centered and at the largest usable size in the camera's field of view, providing good image conditions for the subsequent operating-state recognition function. In summary: the equipment region is located by deep-learning object detection; by computing the angle between the center of the equipment region and the image center, the pan-tilt is moved to place the target device at the center of the camera's field of view; and by magnifying the camera focal length by an appropriate ratio, a clear image of the target device can be acquired for pattern recognition.
The above are only preferred embodiments of the present invention, but the scope of protection of the present invention is not limited thereto; any change or equivalent substitution made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solution and inventive concept of the present invention, shall be covered by the protection scope of the present invention.

Claims (6)

1. A visual servoing method for inspection robots based on object detection, characterized by comprising the following steps:
S1. collecting and preparing sample images of each equipment type; performing sample breeding (augmentation) on the equipment images according to the viewing angle, illumination, scale and other conditions observed during robot inspection; labeling each sample image with its equipment class; completing an annotated sample data set covering all classes; and feeding the data into a deep-learning neural network framework to train an object detection network model for the equipment;
S2. configuring the robot's equipment inspection preset positions: configuring, for each target device, the observation point coordinates in the map, without configuring template images for the target devices any longer;
S3. during execution of a patrol task, the robot stopping according to the preset map coordinates, reading the pose and camera parameters of the observation point, and acquiring a wide-angle image containing the target device;
S4. feeding the acquired inspection image into the object detection network model and detecting the target region in the image according to the device class of that point; if the target device is present in the image, computing the minimum enclosing rectangle of the target region contour, completing target localization; if the target device is not detected in the image, feeding the result back to the robot pose control module, which first corrects the robot's parking-position error, reloads the pose parameters and re-acquires a panoramic image to ensure that the target device is contained in it, and then re-executing step S4;
S5. computing the coordinates of the center of the enclosing rectangle of the detected target region in the image, and computing the horizontal and vertical pixel offsets from this point to the image center;
S6. converting the image pixel offset into a pan-tilt rotation angle compensation according to the tangent relation between image distance and focal length in the camera's imaging model, converting this angle into pan-tilt rotation control parameters, and commanding the pan-tilt to rotate so that the target device is placed at the center of the image field of view;
S7. obtaining a suitable magnification from the resolution of the inspection image and the width and height in pixels of the equipment-region enclosing rectangle detected in step S4, and commanding the camera to adjust its focal length within the range of its zoom ratio so that the target device occupies the center of the inspection image and fills a suitable area, allowing details of the equipment's operating state to be recognized;
S8. completing the visual servoing function call at the observation point, acquiring the equipment image and recognizing the operating state, thereby completing the inspection task for one device.
2. The visual servoing method for inspection robots based on object detection according to claim 1, characterized in that: step S1 is universal, and the generated object detection network model can be shared and loaded by robots at multiple different inspection scenes rather than being dedicated to a single inspection scene.
3. The visual servoing method for inspection robots based on object detection according to claim 1, characterized in that: in S1 the sample images are not limited to the robot's inspection scene, and similar image resources from the Internet can also be used.
4. The visual servoing method for inspection robots based on object detection according to claim 1, characterized in that: the sample breeding method in S1 includes: generating sample images with three-dimensional affine transformations at different angles within a limited deflection-angle range; generating sample images with different brightness within a limited image-brightness range; generating sample images with scale differences within a limited scale range; and generating sample images superimposed with various kinds of noise within the range allowed by image noise.
5. The visual servoing method for inspection robots based on object detection according to claim 1, characterized in that: in S1 the sample data comprise the image to be processed, the target position and the class information.
6. The visual servoing method for inspection robots based on object detection according to claim 1, characterized in that: in S2 the observation point coordinates are the robot pan-tilt rotation angle parameters and the camera parameters applicable to image acquisition.
CN201910552462.7A 2019-06-25 2019-06-25 A visual servoing method for inspection robots based on object detection Pending CN110142785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910552462.7A CN110142785A (en) 2019-06-25 2019-06-25 A visual servoing method for inspection robots based on object detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910552462.7A CN110142785A (en) 2019-06-25 2019-06-25 A visual servoing method for inspection robots based on object detection

Publications (1)

Publication Number Publication Date
CN110142785A true CN110142785A (en) 2019-08-20

Family

ID=67596578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910552462.7A Pending CN110142785A (en) 2019-06-25 2019-06-25 A visual servoing method for inspection robots based on object detection

Country Status (1)

Country Link
CN (1) CN110142785A (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674724A (en) * 2019-09-20 2020-01-10 武汉大学 Robot target identification method and system based on active strategy and image sensor
CN110909653A (en) * 2019-11-18 2020-03-24 南京七宝机器人技术有限公司 Method for automatically calibrating screen cabinet of distribution room by indoor robot
CN110991360A (en) * 2019-12-06 2020-04-10 合肥科大智能机器人技术有限公司 Robot inspection point location intelligent configuration method based on visual algorithm
CN110989344A (en) * 2019-11-27 2020-04-10 云南电网有限责任公司电力科学研究院 Automatic adjustment method and system for preset parameters of inspection robot
CN111015676A (en) * 2019-12-16 2020-04-17 中国科学院深圳先进技术研究院 Grabbing learning control method and system based on hands-free eye calibration, robot and medium
CN111015660A (en) * 2019-12-24 2020-04-17 江苏生益特种材料有限公司 Use method of CCL (CCL) laminating production robot vision system
CN111161446A (en) * 2020-01-10 2020-05-15 浙江大学 Image acquisition method of inspection robot
CN111272148A (en) * 2020-01-20 2020-06-12 江苏方天电力技术有限公司 Unmanned aerial vehicle autonomous inspection self-adaptive imaging quality optimization method for power transmission line
CN111414012A (en) * 2020-04-08 2020-07-14 深圳市千乘机器人有限公司 Region retrieval and holder correction method for inspection robot
CN111611989A (en) * 2020-05-22 2020-09-01 四川智动木牛智能科技有限公司 Multi-target accurate positioning identification method based on autonomous robot
CN111633660A (en) * 2020-06-15 2020-09-08 吴洪婷 Intelligent inspection robot
CN111745652A (en) * 2020-06-24 2020-10-09 杭州安森智能信息技术有限公司 Robot intelligent task management method and system
CN111832760A (en) * 2020-07-14 2020-10-27 深圳市法本信息技术股份有限公司 Automatic inspection method for well lid based on visual algorithm
CN111931832A (en) * 2020-07-30 2020-11-13 国网智能科技股份有限公司 Optimal data acquisition method and system for substation inspection equipment
CN112097913A (en) * 2020-05-22 2020-12-18 漳州华康信息科技有限公司 Infrared temperature measuring device of thing networking of face identification laser automatic tracking
CN112256001A (en) * 2020-09-29 2021-01-22 华南理工大学 Visual servo control method for mobile robot under visual angle constraint
CN112947550A (en) * 2021-01-29 2021-06-11 齐鲁工业大学 Illegal aircraft striking method based on visual servo and robot
CN112995519A (en) * 2021-03-26 2021-06-18 精英数智科技股份有限公司 Camera self-adaptive adjustment method and device for water detection monitoring
CN113324998A (en) * 2021-05-13 2021-08-31 常州博康特材科技有限公司 Production quality inspection supervision system for titanium alloy bars
CN113452912A (en) * 2021-06-25 2021-09-28 山东新一代信息产业技术研究院有限公司 Pan-tilt camera control method, device, equipment and medium for inspection robot
CN113741413A (en) * 2020-05-29 2021-12-03 广州极飞科技股份有限公司 Operation method of unmanned equipment, unmanned equipment and storage medium
CN113748827A (en) * 2020-06-01 2021-12-07 上海山科机器人有限公司 Signal station for autonomous working equipment, autonomous working equipment and system
CN113778091A (en) * 2021-09-13 2021-12-10 华能息烽风力发电有限公司 Method for inspecting equipment of wind power plant booster station
CN113920612A (en) * 2021-10-13 2022-01-11 国网山西省电力公司输电检修分公司 Intelligent drilling and crossing inspection device and method
CN113965698A (en) * 2021-11-12 2022-01-21 白银银珠电力(集团)有限责任公司 Monitoring image calibration processing method, device and system for fire-fighting Internet of things
CN114311023A (en) * 2020-09-29 2022-04-12 中国科学院沈阳自动化研究所 Service-robot-based visual function detection method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015085458A (en) * 2013-10-31 2015-05-07 セイコーエプソン株式会社 Robot control device, robot system and robot
CN106125744A (en) * 2016-06-22 2016-11-16 山东鲁能智能技术有限公司 The Intelligent Mobile Robot cloud platform control method of view-based access control model servo
CN107680195A (en) * 2017-11-13 2018-02-09 国网内蒙古东部电力有限公司 A kind of transformer station intelligent robot inspection Computer Aided Analysis System and method
CN107729808A (en) * 2017-09-08 2018-02-23 国网山东省电力公司电力科学研究院 A kind of image intelligent acquisition system and method for power transmission line unmanned machine inspection
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method
CN108189043A (en) * 2018-01-10 2018-06-22 北京飞鸿云际科技有限公司 A kind of method for inspecting and crusing robot system applied to high ferro computer room
CN108229561A (en) * 2018-01-03 2018-06-29 北京先见科技有限公司 Particle product defect detection method based on deep learning
CN109583425A (en) * 2018-12-21 2019-04-05 西安电子科技大学 A kind of integrated recognition methods of the remote sensing images ship based on deep learning
CN109886359A (en) * 2019-03-25 2019-06-14 西安电子科技大学 Small target detecting method and detection model based on convolutional neural networks

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015085458A (en) * 2013-10-31 2015-05-07 セイコーエプソン株式会社 Robot control device, robot system and robot
CN106125744A (en) * 2016-06-22 2016-11-16 山东鲁能智能技术有限公司 The Intelligent Mobile Robot cloud platform control method of view-based access control model servo
CN107729808A (en) * 2017-09-08 2018-02-23 国网山东省电力公司电力科学研究院 A kind of image intelligent acquisition system and method for power transmission line unmanned machine inspection
CN107680195A (en) * 2017-11-13 2018-02-09 国网内蒙古东部电力有限公司 A kind of transformer station intelligent robot inspection Computer Aided Analysis System and method
CN108229561A (en) * 2018-01-03 2018-06-29 北京先见科技有限公司 Particle product defect detection method based on deep learning
CN108189043A (en) * 2018-01-10 2018-06-22 北京飞鸿云际科技有限公司 A kind of method for inspecting and crusing robot system applied to high ferro computer room
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method
CN109583425A (en) * 2018-12-21 2019-04-05 西安电子科技大学 A kind of integrated recognition methods of the remote sensing images ship based on deep learning
CN109886359A (en) * 2019-03-25 2019-06-14 西安电子科技大学 Small target detecting method and detection model based on convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何东健 et al.: "Digital Image Processing" (数字图像处理) *
朱庆棠 et al.: "Biofabrication and Clinical Evaluation of Materials for Repairing Peripheral Nerve Defects" (周围神经缺损修复材料的生物制造与临床评估), 31 August 2018 *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674724A (en) * 2019-09-20 2020-01-10 武汉大学 Robot target identification method and system based on active strategy and image sensor
CN110674724B (en) * 2019-09-20 2022-07-15 武汉大学 Robot target identification method and system based on active strategy and image sensor
CN110909653B (en) * 2019-11-18 2022-03-15 南京七宝机器人技术有限公司 Method for automatically calibrating screen cabinet of distribution room by indoor robot
CN110909653A (en) * 2019-11-18 2020-03-24 南京七宝机器人技术有限公司 Method for automatically calibrating screen cabinet of distribution room by indoor robot
CN110989344A (en) * 2019-11-27 2020-04-10 云南电网有限责任公司电力科学研究院 Automatic adjustment method and system for preset parameters of inspection robot
CN110991360A (en) * 2019-12-06 2020-04-10 合肥科大智能机器人技术有限公司 Robot inspection point location intelligent configuration method based on visual algorithm
CN110991360B (en) * 2019-12-06 2023-07-04 合肥科大智能机器人技术有限公司 Robot inspection point position intelligent configuration method based on visual algorithm
CN111015676A (en) * 2019-12-16 2020-04-17 中国科学院深圳先进技术研究院 Grabbing learning control method and system based on hands-free eye calibration, robot and medium
CN111015660A (en) * 2019-12-24 2020-04-17 江苏生益特种材料有限公司 Use method of CCL (CCL) laminating production robot vision system
CN111161446A (en) * 2020-01-10 2020-05-15 浙江大学 Image acquisition method of inspection robot
CN111272148A (en) * 2020-01-20 2020-06-12 江苏方天电力技术有限公司 Unmanned aerial vehicle autonomous inspection self-adaptive imaging quality optimization method for power transmission line
CN111414012A (en) * 2020-04-08 2020-07-14 深圳市千乘机器人有限公司 Region retrieval and holder correction method for inspection robot
CN111611989A (en) * 2020-05-22 2020-09-01 四川智动木牛智能科技有限公司 Multi-target accurate positioning identification method based on autonomous robot
CN112097913A (en) * 2020-05-22 2020-12-18 漳州华康信息科技有限公司 Infrared temperature measuring device of thing networking of face identification laser automatic tracking
CN113741413A (en) * 2020-05-29 2021-12-03 广州极飞科技股份有限公司 Operation method of unmanned equipment, unmanned equipment and storage medium
CN113741413B (en) * 2020-05-29 2022-11-08 广州极飞科技股份有限公司 Operation method of unmanned equipment, unmanned equipment and storage medium
CN113748827B (en) * 2020-06-01 2022-12-06 上海山科机器人有限公司 Signal station for autonomous working equipment, autonomous working equipment and system
CN113748827A (en) * 2020-06-01 2021-12-07 上海山科机器人有限公司 Signal station for autonomous working equipment, autonomous working equipment and system
CN111633660A (en) * 2020-06-15 2020-09-08 吴洪婷 Intelligent inspection robot
CN111745652A (en) * 2020-06-24 2020-10-09 杭州安森智能信息技术有限公司 Robot intelligent task management method and system
CN111832760A (en) * 2020-07-14 2020-10-27 深圳市法本信息技术股份有限公司 Automatic inspection method for well lid based on visual algorithm
CN111832760B (en) * 2020-07-14 2023-09-29 深圳市法本信息技术股份有限公司 Automatic inspection method for well lid based on visual algorithm
CN111931832A (en) * 2020-07-30 2020-11-13 国网智能科技股份有限公司 Optimal data acquisition method and system for substation inspection equipment
CN114311023B (en) * 2020-09-29 2023-12-26 中国科学院沈阳自动化研究所 Visual function detection method based on service robot
CN114311023A (en) * 2020-09-29 2022-04-12 中国科学院沈阳自动化研究所 Service-robot-based visual function detection method
CN112256001A (en) * 2020-09-29 2021-01-22 华南理工大学 Visual servo control method for mobile robot under visual angle constraint
CN112947550A (en) * 2021-01-29 2021-06-11 齐鲁工业大学 Illegal aircraft striking method based on visual servo and robot
CN112995519B (en) * 2021-03-26 2024-03-01 精英数智科技股份有限公司 Camera self-adaptive adjustment method and device for water detection monitoring
CN112995519A (en) * 2021-03-26 2021-06-18 精英数智科技股份有限公司 Camera self-adaptive adjustment method and device for water detection monitoring
CN113324998A (en) * 2021-05-13 2021-08-31 常州博康特材科技有限公司 Production quality inspection supervision system for titanium alloy bars
CN113452912B (en) * 2021-06-25 2022-12-27 山东新一代信息产业技术研究院有限公司 Pan-tilt camera control method, device, equipment and medium for inspection robot
CN113452912A (en) * 2021-06-25 2021-09-28 山东新一代信息产业技术研究院有限公司 Pan-tilt camera control method, device, equipment and medium for inspection robot
CN113778091A (en) * 2021-09-13 2021-12-10 华能息烽风力发电有限公司 Method for inspecting equipment of wind power plant booster station
CN113920612A (en) * 2021-10-13 2022-01-11 国网山西省电力公司输电检修分公司 Intelligent drilling and crossing inspection device and method
CN113965698A (en) * 2021-11-12 2022-01-21 白银银珠电力(集团)有限责任公司 Monitoring image calibration processing method, device and system for fire-fighting Internet of things
CN113965698B (en) * 2021-11-12 2024-03-08 白银银珠电力(集团)有限责任公司 Monitoring image calibration processing method, device and system for fire-fighting Internet of things

Similar Documents

Publication Publication Date Title
CN110142785A (en) A visual servoing method for inspection robots based on object detection
CN109887040B (en) Moving target active sensing method and system for video monitoring
JP5586765B2 (en) Camera calibration result verification apparatus and method
CN110850723B (en) Fault diagnosis and positioning method based on transformer substation inspection robot system
CN109977813A (en) A kind of crusing robot object localization method based on deep learning frame
CN102917171B (en) Based on the small target auto-orientation method of pixel
CN105898107B (en) A kind of target object grasp shoot method and system
CN107357286A (en) Vision positioning guider and its method
CN106878687A (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN108447091A (en) Object localization method, device, electronic equipment and storage medium
CN108731587A (en) A kind of the unmanned plane dynamic target tracking and localization method of view-based access control model
CN108195472B (en) Heat conduction panoramic imaging method based on track mobile robot
CN206611521U (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN107509055A (en) A kind of rotary panorama focus identification optronic tracker and its implementation
JP5079547B2 (en) Camera calibration apparatus and camera calibration method
CN109739239A (en) A kind of planing method of the uninterrupted Meter recognition for crusing robot
CN110334701A (en) Collecting method based on deep learning and multi-vision visual under the twin environment of number
CN109712188A (en) A kind of method for tracking target and device
CN112307912A (en) Method and system for determining personnel track based on camera
US11703820B2 (en) Monitoring management and control system based on panoramic big data
CN115717867A (en) Bridge deformation measurement method based on airborne double cameras and target tracking
KR20140114594A (en) Auto-Camera Calibration Method Based on Human Object Tracking
CN110276379A (en) A kind of the condition of a disaster information rapid extracting method based on video image analysis
CN107767366B (en) A kind of transmission line of electricity approximating method and device
CN109883400A (en) Fixed station Automatic Targets and space-location method based on YOLO-SITCOL

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190820