CN106774208A - Swarm vision robot collaborative assembly method and model system - Google Patents

Swarm vision robot collaborative assembly method and model system

Info

Publication number
CN106774208A
Authority
CN
China
Prior art keywords
vision robot
robot
intelligent vision
feeding
control platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611209458.3A
Other languages
Chinese (zh)
Other versions
CN106774208B (en)
Inventor
韩九强 (Han Jiuqiang)
于洋 (Yu Yang)
郑辑光 (Zheng Jiguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201611209458.3A
Publication of CN106774208A
Application granted
Publication of CN106774208B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems, electric
    • G05B 19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B 19/41805 Total factory control, i.e. centrally controlling a plurality of machines, characterised by assembly
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Manipulator (AREA)
  • General Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Automatic Assembly (AREA)

Abstract

A swarm vision robot collaborative assembly method comprises a control platform, a vision robot workstation, and a wireless module that handles communication between the control platform and the vision robot workstation. From an input order, the control platform calculates the number and types of parts required and the number of intelligent vision robots to put to work, and it dynamically adjusts the number of intelligent vision robots working in different modules according to the actual working conditions. The invention also provides a model system for the method. Because the intelligent vision robots work cooperatively and are scheduled from the order information, a suitable program can be prepared for each demand without changing the hardware configuration or adding sensors, so flexibility is high and the multi-variety, small-batch production model can be satisfied. The vision sensors simplify operation; the robots work independently, and a failed robot can be repaired individually without affecting the operation of the whole system, so robustness is good. Production efficiency is greatly improved while equipment investment and floor space are saved.

Description

Swarm vision robot collaborative assembly method and model system
Technical field
The invention belongs to the technical field of industrial automation control, and more particularly relates to a swarm vision robot collaborative assembly method and model system.
Background technology
With rapid economic development and increasingly fierce competition, product replacement cycles keep shrinking and the demand for the multi-variety, small-batch production model grows ever stronger, drawing more attention to intelligent automated production lines. Although many existing automated production lines complete their tasks well, their cost is very high and each can perform only a single job; complex operations require several robots working in sequence, so that if any link fails the whole system is paralysed. Production efficiency is low, floor space and cost are large, and multiple sensors must cooperate to accomplish recognition; when the production object changes, the hardware must be replaced, which increases cost. Such lines are therefore very inflexible and can hardly adapt to the multi-variety, small-batch production model.
Summary of the invention
To overcome the above shortcomings of the prior art, the object of the present invention is to provide a swarm vision robot collaborative assembly method and model system. The method is practical, adapts to various production objects, offers flexible control and strong scalability, and needs little space; each part works independently, and a failed part can be repaired individually without affecting the operation of the whole system, so robustness is good and efficient, flexible production can be realized.
To achieve these goals, the technical solution adopted by the present invention is as follows:
A swarm vision robot collaborative assembly method comprises a control platform 1, a vision robot workstation 2, and a wireless module 3 that handles communication between the control platform 1 and the vision robot workstation 2, characterised in that the control platform 1 calculates, from an input order, the number and types of parts required and the number of intelligent vision robots to put to work, and dynamically adjusts the number of intelligent vision robots working in different modules according to the actual working conditions.
Each class of intelligent vision robot completes an independent task without interfering with the others; when a robot of some class fails, it can be repaired individually without affecting the operation of the whole system, giving good robustness. When the assembly object changes, the control platform 1 sends a control instruction to the vision robot workstation 2, instructing it to run the assembly program for that object; this satisfies the multi-variety, small-batch production model well and makes the system highly flexible.
The vision robot workstation 2 includes an annular conveyor 23, along which are arranged in sequence a feeding area 231, assembly areas 233 and 238, and a discharge area 235. An element storage area 22 is placed near the feeding area 231, and a product storage area 26 is placed near the discharge area 235. The feeding intelligent vision robot 21 of the feeding area 231 and the blanking intelligent vision robot 27 of the discharge area 235 both have a feeding module and a blanking module, and the control platform 1 dynamically adjusts the number of intelligent vision robots working in these two modules according to the real-time assembly situation; the assembling intelligent vision robots of the assembly areas have only an assembly module. Each intelligent vision robot is equipped with a wireless module 211 for communication between the control platform 1 and the robot and with a vision sensor 213 for path finding and element recognition. The collaborative assembly method includes four kinds of collaboration: between all intelligent vision robots and the start/stop of the conveyor 23; between the feeding intelligent vision robot 21 and the conveyor 23; between the assembling intelligent vision robots 24, 25, 28, 29 and the conveyor 23; and between the blanking intelligent vision robot 27 and the conveyor 23.
The process by which the control platform 1 dynamically adjusts the number of intelligent vision robots working in different modules according to the actual working conditions is:
The control platform 1 sends a start command to the vision robot workstation 2 and instructs it to run the program for the corresponding assembly object. All intelligent vision robots start work simultaneously. The feeding intelligent vision robot 21 recognizes its path from the image information returned by the vision sensor 213, moves to the element storage area 22, picks the required elements, then moves to the feeding area 231 and starts feeding. The assembling intelligent vision robots recognize the required elements 232 through vision sensor 242 and place finished products 236 on the conveyor 23. The blanking intelligent vision robot 27 recognizes qualified finished products 236 through vision sensor 273, grabs them off the conveyor 23, and places them in the product storage area 26. At every step, all intelligent vision robots send their job information to the control platform 1 through the wireless module 3; the control platform 1 integrates this information to obtain the number of elements on the conveyor, the number of finished products on the conveyor, and the cumulative number of qualified finished products, and from these it dynamically adjusts the number of intelligent vision robots and the number assigned to the feeding and blanking modules, so as to complete the assembly task better.
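The patent does not specify how an order is encoded or how the initial deployment is chosen, so the following Python sketch makes minimal assumptions: a fixed per-product bill of materials matching the four elements of the embodiment, and the six robots of Fig. 1 split one feeding / one blanking / four assembling, a split the platform later rebalances at run time. All names here are illustrative.

```python
# Illustrative sketch only: PRODUCT_BOM, the function name, and the
# initial 1/1/4 split are assumptions, not part of the patent.
PRODUCT_BOM = {"red_base": 1, "black_core": 1, "spring": 1, "blue_cap": 1}

def plan_from_order(qty_ordered: int, robots_available: int):
    """Compute required element counts and an initial robot deployment."""
    elements_needed = {k: v * qty_ordered for k, v in PRODUCT_BOM.items()}
    deployment = {
        "feeding": 1,       # robot 21
        "blanking": 1,      # robot 27
        "assembling": max(robots_available - 2, 1),  # robots 24, 25, 28, 29
    }
    return elements_needed, deployment

elements, deployment = plan_from_order(qty_ordered=50, robots_available=6)
print(elements, deployment)
```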
The feeding process of the feeding intelligent vision robot 21 is coordinated with the speed of the conveyor 23; the assembling intelligent vision robots 24, 25, 28, 29 likewise grab the required elements and assemble them in coordination with the conveyor speed; and the blanking process of the blanking intelligent vision robot 27 grabs finished products off the conveyor in coordination with its speed.
Each time the blanking intelligent vision robot 27 completes a finished-product blanking, it sends the blanking information to the control platform 1. If the finished product is qualified, the control platform increments the qualified finished product count 261 by 1 and the robot puts the product into its box 271; if it is unqualified, the blanking intelligent vision robot 27 grabs it and puts it into the waste area, and the control platform 1 notifies the feeding intelligent vision robot 21 to increment by 1 the quantity of each class of element 232, 234, 237, 239 to be sorted. When the counted quantity of qualified finished products 261 reaches the quantity required by the order, the control platform 1 sends a task-complete instruction to the vision robot workstation 2: the assembling intelligent vision robots clear the workpieces from the assembly positions 243, and the feeding intelligent vision robot 21 and blanking intelligent vision robot 27 clear the unassembled elements from the conveyor 23, return them to the raw-material storage area 22, then move to the designated places and send a cleaning-complete instruction to the control platform 1. The control platform 1 then sends a stop instruction; all intelligent vision robots enter standby and the conveyor 23 stops.
The present invention also provides a model system based on the swarm vision robot collaborative assembly method, characterised in that:
the control platform 1 is served by a computer;
a wireless transceiver serves as the wireless module 3;
and the vision robot workstation 2 mainly consists of a swarm of intelligent vision robots, the annular conveyor 23, the element storage area 22, and the product storage area 26.
In the vision robot workstation 2, the conveyor 23 is arranged at the centre. The regions at the two ends of the major axis of the conveyor 23 are the feeding area 231 and the discharge area 235 respectively, and the regions parallel to the two sides of the major axis are assembly area one 233 and assembly area two 238. The element storage area 22 is arranged near the feeding area 231, and the product storage area 26 is arranged near the discharge area 235. The feeding intelligent vision robot 21 of the feeding area 231 and the blanking intelligent vision robot 27 of the discharge area 235 both have a feeding module and a blanking module, and the control platform 1 dynamically adjusts the number of intelligent vision robots working in these two modules according to the real-time assembly situation; the assembling intelligent vision robots 24, 25, 28, 29 of assembly area one 233 and assembly area two 238 have only an assembly module. Each intelligent vision robot is equipped with a wireless module 211 for communication between the control platform 1 and the robot and with a vision sensor 213 for path finding and element recognition.
In the vision robot workstation 2, the elements to be assembled comprise a red base 221, a black core 222, a spring 223, and a blue cap 224. First, order information containing the element types and the expected finished-product quantity is input at the control platform 1; the control platform 1 then sends a start instruction through the wireless module 3 to the vision robot workstation 2 and runs the corresponding program, and all intelligent vision robots 21, 24, 25, 27, 28, 29 and the conveyor 23 enter the working state. The feeding intelligent vision robot 21 recognizes its path from the image information obtained by vision sensor 213 and moves to the element storage area 22; it then recognizes elements by extracting features such as colour, area, and circle radius from the image information, grabs the elements specified by the instruction from the control platform 1, moves to the feeding area 231, and starts feeding at the speed of the conveyor 23. The assembling intelligent vision robots 24, 25, 28, 29 perform template matching on the information returned by vision sensor 242, extracting colour, area, circle radius, and similar features in turn; they recognize the red base 221, black core 222, spring 223, and blue cap 224 in sequence, assemble them, and place the assembled finished product 236 on the conveyor 23. The blanking intelligent vision robot 27 then identifies finished products from the image information returned by vision sensor 273 by extracting features such as colour, area, and circle radius, and judges whether each is qualified: a qualified product is placed in the box 271, an unqualified one in the waste area, and the job information is sent to the control platform. The control platform counts the qualified finished products; when the count meets the order requirement, it notifies the vision robot workstation 2 to perform the cleaning task, and after the cleaning task is complete it sends a stop instruction to the vision robot workstation 2, whereupon all intelligent vision robots 21, 24, 25, 27, 28, 29 and the conveyor 23 enter standby.
Compared with the prior art, the swarm machine-vision intelligent collaborative assembly method and system of the invention use intelligent vision robots working in concert, with the feeding work planned from the order information. A program can be prepared for each customer's production object without changing the hardware configuration or adding sensors, so flexibility is high and the multi-variety, small-batch production model is well satisfied. Because vision sensors are introduced, operation is simplified and the robots work independently; a failed robot can be repaired individually without affecting the whole system, so robustness is good, production efficiency is greatly improved, and equipment investment and floor space are saved.
Brief description of the drawings
Fig. 1 is the architecture diagram of the swarm vision robot collaborative assembly model system of the invention.
Fig. 2 is the workflow diagram of the swarm vision robot collaborative assembly method of the invention.
Fig. 3 is the workpiece-recognition template-matching flow chart of the swarm vision robot collaborative assembly model system of the invention.
Specific embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings and examples.
As shown in Fig. 1, the swarm vision robot collaborative assembly model system of the invention includes the control platform 1, the vision robot workstation 2, and the wireless module 3 that handles communication between the control platform 1 and the vision robot workstation 2. The control platform 1 integrates the job information passed back by the vision robot workstation 2 and, based on the integrated job information, sends control commands to the vision robot workstation 2.
The vision robot workstation 2 includes the element storage area 22, the feeding area 231, assembly area one 233, assembly area two 238, the discharge area 235, and the product storage area 26. The four kinds of elements 221, 222, 223, 224 needed for assembly are placed in the element storage area 22. The feeding area 231 and the discharge area 235 are located at the two ends of the elliptical major axis of the conveyor 23, and assembly area one 233 and assembly area two 238 lie parallel to the two sides of the major axis. Six intelligent vision robots 21, 24, 25, 27, 28, 29 are laid out around the conveyor 23: one feeding intelligent vision robot 21, four assembling intelligent vision robots 24, 25, 28, 29, and one blanking intelligent vision robot 27. Each intelligent vision robot is equipped with a wireless module 211, 241, 272 for communication with the control platform 1 and a vision sensor 213, 242, 273. The feeding-area intelligent vision robot 21 and the discharge-area intelligent vision robot 27 are mobile intelligent vision robots, since they must grab raw workpieces 221-224 from the element storage area 22 and carry processed finished products 261 to the product storage area 26. The programs of the feeding intelligent vision robot 21 and blanking intelligent vision robot 27 each contain both a feeding program module and a blanking program module; the control platform 1 integrates the job information passed back by the intelligent vision robots and dynamically adjusts which program module each of them runs, so as to complete the assembly work better.
As shown in Fig. 2, the workflow of the swarm vision robot collaborative assembly method comprises the following steps:
First, the order information is input at the control platform 1, and the control platform 1 determines from the order information which object is to be assembled and the number and types of elements needed.
Next, the control platform 1 sends a start command to the vision robot workstation 2 and makes it run the program for the corresponding assembly object; all intelligent vision robots 21, 24, 25, 27, 28, 29 and the conveyor 23 enter the working state simultaneously.
All intelligent vision robots 21, 24, 25, 27, 28, 29 cooperate according to the instructions of the control platform 1. The feeding intelligent vision robot 21 recognizes its path from the image information returned by vision sensor 213 and moves to the element storage area 22, performs template matching on features such as colour, area, and circle radius extracted from the image, recognizes the required elements 221, 222, 223, 224, picks them into its box 212, and counts them. When the box 212 is full, the feeding intelligent vision robot 21 moves to the feeding area 231 guided by vision sensor 213 and feeds in coordination with the speed of the conveyor 23: the elements 214 in the box 212 are placed on the conveyor 23 one by one, and the type of each placed workpiece is reported to the control platform 1 through the wireless module 211. When the box 212 is empty, the feeding intelligent vision robot 21 moves back to the element storage area 22 guided by vision sensor 213, resumes picking, then moves to the feeding area 231 again and feeds in coordination with the conveyor 23, repeating this cycle until it receives the task-complete instruction from the control platform 1. On receiving that instruction, it moves to the feeding area 231 of the conveyor 23, clears the workpieces from the conveyor, returns them to the element storage area 22, then moves back to the feeding area 231 guided by the image information from vision sensor 213 and notifies the control platform 1 through wireless module 211 that the cleaning task is complete. After receiving the stop command from the control platform 1, the feeding intelligent vision robot 21 enters standby.
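As a rough illustration of the feeding cycle just described, the toy Python simulation below tracks only counts and log messages; the box capacity, element names, and class structure are assumptions for illustration, not part of the patent.

```python
from typing import List

class FeedingRobotSim:
    """Toy model of feeding robot 21's pick/feed cycle (illustrative)."""

    def __init__(self, box_capacity: int = 8):
        self.box: List[str] = []        # box 212
        self.capacity = box_capacity
        self.log: List[str] = []

    def run(self, storage: List[str], quota: int) -> None:
        placed = 0
        while placed < quota and (storage or self.box):
            # pick at the element storage area 22 until box 212 is full
            while storage and len(self.box) < self.capacity:
                self.box.append(storage.pop())
            # feed at area 231 one element at a time, reporting each type
            while self.box and placed < quota:
                self.log.append(f"placed {self.box.pop()}")  # via module 211
                placed += 1
        self.log.append("cleaning complete; standby")

sim = FeedingRobotSim()
sim.run(storage=["red_base", "black_core", "spring", "blue_cap"] * 5, quota=12)
print(sim.log[-3:])
```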
The assembling intelligent vision robots 24, 25, 28, 29 complete the assembly work in a fixed order: first grab the red base 221, then grab the black core 222 and the spring 223 and place them into the red base 221 in turn, and finally snap on the blue cap 224 and tighten it, completing the assembly of one finished product 236. The assembling intelligent vision robots 24, 25, 28, 29 receive, through vision sensor 242, images of the workpieces 232, 234, 237, 239 moving on the conveyor 23, extract features such as colour, area, circle radius, and mean grey level, and perform template matching to judge whether a workpiece is the one needed at the current assembly step. If it is, a grab command is issued, the workpiece is placed at the designated assembly position 243, the control platform 1 is informed of the grabbed workpiece through wireless module 241, and recognition of the workpiece needed for the next step begins; if it is not, no command is issued and the robot waits until the required workpiece appears. When an assembling intelligent vision robot of assembly area one 233 or assembly area two 238 has assembled a finished product 236, it issues a grab command, places the product on the conveyor 23, notifies the control platform 1 that one finished product 236 has been placed on the conveyor 23, and then begins the next round of assembly. This repeats until the control platform 1 sends the task-complete instruction, whereupon assembly stops: if no elements remain in assembly area one 233 or assembly area two 238, their intelligent vision robots 24, 25, 28, 29 enter standby; if elements remain, the robots place them on the conveyor 23 and then enter standby.
The blanking intelligent vision robot 27 uses vision sensor 273 to check whether a finished product 236 is on the conveyor 23. When one appears, it performs template matching on features such as colour, area, and circle radius extracted from the image of the workpiece moving on the conveyor 23 returned by vision sensor 273, first recognizing whether the object is a finished product and, if so, inspecting the surface for cracks to judge whether it is qualified. A qualified product is grabbed into the box 271 of the blanking intelligent vision robot 27 and the control platform 1 is notified to add 1 to the qualified finished product count 261; an unqualified product is grabbed into the waste area and the failure is reported to the control platform 1, which adds 1 to the required quantity of each element 221-224 and notifies the feeding intelligent vision robot 21 to add 1 to the required quantity of each element 221, 222, 223, 224. When the box 271 of the blanking intelligent vision robot 27 is full, the robot moves to the product storage area 26 guided by the image information returned by vision sensor 273, discharges all finished products 261 into the product storage area 26, and moves back to the discharge area 235 of the conveyor 23. This cycle repeats until the task-complete instruction from the control platform 1 is received; if finished products remain in the box 271 they are discharged to the product storage area 26, otherwise the robot moves to the discharge area 235 of the conveyor 23 and enters standby.
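The blanking decision just described reduces to a small bookkeeping routine; in the sketch below the counter class and element names are illustrative assumptions (the patent prescribes only the increments).

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class PlatformCounters:
    qualified_total: int = 0                      # count 261 on the platform
    elements_needed: Dict[str, int] = field(default_factory=dict)

def handle_finished_product(qualified: bool, state: PlatformCounters) -> str:
    """Route one inspected product and update the platform's counters."""
    if qualified:
        state.qualified_total += 1
        return "box_271"
    # reject: one more of every element must be fed and assembled again
    for element in ("red_base", "black_core", "spring", "blue_cap"):
        state.elements_needed[element] = state.elements_needed.get(element, 0) + 1
    return "waste_area"

counters = PlatformCounters()
print(handle_finished_product(True, counters), counters.qualified_total)
print(handle_finished_product(False, counters), counters.elements_needed)
```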
At every step, all intelligent vision robots 21, 24, 25, 27, 28, 29 send their job information to the control platform 1 through the wireless module 3. The control platform 1 integrates the information to obtain the number of elements on the conveyor, the number of finished products on the conveyor, and the cumulative number of qualified finished products, and from these it dynamically adjusts the number of intelligent vision robots and the number working in the feeding and blanking modules, so as to complete the assembly task better.
Finally, all intelligent vision robots 21, 24, 25, 27, 28, 29 send their job information to the control platform 1 through wireless communication 211, 241. The control platform 1 integrates the returned job information, compares it with the order information, and judges whether the assembly task is complete. If it is, a task-complete instruction is sent to the vision robot workstation 2; the intelligent robots of each region begin the cleaning work, send a cleaning-complete instruction to the control platform 1 when done, and move to the designated areas. The control platform 1 then sends a stop instruction to the vision robot workstation 2, all intelligent vision robots 21, 24, 25, 27, 28, 29 enter standby, and the conveyor 23 stops.
As shown in Fig. 3, during workpiece recognition an intelligent vision robot extracts information such as colour, pixel area, circle radius, and circle count from the image returned by its vision sensor and compares it with the templates in turn. It first judges whether the colour is red, the red pixel area is above a threshold, and the circle radius lies within the specified range; if so, the target in the image is the red base 221. If not, it judges whether the colour is black and the black area, mean grey level, and circle radius are within the specified ranges; if so, the target is the black core 222. If not, it judges whether the circle radius, circle count, and mean grey level are within the specified ranges; if so, the target is the spring 223. If not, it judges whether the colour is blue with the blue pixel area below a threshold and the circle radius below a threshold; if so, the target is the blue cap 224. If not, it judges whether the colour is blue with the blue pixel area above the threshold and the circle radius above the threshold; if so, the target is the finished product 236. Otherwise no template matches. The feeding intelligent vision robot 21 and the assembling intelligent vision robots 24, 25, 28, 29 must recognize the red base 221, black core 222, spring 223, and blue cap 224; the blanking intelligent vision robot 27 only needs to recognize the finished product 236.
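The cascade can be sketched in Python as below. The concrete threshold values are illustrative assumptions; the patent fixes only the order of the tests and the features each test uses.

```python
def classify(f: dict) -> str:
    """Template-matching cascade of Fig. 3 (thresholds are illustrative)."""
    if f["color"] == "red" and f["pixel_area"] > 5000 and 20 < f["radius"] < 60:
        return "red_base"          # element 221
    if (f["color"] == "black" and f["pixel_area"] > 3000
            and 40 < f["mean_gray"] < 90 and 10 < f["radius"] < 40):
        return "black_core"        # element 222
    if f["circles"] >= 3 and 5 < f["radius"] < 25 and f["mean_gray"] > 120:
        return "spring"            # element 223
    if f["color"] == "blue" and f["pixel_area"] < 4000 and f["radius"] < 30:
        return "blue_cap"          # element 224
    if f["color"] == "blue" and f["pixel_area"] > 4000 and f["radius"] > 30:
        return "finished_product"  # product 236
    return "no_match"

print(classify({"color": "red", "pixel_area": 8000, "radius": 30,
                "circles": 1, "mean_gray": 100}))  # -> red_base
```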
When extracting the features from an image, the colour can be judged directly from the RGB channels of the image, and the pixel area is obtained by checking whether each pixel falls within the specified RGB range and accumulating the pixels that do; the final accumulated sum is the pixel area. The circle radius and circle count are computed by circle detection, as follows:
The acquired image must first be pre-processed:
Median filtering is applied to the image. The median filter replaces each pixel with the median of the pixel values in a square neighbourhood centred on it, which removes noise while preserving the edge information of the image. For a two-dimensional image it is defined as:
y_ij = med{ x_(i+m),(j+n) | (m,n) ∈ A, (i,j) ∈ I² }
Binarization is applied to the image. Binarization simplifies the image, reduces the data volume, and highlights the contours of interest. Let m be a preset threshold, f(x, y) the grey value at pixel (x, y), and g(x, y) the resulting grey value; then:
g(x, y) = 255 if f(x, y) ≥ m, and g(x, y) = 0 if f(x, y) < m
Edge detection is performed on the image with the Canny operator. Canny edge detection computes the amplitude and direction of the gradient from first-order partial-derivative finite differences. Let f(x, y) be the image; its gradient is approximated by 2 × 2 first-difference expressions that give the two partial-derivative arrays f′x(x, y) and f′y(x, y):
f′x(x, y) ≈ Gx = [f(x+1, y) − f(x, y) + f(x+1, y+1) − f(x, y+1)]/2
f′y(x, y) ≈ Gy = [f(x, y+1) − f(x, y) + f(x+1, y+1) − f(x+1, y)]/2
Convolving the image with the corresponding 2 × 2 first-difference masks yields the amplitude and direction of the gradient:
M(x, y) = √(Gx² + Gy²), θ(x, y) = arctan(Gy/Gx)
To obtain reasonable edges, non-maximum suppression is applied to the gradient amplitude, and a double-threshold algorithm is used to detect and connect the real edges.
Dilation is applied to the edge image. Dilation convolves the image with a kernel and takes the maximum pixel value over the region covered by the kernel, so that the highlighted regions of the image grow; pixels are compensated and connected regions are formed. Let X be the image to be processed and B the structuring element; the result of dilating X by B is:
D(X) = X ⊕ B = { (x, y) | B_xy ∩ X ≠ ∅ }
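With OpenCV, the pre-processing chain just described (median filter, binarization, Canny edge detection, dilation) might look as follows; the kernel sizes and thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def preprocess(gray: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Median filter -> binarize -> Canny edges -> dilate."""
    img = cv2.medianBlur(gray, 5)                    # 5x5 median neighbourhood
    _, img = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(img, 50, 150)                  # double-threshold detection
    kernel = np.ones((3, 3), np.uint8)               # structuring element B
    return cv2.dilate(edges, kernel, iterations=1)   # grow edges, connect regions
```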
Circle detection by random Hough transform is then applied to the pre-processed image, as follows:
In two-dimensional space the equation of a circle is:
(x − a)² + (y − b)² = r²
where (a, b) is the centre of the circle and r its radius. Determining the three unknown parameters a, b, r requires three points (x₁, y₁), (x₂, y₂), (x₃, y₃) on the circle; substituting them into the equation above gives the system:
(x₁ − a)² + (y₁ − b)² = r²
(x₂ − a)² + (y₂ − b)² = r²
(x₃ − a)² + (y₃ − b)² = r²
Subtracting the equations pairwise eliminates r² and leaves two linear equations in a and b; solving the system yields the centre (a, b) and radius r.
The principle of the random Hough transform is to select three of all the edge points in the image at random and determine a centre (a₁, b₁) and radius r₁. A fourth point (x₄, y₄) is then substituted into the circle equation to obtain a radius r₄, and the residual is computed as:
|r₄ − r₁| = δ₁
where δ is a preset error value. When δ₁ is less than δ, the circle is designated a candidate circle. After a candidate circle is determined, further points are substituted in; each time the residual δᵢ for a point is less than δ an accumulator is incremented, and when the value of the accumulator reaches a preset threshold the candidate is confirmed as a true circle.
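A minimal Python sketch of this random Hough transform follows. The error tolerance, vote threshold, and trial count are illustrative, and duplicate detections of the same circle are not merged, which a fuller implementation would handle.

```python
import random

def circle_from_3(p1, p2, p3):
    """Centre and radius of the circle through three points (None if collinear)."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), ((ax - ux)**2 + (ay - uy)**2) ** 0.5

def random_hough_circles(edge_points, delta=2.0, votes_needed=30, trials=2000):
    """Randomly sample 3 edge points, hypothesise a circle, then vote."""
    pts = list(edge_points)
    found = []
    for _ in range(trials):
        c = circle_from_3(*random.sample(pts, 3))
        if c is None:
            continue
        (a, b), r = c
        # a point votes when its residual |distance to centre - r| is below delta
        votes = sum(abs(((x - a)**2 + (y - b)**2) ** 0.5 - r) < delta
                    for x, y in pts)
        if votes >= votes_needed:
            found.append(((a, b), r))
    return found
```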
In the present invention, the control platform 1 uses the element information and finished-product information returned by all intelligent vision robots 21, 24, 25, 27, 28, 29 to dynamically update the number of elements currently on the conveyor 23, the number of finished products currently on the conveyor 23, and the cumulative number of processed finished products. When the number of elements on the conveyor 23 exceeds a threshold, the control platform 1 sends the feeding intelligent vision robot 21 a pause-feeding command and has it assist the blanking intelligent robot 27 with finished-product blanking. When the number of elements on the conveyor 23 falls below a certain threshold, the control platform 1 notifies the blanking intelligent robot 27 to assist the feeding robot 21 with feeding. When the number of finished products on the conveyor 23 exceeds a certain threshold, the control platform 1 notifies the feeding intelligent vision robot 21 to pause feeding and assist the blanking intelligent vision robot 27 with finished-product blanking. When the quantity of qualified finished products meets the order requirement, the control platform 1 sends a task-complete instruction to all intelligent vision robots 21, 24, 25, 27, 28, 29 and waits for the conveyor-cleared instruction from the feeding-area intelligent vision robot 21; on receiving it, the control platform 1 sends a stop instruction to the vision robot workstation 2, all intelligent vision robots 21, 24, 25, 27, 28, 29 enter standby, and the conveyor 23 stops.
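Reduced to a single function for the two dual-module robots, the threshold logic of this paragraph might look like the sketch below; the patent speaks only of "a certain threshold", so the numeric values are assumptions.

```python
def reassign_dual_role_robots(elements_on_belt: int, finished_on_belt: int,
                              elem_high: int = 20, elem_low: int = 4,
                              finished_high: int = 6) -> dict:
    """Pick the working module for feeding robot 21 and blanking robot 27."""
    if elements_on_belt > elem_high or finished_on_belt > finished_high:
        # belt congested: pause feeding, both robots do blanking
        return {"robot_21": "blanking", "robot_27": "blanking"}
    if elements_on_belt < elem_low:
        # belt starved: both robots feed
        return {"robot_21": "feeding", "robot_27": "feeding"}
    return {"robot_21": "feeding", "robot_27": "blanking"}

print(reassign_dual_role_robots(elements_on_belt=25, finished_on_belt=2))
```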

Claims (10)

1. A swarm vision robot collaborative assembly method, comprising a control platform (1), a vision robot workstation (2), and a wireless module (3) handling communication between the control platform (1) and the vision robot workstation (2), characterised in that the control platform (1) calculates, from an input order, the number and types of parts required and the number of intelligent vision robots to put to work, and dynamically adjusts the number of intelligent vision robots working in different modules according to the actual working conditions.
2. The swarm vision robot collaborative assembly method according to claim 1, characterised in that each class of intelligent vision robot completes an independent task without interfering with the others, and in that, when the assembly object changes, the control platform (1) sends a control instruction to the vision robot workstation (2) instructing it to run the assembly program for that object.
3. The swarm vision robot collaborative assembly method according to claim 1, characterised in that the vision robot workstation (2) includes an annular conveyor (23), along which are arranged in sequence a feeding area (231), assembly areas (233, 238), and a discharge area (235); an element storage area (22) is arranged near the feeding area (231) and a product storage area (26) near the discharge area (235); the feeding intelligent vision robot (21) of the feeding area (231) and the blanking intelligent vision robot (27) of the discharge area (235) both have a feeding module and a blanking module, and the control platform (1) dynamically adjusts the number of intelligent vision robots working in these two modules according to the real-time assembly situation, while the assembling intelligent vision robots of the assembly areas have only an assembly module; each intelligent vision robot is equipped with a wireless module (211) for communication between the control platform (1) and the robot and with a vision sensor (213) for path finding and element recognition; and the collaborative assembly method includes four kinds of collaboration: between all intelligent vision robots and the start/stop of the conveyor (23), between the feeding intelligent vision robot (21) and the conveyor (23), between the assembling intelligent vision robots (24, 25, 28, 29) and the conveyor (23), and between the blanking intelligent vision robot (27) and the conveyor (23).
4. The swarm vision robot collaborative assembly method according to claim 3, characterised in that the process by which the control platform (1) dynamically adjusts the number of intelligent vision robots working in different modules according to the actual working conditions is:
the control platform (1) sends a start command to the vision robot workstation (2) and instructs it to run the program for the corresponding assembly object; all intelligent vision robots start work simultaneously; the feeding intelligent vision robot (21) recognizes its path from the image information returned by the vision sensor (213), moves to the element storage area (22), picks the required elements, then moves to the feeding area (231) and starts feeding; the assembling intelligent vision robots recognize the required elements (232) through the vision sensor (242) and place finished products (236) on the conveyor (23); the blanking intelligent vision robot (27) recognizes qualified finished products (236) through the vision sensor (273), grabs them off the conveyor (23), and places them in the product storage area (26); at every step, all intelligent vision robots send their job information to the control platform (1) through the wireless module (3); and the control platform (1) integrates the information to obtain the number of elements on the conveyor, the number of finished products on the conveyor, and the cumulative number of qualified finished products, and from these dynamically adjusts the number of intelligent vision robots and the number working in the feeding and blanking modules, so as to complete the assembly task better.
5. The swarm vision robot collaborative assembly method according to claim 4, characterised in that the feeding process of the feeding intelligent vision robot (21) is coordinated with the speed of the conveyor (23); the assembling intelligent vision robots (24, 25, 28, 29) grab the required elements and assemble them in coordination with the speed of the conveyor (23); and the blanking process of the blanking intelligent vision robot (27) grabs finished products off the conveyor in coordination with the speed of the conveyor (23).
6. The swarm vision robot collaborative assembly method according to claim 4, characterised in that each time the blanking intelligent vision robot (27) completes a finished-product blanking, it sends the blanking information to the control platform (1); when the finished product is qualified, the control platform increments the qualified finished product (261) count by 1 and the product is put into the box (271); when the finished product is unqualified, the blanking intelligent vision robot (27) grabs it into the waste area and the control platform (1) notifies the feeding intelligent vision robot (21) to increment by 1 the quantity of each class of element (232, 234, 237, 239) to be sorted; when the counted quantity of qualified finished products (261) reaches the quantity required by the order, the control platform (1) sends a task-complete instruction to the vision robot workstation (2), whereupon the assembling intelligent vision robots clear the workpieces from the assembly positions (243), and the feeding intelligent vision robot (21) and blanking intelligent vision robot (27) clear the unassembled elements from the conveyor (23), return them to the raw-material storage area (22), move to the designated places, and send a cleaning-complete instruction to the control platform (1); the control platform (1) then sends a stop instruction, all intelligent vision robots enter standby, and the conveyor (23) stops.
7. A model system based on the swarm vision robot collaborative assembly method of claim 1, characterised in that:
the control platform (1) is served by a computer;
a wireless transceiver serves as the wireless module (3);
and the vision robot workstation (2) mainly consists of a swarm of intelligent vision robots, an annular conveyor (23), an element storage area (22), and a product storage area (26).
8. The model system according to claim 7, characterised in that:
in the vision robot workstation (2), the conveyor (23) is arranged at the centre; the regions at the two ends of the major axis of the conveyor (23) are the feeding area (231) and the discharge area (235) respectively, and the regions parallel to the two sides of the major axis are assembly area one (233) and assembly area two (238); the element storage area (22) is arranged near the feeding area (231) and the product storage area (26) near the discharge area (235); the feeding intelligent vision robot (21) of the feeding area (231) and the blanking intelligent vision robot (27) of the discharge area (235) both have a feeding module and a blanking module, and the control platform (1) dynamically adjusts the number of intelligent vision robots working in these two modules according to the real-time assembly situation; the assembling intelligent vision robots (24, 25, 28, 29) of assembly area one (233) and assembly area two (238) have only an assembly module; and each intelligent vision robot is equipped with a wireless module (211) for communication between the control platform (1) and the robot and with a vision sensor (213) for path finding and element recognition.
9. The model system according to claim 7, characterised in that:
in the vision robot workstation (2), the elements to be assembled comprise a red base (221), a black core (222), a spring (223), and a blue cap (224);
during workpiece recognition, an intelligent vision robot extracts colour, pixel area, circle radius, and circle count information from the image returned by its vision sensor and compares it with the templates in turn: it first judges whether the colour is red, the red pixel area is above a threshold, and the circle radius lies within the specified range, in which case the target in the image is the red base (221); if not, it judges whether the colour is black and the black area, mean grey level, and circle radius are within the specified ranges, in which case the target is the black core (222); if not, it judges whether the circle radius, circle count, and mean grey level are within the specified ranges, in which case the target is the spring (223); if not, it judges whether the colour is blue with the blue pixel area below a threshold and the circle radius below a threshold, in which case the target is the blue cap (224); if not, it judges whether the colour is blue with the blue pixel area above the threshold and the circle radius above the threshold, in which case the target is the finished product (236); otherwise no template matches; the feeding intelligent vision robot (21) and the assembling intelligent vision robots (24, 25, 28, 29) must recognize the red base (221), black core (222), spring (223), and blue cap (224), while the blanking intelligent vision robot (27) only needs to recognize the finished product (236).
10. The model system according to claim 9, characterised in that:
when extracting the features from an image, the colour is judged directly from the RGB channels of the image, and the pixel area is obtained by checking whether each pixel falls within the specified RGB range and accumulating the pixels that do, the final accumulated sum being the pixel area; the circle radius and circle count are computed by circle detection, as follows:
the acquired image is first pre-processed:
median filtering is applied to the image; the median filter replaces each pixel with the median of the pixel values in a square neighbourhood centred on it, removing noise while preserving the edge information of the image; for a two-dimensional image it is defined as
y_ij = med{ x_(i+m),(j+n) | (m,n) ∈ A, (i,j) ∈ I² };
binarization is applied to the image; binarization simplifies the image, reduces the data volume, and highlights the contours of interest; with m a preset threshold, f(x, y) the grey value at pixel (x, y), and g(x, y) the resulting grey value,
g(x, y) = 255 if f(x, y) ≥ m, and g(x, y) = 0 if f(x, y) < m;
edge detection is performed on the image with the Canny operator; Canny edge detection computes the amplitude and direction of the gradient from first-order partial-derivative finite differences; with f(x, y) the image, the gradient of f(x, y) is approximated by 2 × 2 first-difference expressions giving the two partial-derivative arrays f′x(x, y) and f′y(x, y):
f′x(x, y) ≈ Gx = [f(x+1, y) − f(x, y) + f(x+1, y+1) − f(x, y+1)]/2
f′y(x, y) ≈ Gy = [f(x, y+1) − f(x, y) + f(x+1, y+1) − f(x+1, y)]/2
convolution with the corresponding first-difference masks yields the amplitude and direction of the gradient:
M(x, y) = √(Gx² + Gy²), θ(x, y) = arctan(Gy/Gx);
to obtain reasonable edges, non-maximum suppression is applied to the gradient amplitude, and a double-threshold algorithm is used to detect and connect the real edges;
dilation is applied to the edge image; dilation convolves the image with a kernel and takes the maximum pixel value over the region covered by the kernel, so that the highlighted regions of the image grow, pixels are compensated, and connected regions are formed; with X the image to be processed and B the structuring element, the result of dilating X by B is
D(X) = X ⊕ B = { (x, y) | B_xy ∩ X ≠ ∅ };
circle detection by random Hough transform is then applied to the pre-processed image, as follows:
in two-dimensional space the equation of a circle is
(x − a)² + (y − b)² = r²
where (a, b) is the centre of the circle and r its radius; determining the three unknown parameters a, b, r requires three points (x₁, y₁), (x₂, y₂), (x₃, y₃) on the circle, and substituting them into the equation above gives the system
(x₁ − a)² + (y₁ − b)² = r²
(x₂ − a)² + (y₂ − b)² = r²
(x₃ − a)² + (y₃ − b)² = r²
whose solution yields the centre (a, b) and radius r;
the principle of the random Hough transform is to select three of all the edge points in the image at random and determine a centre (a₁, b₁) and radius r₁; a fourth point (x₄, y₄) is then substituted into the first equation to obtain a radius r₄, and the residual is computed as
|r₄ − r₁| = δ₁
where δ is a preset error value; when δ₁ is less than δ, the circle is designated a candidate circle; after a candidate circle is determined, further points are substituted in, an accumulator is incremented each time the residual δᵢ for a point is less than δ, and when the value of the accumulator reaches a preset threshold the candidate is confirmed as a true circle.
CN201611209458.3A 2016-12-23 2016-12-23 Swarm vision robot collaborative assembly method and model system Expired - Fee Related CN106774208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611209458.3A CN106774208B (en) 2016-12-23 2016-12-23 Swarm vision robot collaborative assembly method and model system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611209458.3A CN106774208B (en) 2016-12-23 2016-12-23 Swarm vision robot collaborative assembly method and model system

Publications (2)

Publication Number Publication Date
CN106774208A true CN106774208A (en) 2017-05-31
CN106774208B CN106774208B (en) 2017-12-26

Family

ID=58920344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611209458.3A Expired - Fee Related CN106774208B (en) 2016-12-23 2016-12-23 Swarm vision robot collaborative assembly method and model system

Country Status (1)

Country Link
CN (1) CN106774208B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108459572A (en) * 2018-03-20 2018-08-28 广东美的制冷设备有限公司 Monitoring method, device, system, robot and air conditioner production equipment
CN109060823A (en) * 2018-08-03 2018-12-21 珠海格力智能装备有限公司 Method and device for detecting coating quality of heat dissipation paste of radiator
CN109299720A (en) * 2018-07-13 2019-02-01 沈阳理工大学 A kind of target identification method based on profile segment spatial relationship
CN110561415A (en) * 2019-07-30 2019-12-13 苏州紫金港智能制造装备有限公司 Double-robot cooperative assembly system and method based on machine vision compensation
CN110580373A (en) * 2018-06-07 2019-12-17 能力中心-虚拟车辆研究公司 Preprocessing collaborative simulation method and device
CN111843981A (en) * 2019-04-25 2020-10-30 广州中国科学院先进技术研究所 Multi-robot cooperative assembly system and method
CN112157408A (en) * 2020-08-13 2021-01-01 盐城工学院 Industrial robot double-machine cooperation carrying system and method
CN112363470A (en) * 2020-11-05 2021-02-12 苏州工业园区卡鲁生产技术研究院 User-cooperative robot control system
CN112589401A (en) * 2020-11-09 2021-04-02 苏州赛腾精密电子股份有限公司 Assembling method and system based on machine vision
CN114115151A (en) * 2021-11-24 2022-03-01 山东哈博特机器人有限公司 Industrial robot cooperative assembly method and system based on MES
CN114161202A (en) * 2021-12-29 2022-03-11 武汉交通职业学院 Automatic industrial robot feeding and discharging system for numerical control machine tool

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104950684A (en) * 2015-06-30 2015-09-30 西安交通大学 Swarm robot collaborative scheduling measurement and control method and system platform
CN204725516U (en) * 2015-01-19 2015-10-28 西安航天精密机电研究所 A kind of single vision being applicable to pipelining coordinates multirobot navigation system
CN205734182U (en) * 2016-06-30 2016-11-30 长沙长泰机器人有限公司 For many group process equipment co-operating intelligent robot processing lines

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204725516U (en) * 2015-01-19 2015-10-28 西安航天精密机电研究所 A kind of single vision being applicable to pipelining coordinates multirobot navigation system
CN104950684A (en) * 2015-06-30 2015-09-30 西安交通大学 Swarm robot collaborative scheduling measurement and control method and system platform
CN205734182U (en) * 2016-06-30 2016-11-30 长沙长泰机器人有限公司 For many group process equipment co-operating intelligent robot processing lines

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108459572A (en) * 2018-03-20 2018-08-28 广东美的制冷设备有限公司 Monitoring method, device, system, robot and air conditioner production equipment
CN110580373A (en) * 2018-06-07 2019-12-17 能力中心-虚拟车辆研究公司 Preprocessing collaborative simulation method and device
CN110580373B (en) * 2018-06-07 2023-09-15 虚拟汽车研究有限公司 Preprocessing collaborative simulation method and device
CN109299720B (en) * 2018-07-13 2022-02-22 沈阳理工大学 Target identification method based on contour segment spatial relationship
CN109299720A (en) * 2018-07-13 2019-02-01 沈阳理工大学 A kind of target identification method based on profile segment spatial relationship
CN109060823A (en) * 2018-08-03 2018-12-21 珠海格力智能装备有限公司 Method and device for detecting coating quality of heat dissipation paste of radiator
CN111843981A (en) * 2019-04-25 2020-10-30 广州中国科学院先进技术研究所 Multi-robot cooperative assembly system and method
CN110561415A (en) * 2019-07-30 2019-12-13 苏州紫金港智能制造装备有限公司 Double-robot cooperative assembly system and method based on machine vision compensation
CN112157408A (en) * 2020-08-13 2021-01-01 盐城工学院 Industrial robot double-machine cooperation carrying system and method
CN112363470A (en) * 2020-11-05 2021-02-12 苏州工业园区卡鲁生产技术研究院 User-cooperative robot control system
CN112589401B (en) * 2020-11-09 2021-12-31 苏州赛腾精密电子股份有限公司 Assembling method and system based on machine vision
CN112589401A (en) * 2020-11-09 2021-04-02 苏州赛腾精密电子股份有限公司 Assembling method and system based on machine vision
CN114115151A (en) * 2021-11-24 2022-03-01 山东哈博特机器人有限公司 Industrial robot cooperative assembly method and system based on MES
CN114161202A (en) * 2021-12-29 2022-03-11 武汉交通职业学院 Automatic industrial robot feeding and discharging system for numerical control machine tool

Also Published As

Publication number Publication date
CN106774208B (en) 2017-12-26

Similar Documents

Publication Publication Date Title
CN106774208B (en) Swarm vision robot collaborative assembly method and model system
CN107899814A (en) A kind of robot spraying system and its control method
US20180243776A1 (en) Intelligent flexible hub paint spraying line and process
CN109483573A (en) Machine learning device, robot system and machine learning method
CN106423656B (en) Automatic spraying system and method based on cloud and images match
CN110281231B (en) Three-dimensional vision grabbing method for mobile robot for unmanned FDM additive manufacturing
US20180210432A1 (en) Industrial robot process cloud system and working method thereof
CN202924613U (en) Automatic control system for efficient loading and unloading work of container crane
CN102923578A (en) Automatic control system of efficient handing operation of container crane
CN105500370B (en) A kind of robot off-line teaching programing system and method based on body-sensing technology
CN110456746A (en) A kind of real-time scheduling method of multi items swinging cross automated production
CN111906788B (en) Bathroom intelligent polishing system based on machine vision and polishing method thereof
CN104458748A (en) Aluminum profile surface defect detecting method based on machine vision
CN104299246B (en) Production line article part motion detection and tracking based on video
CN109726777A (en) PCB appearance detection system and detection method Internet-based
CN106681508A (en) System for remote robot control based on gestures and implementation method for same
CN114924513B (en) Multi-robot cooperative control system and method
CN107444644A (en) A kind of unmanned plane movement supply platform and unmanned plane for orchard operation
Niemueller et al. Proposal for Advancements to the LLSF in 2014 and beyond
CN107479552A (en) Track machine people&#39;s self-organizing control system based on Agent
CN206229582U (en) A kind of full-automatic glue spraying streamline based on three-dimensional stereoscopic visual
CN111352398A (en) Intelligent precision machining unit
CN117022971B (en) Intelligent logistics stacking robot control system
Christensen et al. Integrating vision based behaviours with an autonomous robot
CN204288242U (en) Based on the Control During Paint Spraying by Robot trajectory extraction device that curved three-dimensional is rebuild

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171226

Termination date: 20211223