CN106774208B - Group's visual machine collaborative assembly method and model system - Google Patents

Group's visual machine collaborative assembly method and model system Download PDF

Info

Publication number
CN106774208B
CN106774208B CN201611209458.3A
Authority
CN
China
Prior art keywords
vision robot
intelligent vision
robot
control platform
feeding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201611209458.3A
Other languages
Chinese (zh)
Other versions
CN106774208A (en)
Inventor
韩九强 (Han Jiuqiang)
于洋 (Yu Yang)
郑辑光 (Zheng Jiguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201611209458.3A priority Critical patent/CN106774208B/en
Publication of CN106774208A publication Critical patent/CN106774208A/en
Application granted granted Critical
Publication of CN106774208B publication Critical patent/CN106774208B/en

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

A group vision robot collaborative assembly method comprises a control platform, a vision robot workstation, and a wireless module that handles communication between the control platform and the vision robot workstation. From the input order information the control platform computes the number and types of parts required and the number of intelligent vision robots to put to work, and dynamically adjusts, according to the actual working conditions, the number of intelligent vision robots operating in each module. The invention also provides a model system for the method. Because the invention uses intelligent vision robots working cooperatively and plans the work from order information, corresponding programs can be written for different requirements without changing the hardware configuration or adding sensors; flexibility is therefore high and the multi-variety, small-batch production pattern can be supported. Because vision sensors are introduced, the process flow is simplified and the robots work independently: a failed robot can be repaired on its own without affecting the rest of the system, so robustness is good, production efficiency is greatly improved, and both equipment investment and floor space are saved.

Description

Group's visual machine collaborative assembly method and model system
Technical field
The invention belongs to the technical field of industrial automatic control, and more particularly relates to a group vision robot collaborative assembly method and model system.
Background technology
With rapid economic development, competition has become fiercer and the cycle of model change has gradually shortened, so demand for the multi-variety, small-batch production pattern is growing and intelligent automated production lines are drawing more attention. Although many current automatic production lines complete their tasks well, their cost is very high and each can only perform a single job: a complicated process needs several robots working in sequence, and once any intermediate link breaks down the whole system is paralyzed. Production efficiency is low, floor space is large, and cost is high; identification requires the cooperation of multiple sensors, and when the production object changes the hardware must be changed, further increasing cost. Such lines are therefore inflexible and poorly suited to the multi-variety, small-batch production pattern.
Summary of the invention
To overcome the above shortcomings of the prior art, the object of the present invention is to provide a group vision robot collaborative assembly method and model system that are practical, adapt to a variety of production objects, are flexible to control, highly scalable, and require little space. Each part works independently, and a failed part can be repaired on its own without affecting the operation of the whole system, so robustness is good and efficient, flexible production can be realized.
To achieve these goals, the technical solution adopted by the present invention is:
A group vision robot collaborative assembly method comprises a control platform 1, a vision robot workstation 2, and a wireless module 3 that handles communication between the control platform 1 and the vision robot workstation 2, and is characterized in that the control platform 1 computes from the input order information the number and types of parts required and the number of intelligent vision robots to put to work, and dynamically adjusts, according to the actual working conditions, the number of intelligent vision robots operating in each module.
Each class of intelligent vision robot completes an independent task without interfering with the others; when a robot of some class breaks down it can be repaired individually without affecting the operation of the whole system, giving good robustness. When the object to be assembled changes, the control platform 1 sends a control instruction to the vision robot workstation 2 telling it to run the assembly program for that object; the multi-variety, small-batch production pattern is thus well supported and the system is very flexible.
The vision robot workstation 2 includes an annular conveyer 23, along which a feeding area 231, assembly sections 233 and 238, and a discharging area 235 are arranged in sequence. An element storage area 22 is arranged near the feeding area 231, and a products storage area 26 is arranged near the discharging area 235. The feeding intelligent vision robot 21 of the feeding area 231 and the blanking intelligent vision robot 27 of the discharging area 235 both have a feeding module and a blanking module, and the control platform 1 dynamically adjusts, according to the real-time assembly situation, the number of intelligent vision robots operating in each of these two modules; the assembling intelligent vision robots of the assembly sections have only an assembly module. Each intelligent vision robot is equipped with a wireless module 211 for communication between the control platform 1 and the robot, and with a vision sensor 213 for path finding and element identification. The collaborative assembly method includes coordination in four respects: between all intelligent vision robots and the starting and stopping of the conveyer 23; between the feeding intelligent vision robot 21 and the conveyer 23; between the assembling intelligent vision robots 24, 25, 28, 29 and the conveyer 23; and between the blanking intelligent vision robot 27 and the conveyer 23.
The process by which the control platform 1 dynamically adjusts, according to the actual working conditions, the number of intelligent vision robots operating in each module is as follows:
The control platform 1 sends a start-work order to the vision robot workstation 2 and has it run the program for the corresponding assembly object, and all intelligent vision robots start work at the same time. The feeding intelligent vision robot 21 identifies its path from the image information returned by the vision sensor 213, moves to the element storage area 22 to pick up the required elements, then moves to the feeding area 231 and starts feeding. The assembling intelligent vision robots identify the required elements 232 through the vision sensors 242 and place the finished products 236 on the conveyer 23. The blanking intelligent vision robot 27 identifies qualified finished products 236 through the vision sensor 273, grabs them off the conveyer 23, and places them in the products storage area 26. At every step of its work, each intelligent vision robot sends its job information to the control platform 1 through the wireless module 3; the control platform 1 integrates this information to obtain the number of elements on the conveyer, the number of finished products on the conveyer, and the cumulative number of qualified finished products, and from this information dynamically adjusts the number of intelligent vision robots and the number of robots operating in the feeding module and the blanking module, so as to complete the assembly task better.
The feeding process of the feeding intelligent vision robot 21 is coordinated with the speed of the conveyer 23; the assembling intelligent vision robots 24, 25, 28, 29 coordinate their grabbing of the required elements for assembly with the speed of the conveyer 23; and the blanking process of the blanking intelligent vision robot 27 is coordinated with the speed of the conveyer 23 so that finished products are grabbed off it.
Each time the blanking intelligent vision robot 27 completes the blanking of a finished product, it sends the blanking information to the control platform 1. When the finished product is qualified, the control platform adds 1 to the count of qualified finished products 261 and the robot puts the product into its box 271; when the finished product is unqualified, the blanking intelligent vision robot 27 grabs it and puts it into the waste area, and the control platform 1 notifies the feeding intelligent vision robot 21 to add 1 to the quantity of each class of element 232, 234, 237, 239 to be sorted. When the count of qualified assembled finished products 261 reaches the finished-product quantity demanded by the order, the control platform 1 sends a task-complete instruction to the vision robot workstation 2: the assembling intelligent vision robots clear the workpieces from the assembly positions 243, while the feeding intelligent vision robot 21 and the blanking intelligent vision robot 27 are responsible for clearing the unassembled elements from the conveyer 23, returning them to the raw-material storage area 22, then moving to the appointed places and sending a cleaning-complete instruction to the control platform 1. The control platform 1 then sends a halt instruction, all intelligent vision robots enter the standby state, and the conveyer 23 stops moving.
The present invention also provides a model system based on the group vision robot collaborative assembly method, characterized in that:
a computer serves as the control platform 1;
a wireless transceiver serves as the wireless module 3;
and the vision robot workstation 2 mainly comprises the group of intelligent vision robots, the annular conveyer 23, the element storage area 22, and the products storage area 26.
In the vision robot workstation 2 the conveyer 23 is arranged in the center; the regions at the two ends of its major axis are the feeding area 231 and the discharging area 235 respectively, and the regions parallel to the two sides of the major axis are assembly section 233 and assembly section 238. The element storage area 22 is arranged near the feeding area 231, and the products storage area 26 near the discharging area 235. The feeding intelligent vision robot 21 of the feeding area 231 and the blanking intelligent vision robot 27 of the discharging area 235 both have a feeding module and a blanking module, and the control platform 1 dynamically adjusts, according to the real-time assembly situation, the number of intelligent vision robots operating in each of these two modules; the assembling intelligent vision robots 24, 25, 28, 29 of assembly sections 233 and 238 have only an assembly module. Each intelligent vision robot is equipped with a wireless module 211 for communication between the control platform 1 and the robot, and with a vision sensor 213 for path finding and element identification.
In the vision robot workstation 2 the elements to be assembled comprise a red base 221, a black core 222, a spring 223, and a blue cap 224. First, order information containing the element types and the expected quantity of processed finished products is input at the control platform 1; the control platform 1 then sends a start-work instruction through the wireless module 3 to the vision robot workstation 2 and runs the corresponding program, and all intelligent vision robots 21, 24, 25, 27, 28, 29 and the conveyer 23 enter the working state. The feeding intelligent vision robot 21 identifies its path from the image information obtained by the vision sensor 213 and moves to the element storage area 22; there it extracts features such as color, area, and circle radius from the image information to recognize the elements, grabs the elements specified by the instruction sent by the control platform 1, then moves to the feeding area 231 and feeds the elements one by one in step with the speed of the conveyer 23. The assembling intelligent vision robots 24, 25, 28, 29 extract features such as color, area, and circle radius from the information returned by the vision sensors 242 and carry out template matching, identifying in turn the red base 221, black core 222, spring 223, and blue cap 224 and assembling them, and place the assembled finished products 236 on the conveyer 23. The blanking intelligent vision robot 27 then identifies the finished products from features such as color, area, and circle radius extracted from the image information returned by the vision sensor 273 and judges whether each is qualified: a qualified product is placed in the box 271, an unqualified one in the waste area, and in either case job information is sent to the control platform. The control platform counts the qualified finished products and, when the count meets the order requirement, notifies the vision robot workstation 2 to perform the clean-up task; after the clean-up task is complete, the control platform 1 sends a halt instruction to the vision robot workstation 2, and all intelligent vision robots 21, 24, 25, 27, 28, 29 and the conveyer 23 enter the standby state.
Compared with the prior art, the group machine-vision intelligent collaborative assembly method and system of the invention use intelligent vision robots working cooperatively and plan the feeding work from order information, so corresponding programs can be written for the different production objects demanded by customers without changing the hardware configuration or adding sensors; flexibility is high and the multi-variety, small-batch production pattern is well supported. Because vision sensors are introduced, the process flow is simplified and the robots work independently: a failed robot can be repaired individually without affecting the work of the whole system, so robustness is good, production efficiency is greatly improved, and both equipment investment and floor space are saved.
Brief description of the drawings
Fig. 1 is the architecture diagram of the group vision robot collaborative assembly model system of the present invention.
Fig. 2 is the workflow diagram of the group vision robot collaborative assembly method of the present invention.
Fig. 3 is the workpiece-identification template-matching flow chart of the group vision robot collaborative assembly model system of the present invention.
Embodiment
Embodiments of the present invention are described in detail below with reference to the accompanying drawings and examples.
As shown in Fig. 1, the group vision robot collaborative assembly model system of the invention includes a control platform 1, a vision robot workstation 2, and a wireless module 3 that handles communication between the control platform 1 and the vision robot workstation 2. The control platform 1 integrates the job information passed back by the vision robot workstation 2 and sends control commands to the workstation according to the integrated information.
The vision robot workstation 2 includes the element storage area 22, feeding area 231, assembly sections 233 and 238, discharging area 235, and products storage area 26. The four kinds of elements 221, 222, 223, 224 needed for assembly are placed in the element storage area 22. The feeding area 231 and discharging area 235 lie at the two ends of the major axis of the elliptical conveyer 23, while assembly sections 233 and 238 lie parallel to the two sides of the major axis. Six intelligent vision robots 21, 24, 25, 27, 28, 29 are laid out around the conveyer 23: one feeding intelligent vision robot 21, four assembling intelligent vision robots 24, 25, 28, 29, and one blanking intelligent vision robot 27. Each intelligent vision robot carries a wireless module 211, 241, 272 for communication with the control platform 1, and a vision sensor 213, 242, 273. The feeding-area intelligent vision robot 21 and the discharging-area intelligent vision robot 27 are mobile intelligent vision robots, because they must grab the raw workpieces 221-224 from the element storage area 22 and deliver the processed finished products 261 to the products storage area 26. The programs of the feeding intelligent vision robot 21 and the blanking intelligent vision robot 27 each contain both a feeding program module and a blanking program module; the control platform 1 integrates the job information passed back by the intelligent vision robots and dynamically adjusts which program module each of them runs, so as to complete the assembly work better.
As shown in Fig. 2, the workflow of the group vision robot collaborative assembly method comprises the following steps:
First, order information is input at the control platform 1, and from it the control platform 1 determines which object is to be assembled and the number and types of elements needed.
Secondly, the control platform 1 sends a start-work order to the vision robot workstation 2 and has it run the program for the corresponding assembly object; all intelligent vision robots 21, 24, 25, 27, 28, 29 and the conveyer 23 enter the working state simultaneously.
All intelligent vision robots 21, 24, 25, 27, 28, 29 cooperate according to the instructions of the control platform 1. The feeding intelligent vision robot 21 identifies its path from the image information returned by the vision sensor 213 and moves to the element storage area 22, where it extracts features such as color, area, and circle radius from the returned images and carries out template matching to identify the required elements 221, 222, 223, 224; it puts the picked elements into its box 212 and counts them. When the box 212 is full, the feeding intelligent vision robot 21 moves, guided by the vision sensor 213, to the feeding area 231, where the feeding process is coordinated with the speed of the conveyer 23. It then places the elements 214 in the box 212 onto the conveyer 23 one by one, notifying the control platform 1 through the wireless module 211 of each workpiece type placed. When no elements remain in the box 212, the feeding intelligent vision robot 21 moves back, guided by the vision sensor 213, to the element storage area 22 and begins carrying again, then returns to the feeding area 231 to feed in coordination with the conveyer 23. This repeats until a task-complete instruction is received from the control platform 1, whereupon the robot moves to the feeding area 231 of the conveyer 23, clears the workpieces from the conveyer, returns them to the element storage area 22, then moves back to the feeding area 231 guided by the image information from the vision sensor 213 and notifies the control platform 1 through the wireless module 211 that the clean-up task is complete. After receiving the stop order of the control platform 1, the feeding intelligent vision robot 21 enters the standby state.
The assembling intelligent vision robots 24, 25, 28, 29 complete the assembly work in a fixed order: first the red base 221 is grabbed, then the black core 222 and spring 223 are placed in turn into the red base 221, and finally the blue cap 224 is fitted on and tightened, completing the assembly of one finished product 236. The assembling intelligent vision robots 24, 25, 28, 29 take, through the vision sensors 242, images of the workpieces 232, 234, 237, 239 moving on the conveyer 23, extract features such as color, area, circle radius, and average gray, and carry out template matching to judge whether a workpiece is the one needed at the current assembly step. If it is, a grab command is sent, the workpiece is placed at the designated assembly position 243, the control platform 1 is informed of the grabbed workpiece through the wireless module 241, and identification of the workpiece needed at the next step begins; if not, no command is sent and the robot waits until the required workpiece appears, then sends the command. When an intelligent vision robot 24, 25, 28, 29 of assembly section 233 or 238 has assembled a finished product 236, it sends a control command, grabs the product, places it on the conveyer 23, and notifies the control platform 1 that a finished product 236 has been placed on the conveyer 23, then enters the next round of assembly. This continues until the control platform 1 sends a task-complete instruction; on receiving it the robots stop assembling. If no elements remain in assembly sections 233 and 238, the intelligent vision robots 24, 25, 28, 29 enter the standby state; if elements remain, they are placed on the conveyer 23 and the robots then enter the standby state.
The blanking intelligent vision robot 27 identifies through the vision sensor 273 whether a finished product 236 is on the conveyer 23. When a finished product appears, it takes the image of the moving workpiece returned by the vision sensor 273, extracts features such as color, area, and circle radius, and carries out template matching: it first identifies whether the workpiece is a finished product and, if so, checks the surface for cracks to judge whether it is qualified. A qualified product is grabbed and put into the box 271, and the control platform 1 is notified to add 1 to the quantity of qualified finished products 261; an unqualified one is grabbed and put into the waste area, and the rejection is reported to the control platform 1, which adds 1 to the required quantity of each element 221-224 and notifies the feeding intelligent vision robot 21 to add 1 to the required quantity of each element 221, 222, 223, 224. When the box 271 of the blanking intelligent vision robot 27 is full, the robot moves, guided by the image information returned by the vision sensor 273, to the products storage area 26, discharges all the finished products 261 there, and then moves back to the discharging area 235 of the conveyer 23. This repeats until the task-complete instruction of the control platform 1 is received; if finished products remain in the box 271 they are discharged to the products storage area 26, and otherwise the robot moves to the discharging area 235 of the conveyer 23 and enters the standby state.
At every step of their work, all intelligent vision robots 21, 24, 25, 27, 28, 29 send their job information to the control platform 1 through the wireless module 3. The control platform 1 integrates the information to obtain the number of elements on the conveyer, the number of finished products on the conveyer, and the cumulative number of qualified finished products, and from this information dynamically adjusts the number of intelligent vision robots and the number of robots operating in the feeding module and the blanking module, so as to complete the assembly task better.
Finally, all intelligent vision robots 21, 24, 25, 27, 28, 29 send their job information to the control platform 1 through the wireless modules 211, 241. After integrating the returned job information, the control platform 1 compares it with the order information and judges whether the assembly task is complete. If it is, a task-complete instruction is sent to the vision robot workstation 2; the intelligent robots of each region begin their clean-up work, and on finishing send a clean-up-complete instruction to the control platform 1 and move to the designated areas. The control platform 1 then sends a halt instruction to the vision robot workstation 2, all intelligent vision robots 21, 24, 25, 27, 28, 29 enter the standby state, and the conveyer 23 stops moving.
As shown in Fig. 3, for workpiece identification an intelligent vision robot takes the image returned by the vision sensor, extracts information such as color, pixel area, circle radius, circle count, and average gray, and compares it with the templates in turn. It first judges whether the color is red, the red pixel area is greater than the threshold, and the circle radius is within the specified range; if so, the target in the image is the red base 221. If not, it judges whether the color is black and the black area, average gray, and circle radius are within the specified ranges; if so, the target is the black core 222. If not, it judges whether the circle radius, circle count, and average gray are within the specified ranges; if so, the target is the spring 223. If not, it judges whether the color is blue, the blue pixel area is less than the threshold, and the circle radius is less than the threshold; if so, the target is the blue cap 224. If not, it judges whether the color is blue, the blue pixel area is greater than the threshold, and the circle radius is greater than the threshold; if so, the target is the finished product 236; otherwise no template is matched. The feeding intelligent vision robot 21 and the assembling intelligent vision robots 24, 25, 28, 29 need to identify the red base 221, black core 222, spring 223, and blue cap 224; the blanking intelligent vision robot 27 need only identify the finished product 236.
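The decision cascade above can be sketched as a chain of feature checks. A minimal Python sketch follows; the feature names and all threshold values are illustrative assumptions, not values disclosed in the patent:

```python
# Illustrative sketch of the workpiece-identification cascade of Fig. 3.
# All thresholds and ranges below are assumptions for demonstration only.

RED_AREA_MIN = 500          # assumed minimum red pixel area
BLUE_AREA_MAX = 300         # assumed blue-area threshold separating cap and finished product
RADIUS_RANGE = (20, 60)     # assumed valid circle-radius range (pixels)
GRAY_RANGE = (40, 120)      # assumed average-gray range for black core / spring

def classify(color, pixel_area, radius, circle_count, avg_gray):
    """Return the workpiece class for one extracted feature vector."""
    in_radius = RADIUS_RANGE[0] <= radius <= RADIUS_RANGE[1]
    if color == "red" and pixel_area > RED_AREA_MIN and in_radius:
        return "red base 221"
    if color == "black" and in_radius and GRAY_RANGE[0] <= avg_gray <= GRAY_RANGE[1]:
        return "black core 222"
    if circle_count > 1 and in_radius and GRAY_RANGE[0] <= avg_gray <= GRAY_RANGE[1]:
        return "spring 223"
    if color == "blue" and pixel_area < BLUE_AREA_MAX and radius < RADIUS_RANGE[1]:
        return "blue cap 224"
    if color == "blue" and pixel_area >= BLUE_AREA_MAX and radius >= RADIUS_RANGE[0]:
        return "finished product 236"
    return "no match"
```

Because the checks are ordered exactly as in the flow chart, a feature vector falls through to the first template it satisfies, mirroring the "if satisfied / if not satisfied" chain of the description.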
When extracting features from the image, the color can be judged directly by accessing the RGB channels of the image, while the pixel area is obtained by checking whether each pixel falls within the specified RGB range and accumulating the pixels that do: the final accumulated sum is the pixel area. The calculation of the circle radius and the circle count uses circle detection, as follows.
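The pixel-area accumulation just described can be sketched in a few lines of Python. The image layout (a row-major grid of RGB tuples) and the range bounds are assumptions for illustration:

```python
# Sketch of the pixel-area computation: count the pixels whose RGB values fall
# inside a specified range. The image is a row-major list of (R, G, B) tuples;
# the range bounds below are illustrative assumptions.

def pixel_area(image, lo, hi):
    """Count pixels p with lo[c] <= p[c] <= hi[c] for every channel c."""
    area = 0
    for row in image:
        for (r, g, b) in row:
            if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
                area += 1  # accumulate; the final sum is the pixel area
    return area

# A 2x3 toy image: two saturated-red pixels, the rest dark.
img = [[(250, 10, 5), (0, 0, 0), (245, 20, 12)],
       [(10, 10, 10), (0, 0, 0), (30, 30, 30)]]
red_area = pixel_area(img, lo=(200, 0, 0), hi=(255, 60, 60))
# red_area == 2
```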
First the acquired image must be pre-processed:
Median filtering is applied to the image. The median filter replaces each pixel value with the median of the pixel values in a square neighborhood centered on that pixel; it removes noise while preserving the edge information of the image. For a two-dimensional image it can be defined as:
y(i, j) = med{ x(i, j) } = med{ x(i+m, j+n) | (m, n) ∈ A, (i, j) ∈ I² }
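The definition above can be sketched for a 3 × 3 neighborhood A on a grayscale image stored as nested lists. Leaving border pixels unchanged is an assumption for simplicity; the patent does not specify border handling:

```python
# Minimal sketch of a 3x3 median filter on a grayscale image (list of lists).
# Border handling (pixels copied unchanged) is an assumption.

def median_filter3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # collect the 3x3 neighborhood A centered at (i, j)
            neigh = sorted(img[i + m][j + n] for m in (-1, 0, 1) for n in (-1, 0, 1))
            out[i][j] = neigh[4]  # median of the 9 values
    return out

# A single bright noise pixel in a flat region is removed:
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
# median_filter3(noisy)[1][1] == 10
```

This shows the edge-preserving denoising the description claims: the isolated spike is replaced by the neighborhood median while uniform regions are untouched.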
Binarization is applied to the image. Binarization simplifies the image and reduces the data volume, and can highlight the object contours of interest. Let m be a preset threshold, f(x, y) the gray value at pixel coordinate (x, y), and g(x, y) the resulting value; then:
g(x, y) = 1 when f(x, y) ≥ m, and g(x, y) = 0 when f(x, y) < m.
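The thresholding rule is a one-liner; a short sketch follows, where the threshold value m = 128 is an illustrative assumption:

```python
# Sketch of the binarization step: g(x, y) = 1 when f(x, y) >= m, else 0.
# The default threshold m = 128 is an illustrative assumption.

def binarize(img, m=128):
    return [[1 if f >= m else 0 for f in row] for row in img]

gray = [[12, 200],
        [128, 90]]
# binarize(gray) == [[0, 1], [1, 0]]
```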
Edge detection is carried out on the image with the Canny operator. The principle of Canny edge detection is to compute the gradient amplitude and direction with finite first-order partial differences. Let f(x, y) be the image; the gradient of f(x, y) is approximated by 2 × 2 first differences, computing the two partial-derivative arrays fx′(x, y) and fy′(x, y) along x and y:
fx′(x,y)≈Gx=[f (x+1, y)-f (x, y)+f (x+1, y+1)-f (x, y+1)]/2
fy′(x,y)≈Gy=[f (x, y+1)-f (x, y)+f (x+1, y+1)-f (x+1, y)]/2
The first-difference convolution masks
H1 = (1/2) [ -1  1 ; -1  1 ]   and   H2 = (1/2) [ 1  1 ; -1  -1 ]
yield the amplitude and direction of the gradient:
M(x, y) = √(Gx² + Gy²),   θ(x, y) = arctan(Gy / Gx)
To obtain reasonable edges, non-maximum suppression is applied to the gradient amplitude, and a dual-threshold algorithm is used to detect and connect the real edges.
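The 2 × 2 first-difference gradient defined by the two equations for fx′ and fy′ can be sketched directly, together with the amplitude and direction. The image is assumed indexed as f[x][y], and only interior pixels (where x+1, y+1 are valid) are handled:

```python
import math

# Sketch of the 2x2 first-difference gradient used in the Canny step above,
# followed by the gradient amplitude and direction. f is indexed f[x][y];
# only interior pixels are handled (an assumption for brevity).

def gradient(f, x, y):
    gx = (f[x + 1][y] - f[x][y] + f[x + 1][y + 1] - f[x][y + 1]) / 2.0
    gy = (f[x][y + 1] - f[x][y] + f[x + 1][y + 1] - f[x + 1][y]) / 2.0
    mag = math.hypot(gx, gy)        # M = sqrt(Gx^2 + Gy^2)
    theta = math.atan2(gy, gx)      # gradient direction
    return gx, gy, mag, theta

# A step edge along x: intensity jumps from 0 to 100.
f = [[0, 0],
     [100, 100]]
gx, gy, mag, theta = gradient(f, 0, 0)
# gx == 100.0, gy == 0.0, mag == 100.0
```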
Dilation is applied to the edge image. Dilation convolves the image with a kernel and takes the maximum pixel value in the region covered by the kernel, so that the highlighted regions in the image grow; missing pixels are filled in and connected regions are formed. Let X be the image to be processed and B the structuring element; the result of dilating X by B is:
X ⊕ B = { x | B_x ∩ X ≠ ∅ }
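Binary dilation as defined above can be sketched for a 3 × 3 square structuring element B (the element shape is an assumption; the patent does not specify it):

```python
# Sketch of binary dilation: an output pixel is set when the structuring
# element B (a 3x3 square here, an assumption) placed at that pixel overlaps
# any foreground pixel of X.

def dilate(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            hit = any(img[i + m][j + n]
                      for m in (-1, 0, 1) for n in (-1, 0, 1)
                      if 0 <= i + m < h and 0 <= j + n < w)
            out[i][j] = 1 if hit else 0
    return out

x = [[0, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]
# dilate(x): the single foreground pixel grows into a full 3x3 block
```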
Circle detection by random Hough transform is then applied to the pre-processed image, as follows:
The equation of a circle in two-dimensional space is:
(x − a)² + (y − b)² = r²
where (a, b) is the circle center and r the radius. To determine the three unknown parameters a, b, r, three points (x1, y1), (x2, y2), (x3, y3) on the circle must be taken; substituting them into the above formula gives the system of equations:
(x1 − a)² + (y1 − b)² = r²
(x2 − a)² + (y2 − b)² = r²
(x3 − a)² + (y3 − b)² = r²
Solving this system of equations yields the circle center (a, b) and the radius r.
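Subtracting the first equation from the other two eliminates r² and leaves a 2 × 2 linear system in a and b, which a short sketch can solve directly:

```python
import math

# Sketch of solving the three-point circle system above. Subtracting pairs of
# equations eliminates r^2 and gives a 2x2 linear system for the center (a, b);
# the radius then follows from the distance to any of the three points.

def circle_from_3pts(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linear system: a11*a + a12*b = c1 ; a21*a + a22*b = c2
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 - x1**2 + y2**2 - y1**2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if det == 0:
        return None  # the three points are collinear: no unique circle
    a = (c1 * a22 - c2 * a12) / det
    b = (a11 * c2 - a21 * c1) / det
    r = math.hypot(x1 - a, y1 - b)
    return a, b, r

# Three points on the unit circle centered at the origin:
# circle_from_3pts((1, 0), (0, 1), (-1, 0)) -> (0.0, 0.0, 1.0)
```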
The principle of the random Hough transform is to select 3 points at random from all the edge points in the image and determine a circle center (a1, b1) and radius r1 from them. A 4th point (x4, y4) is then taken and substituted into the first equation to obtain a radius r4, from which
δ1 = | r4 − r1 |
is computed. δ is a preset error value; when δ1 is less than δ, the circle is taken as a candidate circle. Once a candidate circle is determined, further points are substituted in turn, and each time δ1,i is less than δ an accumulator is incremented; when the value of the accumulator reaches a predetermined threshold, the candidate is confirmed as a true circle.
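The candidate-verification step can be sketched as an accumulator loop. The error value δ and the vote threshold are illustrative assumptions, since the patent only calls them "preset":

```python
import math

# Sketch of random-Hough candidate verification: given a candidate circle
# (a1, b1, r1), substitute further edge points in turn and increment an
# accumulator whenever |r_i - r1| < delta. The values of delta and the vote
# threshold are illustrative assumptions.

def verify_circle(edge_points, a1, b1, r1, delta=1.0, votes_needed=5):
    votes = 0
    for (x, y) in edge_points:
        ri = math.hypot(x - a1, y - b1)   # radius implied by this edge point
        if abs(ri - r1) < delta:          # delta_1 = |r_i - r1|
            votes += 1
            if votes >= votes_needed:
                return True               # accumulator hit threshold: true circle
    return False

# Points sampled on a circle of radius 10 about the origin confirm a matching
# candidate and reject a distant one:
pts = [(10 * math.cos(t / 10.0), 10 * math.sin(t / 10.0)) for t in range(63)]
# verify_circle(pts, 0.0, 0.0, 10.0) -> True
# verify_circle(pts, 50.0, 50.0, 10.0) -> False
```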
In the present invention, from the element information and finished-product information returned by all the intelligent vision robots 21, 24, 25, 27, 28, 29, the control platform 1 dynamically updates the number of elements currently on the conveyer 23, the number of processed finished products currently on the conveyer 23, and the cumulative number of finished products processed. When the number of elements on the conveyer 23 exceeds a threshold, the control platform 1 sends a pause-feeding order to the feeding intelligent vision robot 21 and has it assist the blanking intelligent robot 27 with finished-product blanking; when the number of elements on the conveyer 23 falls below a certain threshold, the control platform 1 notifies the blanking intelligent robot 27 to assist the feeding robot 21 with feeding; and when the number of processed finished products on the conveyer 23 exceeds a certain threshold, the control platform 1 notifies the feeding intelligent vision robot 21 to pause its feeding work and assist the blanking intelligent vision robot 27 with finished-product blanking. When the quantity of qualified finished products meets the order requirement, the control platform 1 sends a task-complete instruction to all intelligent vision robots 21, 24, 25, 27, 28, 29 and waits for the conveyer-23 clean-up-complete instruction sent by the feeding-area intelligent vision robot 21; on receiving that instruction, the control platform 1 sends a halt instruction to the vision robot workstation 2, all intelligent vision robots 21, 24, 25, 27, 28, 29 enter the standby state, and the conveyer 23 stops.
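The dynamic-regulation policy just described can be sketched as a simple decision function mapping conveyer state to module assignments for the two mobile robots. The threshold values are illustrative assumptions, since the patent leaves them unspecified:

```python
# Sketch of the dynamic-regulation policy: the control platform reassigns the
# feeding robot 21 and blanking robot 27 between the feeding and blanking
# modules based on conveyer state. Threshold values are assumptions.

ELEM_HIGH, ELEM_LOW, FINISHED_HIGH = 20, 5, 8   # assumed thresholds

def assign_modules(elements_on_conveyer, finished_on_conveyer):
    """Return (module of feeding robot 21, module of blanking robot 27)."""
    if elements_on_conveyer > ELEM_HIGH or finished_on_conveyer > FINISHED_HIGH:
        # pause feeding; robot 21 assists robot 27 with finished-product blanking
        return ("blanking", "blanking")
    if elements_on_conveyer < ELEM_LOW:
        # robot 27 assists robot 21 with feeding
        return ("feeding", "feeding")
    return ("feeding", "blanking")  # normal division of labour

# assign_modules(25, 0) -> ("blanking", "blanking")
# assign_modules(2, 0)  -> ("feeding", "feeding")
# assign_modules(10, 3) -> ("feeding", "blanking")
```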

Claims (7)

1. A model system based on a swarm vision-robot collaborative assembly method, comprising a control platform (1), a vision robot workstation (2), and a wireless module (3) that carries the communication between the control platform (1) and the vision robot workstation (2); the control platform (1) calculates, from the input order information, the number and types of components required and the number of intelligent vision robots to put to work, and dynamically adjusts the number of intelligent vision robots operating in different modules according to the actual working conditions;
The model system comprises:
a computer serving as the control platform (1);
a wireless transceiver serving as the wireless module (3);
a vision robot workstation (2) whose main parts are a swarm of intelligent vision robots, an annular conveyer (23), an element storage area (22), and a products storage area (26);
It is characterized in that:
In the vision robot workstation (2), the elements to be assembled comprise a red base (221), a black core (222), a spring (223), and a blue cap (224);
During workpiece identification, an intelligent vision robot captures an image through its vision sensor, extracts the colour, pixel area, circle radius, and circle count, and then compares these features against the templates in sequence. It first determines whether the colour is red, the red pixel area exceeds a threshold, and the circle radius is within the specified range; if satisfied, the target in the image is a red base (221). If not, it next determines whether the colour is black and the black area, mean grey level, and circle radius are within the specified ranges; if satisfied, the target in the image is a black core (222). If not, it determines whether the circle radius, circle count, and mean grey level are within the specified ranges; if satisfied, the target in the image is a spring (223). If not, it determines whether the colour is blue, the blue pixel area is below a threshold, and the circle radius is below a threshold; if satisfied, the target in the image is a blue cap (224). If not, it determines whether the colour is blue and both the blue pixel area and the circle radius exceed their thresholds; if satisfied, the target in the image is a finished product (236); otherwise no template is matched. The feeding intelligent vision robot (21) and the assembling intelligent vision robots (24, 25, 28, 29) need to identify the red base (221), black core (222), spring (223), and blue cap (224), while the blanking intelligent vision robot (27) only needs to identify finished products (236).
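The sequential template test of claim 1 can be sketched as a decision cascade; all numeric thresholds and ranges below are invented placeholders, since the claim only states that thresholds and "specified ranges" exist:

```python
def classify(color, pixel_area, mean_gray, radius, circle_count):
    """Decision cascade over colour, pixel area, mean grey level, circle
    radius and circle count; every numeric value is illustrative only."""
    area_thr = 500            # minimum red pixel area
    r_lo, r_hi = 10, 40       # "specified range" for the circle radius
    g_lo, g_hi = 60, 180      # "specified range" for the mean grey level
    c_lo, c_hi = 3, 12        # "specified range" for the circle count
    blue_area = 300           # blue pixel-area threshold
    blue_r = 15               # blue circle-radius threshold

    if color == "red" and pixel_area > area_thr and r_lo <= radius <= r_hi:
        return "red base (221)"
    if color == "black" and r_lo <= radius <= r_hi and g_lo <= mean_gray <= g_hi:
        return "black core (222)"
    if (r_lo <= radius <= r_hi and c_lo <= circle_count <= c_hi
            and g_lo <= mean_gray <= g_hi):
        return "spring (223)"
    if color == "blue" and pixel_area < blue_area and radius < blue_r:
        return "blue cap (224)"
    if color == "blue" and pixel_area > blue_area and radius > blue_r:
        return "finished product (236)"
    return "no match"
```

Because the tests run in a fixed order, an image that satisfies an earlier template never falls through to a later one, matching the if-not-satisfied-continue structure of the claim.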
2. The model system according to claim 1, characterised in that:
in the vision robot workstation (2), the conveyer (23) is arranged in the centre; the regions at the two ends of the long axis of the conveyer (23) are the feeding area (231) and the discharging area (235) respectively, and the regions parallel to the two sides of the long axis are assembly area one (233) and assembly area two (238); the element storage area (22) is arranged near the feeding area (231), and the products storage area (26) is arranged near the discharging area (235); the feeding intelligent vision robot (21) of the feeding area (231) and the blanking intelligent vision robot (27) of the discharging area (235) both have a feeding module and a blanking module, and the control platform (1) dynamically adjusts the number of intelligent vision robots operating in these two modules according to the real-time assembly situation; the assembling intelligent vision robots (24, 25, 28, 29) of assembly area one (233) and assembly area two (238) have only an assembling module; each intelligent vision robot is equipped with a wireless module (211) for communication between the control platform (1) and the intelligent vision robot, and a vision sensor (213) for path finding and element identification.
3. The model system according to claim 1, characterised in that each class of intelligent vision robot completes its task independently without interfering with the others; when the assembly object changes, the control platform (1) sends a control instruction to the vision robot workstation (2) to make the vision robot workstation (2) run the assembly program for the new object.
4. The model system according to claim 1, characterised in that the vision robot workstation (2) comprises an annular conveyer (23), along which a feeding area (231), assembly areas (233, 238), and a discharging area (235) are arranged in sequence; the element storage area (22) is arranged near the feeding area (231), and the products storage area (26) is arranged near the discharging area (235); the feeding intelligent vision robot (21) of the feeding area (231) and the blanking intelligent vision robot (27) of the discharging area (235) both have a feeding module and a blanking module, and the control platform (1) dynamically adjusts the number of intelligent vision robots operating in these two modules according to the real-time assembly situation; the assembling intelligent vision robots of the assembly areas have only an assembling module; each intelligent vision robot is equipped with a wireless module (211) for communication between the control platform (1) and the intelligent vision robot, and a vision sensor (213) for path finding and element identification; the collaborative assembly method includes collaboration in four aspects: collaboration of all intelligent vision robots with the start and stop of the conveyer (23), collaboration between the feeding intelligent vision robot (21) and the conveyer (23), collaboration between the assembling intelligent vision robots (24, 25, 28, 29) and the conveyer (23), and collaboration between the blanking intelligent vision robot (27) and the conveyer (23).
5. The model system according to claim 1, characterised in that the process by which the control platform (1) dynamically adjusts the number of intelligent vision robots operating in different modules according to the actual working conditions is:
the control platform (1) sends a start-work command to the vision robot workstation (2) and makes the vision robot workstation (2) run the program for the corresponding assembly object; all the intelligent vision robots start working at the same time; the feeding intelligent vision robot (21) identifies its path from the image information returned by its vision sensor (213), moves to the element storage area (22) to pick up the required elements, and then moves to the feeding area (231) to start feeding; an assembling intelligent vision robot identifies the required element (232) through its vision sensor (242) and places the finished product (236) on the conveyer (23); the blanking intelligent vision robot (27) identifies a qualified finished product (236) through its vision sensor (273), grabs it off the conveyer (23), and places it in the products storage area (26); at each step of the work, every intelligent vision robot sends its job information to the control platform (1) through the wireless module (3); the control platform (1) integrates the information to obtain the number of elements on the conveyer, the number of finished products on the conveyer, and the cumulative number of qualified finished products, and according to this information dynamically adjusts the number of intelligent vision robots and the number of intelligent vision robots operating in the feeding and blanking modules, so as to better complete the assembly task.
6. The model system according to claim 1, characterised in that the feeding process of the feeding intelligent vision robot (21) coordinates the feeding with the speed of the conveyer (23); the assembly process of the assembling intelligent vision robots (24, 25, 28, 29) coordinates the grasping of the required elements for assembly with the speed of the conveyer (23); and the blanking process of the blanking intelligent vision robot (27) coordinates the grabbing down of finished products with the speed of the conveyer (23).
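Coordinating a grab with the conveyer speed amounts to timing the gripper against the element's arrival; a minimal sketch, with invented function names and units (the patent does not specify how the timing is computed):

```python
def intercept_delay(element_pos_m, grip_pos_m, belt_speed_mps):
    """Seconds until an element moving with the belt reaches the robot's
    grip position; a grab scheduled after this delay meets the element
    as it arrives. All names and units are illustrative."""
    if belt_speed_mps <= 0:
        raise ValueError("conveyer must be moving")
    distance = grip_pos_m - element_pos_m
    if distance < 0:
        return None  # the element has already passed this station
    return distance / belt_speed_mps
```

For example, an element 0.8 m upstream of the gripper on a belt moving at 0.4 m/s arrives after 2 s.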
7. The model system according to claim 1, characterised in that each time the blanking intelligent vision robot (27) completes the blanking of a finished product it sends the blanking information to the control platform (1); when the finished product is qualified, the control platform increments the count of qualified finished products (261) by 1 and the product is placed in a box (271); when the finished product is unqualified, the blanking intelligent vision robot (27) grabs the unqualified finished product and places it in the waste area, and the control platform (1) notifies the feeding intelligent vision robot (21) to increment by 1 the quantity of each class of component (232, 234, 237, 239) that must be sorted; when the counted quantity of assembled qualified finished products (261) meets the quantity required by the order, the control platform (1) sends a task-complete instruction to the vision robot workstation (2); the assembling intelligent vision robots are responsible for clearing away the workpieces at their assembly positions (243), while the feeding intelligent vision robot (21) and the blanking intelligent vision robot (27) clear the unassembled elements from the conveyer (23) and return them to the element storage area (22), then move to the designated location and send a cleaning-complete instruction to the control platform (1); the control platform (1) then sends a halt instruction, all the intelligent vision robots enter the standby state, and the conveyer (23) stops moving.
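The counting in claim 7 can be sketched as a small bookkeeping class; the class, field, and method names are invented for illustration, as the patent only describes the counters themselves:

```python
class OrderTracker:
    """Qualified-product and re-sort bookkeeping from claim 7."""

    def __init__(self, order_quantity, component_types):
        self.order_quantity = order_quantity
        self.qualified = 0                      # qualified finished count
        # per-type count of components to re-sort after rejects
        self.resort = {c: 0 for c in component_types}

    def on_blanking(self, passed):
        """Called each time the blanking robot reports a finished product;
        returns True once the order quantity has been reached."""
        if passed:
            self.qualified += 1                 # product goes into the box
        else:
            # a reject consumed one of each component: schedule replacements
            for c in self.resort:
                self.resort[c] += 1
        return self.qualified >= self.order_quantity
```

A reject thus both diverts the product to the waste area (handled by the robot) and schedules one extra unit of every component for re-sorting.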
CN201611209458.3A 2016-12-23 2016-12-23 Group's visual machine collaborative assembly method and model system Expired - Fee Related CN106774208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611209458.3A CN106774208B (en) 2016-12-23 2016-12-23 Group's visual machine collaborative assembly method and model system

Publications (2)

Publication Number Publication Date
CN106774208A CN106774208A (en) 2017-05-31
CN106774208B true CN106774208B (en) 2017-12-26

Family

ID=58920344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611209458.3A Expired - Fee Related CN106774208B (en) 2016-12-23 2016-12-23 Group's visual machine collaborative assembly method and model system


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108459572A (en) * 2018-03-20 2018-08-28 广东美的制冷设备有限公司 Monitoring method, device, system, robot and air conditioner production equipment
EP3579126A1 (en) * 2018-06-07 2019-12-11 Kompetenzzentrum - Das virtuelle Fahrzeug Forschungsgesellschaft mbH Co-simulation method and device
CN109299720B (en) * 2018-07-13 2022-02-22 沈阳理工大学 Target identification method based on contour segment spatial relationship
CN109060823A (en) * 2018-08-03 2018-12-21 珠海格力智能装备有限公司 The thermal grease coating quality detection method and device of radiator
CN111843981B (en) * 2019-04-25 2022-03-11 深圳市中科德睿智能科技有限公司 Multi-robot cooperative assembly system and method
CN110561415A (en) * 2019-07-30 2019-12-13 苏州紫金港智能制造装备有限公司 Double-robot cooperative assembly system and method based on machine vision compensation
CN112157408A (en) * 2020-08-13 2021-01-01 盐城工学院 Industrial robot double-machine cooperation carrying system and method
CN112363470A (en) * 2020-11-05 2021-02-12 苏州工业园区卡鲁生产技术研究院 User-cooperative robot control system
CN112589401B (en) * 2020-11-09 2021-12-31 苏州赛腾精密电子股份有限公司 Assembling method and system based on machine vision
CN114115151A (en) * 2021-11-24 2022-03-01 山东哈博特机器人有限公司 Industrial robot cooperative assembly method and system based on MES
CN114161202A (en) * 2021-12-29 2022-03-11 武汉交通职业学院 Automatic industrial robot feeding and discharging system for numerical control machine tool

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104950684A (en) * 2015-06-30 2015-09-30 西安交通大学 Swarm robot collaborative scheduling measurement and control method and system platform
CN204725516U (en) * 2015-01-19 2015-10-28 西安航天精密机电研究所 A kind of single vision being applicable to pipelining coordinates multirobot navigation system
CN205734182U (en) * 2016-06-30 2016-11-30 长沙长泰机器人有限公司 For many group process equipment co-operating intelligent robot processing lines

Similar Documents

Publication Publication Date Title
CN106774208B (en) Group's visual machine collaborative assembly method and model system
CN107899814A (en) A kind of robot spraying system and its control method
CN106853433B (en) Intelligent automobile paint spraying method based on cloud computing
US20180243776A1 (en) Intelligent flexible hub paint spraying line and process
CN110281231B (en) Three-dimensional vision grabbing method for mobile robot for unmanned FDM additive manufacturing
CN109483573A (en) Machine learning device, robot system and machine learning method
CN104156726B (en) A kind of workpiece identification method and device based on geometric characteristic
CN101537618A (en) Visual system for ball picking robot in stadium
CN202924613U (en) Automatic control system for efficient loading and unloading work of container crane
CN104647377B (en) A kind of industrial robot based on cognitive system and control method thereof
CN111906788B (en) Bathroom intelligent polishing system based on machine vision and polishing method thereof
CN104299246B (en) Production line article part motion detection and tracking based on video
CN105500370B (en) A kind of robot off-line teaching programing system and method based on body-sensing technology
CN109328973A (en) A kind of intelligent system and its control method of tapping rubber of rubber tree
CN109926817A (en) Transformer automatic assembly method based on machine vision
CN108500979A (en) A kind of robot grasping means and its system based on camera communication connection
CN107481244B (en) Manufacturing method of visual semantic segmentation database of industrial robot
CN104458748A (en) Aluminum profile surface defect detecting method based on machine vision
CN114029951B (en) Robot autonomous recognition intelligent grabbing method based on depth camera
CN110040394A (en) A kind of interactive intelligent rubbish robot and its implementation
CN108038861A (en) A kind of multi-robot Cooperation method for sorting, system and device
CN106681508A (en) System for remote robot control based on gestures and implementation method for same
CN107344171A (en) A kind of low-voltage air switch the System of Sorting Components and method based on Robot Visual Servoing
CN107444644A (en) A kind of unmanned plane movement supply platform and unmanned plane for orchard operation
CN207823268U (en) Automatic spray apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171226

Termination date: 20211223