CN103994729A - Object detector, robot system and object detection method - Google Patents

Object detector, robot system and object detection method

Info

Publication number
CN103994729A
CN103994729A (application CN201310740930.6A)
Authority
CN
China
Prior art keywords
scanning
projection region
imaging
range image
composite image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310740930.6A
Other languages
Chinese (zh)
Inventor
一丸勇二 (Yuji Ichimaru)
近藤纯平 (Jumpei Kondo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yaskawa Electric Corp
Original Assignee
Yaskawa Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yaskawa Electric Corp
Publication of CN103994729A


Abstract

The invention provides an object detection device, a robot system, and an object detection method capable of greatly shortening detection processing and improving processing efficiency. A sensor unit (300) includes: a laser light source (321) that emits a slit-shaped laser beam (L); a rotating mirror (322) that moves the position at which the laser slit light (L) emitted from the laser light source (321) is projected onto a workpiece (W) in a scanning direction (A); a camera (310) that, while the rotating mirror (322) moves the projected position of the laser slit light (L) across the workpiece (W), continuously images the appearance of the workpiece (W) including the moving projected position at a desired frame rate and outputs a plurality of corresponding shooting frames (F); a range image generating unit (333) that uses the shooting frames (F) to generate one range image (DP) for each scan; and a composite image generating unit (336) that synthesizes a plurality of range images (DP) based on scans performed at mutually different times to generate a composite image (CP).

Description

Object detection device, robot system, and object detection method
Technical field
Embodiments of the present invention relate to an object detection device, a robot system, and an object detection method.
Background technology
Patent Document 1 describes a technique for detecting an object by the light-section method. In the light-section method, light emitted from a light source is moved in a scanning direction across the position at which it is projected onto an object, and the appearance of the object including the moving projected position is continuously imaged at desired intervals, whereby the three-dimensional shape of the object is detected.
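Although the patent text contains no code, the depth recovery that underlies the light-section method can be sketched as a simple triangulation between the laser plane and the camera ray. The geometry, variable names, and Python representation below are illustrative assumptions and are not taken from Patent Document 1.

```python
import math

def light_section_depth(pixel_x, focal_len_px, baseline_m, laser_angle_rad):
    """Minimal light-section triangulation sketch.

    pixel_x:         horizontal image coordinate of the laser line
                     (pixels, relative to the optical axis)
    focal_len_px:    camera focal length expressed in pixels
    baseline_m:      distance between camera center and laser emitter (m)
    laser_angle_rad: angle of the laser plane, measured from the
                     camera-laser baseline

    Returns the depth (m) of the surface point hit by the slit light.
    """
    # Angle of the camera ray, also measured from the baseline.
    camera_angle = math.pi / 2 - math.atan2(pixel_x, focal_len_px)
    ta, tb = math.tan(laser_angle_rad), math.tan(camera_angle)
    if abs(ta + tb) < 1e-9:
        raise ValueError("degenerate geometry: rays nearly parallel")
    # Classic two-angle triangulation of the lit surface point.
    return baseline_m * ta * tb / (ta + tb)
```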
Prior art documents
Patent documents
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2009-250844
Summary of the invention
In the light-section method described above, when the light is moved in the scanning direction across the projected position on the object and continuous imaging is performed, the imaging must be carried out exhaustively while the light crosses substantially the entire region of the object along the scanning direction. Consequently, the continuous imaging takes a long time, and it is difficult to make the detection processing efficient.
The present invention has been made in view of this problem, and an object thereof is to provide an object detection device, a robot system, and an object detection method that can greatly shorten the detection processing and thereby improve processing efficiency.
To solve the above problem, according to one aspect of the present invention, an object detection device that detects an object to be detected is applied, the device comprising: a laser light source that emits a slit-shaped laser beam; a scanning unit that moves the position at which the laser emitted from the laser light source is projected onto the object in a prescribed scanning direction; a camera that, while the scanning unit moves the projected position of the laser on the object, continuously images the appearance of the object including the moving projected position at desired intervals and outputs a plurality of corresponding image data; a range image generating unit that, for each imaging operation in which the camera performs the continuous imaging a plurality of times, generates one three-dimensional shape image of the object from the plurality of image data; and a composite image generating unit that synthesizes a plurality of the three-dimensional shape images generated by the range image generating unit on the basis of a plurality of the imaging operations performed at mutually different times, and generates a composite image for detecting the three-dimensional shape of the object.
Further, to solve the above problem, according to another aspect of the present invention, an object detection method is applied, comprising the steps of: emitting a slit-shaped laser beam from a light source; moving the position at which the emitted laser is projected onto an object in a prescribed scanning direction; while the projected position of the laser moves, continuously imaging the appearance of the object including the moving projected position at desired intervals and outputting a plurality of corresponding image data; for each imaging operation in which the continuous imaging is performed a plurality of times, generating one three-dimensional shape image of the object from the plurality of image data; and synthesizing a plurality of the three-dimensional shape images generated on the basis of a plurality of the imaging operations performed at mutually different times, thereby detecting the three-dimensional shape of the object.
Effect of the invention
According to the present invention, the detection processing can be greatly shortened, and the processing efficiency can therefore be improved.
Brief description of the drawings
Fig. 1 is a system configuration diagram showing an example of the overall configuration of a robot system according to an embodiment.
Fig. 2 is a side view showing, in partial see-through, an example of the configuration of a sensor unit.
Fig. 3 is a top view showing, in partial see-through, an example of the configuration of the sensor unit.
Fig. 4 is an explanatory diagram for describing an example of the configuration of the sensor unit.
Fig. 5 is an explanatory diagram for describing an example of the configuration of the sensor unit.
Fig. 6 is an explanatory diagram showing an example of a shooting frame output from a camera.
Fig. 7 is a block diagram showing an example of the functional configuration of a sensor controller.
Fig. 8 is an explanatory diagram for describing the processing executed under the control of a cooperation control unit in the first scan.
Fig. 9 is an explanatory diagram for describing the processing executed under the control of the cooperation control unit in the first scan.
Fig. 10 is an explanatory diagram for describing the processing executed under the control of the cooperation control unit in the N-th scan.
Fig. 11 is an explanatory diagram for describing the processing executed under the control of the cooperation control unit in the N-th scan.
Fig. 12 is an explanatory diagram for describing the processing executed under the control of the cooperation control unit in the (N+1)-th scan.
Fig. 13 is an explanatory diagram for describing the processing executed under the control of the cooperation control unit in the (N+1)-th scan.
Fig. 14 is an explanatory diagram showing an example of a composite image generated by a composite image generating unit.
Fig. 15 is an explanatory diagram for describing an example of removal processing performed by the composite image generating unit.
Fig. 16 is an explanatory diagram for describing an example of removal processing performed by the composite image generating unit.
Fig. 17 is an explanatory diagram showing an example of the composite image generated by the composite image generating unit each time a scan ends.
Fig. 18 is an explanatory diagram showing an example of the composite image generated by the composite image generating unit each time a scan ends.
Fig. 19 is an explanatory diagram showing an example of the composite image generated by the composite image generating unit each time a scan ends.
Fig. 20 is a flowchart showing an example of the control procedure of an object detection method executed by the sensor controller.
description of reference numerals
Embodiment
Hereinafter, an embodiment of the present invention will be described with reference to the accompanying drawings.
First, with reference to Fig. 1, an example of the overall configuration of the robot system of the present embodiment will be described.
As shown in Fig. 1, the robot system 1 of the present embodiment has a stocker 20 (container), a robot 100, a conveyor 30, a sensor unit 300 (object detection device), and a robot controller 200 (controller).
The stocker 20 is, for example, a box-shaped container made of resin, metal, or the like, and is arranged near a pedestal 21 on which the robot 100 is installed. A plurality of workpieces W (objects to be detected) are placed randomly (in bulk) inside the stocker 20. Note that the workpieces W need not be placed in a container such as the stocker 20; they may instead be placed on a suitable mounting surface such as the top of the pedestal 21. The workpieces W and their shapes are not particularly limited and may be of various kinds; the shapes may be identical or similar to one another, or may differ from one another. In the drawings, however, the shape of each workpiece W is simplified and represented as an ellipse.
The robot 100 performs a transfer operation (so-called bin picking) in which the plurality of workpieces W in the stocker 20 are held and transferred one after another. The robot 100 is not particularly limited as long as it can perform such a transfer operation; for example, a vertical articulated robot or a horizontal articulated robot (SCARA robot) may be used. In this example, a vertical articulated robot is applied as the robot 100. The robot 100 has a base 101 fixed to a suitable stationary surface (for example, a floor) and an arm 102 rotatably mounted on the base 101. The arm 102 has a plurality of joints from the base 101 side toward the distal end opposite the base, and incorporates a plurality of servo motors (not shown) that drive the respective joints. A holding device 103 capable of holding a workpiece W is provided at the distal end of the arm 102.
The holding device 103 is not particularly limited as long as it can hold a workpiece W; for example, a gripping device that grips (holds) a workpiece W with finger members, or a suction device that attracts (holds) a workpiece W by air-driven suction, electromagnetic force, or the like, may be used. In this example, a gripping device is applied as the holding device 103, and the holding device 103 has a pair of finger members 103a capable of gripping a workpiece W. The pair of finger members 103a are driven by a suitable actuator (not shown) built into the holding device 103, and open and close by enlarging or reducing the gap between them. The holding device 103 is not limited to this structure; for example, a grasping device that has a plurality of finger members and grasps a workpiece W by swinging those finger members may also be used.
The robot 100 performs the transfer operation by gripping the workpieces W in the stocker 20 one by one with the finger members 103a of the holding device 103 and transferring each workpiece to a predetermined placement position on the conveyor 30.
The conveyor 30 is equipment that conveys the workpieces W placed at the predetermined placement position to the next process.
The sensor unit 300 detects each of the plurality of workpieces W in the stocker 20 by the light-section method, and detects the three-dimensional shape of each workpiece W (the three-dimensional outer shape, which here also includes the three-dimensional position and posture; the same applies below). The sensor unit 300 is supported by a suitable support member 50 so as to be positioned above the stocker 20. Alternatively, the sensor unit 300 may be mounted at a suitable position on the robot 100 (for example, the distal end of the arm 102). The sensor unit 300 is described in more detail below.
The robot controller 200 is composed of, for example, a computer having an arithmetic unit, a storage device, an input device, and the like, and is connected to the robot 100 and the sensor unit 300 so that they can communicate with one another. Based on the three-dimensional shape information representing the three-dimensional shape of each of the plurality of workpieces W in the stocker 20 detected by the sensor unit 300, the robot controller 200 controls the overall operation of the robot 100 (for example, the driving of each joint of the arm 102 and the opening and closing of the finger members 103a of the holding device 103). The robot controller 200 is described in more detail below.
Next, with reference to Figs. 2 to 5, an example of the configuration of the sensor unit 300 will be described.
As shown in Figs. 2 to 5, the sensor unit 300 has a laser scanner 320, a camera 310, and a sensor controller 330.
The laser scanner 320 has a laser light source 321, a rotating mirror 322 (scanning unit), a motor 323, and an angle detector 324. The laser light source 321 emits a slit-shaped laser beam L (hereinafter called the "laser slit light L"). The rotating mirror 322 receives the laser slit light L emitted from the laser light source 321 and reflects it toward the workpieces W and the like so that it is projected onto them. The motor 323 rotates the rotating mirror 322, thereby changing its rotation angle. By rotating the rotating mirror 322 with the motor 323, the position at which the laser slit light L is projected onto the workpieces W and the like can be moved within the full projection region TT in the direction indicated by arrow A (the prescribed scanning direction; hereinafter called the "scanning direction A"). The full projection region TT is made up of a plurality of projection regions onto which the laser is projected toward the workpieces W and the like. The angle detector 324 detects the rotation angle of the rotating mirror 322.
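As a rough illustration of the relation between mirror rotation and scan position (not part of the patent text; the flat-surface geometry and all names below are assumptions), note that rotating a mirror by an angle θ deflects the reflected beam by 2θ, so the slit line sweeps across the surface as the motor turns the mirror:

```python
import math

def projected_x(mirror_angle_rad, surface_dist_m):
    """Sketch: lateral position of the slit line on a flat surface.

    A mirror rotation of theta deflects the reflected beam by 2*theta
    (law of reflection). mirror_angle_rad is measured from the pose at
    which the beam leaves the mirror perpendicular to the surface,
    which lies surface_dist_m below the mirror.
    """
    return surface_dist_m * math.tan(2.0 * mirror_angle_rad)

# Example: stepping the mirror in 1-degree increments moves the slit
# line across the surface in the scanning direction A.
for step in range(5):
    theta = math.radians(step * 1.0)
    print(f"mirror {step} deg -> slit line at x = {projected_x(theta, 0.8):.3f} m")
```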
While the position at which the laser slit light L is projected onto the workpieces W and the like moves in the scanning direction A within the full projection region TT as described above, the camera 310 continuously images, at a desired frame rate (interval), the appearance of the workpieces W and the like in the full projection region TT including the moving projected position. Specifically, the camera 310 images continuously during the period in which the projected position of the laser slit light L moves in the scanning direction A from the end of the full projection region TT on the side opposite the scanning direction (the direction opposite to arrow A) until it reaches the end on the scanning direction A side (that is, while it crosses the entire full projection region TT). The camera 310 then outputs a plurality of shooting frames F (image data; see Fig. 6 described later) corresponding to this continuous imaging. Fig. 6 shows an example of a shooting frame F output from the camera 310. As shown in Fig. 6, a shooting frame F captures the appearance of the workpieces W and the like in the full projection region TT and the momentary position of the laser slit light L projected onto the workpieces W as it moves as described above.
The sensor controller 330 is composed of, for example, a computer having an arithmetic unit, a storage device, and the like, and controls the overall operation of the sensor unit 300. Based on the plurality of shooting frames F output from the camera 310, the sensor controller 330 detects each of the plurality of workpieces W in the stocker 20 and detects the three-dimensional shape of each. The sensor controller 330 is described in more detail below.
Next, with reference to Fig. 7, an example of the functional configuration of the sensor controller 330 will be described.
As shown in Fig. 7, the sensor controller 330 has a light source control unit 341, a motor control unit 342, an image acquisition unit 331, an image storage unit 332, a range image generating unit 333, a range image storage unit 334, a cooperation control unit 335, a detection unit 340, a composite image generating unit 336, and a composite image storage unit 337.
The light source control unit 341 controls the laser light source 321 so that it emits the laser slit light L. The motor control unit 342 receives the rotation angle information of the rotating mirror 322 detected by the angle detector 324 and, based on this information, controls the motor 323 to rotate the rotating mirror 322.
The image acquisition unit 331 acquires the plurality of shooting frames F output from the camera 310. The acquired shooting frames F are stored in the image storage unit 332.
For each scan (imaging operation) in which the camera 310 performs the continuous imaging a plurality of times, the range image generating unit 333 uses the plurality of shooting frames F of that scan stored in the image storage unit 332 to calculate, according to the principle of triangulation, the distance between the camera 310 and the workpieces W and the like. The range image generating unit 333 then generates one range image DP (three-dimensional shape image; see Figs. 9, 11, 13, etc. described later) in which the calculated distance information is included in the appearance image of the workpieces W and the like in the full projection region TT. In the following, for convenience of explanation, the shooting frame F corresponding to the first imaging in each scan is called "shooting frame F1", the frame corresponding to the second imaging "shooting frame F2", the frame corresponding to the third imaging "shooting frame F3", and so on.
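The per-scan assembly of a range image can be sketched as follows; the frame format, the brightest-pixel laser-line extraction, and the helper names are illustrative assumptions, with the triangulation itself left to a callable such as the one sketched earlier:

```python
import numpy as np

def laser_line_column(frame: np.ndarray) -> np.ndarray:
    """Per image row, the column where the slit light is brightest.

    Assumes `frame` is a grayscale image in which the projected laser
    line is the brightest feature in each row (an illustrative
    simplification of real peak detection).
    """
    return frame.argmax(axis=1)

def build_range_image(frames, depth_from_pixel):
    """Sketch of generating one range image DP from one scan's frames.

    frames:           list of 2-D numpy arrays (shooting frames F1, F2, ...)
    depth_from_pixel: callable (frame_index, column) -> depth, e.g. the
                      triangulation sketched earlier with the mirror
                      angle of that imaging baked in.
    Unimaged pixels are left as NaN ("no distance information").
    """
    h, w = frames[0].shape
    range_image = np.full((h, w), np.nan)  # the range image DP under construction
    for i, frame in enumerate(frames):
        cols = laser_line_column(frame)
        for row, col in enumerate(cols):
            # Each frame contributes depth only along its laser line.
            range_image[row, col] = depth_from_pixel(i, col)
    return range_image
```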
The cooperation control unit 335 controls the camera 310 and the range image generating unit 333 in cooperation so that, in each of a plurality of scans, the camera 310 images continuously a plurality of times and the range image generating unit 333 generates one range image DP from the plurality of shooting frames F obtained by that continuous imaging. The cooperation control unit 335 executes mutually different control processing in the first scan (the initial imaging operation, executed first among the plurality of scans) and in the second and subsequent scans (sparse imaging operations: the 2nd, 3rd, 4th, 5th, ... scans). Furthermore, among the second and subsequent scans, the cooperation control unit 335 executes mutually different control processing in the N-th scans (N being an even number of 2 or more: the 2nd, 4th, 6th, 8th, ... scans) and in the (N+1)-th scans (the 3rd, 5th, 7th, 9th, ... scans).
Below, the processing executed under the control of the cooperation control unit 335 in the first scan, in the N-th scan, and in the (N+1)-th scan will be described in order. In the following, for convenience of explanation, the full projection region TT is assumed to consist of 16 projection regions T1 to T16 of equal area (see Figs. 8, 10, 12, etc. described later); the number K of continuous imagings by the camera 310 in the first scan is 16 (K being a positive integer; here, K = 16); and the number K/n of continuous imagings by the camera 310 in the second and subsequent scans is 8 (n being an integer of 2 or more; here, n = 2). In practice, however, the full projection region TT consists of many more projection regions (for example, 480), and the number of continuous imagings by the camera 310 in the first scan and in the second and subsequent scans is correspondingly larger.
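The scheduling just described (all K regions in the first scan, then alternating halves of them) can be sketched as follows; the function name and the representation of imaging timings as region indices are assumptions made for illustration:

```python
def imaging_regions(scan_number: int, k: int = 16, n: int = 2) -> list:
    """Sketch: which projection regions (1-based indices into T1..Tk)
    trigger an imaging in a given scan.

    Scan 1 images all k regions; even-numbered scans image the odd
    regions T1, T3, ...; odd-numbered scans from the 3rd onward image
    the even regions T2, T4, ... (the n = 2 thinning in the text).
    """
    if scan_number == 1:
        return list(range(1, k + 1))         # 16 imagings: T1..T16
    if scan_number % 2 == 0:
        return list(range(1, k + 1, n))      # 8 imagings: T1, T3, ..., T15
    return list(range(2, k + 1, n))          # 8 imagings: T2, T4, ..., T16

for s in (1, 2, 3):
    print(f"scan {s}: regions {imaging_regions(s)}")
```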
First, with reference to Figs. 8 and 9, the processing executed under the control of the cooperation control unit 335 in the first scan will be described.
As shown in Figs. 8 and 9, in the first scan, under the control of the cooperation control unit 335, the camera 310 performs 16 continuous imagings covering all 16 projection regions T1 to T16 of the full projection region TT in order. That is, the camera 310 images the appearance of the workpieces W and the like in the full projection region TT, including the moving projected position of the laser slit light L, at the timing at which the projected position reaches the projection region T1, at the timing at which it reaches T2, at the timing at which it reaches T3, ..., and at the timings at which it reaches T14, T15, and T16. The camera 310 then outputs 16 shooting frames F1 to F16 corresponding to these 16 continuous imagings (see the upper part of Fig. 9).
Then, under the control of the cooperation control unit 335, the range image generating unit 333 uses the 16 shooting frames F1 to F16 of this first scan stored in the image storage unit 332 to generate one range image DP corresponding to all 16 projection regions T1 to T16 of the full projection region TT (see the lower part of Fig. 9). In the following, for convenience of explanation, the range image DP of the first scan is called "range image DP1". That is, the range image generating unit 333 uses the 16 shooting frames F1 to F16 of the first scan to calculate the distance between the camera 310 and the workpieces W and the like in the projection regions T1 to T16 (the entire full projection region TT). The range image generating unit 333 then generates the range image DP1, which includes the calculated distance information in the parts p1 to p16 of the appearance image of the workpieces W and the like in the full projection region TT corresponding to the projection regions T1 to T16 (that is, in the whole image pp corresponding to the entire full projection region TT). As shown in the lower part of Fig. 9, the range image DP1 contains distance information over the whole image pp of the appearance of the workpieces W and the like in the full projection region TT. Therefore, by using the range image DP1, the detection unit 340 can detect the three-dimensional shape of each of the plurality of workpieces W in the stocker 20 (described later). The range image DP1 generated by the range image generating unit 333 in the first scan is stored in the range image storage unit 334 and output to the detection unit 340.
Next, with reference to Figs. 10 and 11, the processing executed under the control of the cooperation control unit 335 in the N-th scan will be described.
As shown in Figs. 10 and 11, in the N-th scan, under the control of the cooperation control unit 335, the camera 310 performs 8 continuous imagings in order over the 8 projection regions T1, T3, T5, T7, T9, T11, T13, and T15 among the 16 projection regions T1 to T16 of the full projection region TT, these regions being thinned out so that their total area is 1/2 of the full projection region TT. That is, the camera 310 images the appearance of the workpieces W and the like in the full projection region TT, including the moving projected position of the laser slit light L, at the timings at which the projected position reaches the projection regions T1, T3, T5, T7, T9, T11, T13, and T15, respectively. The camera 310 then outputs 8 shooting frames F1 to F8 corresponding to these 8 continuous imagings (see the upper part of Fig. 11).
Then, under the control of the cooperation control unit 335, the range image generating unit 333 uses the 8 shooting frames F1 to F8 of this N-th scan stored in the image storage unit 332 to generate one range image DP corresponding to the 8 projection regions T1, T3, T5, T7, T9, T11, T13, and T15 among the 16 projection regions T1 to T16 of the full projection region TT (see the lower part of Fig. 11). In the following, for convenience of explanation, the range image DP of the N-th scan is called "range image DPN". That is, the range image generating unit 333 uses the 8 shooting frames F1 to F8 of the N-th scan to calculate the distance between the camera 310 and the workpieces W and the like in the projection regions T1, T3, T5, T7, T9, T11, T13, and T15. The range image generating unit 333 then generates the range image DPN, which includes the calculated distance information in the parts p1, p3, p5, p7, p9, p11, p13, and p15 of the appearance image of the workpieces W and the like in the full projection region TT corresponding to those projection regions. As shown in the lower part of Fig. 11, the range image DPN contains distance information in the parts p1, p3, p5, p7, p9, p11, p13, and p15 of the appearance image of the workpieces W and the like in the full projection region TT. The range image DPN generated by the range image generating unit 333 in the N-th scan is stored in the range image storage unit 334.
Next, with reference to Figs. 12 and 13, the processing executed under the control of the cooperation control unit 335 in the (N+1)-th scan will be described.
As shown in Figs. 12 and 13, in the (N+1)-th scan, under the control of the cooperation control unit 335, the camera 310 performs 8 continuous imagings in order over the 8 projection regions T2, T4, T6, T8, T10, T12, T14, and T16 among the 16 projection regions T1 to T16 of the full projection region TT; these 8 regions are thinned out so that their total area is 1/2 of the full projection region TT and are arranged so as to fill the gaps between the 8 projection regions T1, T3, T5, T7, T9, T11, T13, and T15. That is, the camera 310 images the appearance of the workpieces W and the like in the full projection region TT, including the moving projected position of the laser slit light L, at the timings at which the projected position reaches the projection regions T2, T4, T6, T8, T10, T12, T14, and T16, respectively. In other words, under the control of the cooperation control unit 335, the imaging timings of the camera 310 in one scan (one operation) of the (N+1)-th scan and the imaging timings of the camera 310 in one scan of the N-th scan are staggered so as to interleave with each other, taking the start of each scan as the reference. The camera 310 then outputs 8 shooting frames F1 to F8 corresponding to these 8 continuous imagings (see the upper part of Fig. 13).
Then, under the control of the cooperation control unit 335, the range image generating unit 333 uses the 8 shooting frames F1 to F8 of this (N+1)-th scan stored in the image storage unit 332 to generate one range image DP corresponding to the 8 projection regions T2, T4, T6, T8, T10, T12, T14, and T16 among the 16 projection regions T1 to T16 of the full projection region TT (see the lower part of Fig. 13). In the following, for convenience of explanation, the range image DP of the (N+1)-th scan is called "range image DPN1". That is, the range image generating unit 333 uses the 8 shooting frames F1 to F8 of the (N+1)-th scan to calculate the distance between the camera 310 and the workpieces W and the like in the projection regions T2, T4, T6, T8, T10, T12, T14, and T16. The range image generating unit 333 then generates the range image DPN1, which includes the calculated distance information in the parts p2, p4, p6, p8, p10, p12, p14, and p16 of the appearance image of the workpieces W and the like in the full projection region TT corresponding to those projection regions. As shown in the lower part of Fig. 13, the range image DPN1 contains distance information in the parts p2, p4, p6, p8, p10, p12, p14, and p16 of the appearance image of the workpieces W and the like in the full projection region TT. The range image DPN1 generated by the range image generating unit 333 in the (N+1)-th scan is stored in the range image storage unit 334.
When the N-th scan is taken to correspond to the first sparse imaging operation, the 8 projection regions T1, T3, T5, T7, T9, T11, T13, and T15 correspond to the divided projection regions and to the first divided projection regions, the one range image DPN corresponding to these 8 projection regions corresponds to the first three-dimensional shape image, the (N+1)-th scan corresponds to the second sparse imaging operation, the 8 projection regions T2, T4, T6, T8, T10, T12, T14, and T16 correspond to the divided projection regions and to the second divided projection regions, and the one range image DPN1 corresponding to these 8 projection regions corresponds to the second three-dimensional shape image. Conversely, when the (N+1)-th scan is taken to correspond to the first sparse imaging operation, the 8 projection regions T2, T4, T6, T8, T10, T12, T14, and T16 correspond to the divided projection regions and to the first divided projection regions, the one range image DPN1 corresponding to them corresponds to the first three-dimensional shape image, the N-th scan corresponds to the second sparse imaging operation, the 8 projection regions T1, T3, T5, T7, T9, T11, T13, and T15 correspond to the divided projection regions and to the second divided projection regions, and the one range image DPN corresponding to them corresponds to the second three-dimensional shape image.
As shown in Fig. 7, after the first scan ends, the detection unit 340 acquires the range image DP1 of the first scan output from the range image generating unit 333. Then, using the acquired range image DP1, the detection unit 340 detects the three-dimensional shape of each of the plurality of workpieces W in the stocker 20. Thereafter, each time one of the second and subsequent scans ends, the detection unit 340 acquires from the composite image generating unit 336 the composite image CP formed, as described below, by synthesizing the range image DP based on the latest scan with the range image DP based on the scan immediately preceding it. Then, using the acquired composite image CP, the detection unit 340 detects the three-dimensional shape of each of the plurality of workpieces W in the stocker 20. A detection signal representing the detection result of the detection unit 340 is output to the robot controller 200.
The robot controller 200 thus acquires the detection signal output from the detection unit 340. The robot controller 200 then operates the robot 100 so as to hold and transfer one workpiece W (for example, the workpiece W that is easiest to hold) based on the three-dimensional shape information represented by the acquired detection signal. In this example, the case where the robot 100 transfers one workpiece W each time a scan ends (that is, scan → transfer one workpiece W → scan → transfer one workpiece W → ...) is described, but the disclosed embodiment is not limited to this. For example, the robot 100 may transfer a plurality of (for example, two) workpieces W in succession each time a scan ends (that is, scan → transfer one workpiece W → transfer one workpiece W → scan → transfer one workpiece W → transfer one workpiece W → ...).
Each time one of the second and subsequent scans ends, the composite image generating unit 336 synthesizes a plurality of (in this example, two) range images DP stored in the range image storage unit 334 that are based on a plurality of (in this example, two) scans performed at mutually different times, thereby generating a composite image CP (see Fig. 14 described later). The composite image CP is an image for detecting the three-dimensional shape of each of the plurality of workpieces W in the stocker 20. Specifically, each time one of the second and subsequent scans ends, the composite image generating unit 336 synthesizes the range image DP based on the latest scan with the range image DP based on the scan immediately preceding it to generate the composite image CP. That is, the composite image generating unit 336 synthesizes the parts of the range image DP of the latest scan that contain distance information with the corresponding parts of the range image DP of the immediately preceding scan. In this way, the parts of the range image DP of the latest scan that do not contain distance information are supplemented with the parts of the range image DP of the immediately preceding scan that do contain distance information, and the composite image generating unit 336 thereby generates the composite image CP. As shown in Fig. 14, the composite image CP contains distance information over the whole image pp of the appearance of the workpieces W and the like in the full projection region TT. Therefore, by using the composite image CP, the detection unit 340 can detect the three-dimensional shape of each of the plurality of workpieces W in the stocker 20.
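Continuing the NaN-for-missing-data convention of the earlier sketch (an assumption, not the patent's representation), the supplementing step can be illustrated as:

```python
import numpy as np

def composite_image(dp_latest: np.ndarray, dp_previous: np.ndarray) -> np.ndarray:
    """Sketch: generate a composite image CP from two range images DP.

    Pixels of the latest range image that contain distance information
    are kept; pixels without it (NaN here) are supplemented from the
    range image of the immediately preceding scan.
    """
    cp = dp_latest.copy()
    missing = np.isnan(cp)
    cp[missing] = dp_previous[missing]
    return cp

# Usage (illustrative): cp = composite_image(dpn1, dpn) after the
# (N+1)-th scan, where dpn1 and dpn are the two stored range images.
```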
Here, in the present embodiment, as described above, the range image DP1 of the first scan is used after the first scan ends; thereafter, each time one of the second and subsequent scans ends, the robot 100 transfers one workpiece W based on the three-dimensional shape information detected by the detection unit 340 using the composite image CP formed by synthesizing the range image DP of the latest scan with the range image DP of the immediately preceding scan. Consequently, when the robot 100 transfers one workpiece W in this way, the range image DP of the scan preceding the latest scan carried out immediately after the transfer does not reflect that transfer (that is, it still shows the transferred workpiece W in its original position). Therefore, if the composite image generating unit 336 were to use the range image DP of the scan preceding the latest scan unconditionally and as-is, synthesizing it with the range image DP of the latest scan to generate the composite image CP, the resulting composite image CP would contain low-reliability content, which is undesirable.
For this reason, in the present embodiment, when the robot 100 transfers one workpiece W as described above, the composite image generating unit 336 synthesizes the range image DP of the latest scan carried out immediately after the transfer with an image obtained by removing, from the range image DP of the scan preceding that latest scan, the part corresponding to the transferred workpiece W, and thereby generates the composite image CP. That is, the composite image generating unit 336 removes the part corresponding to the transferred workpiece W from the range image DP of the scan preceding the latest scan. The composite image generating unit 336 then synthesizes the parts containing distance information in the range image DP of the latest scan with the parts containing distance information in the range image DP, after the removal processing, of the scan preceding the latest scan, and generates the composite image CP.
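A sketch of that removal step, again with NaN marking missing data; how the transferred workpiece's region is delimited is not specified in the text, so `workpiece_mask` is a hypothetical input:

```python
import numpy as np

def remove_transferred_workpiece(dp_previous: np.ndarray,
                                 workpiece_mask: np.ndarray) -> np.ndarray:
    """Sketch: blank out the region of the transferred workpiece.

    workpiece_mask is a boolean array, True where the range image shows
    the workpiece that the robot has just transferred. The returned
    image (DP' in the text) has missing data (NaN) there, so that
    region is excluded from the next composite image CP and from the
    detection performed on it.
    """
    dp_removed = dp_previous.copy()
    dp_removed[workpiece_mask] = np.nan
    return dp_removed
```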
For example, as shown in Fig. 15, the following situation can occur: after the (N+1)-th (for example, third) scan ends, the robot 100 transfers a certain workpiece W (here called "workpiece Wa") based on the three-dimensional shape information detected by the detection unit 340 using the composite image CP (here called "composite image CPN1") formed by synthesizing the range image DPN1 of the (N+1)-th scan with the range image DPN of the N-th (for example, second) scan immediately preceding it. In the example shown in Fig. 15, for convenience of explanation, it is assumed that the robot 100 does not transfer any workpiece W between the N-th scan and the (N+1)-th scan. In this case, the part corresponding to the transferred workpiece Wa is removed from the range image DPN1 of the (N+1)-th scan. In Fig. 15, in the range image after this removal processing (here called "range image DPN1'"), the part with missing data (the part corresponding to the removed workpiece Wa) is denoted by symbol Pa. Then, the parts p1, p3, p5, p7, p9, p11, p13, and p15 containing distance information in the range image DP of the (N+2)-th (for example, fourth) scan immediately following the (N+1)-th scan (here called "range image DPN2") are synthesized with the parts p2, p4, p6, p8, p10, p12, p14, and p16 containing distance information in the range image DPN1' based on the (N+1)-th scan after the removal processing, thereby generating a composite image CP (here called "composite image CPN2"). In this composite image CPN2, there is a part Paa with missing data caused by the part Pa of missing data in the range image DPN1' based on the (N+1)-th scan after the removal processing; this part Paa is excluded from the detection performed by the detection unit 340 using this composite image CPN2.
Furthermore, when the robot 100 transfers a workpiece W after the scan preceding the latest scan and before the latest scan, the arrangement of the workpieces W may change, for example because a workpiece W that was not transferred shifts or the pile collapses. In such a case, the part where the arrangement of the workpieces W has changed may extend over a large area. The range image DP of the scan preceding the latest scan, which is to be synthesized with the range image DP of the latest scan, does not reflect this change in the arrangement of the workpieces W (that is, it shows the original arrangement before the change). Therefore, if the composite image generating unit 336 were to generate the composite image CP for the part where the arrangement of the workpieces W has changed, the resulting composite image CP would contain low-reliability content, which is undesirable.
For this reason, in the present embodiment, when it is inferred that the arrangement of the workpieces W has changed, specifically, when the deviation in data content between the range image DP of the latest scan and the composite image CP generated based on the scans preceding it exceeds a prescribed threshold at a corresponding position (or a position near it; the same applies below), the composite image generating unit 336 synthesizes the image obtained by removing, from the range image DP of the latest scan, the position where the deviation in data content exceeds the threshold with the image obtained by removing the same position from the range image DP of the scan preceding the latest scan, and thereby generates the composite image CP. That is, the composite image generating unit 336 removes the position where the deviation in data content exceeds the threshold from the range image DP of the latest scan, and also removes that position from the range image DP of the scan preceding the latest scan. The composite image generating unit 336 then synthesizes the parts containing distance information in the range image DP of the latest scan after the removal processing with the parts containing distance information in the range image DP of the preceding scan after the removal processing, and generates the composite image CP. When the position where the deviation in data content exceeds the threshold is larger than a prescribed range, the processing may instead be restarted from the first scan without performing the removal processing.
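A sketch of that consistency check follows; the per-pixel absolute difference, the threshold value, and the omission of any neighborhood search are all illustrative assumptions:

```python
import numpy as np

def remove_changed_regions(dp_latest: np.ndarray,
                           dp_previous: np.ndarray,
                           cp_reference: np.ndarray,
                           threshold: float):
    """Sketch: drop positions whose data content deviates beyond a threshold.

    Compares the latest range image DP against the previously generated
    composite image CP at positions where both carry distance
    information; where the deviation exceeds the threshold, the
    position is removed (set to NaN) from both the latest and the
    preceding range image before they are synthesized into the next
    composite image CP.
    """
    both_valid = ~np.isnan(dp_latest) & ~np.isnan(cp_reference)
    changed = both_valid & (np.abs(dp_latest - cp_reference) > threshold)
    latest_removed = dp_latest.copy()
    previous_removed = dp_previous.copy()
    latest_removed[changed] = np.nan
    previous_removed[changed] = np.nan
    return latest_removed, previous_removed

# Usage (illustrative, Fig. 16 sequence): remove the shifted part Rb from
# both DPN2 and DPN1, then merge with the composite step sketched earlier:
#   dpn2_r, dpn1_r = remove_changed_regions(dpn2, dpn1, cpn1, threshold=0.01)
#   cpn2 = composite_image(dpn2_r, dpn1_r)   # part Pb stays missing (NaN)
```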
For example, as shown in Fig. 16, the following situation can occur: between the (N+1)-th (for example, third) scan and the (N+2)-th (for example, fourth) scan immediately following it, a certain workpiece W (here called "workpiece Wb") shifts and the arrangement of the workpieces W changes; as a result, the deviation in data content between the range image DP of the (N+2)-th scan (here called "range image DPN2") and the composite image CP (here called "composite image CPN1") formed by synthesizing the range image DP of the (N+1)-th scan (here called "range image DPN1") with the range image DP of the N-th (for example, second) scan immediately preceding it (here called "range image DPN") exceeds the threshold in the corresponding part Rb. In the example shown in Fig. 16, for convenience of explanation, it is assumed that the robot 100 does not transfer any workpiece W between the N-th and the (N+1)-th scans, nor between the (N+1)-th and the (N+2)-th scans. In this case, the part Rb where the deviation in data content exceeds the threshold is removed from the range image DPN2 of the (N+2)-th scan, and the same part Rb is also removed from the range image DPN1 of the (N+1)-th scan. In Fig. 16, in the range images after this removal processing (here called "range image DPN1'" and "range image DPN2'"), the part with missing data (the part corresponding to the removed part Rb) is denoted by symbol Pb. Then, the parts p1, p3, p5, p7, p9, p11, p13, and p15 containing distance information in the range image DPN2' based on the (N+2)-th scan after the removal processing are synthesized with the parts p2, p4, p6, p8, p10, p12, p14, and p16 containing distance information in the range image DPN1' based on the (N+1)-th scan after the removal processing, thereby generating a composite image CP (here called "composite image CPN2"). In this composite image CPN2, there is a part Pb with missing data caused by the parts Pb of missing data in the range images DPN2' and DPN1' after the removal processing; this part Pb is excluded from the detection performed by the detection unit 340 using this composite image CP.
Next, with reference to Figs. 17 to 19, an example of the composite image CP generated by the composite image generating unit 336 each time a scan ends will be described.
In the example shown in Figs. 17 to 19, after the first scan ends, a certain workpiece W (here called "workpiece W1") is transferred by the robot 100 based on the three-dimensional shape information detected using the range image DP1 of the first scan. In this case, after the second scan ends, the part corresponding to the transferred workpiece W1 is removed from the range image DP1 of the first scan. In Fig. 17, in the range image after this removal processing (here called "range image DP1'"), the part with missing data (the part corresponding to the removed workpiece W1) is denoted by symbol P1. Then, the parts p1, p3, p5, p7, p9, p11, p13, and p15 containing distance information in the range image DP2 of the second scan are synthesized with the parts p2, p4, p6, p8, p10, p12, p14, and p16 containing distance information in the range image DP1' based on the first scan after the removal processing, thereby generating a composite image CP (here called "composite image CP2"). In this composite image CP2, there is a part P11 with missing data caused by the part P1 of missing data in the range image DP1' based on the first scan after the removal processing; this part P11 is excluded from the detection performed using this composite image CP2.
Thereafter, a certain workpiece W (here called "workpiece W2") is transferred by the robot 100 based on the three-dimensional shape information detected using the composite image CP2 generated for the second scan. In this case, after the third scan ends, the part corresponding to the transferred workpiece W2 is removed from the range image DP2 of the second scan. In Fig. 17, in the range image after this removal processing (here called "range image DP2'"), the part with missing data (the part corresponding to the removed workpiece W2) is denoted by symbol P2. Then, the parts p2, p4, p6, p8, p10, p12, p14, and p16 containing distance information in the range image DP3 of the third scan are synthesized with the parts p1, p3, p5, p7, p9, p11, p13, and p15 containing distance information in the range image DP2' based on the second scan after the removal processing, thereby generating a composite image CP (here called "composite image CP3"). In this composite image CP3, there is a part P22 with missing data caused by the part P2 of missing data in the range image DP2' based on the second scan after the removal processing; this part P22 is excluded from the detection performed using this composite image CP3. In this composite image CP3, the data in the part P11 that was missing in the composite image CP2 is restored.
Next, a certain workpiece W (here called "workpiece W3") is transferred by the robot 100 based on the three-dimensional shape information detected using the composite image CP3 generated for the third scan. In addition, between the third scan and the fourth scan, a certain workpiece W (here called "workpiece W0") shifts and the arrangement of the workpieces W changes, so that the deviation in data content between the range image DP4 of the fourth scan and the composite image CP3 generated for the third scan exceeds the threshold in the corresponding part R0. In this case, after the fourth scan ends, the part R0 where the deviation in data content exceeds the threshold is removed from the range image DP4 of the fourth scan. Moreover, the part corresponding to the transferred workpiece W3 and the part R0 where the deviation exceeds the threshold are removed from the range image DP3 of the third scan. In Fig. 18, in the range images after this removal processing (here called "range image DP4'" and "range image DP3'"), the part with missing data corresponding to the removed part R0 is denoted by symbol P0, and the part with missing data corresponding to the removed workpiece W3 is denoted by symbol P3. Then, the parts p1, p3, p5, p7, p9, p11, p13, and p15 containing distance information in the range image DP4' based on the fourth scan after the removal processing are synthesized with the parts p2, p4, p6, p8, p10, p12, p14, and p16 containing distance information in the range image DP3' based on the third scan after the removal processing, thereby generating a composite image CP (here called "composite image CP4"). In this composite image CP4, there is a part P0 with missing data caused by the parts P0 of missing data in the range images DP4' and DP3' after the removal processing, and a part P33 with missing data caused by the part P3 of missing data in the range image DP3' after the removal processing; these parts P0 and P33 are excluded from the detection performed using this composite image CP4. In this composite image CP4, the data in the part P22 that was missing in the composite image CP3 is restored.
Thereafter, a certain workpiece W (here called "workpiece W4") is transferred by the robot 100 based on the three-dimensional shape information detected using the composite image CP4 generated for the fourth scan. In this case, after the fifth scan ends, the part corresponding to the transferred workpiece W4 is further removed from the range image DP4' based on the fourth scan after the earlier removal processing. In Fig. 18, in the range image after this removal processing (here called "range image DP4''"), the part with missing data (the part corresponding to the removed workpiece W4) is denoted by symbol P4. Then, the parts p2, p4, p6, p8, p10, p12, p14, and p16 containing distance information in the range image DP5 of the fifth scan are synthesized with the parts p1, p3, p5, p7, p9, p11, p13, and p15 containing distance information in the range image DP4'' based on the fourth scan after the removal processing, thereby generating a composite image CP (here called "composite image CP5"). In this composite image CP5, there are parts P00 and P44 with missing data caused by the parts P0 and P4 of missing data in the range image DP4'' based on the fourth scan after the removal processing; these parts P00 and P44 are excluded from the detection performed using this composite image CP5. In this composite image CP5, the data in the part P33 that was missing in the composite image CP4 is restored; and of the part P0 that was missing in the composite image CP4, the data in the portion that had been removed from the range image DP3' based on the third scan is restored, leaving the part P44 described above.
Then, a certain workpiece W (here called "workpiece W5") is transferred by the robot 100 based on the three-dimensional shape information detected using the composite image CP5 generated for the fifth scan. In this case, after the sixth scan ends, the part corresponding to the transferred workpiece W5 is removed from the range image DP5 of the fifth scan. In Fig. 19, in the range image after this removal processing (here called "range image DP5'"), the part with missing data (the part corresponding to the removed workpiece W5) is denoted by symbol P5. Then, the parts p1, p3, p5, p7, p9, p11, p13, and p15 containing distance information in the range image DP6 of the sixth scan are synthesized with the parts p2, p4, p6, p8, p10, p12, p14, and p16 containing distance information in the range image DP5' based on the fifth scan after the removal processing, thereby generating a composite image CP (here called "composite image CP6"). In this composite image CP6, there is a part P55 with missing data caused by the part P5 of missing data in the range image DP5' based on the fifth scan after the removal processing; this part P55 is excluded from the detection performed using this composite image CP6. In this composite image CP6, the data in the parts P00 and P44 that were missing in the composite image CP5 is restored.
As described above, the sensor controller 330 is composed of, for example, a computer having an arithmetic unit, a storage device, and the like, and controls the overall operation of the sensor unit 300. Based on the plurality of shooting frames F output from the camera 310, the sensor controller 330 detects each of the plurality of workpieces W in the stocker 20 and detects the three-dimensional shape of each. As described with reference to Fig. 7, in this example the sensor controller 330 has the light source control unit 341, the motor control unit 342, the image acquisition unit 331, the image storage unit 332, the range image generating unit 333, the range image storage unit 334, the cooperation control unit 335, the detection unit 340, the composite image generating unit 336, and the composite image storage unit 337. The functions performed individually by these units may instead be performed collectively by the sensor controller 330. For example, the range image generating unit generating one three-dimensional shape image of the object from a plurality of image data for each imaging operation in which the camera images continuously a plurality of times corresponds to the sensor controller 330 generating one three-dimensional shape image of the object from the plurality of image data for each such imaging operation. Likewise, the composite image generating unit synthesizing a plurality of three-dimensional shape images generated by the range image generating unit on the basis of a plurality of imaging operations performed at mutually different times, and generating a composite image for detecting the three-dimensional shape of the object, corresponds to the sensor controller 330 synthesizing those three-dimensional shape images and generating the composite image.
Next, with reference to Fig. 20, an example of the control procedure of the object detection method executed by the sensor controller 330 of the sensor unit 300 will be described.
In Fig. 20, the processing shown in this flow is started, for example, when the power of the sensor unit 300 is turned on.
First, in step S10, the sensor controller 330 resets the value of a variable C, which counts the number of scans, to 1.
Next, in step S20, the sensor controller 330 controls the laser light source 321 through the light source control unit 341 so that it emits the laser slit light L.
Then, in step S30, the sensor controller 330 controls the motor 323 through the motor control unit 342, according to the rotation angle information of the rotating mirror 322 input from the angle detector 324, so that the rotating mirror 322 rotates. As a result, the laser slit light L, which is received and reflected by the rotating mirror 322 and projected onto the workpieces W and the like in the full view field TT, has its projection position moved in the scanning direction A.
Next, in step S40, the sensor controller 330 checks the value of the variable C at this point to determine which scan the current scan is. When the value of the variable C at this point is 1, that is, when the current scan is the first scan, the process proceeds to step S50.
In step S50, the sensor controller 330 controls the camera 310 through the cooperation control unit 335 so as to continuously shoot, 16 times at a desired frame rate (shooting timing), the appearance of the workpieces W and the like in the full view field TT including the projection position moved by the control in step S20, and to output the 16 corresponding shooting frames F1 to F16. That is, under the control of the cooperation control unit 335 in step S50, the camera 310 shoots the appearance of the workpieces W and the like in the full view field TT including the projection position at the timing at which the projection position of the moving laser slit light L reaches the view field T1, the timing at which it reaches the view field T2, the timing at which it reaches the view field T3, and so on up to the timings at which it reaches the view fields T14, T15, and T16, and outputs the 16 shooting frames F1 to F16 corresponding to these 16 continuous shots.
Then, in step S60, the sensor controller 330 acquires, through the image acquiring unit 331, the 16 shooting frames F1 to F16 output from the camera 310 in step S50. The acquired 16 shooting frames F1 to F16 are stored in the image storage unit 332.
Next, in step S70, the sensor controller 330 instructs the range image generating unit 333 through the cooperation control unit 335 to generate, using the 16 shooting frames F1 to F16 of this first scan stored in the image storage unit 332, one range image DP1 corresponding to all 16 view fields T1 to T16 included in the full view field TT. The method of generating the range image DP1 in the first scan is as described above. The generated range image DP1 is stored in the range image storage unit 334 and output to the detection unit 340.
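For the range image generation referred to here (described earlier in the document), a rough picture of a generic light-section computation may help. The following is a minimal sketch, not the embodiment's actual implementation: the calibration function pixel_to_depth, the brightness threshold, and the data layout are all assumptions for illustration.

```python
import numpy as np

def generate_range_image(frames, pixel_to_depth, threshold=128):
    """Minimal light-section sketch: one range image from K frames.

    frames: list of K grayscale images (H x W uint8), one per view field.
    pixel_to_depth: assumed calibration function mapping
        (frame_index, row, column) -> depth in millimeters.
    Returns an H x W float array; NaN where no laser line was found.
    """
    h, w = frames[0].shape
    range_image = np.full((h, w), np.nan)
    for k, frame in enumerate(frames):
        for col in range(w):
            column = frame[:, col]
            row = int(np.argmax(column))      # brightest pixel = laser line
            if column[row] >= threshold:      # skip columns without the line
                range_image[row, col] = pixel_to_depth(k, row, col)
    return range_image
```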
Then, in step S80, the sensor controller 330 acquires, through the detection unit 340, the range image DP1 output from the range image generating unit 333 in step S70. The sensor controller 330 then uses the acquired range image DP1, through the detection unit 340, to detect the three-dimensional shape of each of the plurality of workpieces W in the container 20. A detection signal representing the detection result of the detection unit 340 is output to the robot controller 200. The robot controller 200 thereby acquires the detection signal output from the detection unit 340 in step S80 and operates the robot 100 so as to hold and transfer one workpiece W based on the three-dimensional shape information represented by the acquired detection signal. Thereafter, the process proceeds to step S170, described later.
On the other hand, when the value of the variable C at this point in step S40 is the above-described N, that is, when the current scan is the Nth scan, the process proceeds to step S90.
In step S90, the sensor controller 330 controls the camera 310 through the cooperation control unit 335 so as to continuously shoot, 8 times at the desired frame rate (shooting timing), the appearance of the workpieces W and the like in the full view field TT including the projection position moved by the control in step S20, and to output the 8 corresponding shooting frames F1 to F8. That is, under the control of the cooperation control unit 335 in step S90, the camera 310 shoots the appearance of the workpieces W and the like in the full view field TT including the projection position at the timings at which the projection position of the moving laser slit light L reaches the view fields T1, T3, T5, T7, T9, T11, T13, and T15, and outputs the 8 shooting frames F1 to F8 corresponding to these 8 continuous shots.
Next, in step S100, the sensor controller 330 acquires, through the image acquiring unit 331, the 8 shooting frames F1 to F8 output from the camera 310 in step S90. The acquired 8 shooting frames F1 to F8 are stored in the image storage unit 332.
Then, in step S110, the sensor controller 330 instructs the range image generating unit 333 through the cooperation control unit 335 to generate, using the 8 shooting frames F1 to F8 of this Nth scan stored in the image storage unit 332, one range image DPN corresponding to the 8 view fields T1, T3, T5, T7, T9, T11, T13, and T15 among the 16 view fields T1 to T16 included in the full view field TT. The method of generating the range image DPN in the Nth scan is as described above. The generated range image DPN is stored in the range image storage unit 334. Thereafter, the process proceeds to step S150, described later.
On the other hand, when the value of the variable C at this point in step S40 is the above-described N+1, that is, when the current scan is the (N+1)th scan, the process proceeds to step S120.
In step S120, the sensor controller 330 controls the camera 310 through the cooperation control unit 335 so as to continuously shoot, 8 times at the desired frame rate (shooting timing), the appearance of the workpieces W and the like in the full view field TT including the projection position moved by the control in step S20, and to output the 8 corresponding shooting frames F1 to F8. That is, under the control of the cooperation control unit 335 in step S120, the camera 310 shoots the appearance of the workpieces W and the like in the full view field TT including the projection position at the timings at which the projection position of the moving laser slit light L reaches the view fields T2, T4, T6, T8, T10, T12, T14, and T16, and outputs the 8 shooting frames F1 to F8 corresponding to these 8 continuous shots.
Next, in step S130, the sensor controller 330 acquires, through the image acquiring unit 331, the 8 shooting frames F1 to F8 output from the camera 310 in step S120. The acquired 8 shooting frames F1 to F8 are stored in the image storage unit 332.
Then, in step S140, the sensor controller 330 instructs the range image generating unit 333 through the cooperation control unit 335 to generate, using the 8 shooting frames F1 to F8 of this (N+1)th scan stored in the image storage unit 332, one range image DPN1 corresponding to the 8 view fields T2, T4, T6, T8, T10, T12, T14, and T16 among the 16 view fields T1 to T16 included in the full view field TT. The method of generating the range image DPN1 in the (N+1)th scan is as described above. The generated range image DPN1 is stored in the range image storage unit 334.
Next, in step S150, the sensor controller 330, through the composite image generating unit 336, synthesizes the range image DP based on the scan corresponding to the current value of the variable C (the latest scan) stored in the range image storage unit 334 with the range image DP based on the scan immediately before it, thereby generating the composite image CP. The method of generating the composite image CP is as described above. The generated composite image CP is stored in the composite image storage unit 337 and output to the detection unit 340.
Then, in step S160, the sensor controller 330 acquires, through the detection unit 340, the composite image CP output from the composite image generating unit 336 in step S150. The sensor controller 330 then uses the acquired composite image CP, through the detection unit 340, to detect the three-dimensional shape of each of the plurality of workpieces W in the container 20. A detection signal representing the detection result of the detection unit 340 is output to the robot controller 200. The robot controller 200 thereby acquires the detection signal output from the detection unit 340 in step S160 and operates the robot 100 so as to hold and transfer one workpiece W based on the three-dimensional shape information represented by the acquired detection signal.
Thereafter, in step S170, the sensor controller 330 adds 1 to the value of the variable C, returns to step S20, and repeats the same procedure.
The processing shown in this flowchart ends, for example, when the power supply of the sensor unit 300 is turned off.
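As a reading aid for the flowchart of Figure 20, the following is a minimal sketch of the control loop. The camera, range image generation, synthesis, and detection calls are placeholders standing in for the units described above; the structure follows steps S10 to S170 (the first scan shoots all 16 view fields, later scans alternate between the odd and even fields and synthesize the two most recent range images), but every function name is an assumption for illustration only.

```python
ALL_FIELDS = list(range(1, 17))          # view fields T1..T16
ODD_FIELDS = ALL_FIELDS[0::2]            # T1, T3, ..., T15 (Nth scans)
EVEN_FIELDS = ALL_FIELDS[1::2]           # T2, T4, ..., T16 ((N+1)th scans)

def control_loop(camera, make_range_image, synthesize, detect, power_on):
    c = 1                                             # step S10
    previous_dp = None
    while power_on():                                 # loop ends at power-off
        # steps S20/S30: laser on, mirror rotating (handled by the hardware)
        if c == 1:                                    # step S40 -> S50..S80
            frames = camera.shoot(ALL_FIELDS)         # 16 continuous shots
            dp = make_range_image(frames, ALL_FIELDS)
            detect(dp)                                # full range image suffices
        else:                                         # steps S90..S160
            fields = ODD_FIELDS if c % 2 == 0 else EVEN_FIELDS
            frames = camera.shoot(fields)             # 8 continuous shots
            dp = make_range_image(frames, fields)
            cp = synthesize(previous_dp, dp)          # step S150
            detect(cp)                                # step S160
        previous_dp = dp
        c += 1                                        # step S170
```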
As described above, in the sensor unit 300 of the present embodiment, the workpieces W are detected by the light-section method. The laser slit light L emitted from the laser light source 321 is projected onto the workpieces W and the like, and the appearance of the workpieces W and the like, including the projection position of the laser slit light L, is shot by the camera 310. At this time, by rotating the rotating mirror 322, the projection position of the laser slit light L on the workpieces W and the like is moved in the scanning direction A. During this movement, the camera 310 shoots continuously at the desired frame rate, so that a plurality of shooting frames F, each capturing the projection position as it moves from moment to moment, are output from the camera 310. The range image generating unit 333 uses these shooting frames F to generate one range image DP of the workpieces W and the like. The three-dimensional shape of the workpieces W can thus be detected.
In an ordinary method based on such a light-section technique, when the projection position of the laser slit light L is moved in the scanning direction A over the workpieces W and the like and shot as described above, shooting must be performed exhaustively from the time the projection position appears at the end of the workpieces W on the side opposite the scanning direction until it moves in the scanning direction A and reaches the end on the scanning direction A side (in other words, while the projection position crosses substantially the entire region of the workpieces W along the scanning direction A). Consequently, this continuous shooting requires a long time, and it is difficult to make the detection processing efficient.
Therefore, in the present embodiment, the three-dimensional shape of the workpieces W is detected using an image obtained by mutually synthesizing the plurality of shooting frames F from each of a plurality of scans, rather than using the plurality of shooting frames F of a single scan in which the projection position moves successively in the scanning direction A over the workpieces W as described above. That is, the composite image generating unit 336 synthesizes the plurality of range images DP that the range image generating unit 333 generates based respectively on the (temporally mutually different) plurality of scans, thereby generating the composite image CP. The three-dimensional shape of the workpieces W is then detected based on this composite image CP.
By thus adopting the method of synthesizing images across a plurality of scans, the older shooting frames F of already completed scans are reused in addition to the shooting frames F of the latest scan, so that the shooting time required when detecting the shape of the workpieces W is only the time of one scan, the scan that acquires the latest shooting frames F. Consequently, compared with the conventional method, the detection processing can be significantly shortened and the processing efficiency can be improved. Alternatively, if the same time as the conventional method is spent, the time allowed for one scan can be lengthened, so that a camera 310 with a slower shooting speed can be used; this makes a cost reduction possible compared with the conventional method. Alternatively, if the same time as the conventional method is spent without lowering the shooting speed, the resolution can be increased by a factor equal to the number of scans.
Furthermore, in the present embodiment, in particular, the projection position of the laser slit light L on the workpieces W and the like is moved in the scanning direction A by rotating the rotating mirror 322. By using the rotating mirror 322 in this way, the laser slit light L reflected toward the workpieces W and the like can be moved easily in the scanning direction A.
Furthermore, in the present embodiment, in particular, under the control of the cooperation control unit 335, the composite image CP is generated using not only the 8 shooting frames F1 to F8 produced by the 8 continuous shots of the latest scan (the second or a later scan), but also the reused 8 shooting frames F1 to F8 produced by the 8 continuous shots of the one scan completed immediately before the latest scan. The shooting time required when detecting the shape of the workpieces W is thereby 1/2 that of the conventional method, so that the detection processing is significantly shortened and the processing efficiency can be doubled. Alternatively, if the same time as the conventional method is spent, the time allowed for one scan can be doubled, so that a camera 310 whose shooting speed is 1/2 that of the conventional one can be used, making a cost reduction possible. Alternatively, if the same time as the conventional method is spent without lowering the shooting speed, the resolution can be doubled.
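The synthesis of the two scans can be pictured as interleaving two half-density range images whose covered view fields complement each other. The following is a minimal sketch under the assumption that a range image is stored as a dictionary from view field index to the strip of range data measured there; this data layout is an assumption for illustration, not the embodiment's storage format.

```python
def synthesize(previous_dp, latest_dp):
    """Merge two sparse range images into one composite image.

    previous_dp, latest_dp: dicts mapping view field index (1..16) to the
    strip of range data measured for that field, or None where data is
    missing. The latest scan wins when both scans cover the same field.
    """
    composite = dict(previous_dp)      # start from the older scan
    for field, strip in latest_dp.items():
        if strip is not None:          # overwrite with the newest data
            composite[field] = strip
    return composite

# Usage: the Nth scan covers the odd fields, the (N+1)th the even fields.
dp_n  = {f: f"strip{f}" for f in range(1, 17, 2)}   # T1, T3, ..., T15
dp_n1 = {f: f"strip{f}" for f in range(2, 17, 2)}   # T2, T4, ..., T16
cp = synthesize(dp_n, dp_n1)                        # covers T1..T16
```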
However, in the scan performed first (the first scan), there is no preceding scan, so this method cannot be applied. In the present embodiment, therefore, under the control of the cooperation control unit 335, the first scan differs from the second and later scans: the camera 310 shoots the full view field TT of the workpieces W and the like 16 times continuously in sequence, and these 16 shooting frames F1 to F16 are used. Thus, even in the first scan, for which the shortening based on thinned-out shooting cannot be realized, one range image DP can be generated reliably.
Furthermore, in the present embodiment, in particular, the shooting timings of the Nth scan and the (N+1)th scan are staggered so as to interleave with each other when the start of each scan is taken as the reference. Thus, the 8 shooting frames F1 to F8 of the Nth scan and the 8 shooting frames F1 to F8 of the (N+1)th scan can be positioned so as to complement each other. Consequently, by synthesizing them, an image for detecting the three-dimensional shape of the workpieces W equivalent to that of the conventional method can be obtained easily and reliably.
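Concretely, if the projection position takes one field period to cross a single view field, the stagger amounts to offsetting the 8 shots of the even-field scan by one field period relative to those of the odd-field scan, measured from each scan's start. A minimal sketch of this timing computation follows; the period value and function name are assumptions for illustration.

```python
def shot_times(scan_start, n_shots=8, field_period=1.0, odd_fields=True):
    """Shooting times, measured from each scan's start, for staggered scans.

    field_period: assumed time for the projection position to cross one
    view field. Odd-field scans shoot at fields T1, T3, ...; even-field
    scans are offset by one field period, at T2, T4, ....
    """
    offset = 0.0 if odd_fields else field_period
    return [scan_start + offset + 2 * field_period * i for i in range(n_shots)]

print(shot_times(0.0, odd_fields=True))   # [0.0, 2.0, ..., 14.0] -> T1, T3, ...
print(shot_times(0.0, odd_fields=False))  # [1.0, 3.0, ..., 15.0] -> T2, T4, ...
```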
Furthermore, in the robot system 1 of the present embodiment, the plurality of workpieces W in the container 20 are held and transferred one after another by the robot 100. In the present embodiment, when this transfer is performed, the robot 100 is operated under the control of the robot controller 200 according to the three-dimensional shape of each workpiece W detected by the sensor unit 300, as described above. The transfer of the plurality of workpieces W can thereby be carried out smoothly and reliably. Moreover, since the three-dimensional shape of each workpiece W can be detected quickly and efficiently as described above, the transfer of the plurality of workpieces W by the robot 100 can be carried out smoothly and reliably.
Furthermore, in the present embodiment, in particular, when the robot 100 transfers one of the workpieces W, the composite image generating unit 336 removes the part corresponding to the transferred workpiece W from the range image DP generated in the scan immediately before the transfer, synthesizes the image after this removal with the range image DP generated in the scan immediately after the transfer, and generates the composite image CP. The content of the low-reliability part can thereby be removed before the robot 100 is controlled. Consequently, erroneous or meaningless operations of the robot 100 can be avoided, and a good workpiece transfer operation can be carried out. In addition, the range image DP that thus contained inaccurate content can regain high reliability, because data reflecting the latest state is rewritten at the same position when scanning is performed again.
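A minimal sketch of this removal step is shown below, reusing the dictionary layout and the synthesize function from the synthesis sketch above; which view fields a transferred workpiece covers, and the function names, are assumptions for illustration.

```python
def remove_transferred(range_image, transferred_fields):
    """Invalidate the view fields covering a workpiece that was just
    transferred, so stale data is not used for the next detection."""
    cleaned = dict(range_image)
    for field in transferred_fields:
        cleaned[field] = None          # missing data, excluded from detection
    return cleaned

# Usage: suppose workpiece W5 covered fields T7 and T9 before the robot
# removed it. Those fields stay None in the composite, since the latest
# scan (even fields) does not cover them; downstream detection skips None.
dp5 = {f: f"strip{f}" for f in range(1, 17, 2)}     # 5th scan: odd fields
dp6 = {f: f"strip{f}" for f in range(2, 17, 2)}     # 6th scan: even fields
dp5_cleaned = remove_transferred(dp5, transferred_fields=[7, 9])
cp6 = synthesize(dp5_cleaned, dp6)                  # T7, T9 remain missing
```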
Furthermore, in the present embodiment, in particular, when it can be inferred that the arrangement of the workpieces W has changed, specifically, when, at the time the composite image CP is generated, the deviation of the data content at corresponding positions between the range image DP generated in the latest scan and the range image DP generated in the scan immediately before it is greater than a prescribed threshold value, the composite image generating unit 336 removes the positions at which the deviation of the data content exceeds the threshold value from the range image DP generated in the latest scan. The same positions are also removed from the range image DP generated in the scan immediately before the latest scan. Then, by synthesizing these two range images DP that have undergone the removal processing, the composite image CP is generated. Thus, as above, the content of the low-reliability parts can be removed before the robot 100 is controlled. Consequently, erroneous or meaningless operations of the robot 100 can be avoided, and a good workpiece transfer operation can be carried out.
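A minimal sketch of this consistency check follows, again using the dictionary layout assumed above. Since the two scans cover complementary view fields, the sketch compares each field of the latest scan with the adjacent field of the preceding scan; this neighborhood rule, the numeric strips, and the threshold value are assumptions for illustration.

```python
def remove_inconsistent(dp_prev, dp_latest, threshold):
    """Invalidate positions where the latest scan disagrees with the
    neighboring data of the immediately preceding scan by more than
    the threshold (the workpiece arrangement has probably changed)."""
    prev, latest = dict(dp_prev), dict(dp_latest)
    for field, value in list(latest.items()):
        neighbor = prev.get(field - 1)           # adjacent field of prior scan
        if value is None or neighbor is None:
            continue
        if abs(value - neighbor) > threshold:
            latest[field] = None                 # drop the deviating position
            prev[field - 1] = None               # and its counterpart
    return prev, latest

# Usage: strips reduced to mean depths; field T8 changed between the scans.
dp_n  = {f: 100.0 for f in range(1, 17, 2)}      # Nth scan, odd fields
dp_n1 = {f: 100.0 for f in range(2, 17, 2)}      # (N+1)th scan, even fields
dp_n1[8] = 160.0                                 # a workpiece was moved here
prev, latest = remove_inconsistent(dp_n, dp_n1, threshold=30.0)
```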
The embodiments of the present invention are not limited to the foregoing; various modifications can be made within a scope that does not depart from their gist and technical idea.
For example, in the above embodiment, in the Nth scan the camera 310 shoots the appearance of the workpieces W and the like in the full view field TT at the timings at which the projection position of the moving laser slit light L reaches the view fields T1, T3, T5, T7, T9, T11, T13, and T15, and in the (N+1)th scan at the timings at which it reaches the view fields T2, T4, T6, T8, T10, T12, T14, and T16; however, the embodiments of the present invention are not limited to this. That is, the assignment may be reversed: in the Nth scan the camera 310 may shoot the appearance of the workpieces W and the like in the full view field TT at the timings at which the projection position reaches the view fields T2, T4, T6, T8, T10, T12, T14, and T16, and in the (N+1)th scan at the timings at which it reaches the view fields T1, T3, T5, T7, T9, T11, T13, and T15.
In addition, in the above embodiment, in the second and later scans the camera 310 performs K/2 continuous shots in sequence over a plurality of view fields thinned out so that their total area is 1/2 of the full view field TT, the range image generating unit 333 generates one range image DP from the K/2 shooting frames F, and the composite image generating unit 336 synthesizes 2 range images DP to generate the composite image CP; that is, n = 2. However, n is not particularly limited as long as it is an integer of 2 or more; n may be 3, n may be 4, or n may be set to an even larger integer. For example, when n = 3, in the second and later scans the camera 310 performs K/3 continuous shots in sequence over a plurality of view fields thinned out so that their total area is 1/3 of the full view field TT, the range image generating unit 333 generates one range image DP from the K/3 shooting frames F, and the composite image generating unit 336 synthesizes 3 range images DP to generate the composite image CP.
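The generalization to an arbitrary n can be pictured as a round-robin over the view fields: scan c covers every nth field, and the composite is the union of the n most recent range images. The following is a minimal sketch using the dictionary layout assumed earlier; the round-robin phase rule is an assumption for illustration.

```python
def fields_for_scan(c, n, total_fields=16):
    """View fields covered by scan c (c >= 2) when thinning by 1/n:
    a round-robin so that any n consecutive scans cover all fields."""
    phase = (c - 2) % n                           # scan 2 -> phase 0, etc.
    return list(range(1 + phase, total_fields + 1, n))

def composite_of_recent(range_images):
    """Union of the n most recent range images, newest data winning."""
    composite = {}
    for dp in range_images:                       # oldest first
        composite.update({f: s for f, s in dp.items() if s is not None})
    return composite

# Usage with n = 3: scans 2, 3, 4 cover T1,T4,...; T2,T5,...; T3,T6,...
n = 3
dps = [{f: f"scan{c}" for f in fields_for_scan(c, n)} for c in (2, 3, 4)]
cp = composite_of_recent(dps)                     # covers all 16 fields
```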
In addition, in the above embodiment, the thinned-out scanning performed in the second and later scans is not performed in the first scan; however, the embodiments of the present invention are not limited to this, and thinned-out scanning may also be performed in the first scan.
In addition, in the above embodiment, the sensor unit 300 is applied to the robot system 1; however, the sensor unit can also be applied to systems other than robot systems.
In addition, the arrows shown in Fig. 7 represent an example of the flow of signals and are not intended to limit the direction in which the signals flow.
In addition, the flowchart shown in Figure 20 is not limited to the steps illustrated for the embodiment; steps may be added or deleted, and their order may be changed, within a scope that does not depart from the gist and technical idea of the invention.
In addition, besides the above, the methods based on the above embodiment and the like may be combined as appropriate for use.
In addition, although not illustrated one by one, the above embodiment and the like can be implemented with various changes applied within a scope that does not depart from its gist.

Claims (9)

1. An object detection device that detects an object to be detected, characterized by comprising:
a laser light source that emits slit-shaped laser light;
a scanning unit that moves, in a prescribed scanning direction, the projection position at which the laser light emitted from the laser light source is projected onto the object;
a camera that, while the scanning unit moves the projection position of the laser light on the object, continuously images, at a desired interval, the appearance of the object including the moving projection position, and outputs a plurality of corresponding image data;
a range image generating unit that, for each of a plurality of imaging operations in which the camera performs continuous imaging, generates one three-dimensional shape image of the object from the plurality of image data; and
a composite image generating unit that synthesizes the plurality of three-dimensional shape images generated by the range image generating unit based respectively on the plurality of mutually temporally different imaging operations, and generates a composite image for detecting the three-dimensional shape of the object.
2. The object detection device according to claim 1, characterized in that
the scanning unit is a rotating mirror that receives the laser light emitted from the laser light source and reflects it toward the object.
3. The object detection device according to claim 2, characterized in that
the object detection device further has a cooperation control unit that controls the camera and the range image generating unit in cooperation such that:
in an initial imaging operation among the plurality of imaging operations, the camera performs K continuous shots in sequence over the full view field of the object that is the projection target, and the range image generating unit generates one three-dimensional shape image corresponding to the full view field from the K image data, where K is a positive integer; and
in one or more thinned-out imaging operations performed after the initial imaging operation among the plurality of imaging operations, the camera performs K/n continuous shots in sequence over a plurality of divided view fields partitioned by thinning out such that their total area is 1/n of the full view field, and the range image generating unit generates one three-dimensional shape image corresponding to each thinned-out imaging operation from the K/n image data, where n is an integer of 2 or more.
4. The object detection device according to claim 3, characterized in that
the plurality of thinned-out imaging operations comprise:
a first thinned-out imaging operation in which, under the control of the cooperation control unit, the camera performs K/2 continuous shots in sequence over a plurality of first divided view fields partitioned by thinning out such that their total area is 1/2 of the full view field, and the range image generating unit generates one first three-dimensional shape image corresponding to the plurality of first divided view fields from the K/2 image data; and
a second thinned-out imaging operation in which, under the control of the cooperation control unit, the camera performs K/2 continuous shots in sequence over a plurality of second divided view fields, the second divided view fields being partitioned by thinning out such that their total area is 1/2 of the full view field and being arranged so as to fill in between the plurality of first divided view fields, and the range image generating unit generates one second three-dimensional shape image corresponding to the plurality of second divided view fields from the K/2 image data; and
the composite image generating unit synthesizes the first three-dimensional shape image generated by the range image generating unit in the first thinned-out imaging operation with the second three-dimensional shape image generated by the range image generating unit in the second thinned-out imaging operation, thereby generating the composite image.
5. The object detection device according to claim 4, characterized in that
the cooperation control unit controls the camera such that the imaging timings of the camera in one of the first thinned-out imaging operations and the imaging timings of the camera in one of the second thinned-out imaging operations are interleaved with each other when the start of each operation is taken as the reference.
6. A robot system, characterized by having:
a container into which a plurality of objects are put;
a robot that holds and transfers, one after another, the plurality of objects in the container;
the object detection device according to claim 4 or 5; and
a controller that operates the robot based on the three-dimensional shape of each of the plurality of objects in the container detected by the object detection device.
7. The robot system according to claim 6, characterized in that,
when the robot transfers the corresponding object based on the detection of the three-dimensional shape performed by the object detection device using the synthesis of the second three-dimensional shape image generated in the second thinned-out imaging operation with the first three-dimensional shape image generated in the first thinned-out imaging operation immediately before it,
the composite image generating unit synthesizes the first three-dimensional shape image generated in the first thinned-out imaging operation immediately after that second thinned-out imaging operation with an image obtained by removing, from the second three-dimensional shape image generated in that second thinned-out imaging operation, the part corresponding to the transferred object, thereby generating the composite image.
8. The robot system according to claim 7, characterized in that,
when the deviation of the data content between the first three-dimensional shape image generated in the first thinned-out imaging operation and the second three-dimensional shape image generated in the second thinned-out imaging operation immediately before it, at a corresponding position or near that position on the composite image, is greater than a prescribed threshold value,
the composite image generating unit synthesizes an image obtained by removing, from the first three-dimensional shape image generated in the first thinned-out imaging operation, the positions at which the deviation of the data content is greater than the threshold value, with an image obtained by removing, from the second three-dimensional shape image generated in the second thinned-out imaging operation immediately before it, the positions at which the deviation of the data content is greater than the threshold value, thereby generating the composite image.
9. An object detection method, characterized by comprising the following steps:
emitting slit-shaped laser light from a light source;
moving, in a prescribed scanning direction, the projection position at which the emitted laser light is projected onto an object;
while the projection position of the laser light is being moved, continuously imaging, at a desired interval, the appearance of the object including the moving projection position, and outputting a plurality of corresponding image data;
for each of a plurality of imaging operations in which continuous imaging is performed, generating one three-dimensional shape image of the object from the plurality of image data; and
synthesizing the plurality of three-dimensional shape images generated respectively based on the plurality of mutually temporally different imaging operations, and detecting the three-dimensional shape of the object.
CN201310740930.6A 2013-02-19 2013-12-26 Object detector, robot system and object detection method Pending CN103994729A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-030143 2013-02-19
JP2013030143A JP2014159988A (en) 2013-02-19 2013-02-19 Object detector, robot system, and object detection method

Publications (1)

Publication Number Publication Date
CN103994729A true CN103994729A (en) 2014-08-20

Family

ID=51308963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310740930.6A Pending CN103994729A (en) 2013-02-19 2013-12-26 Object detector, robot system and object detection method

Country Status (2)

Country Link
JP (1) JP2014159988A (en)
CN (1) CN103994729A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111417976A (en) * 2017-11-29 2020-07-14 松下知识产权经营株式会社 Work support system, work support method, and program
CN114509019A (en) * 2020-11-17 2022-05-17 莱卡地球系统公开股份有限公司 Scanning device for scanning environment and capable of recognizing scanning moving object

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6075605A (en) * 1997-09-09 2000-06-13 Ckd Corporation Shape measuring device
CN1218159C (en) * 2001-03-25 2005-09-07 欧姆龙株式会社 Optical metering installation
JP2007025830A (en) * 2005-07-13 2007-02-01 Hitachi Plant Technologies Ltd Method and device for recognizing three-dimensional object
JP2007040841A (en) * 2005-08-03 2007-02-15 Ricoh Co Ltd Three-dimensional shape measuring apparatus, and three-dimensional shape measuring method
CN102331240A (en) * 2010-06-03 2012-01-25 索尼公司 Testing fixture and inspection method
CN102528810A (en) * 2010-10-25 2012-07-04 株式会社安川电机 Shape measuring apparatus, robot system, and shape measuring method
CN102735166A (en) * 2011-04-14 2012-10-17 株式会社安川电机 Three-dimensional scanner and robot system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3575693B2 (en) * 2001-03-25 2004-10-13 オムロン株式会社 Optical measuring device


Also Published As

Publication number Publication date
JP2014159988A (en) 2014-09-04

Similar Documents

Publication Publication Date Title
JP5508895B2 (en) Processing system and processing method
US20190322053A1 (en) Systems and methods for an improved peel operation during additive fabrication
EP3863791B1 (en) System and method for weld path generation
JP5729622B2 (en) Blurless imaging system
JP6457653B2 (en) Laser scanning apparatus and laser scanning system
JP2005521123A5 (en)
CN102342100A (en) System and method for providing three dimensional imaging in network environment
US20180339364A1 (en) System and method for machining of relatively large work pieces
JP2013086184A (en) Workpiece takeout system, robot apparatus and method for manufacturing material to be processed
CN103994729A (en) Object detector, robot system and object detection method
US10810783B2 (en) Dynamic real-time texture alignment for 3D models
CN103560093A (en) Flip chip bonding apparatus
EP2946274B1 (en) Methods and systems for creating swivel views from a handheld device
JP2001239484A (en) Article processing system
US20210209849A1 (en) Multiple maps for 3d object scanning and reconstruction
KR102517023B1 (en) Machine control system, machine and communication method
JP2022519185A (en) Industrial robotic equipment with improved touring path generation, and how to operate industrial robotic equipment according to improved touring path
KR20200084265A (en) Device, method and computer program for editing time slice images
KR20150008457A (en) Method and system for achieving moving synchronization in remote control and computer storage medium
CN109194916B (en) Movable shooting system with image processing module
JP7065802B2 (en) Orbit generator, orbit generation method, program, and robot system
JP2013001548A (en) Picking system
CN104883549B (en) Scanning projector and its synchronous method of scan-image
CN109218612B (en) Tracking shooting system and shooting method
JP6597320B2 (en) Manufacturing abnormality determination method, manufacturing abnormality determination program, and manufacturing apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140820