CN1136685A - Method for identifying object in three dimensional space - Google Patents


Info

Publication number: CN1136685A (application CN 95100535A; granted publication CN1045831C)
Authority: CN (China)
Prior art keywords: edge, image, circle, point, recognition method
Legal status: Granted; Expired - Fee Related
Other languages: Chinese (zh)
Inventors: 笹田胜弘, 中原智治, 荒木秀和
Current assignee: Panasonic Electric Works Co Ltd
Original assignee: Matsushita Electric Works Ltd
Application filed by Matsushita Electric Works Ltd


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for recognizing a three-dimensional object in which a two-dimensional image is obtained from the same viewpoint as a range (distance) image obtained by photographing the object in three-dimensional space. The two-dimensional image is used to limit in advance the existence region of the object under detection in the range image, and the object detecting operation is performed on the partial range image within the limited region, thereby achieving sufficiently high-speed detection of the object, in particular a three-dimensional object.

Description

Method for identifying an object in three-dimensional space
The present invention relates to a method for recognizing an object in three-dimensional space, and in particular to an object recognition method that detects the position, attitude (pose), and shape of an object.
Several methods of recognizing a target object are known. One, disclosed in Japanese Laid-Open Patent Publication No. 2-230478, obtains parameters of the position and attitude of an object by matching all edge line segments in an image against all contour lines of an object model, and then extracts families of parameters from the parameter space to detect the position of the object. Another, disclosed in Japanese Laid-Open Patent Publication No. 4-54409, treats the object under test as an object model consisting of many rigid bodies connected by joints having rotational degrees of freedom; candidate attitudes of each rigid body are found by matching features in the image (such as designated line segments or designated points) with the object model, an image of the object is estimated from each candidate attitude, and the estimates are compared with the actual image to determine a suitable attitude. There are several other methods of this kind.
In the former method, all edge line segments in the image must be matched against all contour lines of the object model. When the object has a complicated shape, therefore, the number of edge lines in the image and of contour lines in the model becomes very large, which disadvantageously increases the time required for matching and reduces the detection accuracy. Even with the latter method, if the object has an articulated structure, the number of rigid bodies and of their combinations becomes very large, so the final detection of the three-dimensional attitude of an object also takes a long time.
Meanwhile, Japanese Laid-Open Patent Publication No. 6-29695 proposes a method of detecting at high speed an object of a specific shape, such as a cylindrical object. In this method, the object under measurement is first photographed obliquely from two different angles with two imaging devices, and curved edges are extracted from the two images; curved edges lying within a predetermined distance range are regarded as edges of the same ellipse, and the centers of the ellipses in the images are regarded as the centers of right circles. The centers of the upper and lower ellipses are computed, the two images are combined, the centers of the right circles corresponding to the upper and lower ellipses are computed by triangulation, and the orientation of the line segment connecting the two obtained centers, i.e. the direction of the cylinder center line, is computed, whereby the position and attitude of the cylinder can be determined.
However, this method also has problems. When a ring-shaped object or a plurality of cylindrical objects are to be recognized, curved edges that do not actually belong to the same circle may mistakenly be combined as if they did. Moreover, some edges of the bottom surface of a cylinder cannot be seen on the screen, so the direction of the center line of the cylindrical object cannot be measured and its attitude cannot be determined. In addition, when the center of a circle is projected onto an image, if the plane containing the circle is not perpendicular to the optical axis of the imaging device, the center is projected to a point in the image that deviates from the true center. Furthermore, since almost half of the edge of the lower circle is occluded and invisible, the accuracy of the circle center computed from these edges is necessarily low, and hence the accuracy and reliability of the position and attitude values obtained in this way must also be low.
It is therefore an object of the present invention to provide an object recognition method that achieves high-speed detection with high accuracy.
According to one aspect of the present invention, the above object is achieved by an object recognition method comprising the steps of: photographing an object in three-dimensional space to obtain a range image; obtaining a two-dimensional image from the same viewpoint as that of the range image; specifying a partial image in the two-dimensional image; applying the partial image specified in the two-dimensional image to the range image to extract a partial range image; and performing an object detecting operation on the partial range image.
According to another aspect of the present invention, there is provided an object recognition method in which a two-dimensional image is obtained from the same viewpoint as a range image obtained by photographing an object in three-dimensional space; the principal axis of inertia and the center of gravity of the object image are found in the two-dimensional image; the range image is projected onto a plane that passes through the principal axis of inertia and is perpendicular to the two-dimensional image; the principal axis of inertia and the center of gravity of the object image are found in the projected image; and the position and attitude of a principal axis of inertia of the object in three-dimensional space are detected from the two principal axes of inertia and the two centers of gravity thus obtained.
According to a further aspect of the present invention, there is provided an object recognition method in which an object having a circular part in three-dimensional space is photographed from a plurality of directions to obtain gray-scale images; an edge image and an intensity-gradient-direction image are obtained from the gray-scale images; one of the edge images is taken as a stereo reference image, and a disparity image is formed from the reference image and the other edge image by stereo image processing; only curved edges are extracted from the reference image and a circle is extracted from the curved edges; and the position and attitude of the extracted circle are detected using three-dimensional information found from the disparities of the edge points on the extracted circle.
In the present invention, the two-dimensional image is used to limit in advance the region of the range image in which the object under measurement exists, and the object detecting operation is performed on the partial range image within the limited region. The present invention can therefore achieve far faster detection than the prior art, in which the detecting operation is performed over the entire range image. The partial image in the two-dimensional image is specified according to a surface shape pattern of the object, a statistical image feature of its texture, a classification by color or brightness, or a two-dimensional geometric feature derived from the three-dimensional geometric features of the object.
While the present invention is susceptible to various modifications and alternative forms, specific embodiments thereof are described in detail below with reference to the drawings. It should be understood, however, that the invention is by no means limited to the examples described here; on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.
These and other features of the present invention will become apparent from the following description with reference to the accompanying drawings:
Fig. 1 is a flowchart explaining the operation of an embodiment of a system according to the present invention.
Fig. 2 is a block diagram of the system of the embodiment of Fig. 1.
Fig. 3 is a perspective view of an example of the object under measurement in the embodiment of Fig. 1.
Fig. 4a is a schematic diagram of the two-dimensional image of a model object in the embodiment of Fig. 1.
Fig. 4b is a schematic diagram of the range image of the model object in the embodiment of Fig. 1.
Fig. 5 is a flowchart of the operation of a more specific variation of the embodiment of Fig. 1.
Fig. 6 is a schematic diagram of the operation of the variation of Fig. 5.
Fig. 7 is a flowchart of the operation of another more specific variation of the embodiment of Fig. 1.
Fig. 8 is a schematic diagram of the operation of the variation of Fig. 7.
Fig. 9 is a flowchart of the operation of a further variation of the embodiment of Fig. 1.
Fig. 10 is a schematic diagram of the operation of the variation of Fig. 9.
Fig. 11 is a flowchart of the operation of yet another variation of the embodiment of Fig. 1.
Fig. 12 is a front view of an example of the detected object in the variation of Fig. 11.
Fig. 13a is a schematic diagram of the two-dimensional image in the variation of Fig. 11.
Fig. 13b is a schematic diagram of the range image in the variation of Fig. 11.
Fig. 14 is a schematic diagram of the operation in the variation of Fig. 11.
Fig. 15 is a flowchart of the operation of another embodiment of the present invention.
Fig. 16 is a schematic diagram of one operation of the embodiment of Fig. 15.
Fig. 17 is a schematic diagram of another operation in the embodiment of Fig. 15.
Fig. 18 is a flowchart of the operation of another embodiment of the present invention.
Fig. 19 is a flowchart of the operation of a further embodiment of the present invention.
Fig. 20a is a schematic diagram of the two-dimensional image in the embodiment of Fig. 19.
Fig. 20b is a schematic diagram of the range image in the embodiment of Fig. 19.
Fig. 21a is a schematic diagram of the range image after the preprocessing operation in the embodiment of Fig. 19.
Fig. 21b is a schematic diagram of the two-dimensional image after the preprocessing operation in the embodiment of Fig. 19.
Fig. 22 is a flowchart of the operation of an embodiment using the inspection method of the present invention.
Fig. 23 is a perspective view of the inspected object in the embodiment of Fig. 22.
Fig. 24a is a schematic diagram of the two-dimensional image in the embodiment of Fig. 22.
Fig. 24b is a schematic diagram of the range image in the embodiment of Fig. 22.
Fig. 25 is a flowchart of the operation of another embodiment of the present invention for finding the position and attitude of an object.
Fig. 26 is a perspective view of an example of the object in the embodiment of Fig. 25.
Fig. 27 is a schematic diagram of the plane containing the upper surface of the object in the embodiment of Fig. 25.
Fig. 28a is a schematic diagram of the rotated state of the plane in the embodiment of Fig. 25.
Fig. 28b is a schematic diagram of a model image.
Fig. 29 is a block diagram of a system in which the present invention is applied and the detected objects are cylindrical.
Fig. 30 is a perspective view of an example of the detected objects in the system of Fig. 29.
Fig. 31a is a schematic diagram of an edge image of the stereo reference image in the system of Fig. 29.
Fig. 31b is a schematic diagram of the disparity image in the system of Fig. 29.
Fig. 32a is a schematic diagram of an example edge image in the system of Fig. 29.
Fig. 32b is a schematic diagram of curved edges in the system of Fig. 29.
Fig. 32c is a schematic diagram of the circular edge extracted in the system of Fig. 29.
Fig. 33 is a flowchart of a variation of the operation of the present invention.
Fig. 34 is a perspective view of another embodiment of the present invention.
Fig. 35 is a perspective view of the objects in the embodiment of Fig. 34.
Fig. 36a is a schematic diagram of the edge image in the embodiment of Fig. 34.
Fig. 37 is a flowchart of another variation of the operation of the present invention.
Fig. 38 is a perspective view of the image acquisition state of another embodiment of the present invention.
Fig. 39 is a perspective view of the objects in the embodiment of Fig. 38.
Fig. 40a is a schematic diagram of an edge image in the embodiment of Fig. 38.
Fig. 40b is a schematic diagram of the circular edge extracted in the embodiment of Fig. 38.
Fig. 41 is a flowchart of a further variation of the operation of the present invention.
Fig. 42 is a flowchart of the operation of another embodiment of the present invention.
Fig. 43a is a schematic diagram of how edge points are searched in the embodiment of Fig. 42.
Fig. 43b is a schematic diagram of the edges extracted in the embodiment of Fig. 42.
Fig. 44 is a schematic diagram of how curved edges are extracted in the embodiment of Fig. 42.
Fig. 45a is a schematic diagram of the curved edges extracted in the embodiment of Fig. 42.
Fig. 45b is a schematic diagram of the circular edges extracted in the embodiment of Fig. 42.
Fig. 46 is a flowchart of a variation of the operation of another embodiment of the present invention.
Fig. 47a is a schematic diagram of the curved edges extracted in the embodiment of Fig. 46.
Fig. 47b is a schematic diagram of the edge image with the computed disparities in the embodiment of Fig. 46.
Fig. 47c is a schematic diagram of the circular edges extracted in the embodiment of Fig. 46.
Fig. 48 is a flowchart of a variation of the operation of another embodiment of the present invention.
Fig. 49 is a perspective view of the object in the embodiment of Fig. 48.
Fig. 50a is a schematic diagram of an edge image in the embodiment of Fig. 48.
Fig. 50b is a schematic diagram of the curved edges extracted in the embodiment of Fig. 48.
Fig. 50c is a schematic diagram of the curved edges extracted in the embodiment of Fig. 48.
Fig. 51 is a schematic diagram of how curved edges are selected in the embodiment of Fig. 48.
Fig. 52 is a flowchart of the operation of another embodiment of the present invention.
Fig. 53 is a perspective view of the objects in the embodiment of Fig. 52.
Fig. 54a is a schematic diagram of the edge image in the embodiment of Fig. 52.
Fig. 54b is a schematic diagram of the circular edge extracted in the embodiment of Fig. 52.
Fig. 55 is a flowchart of a variation of the operation of another embodiment of the present invention.
Fig. 56 is a diagram of the extraction operation in the embodiment of Fig. 55.
Fig. 57 is a flowchart of a variation of the operation of another embodiment of the present invention.
Fig. 58 is a perspective view of the objects in the embodiment of Fig. 57.
Fig. 59a is a schematic diagram of the edge image in the embodiment of Fig. 57.
Fig. 59b is a schematic diagram of the circular edge extracted in the embodiment of Fig. 57.
Fig. 60 is a schematic diagram of how a circular edge is extracted in the embodiment of Fig. 57.
Fig. 61 is a flowchart of a variation of the operation of another embodiment of the present invention.
Fig. 62 is a schematic diagram of the plane of the object viewed from above in the embodiment of Fig. 61.
Fig. 63 is a schematic diagram of the rotated state of the plane in the embodiment of Fig. 61.
Fig. 64 is a flowchart of a variation of the operation of another embodiment of the present invention.
Fig. 65 is a flowchart of the operation of a further embodiment.
Fig. 66 is a flowchart of a variation of the operation of a further embodiment.
Fig. 67 is a schematic diagram of one form of the operation of the embodiment of Fig. 66.
Fig. 68 is a flowchart of the operation of an embodiment of the present invention applicable to an assembly-line control system.
Fig. 69 is a perspective view of the parts in the embodiment of Fig. 68.
Fig. 70a is a schematic diagram of the edge image in the embodiment of Fig. 68.
Fig. 70b is a schematic diagram of the circular edges extracted in the embodiment of Fig. 68; and
Fig. 71 is a diagram illustrating the Sobel operator used, for example, in the embodiments of Figs. 29 to 31.
The present invention will now be described in detail with reference to embodiments. The first and second features of the present invention lie in using a range image (distance image) together with a two-dimensional image taken from the same viewpoint as the range image. Although there are many methods of forming a range image, the range image referred to here is obtained by stereo image processing, that is, from two-dimensional images. In Fig. 2, reference marks 10a and 10b denote the TV cameras that take the stereo image pair. The image taken by one of the two TV cameras, 10a, serves as the stereo reference image and is connected to a two-dimensional image memory 4. The images taken by TV cameras 10a and 10b are combined in a stereo image processor 2, and the distances computed from the correlation result are mapped onto the stereo reference image, whereby a range image is formed in a range image generator 3. The two-dimensional image obtained by TV camera 10a is stored in the two-dimensional image memory 4. On the basis of the two-dimensional image and the range image, an object existence region extractor 5 extracts the region in which the object exists, and a three-dimensional position/attitude detector 6 detects the position or attitude of the object from the object existence region and the partial range image in its vicinity.
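As an illustrative aside (not part of the patent text), the conversion from a stereo disparity map to the range image described above can be sketched as follows; a rectified camera pair is assumed (as produced by the parallel-stereo transformation discussed later), and the focal length and baseline values are hypothetical.

```python
import numpy as np

def disparity_to_range(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map into a range (distance) image.

    For a rectified pair, depth is Z = f * B / d for disparity d > 0;
    pixels with no computed disparity are marked invalid (NaN).
    """
    z = np.full(disparity.shape, np.nan, dtype=np.float64)
    valid = disparity > 0
    z[valid] = focal_px * baseline_m / disparity[valid]
    return z

# Toy 2x2 disparity map from cameras 10a/10b (values are made up)
d = np.array([[8.0, 4.0], [0.0, 2.0]])
print(disparity_to_range(d, focal_px=700.0, baseline_m=0.12))
```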
Fig. 3 shows an example of the object to be detected. Fig. 4a shows the two-dimensional image I2 of this model object, and Fig. 4b shows the range image I3. In range image I3, different regions are hatched differently to represent the different heights of the objects; the same convention is used in the description that follows.
First, consider how the object existence region is extracted using a shape pattern that appears on the surface of the object under measurement but cannot be detected in the range image. Extraction of the object existence region by the object existence region extractor 5 is carried out by the pattern matching shown in Fig. 5. For example, if a letter "B" is printed on the surface of the object under measurement, this letter "B" is used as the check pattern, as shown in Fig. 6. Pattern matching is performed between the two-dimensional image I2 and the check pattern, and a part whose matching score is equal to or greater than a threshold is extracted; an object existence region A1 is then set as a circle that covers the extracted part and whose center corresponds to the center of gravity G1 of the extracted part. The object detecting operation is performed on the partial image of range image I3 corresponding to the existence region A1. For this detecting operation, the two prior-art methods already described may be used, as well as the further methods described below.
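A minimal sketch of such pattern matching, assuming normalized cross-correlation as the matching score (the patent does not fix a particular score), is shown below; the 0.8 threshold mentioned afterwards is an assumption.

```python
import numpy as np

def ncc_match(image, template):
    """Normalized cross-correlation of a template over an image.

    Returns a score map; positions scoring at or above a threshold
    would define the object existence region (region A1 in the text).
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    H, W = image.shape
    scores = np.zeros((H - th + 1, W - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            w = image[y:y+th, x:x+tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tn
            scores[y, x] = (wz * t).sum() / denom if denom > 0 else 0.0
    return scores
```

Positions where the score map reaches, say, 0.8 or more would be taken as the matched part, and existence region A1 would be a circle centered on their centroid.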
As shown in Figs. 7 and 8, an object existence region A2 can also be set by extracting edges from the two-dimensional image I2: the parts whose edge intensity (shading value) is equal to or greater than a threshold are extracted, and an object existence region A2 is set with its center at the center of gravity G2 of the extracted part; the object detecting operation is then performed on the partial image of range image I3 corresponding to region A2. This object detecting operation exploits the fact that differences in the statistical image features of the object's texture produce different edge densities in the two-dimensional image I2.
If, among a plurality of rectangular solids, only one is red and the others are blue, as shown in Fig. 9, color extraction can be carried out by extracting from the two-dimensional image I2 the parts whose red component is equal to or greater than a threshold (red being the extraction target in this example). As shown in Fig. 10, an object existence region A3 can then be set with its center at the center of gravity G3 of the extracted part, and the object detecting operation can be performed on the partial image of range image I3 corresponding to region A3. It goes without saying that a specific part can equally be extracted by brightness instead of color. Note that the object existence region is set on the basis of the part of the two-dimensional image in which the target component is equal to or greater than the threshold, and the image corresponding to the set region serves as the partial image of the two-dimensional image.
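As a sketch of the color-based extraction just described (the particular thresholds and dominance test are assumptions, not taken from the patent):

```python
import numpy as np

def red_region_centroid(rgb, thresh=150):
    """Extract pixels whose red component dominates, then return their
    centroid (G3 in the text) as the center of existence region A3.

    `rgb` is an (H, W, 3) uint8 array; threshold values are assumptions.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (r >= thresh) & (r - np.maximum(g, b) >= 40)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None, mask
    return (xs.mean(), ys.mean()), mask
```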
Furthermore, the existence region of the object under measurement can be limited in advance using two-dimensional geometric features derived from the three-dimensional geometric features of the object. Fig. 11 illustrates such a case, which uses the two-dimensional geometric feature of two straight lines meeting in a T shape. More specifically, after the object 120 shown in Fig. 12 is photographed, the two-dimensional image I2 shown in Fig. 13a and the range image I3 shown in Fig. 13b are obtained. In this case, the T-shaped junction of two straight lines denoted by reference numeral 130 appears as the two-dimensional geometric feature. Edges are extracted from the two-dimensional image I2 and the Hough transform is applied to detect straight lines; the intersections of the detected lines are extracted, and it is then judged whether a T-shaped junction exists, using the intersection point G4 of the extracted T-shaped junction shown in Fig. 14. A region of predetermined extent centered on the intersection point G4 of the T-shaped junction is then applied to the range image I3 to detect the three-dimensional position or attitude of the object from the partial range image.
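For reference, a bare-bones Hough transform for line detection, of the kind applied here before extracting line intersections, might look like the following sketch (the accumulator resolution values are arbitrary):

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180, n_rho=200):
    """Minimal Hough transform for straight lines, as used to find the
    T-shaped junction: each edge pixel votes for (rho, theta) pairs.
    """
    H, W = edge_mask.shape
    diag = np.hypot(H, W)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)   # signed distances
        bins = ((rhos + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[bins, np.arange(n_theta)] += 1
    return acc, thetas
```

Peaks in the accumulator correspond to straight lines; the intersections of the detected lines are then tested for the T-shaped junction as described above.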
As shown in Figs. 15 and 16, a principal axis of inertia EA1 and a center of gravity G5 are detected from the two-dimensional image I2, and the range image I3 is projected onto a plane that passes through the principal axis of inertia EA1 and is perpendicular to the two-dimensional image I2. When a principal axis of inertia EA2 and a center of gravity G6 are detected from the projected image, as shown in Fig. 17, the attitude of the principal axis of inertia can be detected from the two axes EA1 and EA2, and its position can be detected from the centers of gravity G5 and G6.
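The principal axis of inertia and the center of gravity can be obtained from image moments; the following sketch (one conventional computation, which the patent leaves unspecified) illustrates this for a binary object mask.

```python
import numpy as np

def inertia_axis(mask):
    """Center of gravity and principal axis of inertia of a binary
    object mask, from first- and second-order image moments.
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                    # center of gravity (G5)
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # axis direction (EA1)
    return (cx, cy), angle
```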
Furthermore, as shown in Fig. 18, after the attitude and position of the principal axis of inertia have been detected as described above, the principal axis of inertia and center of gravity of the object model are found and brought into coincidence with the detected attitude and position; the object model is then rotated about the principal axis of inertia to find the degree of overlap, or matching, between the object model and the object, and when the matching degree is maximal, the three-dimensional position and attitude of the object can be detected from the position and attitude of the model. In this case, since the position and attitude of the principal axis of inertia, which represent the position and attitude of the object in three-dimensional space, fix the reference frame, i.e., five of the six degrees of freedom of the object in three-dimensional space are fixed, the position and attitude of the object in three-dimensional space can be detected accurately and at high speed.
As can be seen in the two-dimensional image I2 of Fig. 20a and the range image I3 of Fig. 20b, when the objects are stacked as shown in Fig. 12, a predetermined height is first set for locating the object under measurement in a preprocessing step. A region whose height is greater than the predetermined height is extracted from the range image I3 (see Fig. 21a), and a region whose height equals the predetermined height is extracted from the two-dimensional image I2 (see Fig. 21b), as shown in Fig. 19. The region limited in the two-dimensional image I2' and the range image I3' by the predetermined-height criterion is then used, after the preprocessing described above (identical in this example to the processing of Fig. 11), to detect the three-dimensional position and attitude of the object.
Such an object recognition method can also be applied to inspection of objects. For example, Fig. 23 shows a case where a rectangular column 8 is to be inserted into a square hole 70 in a base plate 2, and it is desired to check whether the column 8 is perpendicular to the base plate (Q = 0°) when inserted. Fig. 22 is a flowchart of the operation using the pattern matching described earlier. A two-dimensional image I2 as shown in Fig. 24a and a range image I3 as shown in Fig. 24b are obtained from the stereo images of two TV cameras. More specifically, a circle 80 on the top surface of the rectangular column 8 (an opening or a screw hole may be used instead) serves as the check pattern. The object existence region is limited by the same method as in the preceding embodiments, and the position and attitude of the rectangular column 8 in three-dimensional space are detected. From the detection result, the inclination angle Q of the column 8 can be found, and it can be judged whether the insertion state is good.
Figs. 25 to 28 show an example of a detection method in which the position and attitude of an object are detected from the partial range image extracted by the method described above. Specifically, when an object 8 as shown in Fig. 26 is present, the partial image corresponding to the top face of object 8 is extracted, on the basis of its structure, from the range image obtained by photographing object 8, and the three-dimensional information obtained from the partial image is fitted to a plane to find the plane Pa containing the top face of object 8 in three-dimensional space, as shown in Fig. 27. The plane Pa is obtained by computing the coefficients a, b, and c of the following equation by the least-squares method:

ax + by + cz + 1 = 0

Next, as shown in Fig. 27, the three-dimensional positions of the points on the top face of the object are rotated about the origin Po of the camera image coordinate system so that the normal Paz of the plane Pa coincides with the Z axis of the image coordinate system shown in Fig. 28a. The points on the object top face after rotation are then projected, using their x and y coordinate values, onto a plane of the reference coordinate system (in this case the x-y plane Pxy of the image coordinate system). Pattern matching is performed between the resulting projected image and a previously stored model image M (shown in Fig. 28b) to find the position and attitude of the object in the projected image, and the found position is then inverse-transformed to find the position and attitude of the top face of the object before the above rotation. In the illustrated example, pattern matching with a model image is used to detect the position and attitude of the object in the two-dimensional image, but it is quite clear that methods such as the generalized Hough transform can equally be used.
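A least-squares fit of the plane ax + by + cz + 1 = 0 to the three-dimensional points can be sketched as follows (the sample points are made up):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a*x + b*y + c*z + 1 = 0 to 3-D points.

    Solves the overdetermined linear system [x y z] @ [a b c]^T = -1.
    The (unnormalized) plane normal is (a, b, c).
    """
    A = np.asarray(points, dtype=float)        # shape (N, 3)
    rhs = -np.ones(len(A))
    coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coeffs                               # a, b, c

pts = [(0.1, 0.0, 1.0), (0.0, 0.2, 1.01), (0.3, 0.1, 0.99), (0.2, 0.3, 1.0)]
a, b, c = fit_plane(pts)
print(a, b, c)   # normal roughly parallel to the z axis for this data
```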
In the embodiments described above, an object in three-dimensional space is photographed to obtain a range image and, simultaneously, a two-dimensional image from the same viewpoint; the region in which the position and attitude of the object must be detected is extracted and limited, and the principal axis of inertia and center of gravity of the object are extracted from the two-dimensional image. When the object has a shape such as a cylinder, however, its circular portion can be extracted using the edge image, and the position, attitude, and radius of the circle can be measured to realize high-speed recognition of the object.
In this case, in the system shown in Fig. 29, an object 18 is illuminated by lighting equipment 19 and photographed by two cameras 11 and 12. A/D converters 13 and 14 convert the analog image signals of cameras 11 and 12 into digital image signals, frame memories 15 and 16 store the digital image signals, and a processor 17 is connected to the frame memories 15 and 16. With this system, stereo processing of edge images is carried out after a parallel-stereo transformation: the orientations and focal lengths of the two cameras 11 and 12 are matched and the image signals are transformed so that corresponding pixels of the two images share the same Y coordinate, which makes the search for corresponding points efficient. The disparity between the X coordinates of corresponding points in the two images is computed and mapped onto the edge image extracted from the image taken by camera 11 (the stereo reference image) to form a disparity image. When a range image is needed, a disparity-to-range conversion is added. For example, as shown in Fig. 30, cylinders 18a, 18b, and 18c of different heights are placed; when these objects are photographed from above by the two cameras, the stereo reference image shown in Fig. 31a and the disparity image ID shown in Fig. 31b are obtained. In Fig. 31b, a thicker line represents a higher object.
The images acquired and stored in the frame memories 15 and 16 are subjected to the parallel-stereo transformation, and an edge image and a gradient-direction image are obtained from the transformed images. To extract edges from an image, the Sobel operator shown in Fig. 71 can be used. With this operator, the center of the operator is moved to the pixel of interest, the shading values of the neighboring pixels are multiplied by the corresponding operator values, and the gradient components Gx and Gy along the X and Y directions, the gradient magnitude |G|, and the gradient direction θ are computed for every pixel. With I(x, y) denoting the shading value of the pixel at coordinates (x, y), the gradients Gx and Gy along the x and y directions are:

Gx(x, y) = {I(x+1, y-1) + 2I(x+1, y) + I(x+1, y+1)} - {I(x-1, y-1) + 2I(x-1, y) + I(x-1, y+1)}

Gy(x, y) = {I(x-1, y-1) + 2I(x, y-1) + I(x+1, y-1)} - {I(x-1, y+1) + 2I(x, y+1) + I(x+1, y+1)}

Using Gx and Gy, the gradient magnitude |G| and the gradient direction θ at each pixel location are found as follows; the images whose pixels carry these values are called the gradient-magnitude image and the gradient-direction image:

|G| = (Gx² + Gy²)^(1/2)

θ = tan⁻¹(Gy/Gx)

After a threshold is applied to the gradient magnitude |G|, the gradient magnitudes of pixels below the threshold are set to 0, and the points whose gradient magnitude is locally maximal in their neighborhood, i.e., the points forming ridge lines, are extracted as edge points; the edge image IE shown in Fig. 32a is thus obtained. As shown in Fig. 32b, only the curve components 321 and 322 are extracted from the edge image of the stereo reference image, and as shown in Fig. 32c, the circular component 322 is extracted from the extracted curve components.
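The Sobel computation above translates directly into code; the following sketch follows the equations literally (a real implementation would vectorize or use a convolution, and border pixels are simply left at zero here):

```python
import numpy as np

def sobel_gradients(I):
    """Sobel gradient magnitude |G| and direction theta, following the
    equations above; array indexing is [row=y, col=x].
    """
    I = I.astype(float)
    H, W = I.shape
    Gx = np.zeros((H, W))
    Gy = np.zeros((H, W))
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            Gx[y, x] = (I[y-1, x+1] + 2*I[y, x+1] + I[y+1, x+1]) \
                     - (I[y-1, x-1] + 2*I[y, x-1] + I[y+1, x-1])
            Gy[y, x] = (I[y-1, x-1] + 2*I[y-1, x] + I[y-1, x+1]) \
                     - (I[y+1, x-1] + 2*I[y+1, x] + I[y+1, x+1])
    mag = np.hypot(Gx, Gy)          # |G|
    theta = np.arctan2(Gy, Gx)      # gradient direction
    return mag, theta
```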
Disparities are computed from the two stereo edge images by stereo processing in the manner described above, and the computed disparities are mapped onto the edge image of the stereo reference image to build a disparity image. The three-dimensional position of each point is computed from the part of the disparity image corresponding to the circular edge extracted as described above. The position and attitude of the center of the top-face circle of the cylindrical object 18 are measured from the computed three-dimensional position information, and the position and attitude of the object 18 are thereby recognized.
One way of extracting only the curved portions from the edge image of the stereo reference image is, for example, the following: straight-line portions are extracted by the Hough transform, and edges appearing within a predetermined range of the extracted straight lines are erased. Consider the following situation: objects of the three types 18a, 18b, and 18c shown in Fig. 35 are conveyed along a conveyor 92, the object 18 is one of these three types, and the type of this object is to be determined. As shown in Fig. 37, the edge image is built after the parallel-stereo transformation, and the gradient-magnitude and gradient-direction images are built. In this case the stereo reference camera 11 captures the image almost directly above the objects, so the edge image of object 18 shown in Fig. 36a is obtained. When the Hough transform is applied to the edge image to extract straight lines, and the edges appearing within a predetermined distance of the extracted straight lines 126 and 127 are erased, the edges 122 to 125 belonging to the conveyor 92 are all erased, and only the circular edge 121 of the top face of object 18 is extracted. When the three-dimensional position of each point on the circular edge is then computed from the disparities of the extracted circle, and the center position and radius of the top-face circle of object 18 are measured from the computed three-dimensional position information, it can be determined from the measured center position (height) and radius which of the three types shown in Fig. 35 the object 18 belongs to.
Further, as shown in Figs. 38 and 39, when the top edges of the cylindrical objects 18, 18a, 18b, and 18c contain straight-line components, erasing the extracted straight edges within a predetermined distance may also erase part of the curved portions. To avoid this defect, as shown in Figs. 40 and 41, an arrangement is made to erase only those edges that appear within the predetermined distance of the extracted straight lines 177, 178, 179, 17a, 17b, and 17c and whose gradient direction makes an angle with the direction vector of the straight line within a predetermined range around 90°. In this way, as shown in Fig. 40b, the straight edges can be erased without also erasing part of the circular edge 171.
The method shown in Figs. 42 to 45 can also be used to erase the straight-line components of the edge image while keeping the curve components. That is, an edge point is found in the stored edge image of the stereo reference image and, as shown in Fig. 43a, the neighborhood of the found edge point of interest 201 is searched clockwise, starting from its right side, to find another edge point. If the gradient direction of a found edge point differs from that of the previously stored edge point 201 by no more than a predetermined value, this edge point is stored and made the next point of interest. This storing process continues as long as edge points are found, until no further edge point is found around the point of interest; when the number of edge points thus stored exceeds a predetermined value, the edge points are gathered into the edge-line groups 431 to 435 shown in Fig. 43b. Then, as shown in Fig. 44, for each edge group the angle 224 formed between the line segment connecting the start point 221 and the midpoint 222 and the line segment connecting the start point 221 and the end point 223 is examined, and only the edge lines whose angle exceeds a predetermined value are kept. In this way only the curved edges 431, 434, and 435 shown in Fig. 45a are extracted, and the circular edges 434 and 435 shown in Fig. 45b can then be extracted from the edges of Fig. 45a.
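The start/mid/end angle test for separating curved from straight edge groups can be sketched as follows; the 15° threshold is an assumption, since the patent only speaks of "a predetermined value".

```python
import numpy as np

def is_curved(edge_points, min_angle_deg=15.0):
    """Curvature test for an edge group: compare the chord start->mid
    with the chord start->end; a straight segment gives roughly 0 deg.

    `edge_points` is an ordered list of (x, y) along an open edge arc.
    """
    p = np.asarray(edge_points, dtype=float)
    start, mid, end = p[0], p[len(p) // 2], p[-1]
    v1, v2 = mid - start, end - start
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if n1 < 1e-9 or n2 < 1e-9:
        return True   # degenerate (closed or tiny) arc: treat as curved
    cosang = np.dot(v1, v2) / (n1 * n2)
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return angle > min_angle_deg
```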
Circular edges can be extracted from the curved edges in the following manner. The method shown in Fig. 46 exploits the fact that the bottom edges of object 18 have no corresponding parts in the left and right images, so their disparity detection ratio is quite low. As shown in Fig. 47a, the disparities computed for the curved edges 261, 262, and 263 are examined in the edge image containing only curved edges. As the plot of edge points with computed disparities in Fig. 47b shows, disparities have been computed for almost all points of the curved edges 262 and 263, whereas curved edge 261, having no corresponding parts in the left and right images, has hardly any computed disparities. Curves whose ratio of disparity-computed points is less than a predetermined value are discarded; curved edge 261 is thus discarded, only the curves 262 and 263 on the object top face remain, as shown in Fig. 47c, and the edges 262 and 263 can be extracted therefrom to form a circle.
When the object 18 has a ring shape like that shown in Fig. 49, the top face of the object has two circular edges, and only one of them (the outer circular edge in Fig. 49) is to be extracted. Specifically, Fig. 50a shows the edge image IE of an object 18; when the straight-line portions are erased from the edge image, the edge image IE2 shown in Fig. 50b is obtained. When the object appears bright against the background, the gradient directions 335 of the outer ring edge 331 point inward and the gradient directions 336 of the inner ring edge 332 point outward, as shown in Fig. 51. Using the equation of a right circle,

x² + y² + ax + by + r = 0,

the coefficients a and b and the constant r can be found by the least-squares method. When this circle equation is applied to the single top-face edge 332 to compute the circle center (-a/2, -b/2) 334, an arbitrary point 339 on the circle edge is chosen, the direction 338 from the chosen point 339 toward the computed center 334 is found, and those top-face edges for which the difference between this direction 338 and the gradient direction 336 at the chosen point is equal to or less than a predetermined value (90°) are extracted. If, not only for edge 332 but also for the edges 331, 33a, 33b, 33c, 33d, 33e, and 33f, the difference between the gradient direction at each edge point, computed as above, and the direction from that point toward the right-circle center is taken, and the top-face edges whose difference is equal to or less than the predetermined value (90°) are extracted, the edge image IE3 of the stereo reference image shown in Fig. 50c is obtained, in which the inner circular edges 332, 33a, 33c, and 33e have been removed. In this regard, when the predetermined value of the difference is chosen greater than 90°, the outer circular edges 331, 33b, 33d, and 33f are erased and the inner circular edges 332, 33a, 33c, and 33e are kept instead.
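The right-circle fit x² + y² + ax + by + r = 0 is linear in (a, b, r) and solvable by ordinary least squares, as this sketch shows:

```python
import numpy as np

def fit_circle(points):
    """Least-squares fit of x^2 + y^2 + a*x + b*y + r = 0.

    Rearranged as a*x + b*y + r = -(x^2 + y^2), which is linear in
    (a, b, r). Returns the center (-a/2, -b/2) and the circle radius.
    """
    p = np.asarray(points, dtype=float)
    x, y = p[:, 0], p[:, 1]
    A = np.column_stack([x, y, np.ones(len(p))])
    rhs = -(x**2 + y**2)
    (a, b, r), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2, -b / 2
    radius = np.sqrt(cx**2 + cy**2 - r)
    return (cx, cy), radius

# Points on a circle of radius 2 centered at (1, -1)
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
print(fit_circle(np.column_stack([1 + 2*np.cos(t), -1 + 2*np.sin(t)])))
```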
The embodiment of Fig. 52 is designed so that, when the objects 18 are cylinders like those shown in Fig. 53 and are stacked, giving the edge image IE shown in Fig. 54a, the topmost object in the stack is detected and its position and attitude are measured. Among the plural curved edges appearing in the edge image, the one having the largest disparity, i.e., belonging to the highest position, is selected and set as the reference edge 381 shown in Fig. 54a, and this reference edge 381 is checked to see whether it forms part of a circle. If it cannot be regarded as part of a circle, the edge with the next largest disparity is chosen as the reference edge and likewise re-examined for being part of a circle; this operation is repeated until an edge that is part of a circle is found and used as the reference edge. Reference numerals are then assigned to the other edges, numbered 382, 383, and so on in increasing order from the reference edge 381, and the other edges are examined one by one in that increasing order to see whether they form part of the same circle. Edges determined to form part of the same circle as the reference edge are included in the same group. This processing is applied to the other edges whose distance from the reference edge is equal to or less than a predetermined value, whereby the edges 381, 383, 384, and 386 lying on the same circumference are combined as shown in Fig. 54b, and the position and attitude of the topmost object 18 can be detected therefrom.
In the embodiment shown in Fig. 55, the edge having the largest disparity, i.e., belonging to the object at the highest level, is chosen from the edge image of Fig. 54a and used as the reference edge 381, and the other edges are denoted by reference numerals 382, 383, ... in increasing order of distance from the reference edge 381. The coefficients a, b, h, g, and f of the following ellipse equation can then be found by the least-squares method:

ax² + by² + 2hxy + 2gx + 2fy = 1

The ellipse is fitted to the reference edge 381 and its center coordinates (x0, y0) are computed. As shown in Fig. 56, a straight line 413 is drawn through an edge point 417 of the reference edge and the center point 412 of the ellipse 411, intersecting the ellipse at two points 414 and 415. If, for any point 417 on the reference edge, the distance 416 between point 417 and the nearer intersection point 414 exceeds a predetermined value, the reference edge is no longer regarded as part of a circumference, and another edge is chosen as the reference edge. Otherwise, when the distance does not exceed the predetermined value, the reference edge is regarded as part of a circumference. The reference edge 381 is then combined with the nearby edge 382, the coefficients a, b, h, g, and f of the above equation are again found by the least-squares method, the ellipse is fitted to the combined edges, and its center coordinates (x0, y0) are computed. The distances between the found ellipse and the reference edge 381 and its nearby edge 382 are then found. If all the distances found are less than a predetermined value, it is judged that the reference edge 381 and its nearby edge 382 form part of the same circle; if any distance is not less than the predetermined value, it is judged that these edges do not form part of the same circle, the reference edge 381 is combined with the next nearby edge line 383, and the above operation is repeated. When this sequence of operations has been repeated until all combinable edges have been tried, the edges belonging to the same circle can all be connected into one circumference. For such a fit, the center coordinates of the above ellipse are expressed as:

x0 = (hf - bg)/(ab - h²)

y0 = (hg - af)/(ab - h²)
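Likewise, the ellipse fit and its center formula above can be sketched as:

```python
import numpy as np

def fit_ellipse(points):
    """Least-squares fit of a*x^2 + b*y^2 + 2h*xy + 2g*x + 2f*y = 1,
    returning (a, b, h, g, f) and the ellipse center
    (x0, y0) = ((hf - bg)/(ab - h^2), (hg - af)/(ab - h^2)).
    """
    p = np.asarray(points, dtype=float)
    x, y = p[:, 0], p[:, 1]
    A = np.column_stack([x**2, y**2, 2*x*y, 2*x, 2*y])
    (a, b, h, g, f), *_ = np.linalg.lstsq(A, np.ones(len(p)), rcond=None)
    d = a*b - h*h
    return (a, b, h, g, f), ((h*f - b*g) / d, (h*g - a*f) / d)
```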
The embodiment shown in Fig. 57 also detects the topmost of a plurality of stacked cylindrical objects 18, shown in Fig. 58. Fig. 59a shows the edge image IE of the objects of Fig. 58; in it, the reference edge 441 corresponds to the topmost edge line, whose disparity is largest. In this case, by the method of the preceding embodiment, edge 442 should be detected as part of the same circle as the reference edge 441; however, because the object happens to be tilted, or for some other reason, it may fail to be detected, and edge 443 might then erroneously be connected to the reference edge 441. In the illustrated example, to avoid this kind of error, when the reference edge 441 and the edge 443 are judged by the method of the preceding embodiment to belong to the same circumference, a final judgment is made to confirm that they really do. This final judgment is carried out as follows, as shown in Fig. 60: a plane 453 is constructed from the three-dimensional information given by the disparities of two points on the reference edge 441 and one point on the other edge 443; another plane 454 is constructed from the three-dimensional information given by the disparities of one point on the reference edge 441 and two points on edge 443 other than the points just chosen; and the final judgment is affirmative only if the difference between the normal directions of these two planes is less than a predetermined value. By this operation, the circular edge 441 of the top face of the topmost object can be extracted as shown in Fig. 59b.
The operations that follow the extraction of the top-face circular edge of a cylindrical object will now be described. The embodiment of Figs. 61 to 63 is the same as the case of Fig. 25: the three-dimensional positions of the points on the extracted circular edge are computed from their disparities, and the coefficients a, b, and c of the following equation are computed by the least-squares method:

ax + by + cz + 1 = 0

The plane Pa containing the top face of object 18, shown in Fig. 62, is thereby obtained. The three-dimensional positions of the points on the object top face are then rotated about the origin Po of the camera coordinate system so that the normal Paz of the plane Pa coincides with the Z axis of the camera coordinate system, as shown in Fig. 63. Then, by applying the following equation (for a right circle) with the least-squares method, the coefficients a and b and the constant r can be found, so that the center 491 and radius 492 of the right circle after rotation can be computed:

x² + y² + ax + by + r = 0

In the illustrated example, the camera coordinate system is used as the reference coordinate system. By inverse-transforming the computed center 491 of the right circle (i.e., rotating back to before the rotation), the center 486 of the right circle of the top face of object 18 is computed, and the position, attitude, and radius of the object are thus obtained. If, instead, the circle is computed as an ellipse after the coordinate rotation, the coefficients a, b, h, g, and f of the ellipse equation given above can be found by the least-squares method, and converting the ellipse to standard form yields the position, attitude, and major and minor axis radii of an elliptic cylinder.
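The rotation that aligns the fitted plane normal Paz with the Z axis, applied before the right-circle fit, can be built with Rodrigues' formula; this sketch is one standard construction, not the patent's prescribed one.

```python
import numpy as np

def rotation_to_z(normal):
    """Rotation matrix taking the unit plane normal onto the Z axis
    (Rodrigues' formula); used before the 2-D right-circle fit.
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    c = float(np.dot(n, z))
    s = np.linalg.norm(v)
    if s < 1e-12:                       # already aligned (or opposite)
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / (s * s))

R = rotation_to_z([0.1, -0.2, 0.97])
print(R @ (np.array([0.1, -0.2, 0.97]) / np.linalg.norm([0.1, -0.2, 0.97])))
# -> approximately [0, 0, 1]
```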
In the embodiment shown in Fig. 64, the three-dimensional positions of the top-face points of the object are rotated about the origin of the above coordinate system so that the normal Paz of the plane Pa coincides with the Z axis of the coordinate system. Thereafter, those rotated points whose distance 493 in three-dimensional space from the plane Pa to the circular edge point (i.e., the difference in Z-axis value) exceeds a predetermined value are discarded. Using the x and y coordinate values of the remaining top-edge points of object 18 after rotation (excluding the discarded points), the coefficients a and b and the constant r can be found by applying the right-circle equation with the least-squares method, so that the center 491 and radius 492 of the right circle after rotation can be found. In this case too, the camera coordinate system is used as the reference coordinate system. By inverse-transforming the computed center 491 of the right circle (i.e., back to before the rotation), the center 486 of the right circle of the top face of object 18 is computed, so the position, attitude, and radius of object 18 can likewise be obtained. Furthermore, if the circle is computed as an ellipse after the coordinate rotation, the coefficients a, b, h, g, and f can be found by the least-squares method using the ellipse equation given above, and converting the ellipse to standard form also yields the position, attitude, and major and minor axis radii of an elliptic cylinder.
In the example of Fig. 65, the three-dimensional positions of the top-face points of the object are rotated about the origin of the above coordinate system so that the normal Paz of the plane Pa coincides with the Z axis of the coordinate system. Thereafter, those rotated points whose distance from the plane Pa constructed in three-dimensional space to the circumferential edge point (i.e., the difference in Z coordinate) exceeds a predetermined value are discarded. Using the least-squares method, a plane Pa containing the top edge of the cylindrical object 18 is then found once again in three-dimensional space, and the three-dimensional positions of the object top-face points are rotated about the origin Po of the camera coordinate system so that the normal Paz of the newly found plane Pa coincides with the Z axis of the coordinate system. Using the x and y coordinate values of the remaining top-edge points of object 18 after rotation (excluding the discarded points), the coefficients a and b and the constant r are found by applying the right-circle equation with the least-squares method, and the center 491 and radius 492 of the right circle after rotation are computed. By inverse-transforming the computed center of the right circle (i.e., back to before the rotation), the center 486 of the right circle of the top face of object 18 is obtained, and the position, attitude, and radius of the object are found. Even in this case, the circle may further be computed as an ellipse after the coordinate rotation: the coefficients a, b, h, g, and f can be found by the least-squares method using the ellipse equation mentioned above, and converting the ellipse to standard form yields the position, attitude, and major and minor axis radii of an elliptic cylinder.
In the embodiment of Fig. 66, the three-dimensional positions of the points are computed from the disparities of the extracted circular edge as described above, and the coefficients a, b, and c are computed by the least-squares method using the plane equation mentioned above to obtain the plane Pa containing the top face of object 18. In addition, as shown in Fig. 67, a sphere 531 containing the right circle of the top face of object 18 can be found in three-dimensional space by the least-squares method. When a sphere is used, the sphere is not uniquely determined, since the right circle of the top face lies in a single plane. The center of the sphere is, however, constrained: it must lie on the straight line that is perpendicular to the above plane Pa and passes through the center of the right circle in that plane. Moreover, the sphere found may be any sphere whose radius is larger than that of the top-face right circle and which contains the top-face right circle; once any one coordinate value of the sphere center is fixed, though, the sphere is determined. In the present embodiment, the Z-axis value of the sphere center is fixed at a value c0, which differs from the Z-axis values of the object existence region and is larger than the radius of the right circle. In other words, the following is the expression of the sphere equation whose coefficients a and b and constant R are found by the least-squares method:

x² + y² + z² + ax + by - 2c0·z + R = 0

The center 533 of the right circle of the top face of the cylindrical object 18 is obtained by computing the intersection of the plane Pa with the straight line that passes through the sphere center 532 and has the same direction vector as the normal vector 537 of the plane Pa. The radius 534 is then obtained as the square root of the square of the sphere radius 535 minus the square of the distance 536 from the sphere center to the right-circle center. Therefore, even in this case, the position, attitude, and radius of object 18 can be found.
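With the Z coordinate of the sphere center fixed at c0, the sphere fit above is again linear in its unknowns; a sketch:

```python
import numpy as np

def fit_sphere_fixed_cz(points, c0):
    """Least-squares fit of x^2 + y^2 + z^2 + a*x + b*y - 2*c0*z + R = 0
    with the Z coordinate of the sphere center fixed at c0.

    Center is (-a/2, -b/2, c0); radius^2 = a^2/4 + b^2/4 + c0^2 - R.
    """
    p = np.asarray(points, dtype=float)
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    A = np.column_stack([x, y, np.ones(len(p))])
    rhs = -(x**2 + y**2 + z**2) + 2 * c0 * z
    (a, b, R), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = np.array([-a / 2, -b / 2, c0])
    radius = np.sqrt(center @ center - R)
    return center, radius
```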
The embodiment of Figs. 68 to 70 is an example in which the present invention is applied to automatic assembly work. Describing Fig. 69 in more detail, a part 541 having two holes and a part 542 having two projections are to be assembled by a robot, so the holes and projections are first detected to measure their positions and attitudes, that is, to determine the positions and attitudes of parts 541 and 542.
More particularly, a three-dimensional edge image, a gradient direction image and a disparity image are set up with respect to a reference image (see Figure 70a), and curved edges are extracted from the edge image; that is, as shown in Figure 70b, four circular edges 701 to 704 are extracted. When the three-dimensional positions of the corresponding four circles 543 to 546 are calculated from the four circular edges, the position and attitude of the part 542 having the two projections can be determined from the midpoint and direction of the line segment connecting the centers of two of the circles, together with the normal direction of the plane determined by those two circles. Performing a similar operation on the other two circles likewise determines the position and attitude of the part having the two holes.
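One plausible way of turning the two detected circle centers and the normal of their common plane into a position and attitude is sketched below; the frame convention (x axis along the line of centers, z axis along the plane normal, origin at the midpoint) is an assumption made for illustration only.

```python
import numpy as np

def part_pose_from_two_circles(c1, c2, plane_normal):
    """Pose (origin, rotation matrix) of a part from two coplanar circle centers."""
    x = (c2 - c1) / np.linalg.norm(c2 - c1)   # axis through the two centers
    z = plane_normal / np.linalg.norm(plane_normal)
    z = z - np.dot(z, x) * x                  # make z exactly orthogonal to x
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                        # completes a right-handed frame
    origin = (c1 + c2) / 2.0                  # midpoint of the two centers
    return origin, np.column_stack([x, y, z])
```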
According to the present invention, as explained above, a two-dimensional image is obtained from the same viewpoint as is used when the distance image is obtained by picking up the three-dimensional object. A partial image is designated in this two-dimensional image, the two-dimensional partial image is applied to the distance image to extract a partial image from the distance image, and the object detecting operation is carried out over the partial distance image. That is, before the object detecting operation is carried out, the two-dimensional image is used to limit the zone in which the object under detection can exist in the distance image, and the detecting operation is performed only over that limited zone of the distance image. It will thus be appreciated that, compared with the prior art in which detection is carried out over the whole distance image, the present invention realizes a considerably faster detecting operation.
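The zone-limiting mechanism itself is simple to express. In the sketch below, the designated partial image is assumed to be given as a boolean mask over the two-dimensional image, and detect_fn is a placeholder for whatever detecting operation follows; the point is only that the detector runs over far fewer pixels than the whole distance image.

```python
import numpy as np

def detect_in_limited_zone(distance_image, mask2d, detect_fn):
    """Run the detecting operation only over the zone of the distance image
    selected by a 2-D mask taken from the same viewpoint."""
    partial = np.where(mask2d, distance_image, np.nan)  # blank out the rest
    ys, xs = np.nonzero(mask2d)                         # assumes a non-empty zone
    window = partial[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return detect_fn(window)
```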

Claims (23)

1. An object recognition method, characterized by comprising the steps of:
obtaining a distance image by picking up an object in three-dimensional space, and obtaining a two-dimensional image from the same viewpoint as that from which the distance image is obtained;
designating a partial image in the two-dimensional image;
applying the designated two-dimensional partial image to the distance image to extract a partial image of the distance image; and
using the partial distance image for an object detecting operation.
2. The object recognition method as claimed in claim 1, characterized in that the step of designating the partial image in the two-dimensional image is carried out according to a shape pattern.
3. The object recognition method as claimed in claim 1, characterized in that the step of designating the partial image in the two-dimensional image is carried out according to statistical texture features of the image.
4. The object recognition method as claimed in claim 1, characterized in that the designation of the partial image in the two-dimensional image is carried out according to color or brightness.
5. The object recognition method as claimed in claim 1, characterized in that the designation of the partial image in the two-dimensional image is carried out according to two-dimensional geometric features derived from the three-dimensional geometric features of the object.
6. An object recognition method, characterized in that: a two-dimensional image is obtained from the same viewpoint as that used to obtain a distance image by picking up an object in three-dimensional space; a principal axis of inertia and a center of gravity of the object image are found in the two-dimensional image; the distance image is projected onto a plane passing through that principal axis of inertia and perpendicular to the two-dimensional image; a principal axis of inertia and a center of gravity of the object image are found on the projection plane; and the position and attitude of a principal axis of inertia of the object in three-dimensional space are detected from the two principal axes of inertia and the two centers of gravity thus obtained.
7. The object recognition method as claimed in claim 6, characterized in that the principal axis of inertia and the center of gravity of the object detected in three-dimensional space are made to coincide with a principal axis of inertia and a center of gravity of an object model, the object model is rotated about its principal axis of inertia, and a registration between the object model and the object is obtained so as to detect the position and attitude of the object in three-dimensional space.
8. The object recognition method as claimed in claim 1 or 6, characterized in that, as preprocessing, a specified zone representing a specified height is extracted from the distance image, and the partial image is designated from the two-dimensional image of this specified zone.
9. The object recognition method as claimed in claim 1 or 6, characterized in that, in the object detecting operation, a plane is applied to the three-dimensional information obtained from the partial distance image; a rotation about the origin of the imaging coordinate system is performed so that the normal of the plane coincides with the z axis of the imaging coordinate system; the rotated three-dimensional information is projected onto the x-y plane of the coordinate system; the projected image is compared with a previously stored model to obtain a temporary position and attitude of the object; and the temporary position and attitude are inverse-transformed to calculate the position and attitude of the object before the rotation.
10. An object recognition method, characterized in that: gray-scale images of an object having a circle in three-dimensional space are obtained by picking up the object from a plurality of angles; an edge image and a gradient direction image are found from each gray-scale image; one edge image is produced as a reference for stereo processing, and a disparity image is set up from this reference edge image and the other edge images; only curved edges are extracted from the reference edge image; and the position and attitude of the extracted circle are detected by using the three-dimensional information found from the disparities of the extracted circular edge points.
11. The object recognition method as claimed in claim 10, characterized in that straight lines are extracted from the reference edge image, and the edges near the extracted straight lines are all erased so that only curved edges are extracted.
12. The object recognition method as claimed in claim 10, characterized in that straight lines are extracted from the reference edge image, and, among the edges close to an extracted straight line, those edges whose gradient direction makes an angle of nearly 90° with the direction vector of the straight line are also erased, so that only curved edges are extracted.
13. The object recognition method as claimed in claim 10, characterized in that, when only curved edges are extracted from the reference edge image, an edge point in the reference edge image is set as an object edge point; the points surrounding the object edge point are searched for the next edge point; a point found by the search is registered as the next object edge point when the difference between its gradient direction and the gradient direction of the object edge point is equal to or less than a predetermined value; this registering operation is continued until no next object edge point can be found around the current object edge point; and the edge points belonging to the same group are then classified into curves and straight lines so that only curved edges are extracted (a sketch of this grouping is given after the claims).
14. The object recognition method as claimed in claim 10, characterized in that, in extracting a circle from the curved edges, those curved edges are extracted which contain a predetermined number or a predetermined percentage of edge points whose disparity has been calculated, and the circle is extracted from them.
15. The object recognition method as claimed in claim 10, characterized in that a circle is applied to the extracted curved edges by least squares to calculate the center of the circle; for each edge point of a curved edge there is a difference between the direction from the edge point toward the center and the gradient direction of the edge point; those curved edges for which this difference lies within a preset range are extracted; and the circle is extracted from the curved edges thus extracted.
16. The object recognition method as claimed in claim 10, characterized in that the extracted curved edges are connected together to be combined into one complete circle; the curved edges are connected in increasing order of their mutual distances to form the circle in the image; and curved edges whose distance is equal to or greater than a predetermined value are not used for forming the circle.
17. The object recognition method as claimed in claim 16, characterized in that, when a circle is found by least squares from the two-dimensional data of the connected curved edges, the connected curved edges are judged to be on the same circle and to form one circle only when every point on the connected curved edges lies within a predetermined distance range of the circle thus found.
18. The object recognition method as claimed in claim 10, characterized in that two curved edges considered to form a circle are judged to be on the same circle only when the following condition is satisfied: a difference is taken between the normal of the plane determined by the three-dimensional information, established from disparities, of two points on the first curved edge and one point on the second curved edge, and the normal of the plane determined by the three-dimensional information, established from disparities, of two points on the second curved edge and one point on the first curved edge (other than the point already taken); when this difference lies within a predetermined range, the two curved edges are judged to be on the same circle and to form the circle.
19. The object recognition method as claimed in claim 10, characterized in that the plane in which the circular edge lies is found by using the three-dimensional information of the points on the extracted circular edge; the three-dimensional information is rotated about the origin of the imaging coordinate system so that the normal of the found plane coincides with the z axis of the imaging coordinate system; the center and radius of the circle are calculated from the circular edge by least squares; and the calculated x and y coordinate values after the rotation are inverse-transformed to obtain the position and attitude of the object before the rotation.
20. The object recognition method as claimed in claim 19, characterized in that the plane of the circular edge points is found; those circular edge points whose distance from the found plane is equal to or greater than a predetermined value are removed; and the center and radius of the circle are then calculated in three-dimensional space.
21. The object recognition method as claimed in claim 19, characterized in that the plane of the circular edge points is found; those circular edge points whose distance from the found plane in three-dimensional space is equal to or greater than a predetermined value are removed; the plane of the circular edge is found again from the circular edge determined by the remaining circular edge points; and the newly found plane is rotated.
22. The object recognition method as claimed in claim 10, characterized in that the extracted circular edge is applied to a plane and a sphere in three-dimensional space, and the center and radius of the circle determined by the intersection of this plane and the sphere are calculated.
23. The object recognition method as claimed in claim 10, characterized in that the position and attitude of the identified object are detected according to the values of the measured circles and their positional relation in three-dimensional space.
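As a supplement to claim 13 (referenced there), the edge-following and grouping step can be sketched as follows. This is a hypothetical NumPy illustration, not the patented implementation: the 8-neighbour search, the gradient-direction threshold max_diff and the function name are assumptions, and the subsequent classification of each group into curve or straight line is left out.

```python
import numpy as np

def trace_edge_groups(edge, grad_dir, max_diff):
    """Group edge pixels by following 8-neighbours whose gradient direction
    differs from the current object edge point by at most max_diff."""
    h, w = edge.shape
    visited = np.zeros_like(edge, dtype=bool)
    groups = []
    for sy, sx in zip(*np.nonzero(edge)):
        if visited[sy, sx]:
            continue
        group, stack = [], [(sy, sx)]
        visited[sy, sx] = True
        while stack:                       # follow until no next edge point is found
            y, x = stack.pop()
            group.append((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and edge[ny, nx] and not visited[ny, nx]):
                        d = abs(grad_dir[ny, nx] - grad_dir[y, x]) % (2 * np.pi)
                        if min(d, 2 * np.pi - d) <= max_diff:   # direction test
                            visited[ny, nx] = True
                            stack.append((ny, nx))
        groups.append(group)
    return groups  # each group is afterwards classified as curve or straight line
```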
CN95100535A 1994-02-23 1995-02-23 Method for identifying object in three dimensional space Expired - Fee Related CN1045831C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2573194 1994-02-23
JP25731/94 1994-02-23

Publications (2)

Publication Number Publication Date
CN1136685A true CN1136685A (en) 1996-11-27
CN1045831C CN1045831C (en) 1999-10-20

Family

ID=12173963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN95100535A Expired - Fee Related CN1045831C (en) 1994-02-23 1995-02-23 Method for identifying object in three dimensional space

Country Status (1)

Country Link
CN (1) CN1045831C (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789642A (en) * 2011-05-16 2012-11-21 索尼公司 Method and device for disappeared direction determination and method and device for camera self-calibration
CN104809940A (en) * 2015-05-14 2015-07-29 广东小天才科技有限公司 Geometrical stereoscopic graph projection device and projection method
CN109816704A (en) * 2019-01-28 2019-05-28 北京百度网讯科技有限公司 The 3 D information obtaining method and device of object
CN110096999A (en) * 2019-04-29 2019-08-06 达闼科技(北京)有限公司 Chessboard recognition methods, chessboard recognition device, electronic equipment and can storage medium
CN112052847A (en) * 2020-08-17 2020-12-08 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4556986A (en) * 1983-03-09 1985-12-03 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Optical stereo video signal processor

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789642A (en) * 2011-05-16 2012-11-21 索尼公司 Method and device for disappeared direction determination and method and device for camera self-calibration
CN102789642B (en) * 2011-05-16 2017-08-25 索尼公司 Direction of extinction determines method and apparatus, camera self-calibration method and device
CN104809940A (en) * 2015-05-14 2015-07-29 广东小天才科技有限公司 Geometrical stereoscopic graph projection device and projection method
CN104809940B (en) * 2015-05-14 2018-01-26 广东小天才科技有限公司 Geometry stereographic projection device and projecting method
CN109816704A (en) * 2019-01-28 2019-05-28 北京百度网讯科技有限公司 The 3 D information obtaining method and device of object
US11288492B2 (en) 2019-01-28 2022-03-29 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for acquiring 3D information of object
CN110096999A (en) * 2019-04-29 2019-08-06 达闼科技(北京)有限公司 Chessboard recognition methods, chessboard recognition device, electronic equipment and can storage medium
CN110096999B (en) * 2019-04-29 2021-07-30 达闼科技(北京)有限公司 Chessboard recognition method, chessboard recognition device, electronic equipment and storable medium
CN112052847A (en) * 2020-08-17 2020-12-08 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112052847B (en) * 2020-08-17 2024-03-26 腾讯科技(深圳)有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN1045831C (en) 1999-10-20

Similar Documents

Publication Publication Date Title
CN1162681C (en) Three-D object recognition method and pin picking system using the method
US5692061A (en) Method of utilizing a two-dimensional image for detecting the position, posture, and shape of a three-dimensional objective
CN1299556C (en) Pick and place machine with component placement inspection
CN106326892B (en) Visual landing pose estimation method of rotary wing type unmanned aerial vehicle
CN1247955C (en) Method and apparatus for three-dimensional color scanning
CN1573811A (en) Map generation device, map delivery method, and map generation program
CN1940961A (en) Feature point detection apparatus and method
CN1531826A (en) Method and apparatus for omni-directional image and 3-dimensional data acquisition with data annotation and dynamic range extension method
CN112465960B (en) Size calibration device and method for three-dimensional model
CN1539120A (en) Three-dimensional electronic map data creation method
CN103198487A (en) Automatic calibration method for video monitoring system
CN1661323A (en) Apparatus for surface inspection and method and apparatus for inspecting substrate
KR20110055412A (en) Calibration apparatus
CN1801896A (en) Video camera rating data collecting method and its rating plate
CN109739239B (en) Planning method for uninterrupted instrument recognition of inspection robot
CN1853195A (en) Program for correcting image distortion, apparatus for correcting image distortion, method for correcting image distortion, and recording medium storing program for correcting image distortion
CN101038163A (en) Single-vision measuring method of space three-dimensional attitude of variable-focus video camera
CN100345154C (en) Visual quick identifying method for football robot
CN1878297A (en) Omnibearing vision device
US20130058526A1 (en) Device for automated detection of feature for calibration and method thereof
CN1959740A (en) Image processing method and device, program for processing image, and storage medium thereof
WO2007064897A1 (en) Improved method of pattern location using color image data
CN112254675B (en) Space occupancy rate acquisition and judgment equipment and method containing moving object
CN1045831C (en) Method for identifying object in three dimensional space
Von Hansen Robust automatic marker-free registration of terrestrial scan data

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 19991020

Termination date: 20140223