EP0243493A1 - Target guidance and control system for positioning automatically guided vehicles - Google Patents
Target guidance and control system for positioning automatically guided vehicles
- Publication number
- EP0243493A1 (application number EP87900361A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- images
- light source
- reflector elements
- target member
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 claims description 23
- 238000003384 imaging method Methods 0.000 claims description 2
- 238000003032 molecular docking Methods 0.000 claims 1
- 230000005855 radiation Effects 0.000 claims 1
- 239000013598 vector Substances 0.000 description 53
- 239000011159 matrix material Substances 0.000 description 25
- 238000013459 approach Methods 0.000 description 13
- 230000008859 change Effects 0.000 description 7
- 230000006870 function Effects 0.000 description 7
- 238000010586 diagram Methods 0.000 description 6
- 238000011156 evaluation Methods 0.000 description 5
- 230000035945 sensitivity Effects 0.000 description 5
- 230000008030 elimination Effects 0.000 description 4
- 238000003379 elimination reaction Methods 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 230000009467 reduction Effects 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 238000009472 formulation Methods 0.000 description 3
- 238000002955 isolation Methods 0.000 description 3
- 230000000670 limiting effect Effects 0.000 description 3
- 239000000203 mixture Substances 0.000 description 3
- 238000010606 normalization Methods 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 229910052724 xenon Inorganic materials 0.000 description 3
- FHNFHKCVQCLJFQ-UHFFFAOYSA-N xenon atom Chemical compound [Xe] FHNFHKCVQCLJFQ-UHFFFAOYSA-N 0.000 description 3
- 230000000694 effects Effects 0.000 description 2
- 238000000926 separation method Methods 0.000 description 2
- 238000006424 Flood reaction Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 239000002184 metal Substances 0.000 description 1
- 230000036961 partial effect Effects 0.000 description 1
- 230000000704 physical effect Effects 0.000 description 1
- 230000001681 protective effect Effects 0.000 description 1
- 238000013139 quantization Methods 0.000 description 1
- 238000001454 recorded image Methods 0.000 description 1
- 238000011946 reduction process Methods 0.000 description 1
- 230000002829 reductive effect Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000012163 sequencing technique Methods 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0225—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving docking at a fixed facility, e.g. base station or loading bay
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66F—HOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
- B66F9/00—Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
- B66F9/06—Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
- B66F9/075—Constructional features or details
- B66F9/0755—Position control; Position detectors
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/87—Combinations of systems using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0234—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/37—Measurements
- G05B2219/37555—Camera detects orientation, position workpiece, points of workpiece
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/45—Nc applications
- G05B2219/45049—Forklift
Definitions
- This invention relates to method and apparatus for determining the position and orientation of a target with respect to a sensor.
- Both the target and the sensing unit may be affixed to movable objects, although generally the target will be stationary, and the sensing unit will be attached to a self-propelled vehicle.
- the vehicle carries a sensing unit, the output of which controls some function of the vehicle, such as its forward motion, steering control or the vertical location of the forks.
- the target might be the pallet itself or the frame or rack on which the pallet is supported.
- Some of the prior art devices employ specialized marks whose dimensions are known; others utilize special light projectors and sense the light reflected from the target for positioning a vehicle, or a component of the vehicle, such as the forks of a lift truck.
- This invention is directed to a target member for use in a positioning system for providing at least three identifiable images positioned with respect to a sensor carried by a self-propelled vehicle. These three images are located so as to provide an unambiguous frame of reference thereby to allow the determination of all six positional and orientational degrees of freedom from a single observation of the target by the sensor.
- the target includes at least three reflector elements mounted on a support member.
- the target and the reflector elements provide a thin, essentially flat surface while at the same time providing to the vehicle mounted sensor the appearance of a target having considerable depth.
- the mirrors are selected so that the images of a light source carried by the vehicle define a plane and a circle.
- the plane is not normal to the sensor viewing axis, and the circle does not include the sensor.
- a light source was chosen as the identifying means so that it could be readily detected by commercially available sensing devices, such as a small television camera.
- the reflector elements have at least two different radii of curvature.
- two of the reflectors may be convex and have the same radius of curvature, and the third reflector may be concave.
- the radius of curvature and diameter of each are selected so that the reflection of the light source may be viewed by a sensor when the vehicle is within a predetermined field of view with respect to the target. It is necessary, of course, to allow the vehicle to approach the target from some distance, and to identify the location of the target from some acceptable viewing angle.
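- the depth effect noted above follows directly from the spherical-mirror equation 1/d_i + 1/d_o = 2/R: the convex elements form virtual images behind the target surface while the concave element forms a real image in front of it. The short sketch below works this out numerically, assuming the 3 inch radius of curvature mentioned later in the description and an illustrative 100 inch lamp-to-target distance.

```python
# Illustrative only: where the flash-lamp image forms in each spherical mirror
# element. Sign convention: concave R > 0, convex R < 0; a negative image
# distance means a virtual image behind the mirror surface.
def mirror_image_distance(radius_in, object_dist_in):
    """Image distance from 1/d_i + 1/d_o = 2/R."""
    return 1.0 / (2.0 / radius_in - 1.0 / object_dist_in)

d_source = 100.0  # assumed lamp-to-target distance, inches
for name, radius in [("convex mirrors 52/54", -3.0), ("concave mirror 53", +3.0)]:
    d_image = mirror_image_distance(radius, d_source)
    kind = "virtual, behind the mirror" if d_image < 0 else "real, in front of the mirror"
    print(f"{name}: image at {d_image:+.2f} in ({kind})")
```

- with these example numbers the three source images are separated by roughly three inches along the viewing direction, even though the target member itself is essentially flat.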
- the target member may also include retroreflector members which provide a brilliant and rather large reflection whenever the light source is flashed. The positions of the reflections from the retroreflector members are used to determine the area on the sensor's image plane where the desired reflections of the light source from the curved reflector elements are to be found.
- the target member may also include coded means for identifying the specific target, such as a bar code that may be scanned to confirm that the target within the field of view of the sensor is the one to be engaged by the vehicle.
- a guidance system may be programmed to maneuver the vehicle into proper position with respect to the target.
- the identifying means is a light source, and specifically a xenon strobe lamp for providing a short duration, high intensity pulse of light directed away from the front of the vehicle along the axis of the camera and back to the sensor via the mirrors.
- the camera and the light source would be collocated. It is possible to use half silvered mirrors so that the center of the light source falls upon the axis of the lens of the camera.
- a practical embodiment of the invention places the light source immediately above the camera. It has been found that this slight offsetting does not appreciably affect the accuracy of the measurements taken.
- it is an object of this invention to provide a target member for use in a positioning system that employs at least three reflector elements, with each of the reflector elements being configured to form images of an identifying means in a plane that is oriented other than normal to a line from the identifying means to the plane.
- Fig. 1 is a perspective view showing a rack capable of supporting a plurality of pallets, and an automatically guided vehicle carrying a light source and a camera for sensing the reflections of the light source in a reflector member attached to a selected pallet;
- Fig. 2 is a perspective view showing the camera and light source located below and to the right of a line passing perpendicular to and through the center of the target or reflector member;
- Fig. 3A is a view showing the reflections of the light source in the reflector elements.
- Fig. 3B is a view showing the reflections from the mirrors on the target as they appear on the image plane of the camera;
- Fig. 4 is a perspective view showing the camera and light source mounted on the forklift vehicle;
- Fig. 5 illustrates a target or reflector member with attached retroreflector members, spherical reflector elements, and a bar code;
- Fig. 6 is a plan view showing various locations of an automatically guided vehicle, such as a fork lift, with respect to a pallet;
- Figs. 7A-7D represent the reflections of the light source in the reflector elements at the various locations of the vehicle with respect to the pallet, as shown in Fig. 6;
- Figs. 8A-8D represent the images of the reflections shown in Figs. 7A-7D as they appear on the image plane of the vehicle-carried camera;
- Figs. 9A-9D represent the electrical signals on the image plane of the sensor at a single location of the vehicle.
- Fig. 9A shows the signals due to the images of the retroreflectors as a result of the first flash of the light source.
- Fig. 9B shows the signals resulting from ambient light.
- Fig. 9C shows the signals when the light source is flashed a second time.
- Fig. 9D represents the electrical signals that remain after processing;
- Fig. 10 is a vector diagram illustrating the directional relationship between a first image point P and the final image point P'. The illustrated vector is located at the nodal point of the sensor lens;
- Fig. 11 is a block diagram of the video processing circuit used to identify and locate images of the retroreflectors and other reflector elements; and
- Figs. 12A-12C are timing diagrams showing the various signals that occur at various times during the operation of this invention.
- a storage rack 10 is shown supporting an object such as a pallet 15.
- the pallet 15 is provided with a target member 20.
- a wooden pallet 15 is shown, and the target member 20 is illustrated as being a separate component attached to the pallet. It should be understood, however, that any type of pallet structure may be used, and the target member 20 may either be a separate unit or it may be formed integral with the pallet itself.
- a vehicle 30, such as a forklift truck, carries the identification means 35 (Figs. 2 and 4), such as a high intensity light source, and an image sensing means 40, which is preferably a miniature TV camera, such as a Sony XC-37 CCD camera.
- the light source and camera are preferably mounted together as a unit 45, with the light source 35 immediately adjacent and above the camera lens (Fig. 4). In the preferred embodiment, the vertical distance separating the light source and camera lens is approximately one inch.
- the target member 20 is shown in more detail in Fig. 5, and it includes a generally flat support member 50, three reflector elements 52, 53, and 54, and three retroreflector members 62, 63, and 64.
- a bar code 71 for uniquely identifying the pallet may also be printed on or attached to the support member.
- the light source and camera unit 45 is preferably aligned with the direction of travel of the vehicle. It is possible, however, rotatably to mount the camera on the vehicle so that it may scan through a large field of view, both horizontally and vertically. If this were done, however, the camera would be associated with a drive unit and position indicating device so that the proper corrections would be considered when calculating the relative location of the target.
- the preferred embodiment of the invention employs two convex reflector elements 52 and 54, and one concave reflector element 53.
- Each of the reflector elements is spherical in shape, and all are horizontally arranged on the support member 50.
- Both reflector elements 52 and 54 have the same radius of curvature, and the radii of curvature of all of the elements and their diameters are selected to provide a reasonable field of view A (Fig. 6) such that a reflection of the identifying means 35 will be viewable by the camera as long as the vehicle is within the field of view. In the embodiment described, it is preferred to have a minimum field of view of ±10° from the mirror plane normal. Typical mirrors may be approximately 1.5 inches in diameter and have a radius of curvature of 3 inches or greater.
- Mirrors 52 and 54 may be type 380 convex mirrors, and mirror 53 may be a type 100 concave mirror, both manufactured by ROLYN. It should be emphasized that the identifying means 35, while preferably a brilliant light source, could also be any means that could be detected by the sensing means 40. A brilliant xenon flash lamp has been found effective for this purpose.
- the light/camera unit 45 is shown positioned below and to the right of the center line 60 passing through the target 20, but within the field of view A. Under these conditions, the reflections of the identifying means or light source 35 appear as images P A , P O , P B in mirrors 52, 53 and 54, respectively, as shown in Fig. 3A. Since the mirrors are curved surfaces in the embodiment shown, the images P A and P B will appear toward the lower right portion of mirrors 52 and 54, and the image P O will be toward the upper left in mirror 53.
- the images P A ', P O ', P B ' of the identifying means would be grouped on the image plane 70 of the camera towards its upper left hand corner, as shown in Fig. 3B. (It will be assumed for the following illustration that the images formed on the image plane are not inverted or reversed.)
- the absolute location of the reflections, the spacing between the reflections, and the relative position of all the reflections will provide information sufficient to determine from a single observation the location of the vehicle with respect to the pallet and the orientation or rotation of the pallet. As the location of the vehicle changes, the observed position of the identifying means or reflections on the image plane of the camera will also change, as will be explained.
- the images P A , P O and P B of the identifying means 35 in the reflector elements 52-54 will define a plane 80, and these images will also define a circle 82.
- the plane 80 is not normal or perpendicular to the center line 60 of the target 20; it is in fact essentially parallel to the upper surface of the pallet.
- the circle 82 will not include the lens of the camera 40.
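- these two conditions (the image plane not normal to the viewing axis, and the circumscribed circle not passing through the sensor) are what guarantee an unambiguous pose solution from a single observation. A minimal numerical check of both, with invented image-point coordinates and the camera lens at the origin of the axis system described later, is sketched below.

```python
import numpy as np

# Hypothetical locations of the source images P_A, P_O, P_B in camera
# coordinates (x right, y forward along the lens axis, z up); the lens is at
# the origin. Values are invented for illustration only.
p_a = np.array([-6.0, 101.5, -0.9])   # virtual image behind convex mirror 52
p_o = np.array([ 0.0,  98.5,  1.0])   # real image in front of concave mirror 53
p_b = np.array([ 6.0, 101.5, -0.9])   # virtual image behind convex mirror 54

# Condition 1: the plane 80 through the three points is not normal to the
# viewing axis (the y axis).
normal = np.cross(p_o - p_a, p_b - p_a)
normal = normal / np.linalg.norm(normal)
print("plane normal . viewing axis =", abs(normal @ np.array([0.0, 1.0, 0.0])))  # must not be 1

# Condition 2: the circumscribed circle 82 of the three points does not pass
# through the sensor at the origin.
ab, ac = p_o - p_a, p_b - p_a
n = np.cross(ab, ac)
center = p_a + np.cross((ab @ ab) * ac - (ac @ ac) * ab, n) / (2 * (n @ n))
radius = np.linalg.norm(center - p_a)
in_plane = abs(p_a @ normal) < 1e-9              # origin would have to lie in the plane
on_circle = in_plane and abs(np.linalg.norm(center) - radius) < 1e-9
print("sensor lies on circle 82:", on_circle)    # must be False
```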
- Figs. 9A-9D represent the images appearing on the image plane of the camera 40 during one sequence of operations necessary to gather positional information.
- the preferred method of this invention provides for flashing the light source 35 and recording the positions of the reflections from the retroreflector members 62, 63, and 64.
- the circuit for accomplishing this is shown in Fig. 11.
- These reflections are identified in Fig. 9A as reflections 162, 163, and 164, respectively.
- These reflections are easily identified because they each occupy a plurality of pixels on the image plane 70 of the camera since they are physically large components of the target 20 and since the retroreflectors return a large percentage of the light emitted by the identifying means 35 back toward the source. For this reason, the effective sensitivity of the camera is reduced at this stage of the operation so that only the reflections of the retroreflectors are likely to be found at the image plane. Also, because of the high intensity of the reflected light, there may be some blooming of the image.
- the position of each of the retroreflector images is recorded in memory means.
- Microprocessor means 310 performs a calculation by reference to the positions of the retroreflector images 162-164, and an area 200 is defined in which the reflections from the reflector elements 52-54 are likely to be found.
- This defined area 200 may be located anywhere on the image plane of the camera and will vary in area in proportion to the separation of the vehicle from the target. In other words, the closer the vehicle is to the target, the more widely separated will be the images, and the defined area will consequently be larger.
- these images are those resulting from ambient light and may include reflections 165 such as overhead lights, specular reflections from metal objects within the field of view, and other light sources.
- the next step is to reduce the effective sensitivity of the camera and again flash the light source 35.
- the image plane will contain the images of the retroreflectors 162-164, the ambient reflections 165, and also reflections P A , P O and P B of the light source in each of the reflector elements 52-54. All of the images within the defined area 200 are recorded. Any images from the retroreflectors that bloom into the defined area are removed from memory, and the images recorded in Fig. 9B are effectively subtracted from those in Fig. 9C; what remains are images P A ', P O ' and P B ', the reflections of the light source in the reflector elements 52-54, as shown in Fig. 9D. The center of each of these images P A ', P O ' and P B ' is calculated, and the video signals from the camera image plane are evaluated in accordance with the procedure defined later.
- as shown in Fig. 5, the position of each retroreflector member should be known so that an area in which the reflector elements are positioned can be defined. Also, the positions of the retroreflector elements further define a second area 210 in which the image 170 of the bar code 71 may be found, and at some appropriate time during the analysis of the image the bar code may be read to confirm that the proper target is being approached.
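- the windowing step just described is straightforward to picture in software terms: the centroids of the three large retroreflector returns fix both the search window 200 for the small mirror reflections and the window 210 for the bar code image. The sketch below is a hypothetical software analogue, not the circuit of Fig. 11; the padding factors and the assumed bar-code placement below the retroreflector row are illustrative choices.

```python
# Hypothetical windowing sketch: retroreflector image centroids (pixel
# coordinates) define area "200" (mirror-reflection search) and area "210"
# (bar-code search). Padding and bar-code placement are assumptions.
def define_windows(retro_centroids, pad=0.25):
    xs = [x for x, _ in retro_centroids]
    ys = [y for _, y in retro_centroids]
    width = max(xs) - min(xs)
    margin = pad * width            # scales with image separation, so the window
                                    # grows as the vehicle approaches the target
    area_200 = (min(xs) - margin, min(ys) - margin,
                max(xs) + margin, max(ys) + margin)
    area_210 = (min(xs), max(ys) + 0.1 * width,
                max(xs), max(ys) + 0.4 * width)
    return area_200, area_210

area_200, area_210 = define_windows([(120, 200), (190, 196), (260, 201)])
print("mirror-image search window:", area_200)
print("bar-code search window:", area_210)
```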
- the reflector elements 52-54 do not all have to be spherical in shape. All that is necessary is that the images of the identifying means carried by the vehicle be viewable by the camera. This means that one or more of the reflector elements could be a retroreflector. Using spherical mirrors, however, reduces the cost of the target and also provides relatively smaller images, whose position can therefore be determined with a high degree of accuracy.
- it is assumed that the mirrors are evenly spaced and horizontally aligned, and that the center line of the target, that is, the center mirror, is the desired final position of the light/camera unit 45. It should be recognized, however, that any orientation of the mirrors and any position of the camera unit with respect to the target would be acceptable and would not depart from this invention. All that the control circuit would need is information regarding the final desired position of each of the reflections on the image plane. By convention, and for purposes of explanation, it will be assumed that the desired final position will be on the center line, with the images equally spaced on the image plane and arranged in a horizontal line. The area between the retroreflectors is provided with a dark background in order to minimize random noise and false data.
- in Figs. 6, 7A-7D and 8A-8D it is assumed that the light/camera unit 45 is in the same horizontal plane as the target, and that the vehicle 30 is positioned to the right of the center line 60 in location 1.
- the reflections of the light source are shown in Fig. 7A
- the images of the reflections on the image plane 70 of the camera 40 are shown in Fig. 8A and would be located in the left center of the image plane.
- the images are close together and unequally spaced.
- the location of the images on the image plane and their relative spacing are all important to the calculations for determining the vehicle's relative location with respect to the target.
- both the light source 35 and the camera 40 are connected to electronic control and processing circuits 300.
- a microprocessor system 310 which provides control signals to control the flow of data to the remainder of the circuit.
- a timing logic circuit shown generally at 320, responds to instructions from the microprocessor system 310 to control the light source or flash 35, the video information from the camera 40, and the way that information is processed and stored in the remainder of the circuit.
- the video RAM 330 provides the means for recording the images on the image plane of the video camera 40.
- a multiplexer 340 operating under control of the microprocessor 310 and timing logic circuit 320, transfers video information into the video RAM 330 through a serial-to-parallel converter 350, and out of the video RAM 330 through a parallel-to-serial converter 360.
- a digital-to-analog (D/A) converter 370 responds to a digital signal from the microprocessor to establish a threshold level for the output of the video camera unit, and that threshold level determines what video information passes from the camera 40 through a comparator circuit 380 into a selection circuit 390, which circuit includes a first AND gate 392, a second AND gate 394, and an exclusive OR gate 396.
- each interval between vertical blanking pulses 302 represents one-half frame.
- the interval designated 401 includes all of the odd numbered lines of one frame or screen, while the interval 402 represents all of the even numbered lines.
- the intervals 401 and 402 together comprise one complete frame.
- a power up signal is provided by the microprocessor system on line 316 to the timing logic 320 when the system is first turned on to synchronize the microprocessor system with the timing logic circuit.
- the timing logic circuit sends a reset pulse on line 321 to the video camera unit to initialize this device.
- the camera 40 provides a video output signal to the comparator circuit 380, and part of this output is a pulse 302 representing the vertical blanking interval. During each vertical blanking interval, the timing logic circuit provides a vertical sync pulse back to the microprocessor on line 324.
- the microprocessor system 310 controls the sequencing of operation of the entire system.
- the microprocessor establishes an initial threshold level for the camera by sending a digital value on the microprocessor bus 315 to the D/A converter 370.
- This threshold level shown in Fig. 12A, as the dashed line 372, limits those signals that may pass through the comparator circuit 380.
- This initial level is set high enough that all reflections, except from the retroreflectors within the field of view of the camera, will be ignored and will not be passed on to the selection circuit 390.
- the microprocessor 310 sends a flash enable signal 311 on line 312 to timing logic circuit 320.
- This signal extends through the vertical blanking interval 302.
- the microprocessor generates a strobe signal on line 313, and the timing logic circuit 320 in response thereto sends a flash trigger pulse 322 on line 323 at the beginning of interval 401 to the light source or strobe 35.
- the light source 35 is a high intensity xenon strobe which floods the area in front of the camera with a short duration pulse of high intensity light.
- the threshold level 372 during intervals 401 and 402 is set high enough that only the video signals exceeding the predetermined threshold value, such as those representing the reflections 162-164 returned by the retroreflectors 62-64, will be allowed to pass through the comparator 380. Although the duration of the flash may be measured in microseconds, the light energy of the reflections therefrom will be retained on the camera image plane for one complete frame.
- the video camera unit 40 provides an output on line 326 from the internal camera clock, basically a 3.58 MHz series of pulses, to the timing logic circuit 320. These clock pulses are converted by the timing logic circuit into pixel clock pulses on line 327, with each pixel clock pulse representing a single pixel as it appears on the camera image plane.
- as shown in the drawings, pixel clock pulses are provided to the serial-to-parallel converter 350 and the parallel-to-serial converter 360 for two intervals after a start pulse.
- the camera, a Sony XC-37 CCD camera, has an image plane providing an array of 384 x 491 pixels. Half of those pixels will be interrogated during the first interval and the other half during the second interval. Thus, each pixel on the camera image plane is separately and uniquely identified, and it can be determined whether or not the output from each pixel exceeds the predetermined threshold level.
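- the interlaced addressing just described, with half of the lines scanned in each interval and a fixed number of pixel clocks per line, maps every pixel-clock count to a unique pixel address. A hypothetical software rendering of that bookkeeping (the line-numbering convention is an assumption):

```python
def pixel_address(interval, line_in_field, clock_count, pixels_per_line=384):
    """Map an interlaced field (1 = odd-numbered lines, 2 = even-numbered lines),
    a line index within that field, and a pixel-clock count within the line to a
    unique (row, column) address on the 384 x 491 image plane."""
    row = 2 * line_in_field + (0 if interval == 1 else 1)   # 0-based rows; field 1
                                                            # covers rows 0, 2, 4, ...
    col = clock_count % pixels_per_line
    return row, col

print(pixel_address(interval=2, line_in_field=10, clock_count=37))   # -> (21, 37)
```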
- the output of the comparator 380 is applied on line 381 to selection circuit 390, and to both the AND gate 392 and exclusive OR gate 396. During intervals 401 and 402, the exclusive OR enable line 318 is low and therefore the video information from the comparator 380 will be passed directly on line 391 into the serial-to-parallel converter 350.
- since the target 20 includes three retroreflectors in the preferred embodiment, it is expected that only three intense returns, or images 162, 163 and 164, will exceed the threshold 372; these images will therefore be processed through the serial-to-parallel converter 350 and sent on the video RAM data bus 355, through the multiplexer 340 under control of signals provided by timing circuit 320 on bus 328, and into the video RAM 330 via bus 335, where they will be stored or recorded in electronic form.
- because the images of the retroreflectors are so large, it is possible to speed up the process by scaling the pixel clock and limiting the video data stored in the RAM 330 to every fourth pixel of every fourth line of each frame, while still detecting the presence of the retroreflectors.
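- the same speed-up is easy to express on a thresholded frame held as an array. The sketch below is an assumed software equivalent of the hardware behaviour, using the camera's stated 384 x 491 pixel array.

```python
import numpy as np

def find_bright_hits_subsampled(frame, threshold, step=4):
    """Scan every `step`-th pixel of every `step`-th line for above-threshold
    returns. The retroreflector images span many pixels, so this coarse scan
    still finds them while touching only 1/16 of the data."""
    coarse = frame[::step, ::step] > threshold
    rows, cols = np.nonzero(coarse)
    return list(zip(rows * step, cols * step))   # full-resolution coordinates

frame = np.zeros((491, 384), dtype=np.uint8)     # 491 lines of 384 pixels
frame[100:112, 50:62] = 255                      # synthetic retroreflector return
hits = find_bright_hits_subsampled(frame, threshold=200)
print(len(hits), "coarse hits, first at", hits[0])
```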
- the threshold level is set by the microprocessor to that shown at 373 so that low level images 165 and 166, which exceed the threshold level during intervals 403 and 404, may pass through the comparator 380.
- the selection circuit 390 remains inactive at this time, so those images will be recorded in the video RAM 330 in accordance with the process described above.
- Fig. 12C illustrates the next step in the process.
- the microprocessor 310 increases the threshold level slightly to that shown at 374. This will permit most of the images due to ambient light to pass through the comparator 380, but will eliminate some that might have been marginal, such as image 166. This will also tend to eliminate noise from the camera itself and slight changes in size of the image due to camera movement.
- the flash is once again enabled and it is triggered a second time at the beginning of interval
- the selection circuit 390 provides the means for comparing the images due to ambient light, temporarily stored in the recording means or memory 330, with the images resulting from both ambient light and the light source, and for thereafter recording only those images due to reflections of the light source in the video RAM 330.
- because the location of the retroreflector images is known and was determined during intervals 401 and 402, the locations of the reflections of the light source in the reflector elements, with respect to each other and with respect to the image plane of the camera, can now be accurately determined by analyzing the data stored in RAM 330, and the location of the vehicle in relation to the target can be calculated.
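- the net effect of the selection circuit described above is to keep only those above-threshold pixels that appear in the flash exposure but not in the ambient exposure, restricted to the defined area 200. A hypothetical array-based analogue (not the gate-level circuit of Fig. 11) is sketched below, with a simple centroid pass for the surviving images; the SciPy dependency is assumed for the labelling step.

```python
import numpy as np
from scipy import ndimage   # assumed available for connected-component labelling

def flash_minus_ambient(flash_frame, ambient_frame, threshold, area):
    """Keep pixels that exceed the threshold in the flash exposure but not in
    the ambient exposure, and only inside area = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = area
    keep = (flash_frame > threshold) & ~(ambient_frame > threshold)
    mask = np.zeros_like(keep)
    mask[y0:y1, x0:x1] = True
    return keep & mask

def image_centroids(binary, min_pixels=2):
    """Centroids of the surviving blobs; with a valid target these should be
    the three mirror reflections P_A', P_O', P_B'."""
    labels, count = ndimage.label(binary)
    centroids = []
    for i in range(1, count + 1):
        ys, xs = np.nonzero(labels == i)
        if len(xs) >= min_pixels:
            centroids.append((xs.mean(), ys.mean()))
    return centroids
```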
- One of the target points, the center one, P o is chosen as a target reference point or origin.
- the other two points, P A and P B , have 3-dimensional vector offsets from P O . These offset vectors are identified as a and b in the target coordinate system.
- TARGET POINTS AS VIEWED BY SENSOR. Consider an axis system fixed with respect to the camera.
- the specific axis system chosen is the right-handed system shown in Fig. 2, with x representing horizontal, positive to the right; y representing horizontal, positive along forward lens axis; and z representing vertical, positive up.
- P O will be at some vector location, R.
- the image points will correspond to three points which, from the viewpoint of the sensor, are at locations R, R + α, and R + β.
- the vectors R, α, and β, and the rotation matrix M, are initially unknown.
- DIRECTION VECTORS u, v, w.
- the direction of any single source or target point can be established with a camera system, but not distance.
- the direction can be defined by a vector from the image point to the lens center (thin lens) or second nodal point (thick lens). See Fig. 10.
- the direction vectors corresponding to P O , P A , and P B are called u, v, w, respectively. These vectors could be chosen as unit vectors (generally convenient for analysis). A more convenient and natural normalization is to scale these vectors so that the lens-axis or y-component equals the focal length. The x and z components are then simply the horizontal (ξ) and vertical (η) components of location in the focal plane (with due regard for signs and "image reversal").
- BASELINE VECTOR EQUATIONS. In terms of the known direction vectors u, v, w, the basic vector equations become
R = λ O u (1)
R + α = λ A v (2)
R + β = λ B w (3)
- where λ O , λ A , λ B are (unknown) scalars proportional to distance.
- Equations (1) through (3) are underdetermined. There are 12 unknowns (three components each for the R, α, and β vectors, plus the three scalar λ's), and nine scalar equations. The "missing" three equations come from scalar invariance relationships.
- SCALAR INVARIANCE RELATIONSHIPS: THREE SCALAR EQUATIONS.
- although α and β are unknown as vectors, partial information comes from scalar invariants of (rigid-body) rotation. Specifically, since α and β are the target offsets a and b carried into camera coordinates by the unknown rotation M (α = Ma, β = Mb), their lengths and mutual dot product are preserved:
α · α = a · a (4)
β · β = b · b (5)
α · β = a · b (6)
- Equations (1) through (6) collectively form a fully determined baseline equation set. They give 12 nonlinear (scalar) equations in 12 (scalar) unknowns, if evaluation of unknown rotation matrix M is temporarily regarded as a "later step.”
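- because the set is fully determined, it can also be attacked directly with a general nonlinear solver instead of the analytic reduction described next. The sketch below does exactly that with SciPy least squares; it is a numerical stand-in for illustration, using assumed target offsets a, b, assumed direction vectors u, v, w (focal length taken as 1.0), and a rough starting guess, and it is not the algorithm of the patent.

```python
import numpy as np
from scipy.optimize import least_squares

# Known target geometry: offsets of P_A and P_B from P_O in target coordinates
# (assumed example values, e.g. inches).
a = np.array([-6.0, 0.0, 0.0])
b = np.array([ 6.0, 0.0, 0.0])

# Measured direction vectors toward P_O, P_A, P_B, scaled so the y (lens-axis)
# component equals the focal length (taken here as 1.0). Assumed example values.
u = np.array([ 0.002, 1.0, 0.010])
v = np.array([-0.058, 1.0, 0.011])
w = np.array([ 0.062, 1.0, 0.009])

def residuals(x):
    R, alpha, beta = x[0:3], x[3:6], x[6:9]
    lam_o, lam_a, lam_b = x[9:12]
    return np.concatenate([
        lam_o * u - R,                  # Equation (1)
        lam_a * v - (R + alpha),        # Equation (2)
        lam_b * w - (R + beta),         # Equation (3)
        [alpha @ alpha - a @ a,         # Equation (4)
         beta @ beta - b @ b,           # Equation (5)
         alpha @ beta - a @ b],         # Equation (6)
    ])

x0 = np.concatenate([[0.0, 100.0, 0.0], a, b, [100.0, 100.0, 100.0]])
solution = least_squares(residuals, x0)
print("estimated range vector R to P_O:", solution.x[0:3].round(2))
```

- a general solver of this kind converges to whichever of the (normally two) physical solutions lies nearest the starting guess, which is exactly why the analytic treatment below takes care to find and compare both roots.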
- the algorithmic steps are based upon mathematical analysis that successively reduces the dimensionality of the problem.
- the ultimate step becomes that of solving a single nonlinear equation in one unknown, after which the reduction steps are retraced to give explicit values of the other variables.
- R, α, and β are then known.
- D and E coefficients are combinations of (previous) known constants.
- Equations (12) and (13) are near the end of the reduction process. They could be solved simultaneously, using (for example) a 2-dimensional version of Newton successive approximation. That approach is not preferred, largely because of practical problems in determining two distinct solutions (a requirement previously referred to), and because convergence would probably be slower than for alternatives discussed below.
- Equations (12) and (13) could also be combined to give a purely one-dimensional equation in a single unknown, the fourth-order polynomial equation of Equation (14).
- the practical disadvantage of the Equation (14) approach is the complexity, lengthy software code, and computation time involved in evaluating the P coefficients prior to solving for roots.
- Equation (13) can be solved to express one of the two remaining unknowns as an explicit function of the other; this is Equation (15).
- Equation (12) can then be written as a function of the single remaining unknown; this is Equation (16).
- a solution is a value of that unknown for which Equation (16) holds; the corresponding value of the other unknown then follows from Equation (15).
- for the class of problems and target-point geometries that occur in applications of the type described here, there are normally two distinct real roots of Equation (16) and two complex roots. Physical solutions correspond to the two real roots. Of the two physical solutions, one is "usually" identifiable as not valid. For some of the retroreflector target-point geometries (as opposed to the mirror configurations), and in the presence of mosaic quantization and/or other sources of error, resolution of the correct versus incorrect solution is not necessarily reliable. This problem is strictly a data error problem, not an algorithmic problem. Empirical studies indicate that this type of problem does not occur for the mirror configuration and realistic distance/angle combinations.
- a potential problem exists with Equation (15), in that for some value of its argument a zero denominator occurs. Theoretically (i.e., in the absence of roundoff errors), this case must imply that the numerator is also zero and that a definite (finite) limiting value exists.
- Equation (15) is therefore replaced with the two-part form of Equations (17a) and (17b).
- Equation (17b) is the limiting form of (17a) as the argument approaches the value at which the denominator vanishes.
- this formulation is equivalent to dividing out the corresponding root factor from a pure polynomial form.
- the pair of Equations (15) and (16) yields double roots for the two unknowns. Procedures for establishing both root pairs and selecting the physically valid values are required.
- the remaining scale parameter is a required intermediate variable. It is obtained from Equation (9'), rewritten in explicit form.
- it could also be evaluated from Equation (10'), or (provided the relevant coefficient is nonzero) from Equation (11'), since the other unknowns have, in principle, been evaluated to make these three equations compatible.
- M is a 3 x 3 matrix, hence with nine elements. These nine elements are not independent, however.
- the constraint that M be a rotation matrix implies that only three degrees of freedom exist. These three degrees of freedom can be identified with pitch, roll, and yaw angles, but that identification is neither required nor useful in the steps to solve for M.
- the matrix P has vector a as its first column, vector b as its second column, etc.
- the matrix P is nonsingular (provided only that the pallet vectors a and b are not collinear), so an immediate formal solution results:
- in Equation (29), unit(α) = M unit(a), where "unit( )" means normalized to unit length.
- the corresponding expression is the same as in Equation (33).
- Equation (28) is the form used for software evaluation of M. It requires a matrix-times-matrix multiplication, with no explicit matrix inversion.
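- the formal solution quoted above can be realized compactly once α and β are in hand. The text leaves the third columns of the two matrices to an "etc.", so the sketch below completes them with the usual cross products a × b and α × β; that completion, and the use of an explicit inverse (which Equation (28) itself avoids), are assumptions made for brevity, not quotations of the patent's form.

```python
import numpy as np

def rotation_from_offsets(a, b, alpha, beta):
    """Solve M [a  b  a x b] = [alpha  beta  alpha x beta] for the rotation M.
    The cross-product third columns keep P nonsingular whenever a and b are
    not collinear, matching the condition stated in the text."""
    P = np.column_stack([a, b, np.cross(a, b)])
    Q = np.column_stack([alpha, beta, np.cross(alpha, beta)])
    M = Q @ np.linalg.inv(P)
    # Project back onto the rotation group to absorb measurement noise.
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt
```

- with noise-free data the final projection is a no-op; with quantized image coordinates it returns the nearest orthogonal matrix, which for realistic data is the desired rotation.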
- the matrix M can be considered to be the product of three canonic matrices associated with pure pitch, roll, and yaw. The convention chosen for the order of multiplication is M = P R Y, so that
- the yaw matrix Y is first applied to the vector, then the roll matrix R, then the pitch matrix P.
- the canonic matrices are the standard single-axis rotation matrices for pitch, roll, and yaw; one consistent realization is sketched below.
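- the sketch uses the camera axes defined earlier (x right, y forward, z up) and the stated order M = P R Y. The axis assignments (pitch about x, roll about y, yaw about z) and the sign conventions are assumptions; the extracted text does not spell them out.

```python
import numpy as np

def pitch(p):   # assumed: rotation about the x (right) axis
    c, s = np.cos(p), np.sin(p)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def roll(r):    # assumed: rotation about the y (forward, lens-axis) axis
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def yaw(y):     # assumed: rotation about the z (up) axis
    c, s = np.cos(y), np.sin(y)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def compose(p, r, y):
    # M = P R Y: the yaw matrix acts on the vector first, then roll, then pitch,
    # matching the order of multiplication stated above.
    return pitch(p) @ roll(r) @ yaw(y)
```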
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Structural Engineering (AREA)
- Transportation (AREA)
- Aviation & Aerospace Engineering (AREA)
- Automation & Control Theory (AREA)
- Civil Engineering (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mechanical Engineering (AREA)
- Geology (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Forklifts And Lifting Vehicles (AREA)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US788989 | 1985-10-18 | ||
US06/788,989 US4684247A (en) | 1985-10-18 | 1985-10-18 | Target member for use in a positioning system |
US06/789,280 US4678329A (en) | 1985-10-18 | 1985-10-18 | Automatically guided vehicle control system |
US789280 | 1985-10-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
EP0243493A1 true EP0243493A1 (de) | 1987-11-04 |
Family
ID=27120859
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- EP87900361A Withdrawn EP0243493A1 (de) | 1986-10-10 | Target guidance and control system for positioning automatically guided vehicles
Country Status (4)
Country | Link |
---|---|
EP (1) | EP0243493A1 (de) |
KR (1) | KR880700512A (de) |
CA (1) | CA1277392C (de) |
WO (1) | WO1987002484A1 (de) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9378554B2 (en) | 2014-10-09 | 2016-06-28 | Caterpillar Inc. | Real-time range map generation |
US9449397B2 (en) | 2014-10-15 | 2016-09-20 | Caterpillar Inc. | Real-time visual odometry system for determining motion of a machine with a range detection unit |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4869635A (en) * | 1988-03-31 | 1989-09-26 | Caterpillar Industrial Inc. | Apparatus for controllably positioning a lift mast assembly of a work vehicle |
GB8822795D0 (en) * | 1988-09-28 | 1988-11-02 | Gen Electric Co Plc | Automated vehicle control |
- FI85260C (fi) * | 1990-07-11 | 1992-03-25 | Vesa Kaehoenen | Method and system for setting the hydraulic pressure acting on a hydraulic gripping member. |
GB2259823A (en) * | 1991-09-17 | 1993-03-24 | Radamec Epo Limited | Navigation system |
- IT1275664B1 (it) * | 1994-11-16 | 1997-10-17 | Consorzio Telerobot | System for the automatic control and guidance of a forklift unit |
- DE19541379C2 (de) * | 1995-11-07 | 2001-01-18 | Fraunhofer Ges Forschung | Method for determining the position of a vehicle in a plane of travel |
- DE19738163A1 (de) * | 1997-09-01 | 1999-03-11 | Siemens Ag | Method for docking positioning of an autonomous mobile unit using a guide beam |
- FR2799002B1 (fr) * | 1999-09-28 | 2006-07-21 | Thomson Csf | Active laser imaging method |
- JP2001206696A (ja) * | 2000-01-21 | 2001-07-31 | Nippon Yusoki Co Ltd | Forklift |
US7473884B2 (en) | 2005-04-21 | 2009-01-06 | Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. | Orientation determination utilizing a cordless device |
US7609249B2 (en) | 2005-04-21 | 2009-10-27 | Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. | Position determination utilizing a cordless device |
FI117835B (fi) | 2005-06-22 | 2007-03-15 | Sime Oy | Paikoitusmenetelmä |
US7796119B2 (en) | 2006-04-03 | 2010-09-14 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Position determination with reference |
- BE1018160A3 (nl) * | 2008-05-26 | 2010-06-01 | Egemin Nv | Automatically guided vehicle and steering method applied therewith. |
- SE534240C2 (sv) * | 2008-12-05 | 2011-06-14 | Datachassi Dc Ab | Method and system for providing docking assistance |
WO2013059145A1 (en) | 2011-10-19 | 2013-04-25 | Crow Equipment Corporation | Identifying evaluating and selecting possible pallet board lines in an image scene |
GB2513912B (en) * | 2013-05-10 | 2018-01-24 | Dyson Technology Ltd | Apparatus for guiding an autonomous vehicle towards a docking station |
US9990535B2 (en) | 2016-04-27 | 2018-06-05 | Crown Equipment Corporation | Pallet detection using units of physical length |
- EP3269679B1 (de) | 2016-07-14 | 2019-09-11 | Toyota Material Handling Manufacturing Sweden AB | Industrial truck |
- EP3269680B1 (de) | 2016-07-14 | 2020-09-30 | Toyota Material Handling Manufacturing Sweden AB | Industrial truck |
- EP3269678B1 (de) | 2016-07-14 | 2019-03-06 | Toyota Material Handling Manufacturing Sweden AB | Industrial truck |
US11459221B2 (en) | 2020-04-24 | 2022-10-04 | Autoguide, LLC | Robot for stacking elements |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
HU170235B (de) * | 1974-05-29 | 1977-04-28 | ||
- SE7804927L (sv) * | 1978-04-28 | 1979-10-29 | Volvo Ab | Device for orienting, for example, a lifting member relative to a load |
US4331417A (en) * | 1980-03-07 | 1982-05-25 | Rapitsan Division, Lear Siegler, Inc. | Vehicle alignment and method |
- FR2495797A1 (fr) * | 1980-12-09 | 1982-06-11 | Onera (Off Nat Aerospatiale) | Automatic piloting system for an autonomous land vehicle |
- DE3305277A1 (de) * | 1983-02-16 | 1984-08-16 | Ing. Günter Knapp GmbH & Co. KG, Graz | Method and device for the automatic storage and retrieval of piece goods |
-
1986
- 1986-10-10 WO PCT/US1986/002151 patent/WO1987002484A1/en not_active Application Discontinuation
- 1986-10-10 EP EP87900361A patent/EP0243493A1/de not_active Withdrawn
- 1986-10-10 KR KR870700519A patent/KR880700512A/ko not_active Application Discontinuation
- 1986-10-17 CA CA000520829A patent/CA1277392C/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See references of WO8702484A1 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9378554B2 (en) | 2014-10-09 | 2016-06-28 | Caterpillar Inc. | Real-time range map generation |
US9449397B2 (en) | 2014-10-15 | 2016-09-20 | Caterpillar Inc. | Real-time visual odometry system for determining motion of a machine with a range detection unit |
Also Published As
Publication number | Publication date |
---|---|
KR880700512A (ko) | 1988-03-15 |
CA1277392C (en) | 1990-12-04 |
WO1987002484A1 (en) | 1987-04-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US4678329A (en) | Automatically guided vehicle control system | |
- EP0243493A1 (de) | Target guidance and control system for positioning automatically guided vehicles | |
US5513276A (en) | Apparatus and method for three-dimensional perspective imaging of objects | |
US5838428A (en) | System and method for high resolution range imaging with split light source and pattern mask | |
US5745235A (en) | Measuring system for testing the position of a vehicle and sensing device therefore | |
US5303034A (en) | Robotics targeting system | |
US5974365A (en) | System for measuring the location and orientation of an object | |
US6141105A (en) | Three-dimensional measuring device and three-dimensional measuring method | |
- EP0330429B1 (de) | Method and apparatus for monitoring the surface profile of a workpiece | |
- JP4111166B2 (ja) | Three-dimensional shape input device | |
US5094538A (en) | Digitizing the surface of an irregularly shaped article | |
US20140034731A1 (en) | Calibration and self-test in automated data reading systems | |
US20020060795A1 (en) | High speed camera based sensors | |
- JP2586931B2 (ja) | Camera distance measuring device | |
WO2005043076A1 (en) | Method for calibrating a camera-laser-unit in respect to a calibration-object | |
JPH0467607B2 (de) | ||
US6556307B1 (en) | Method and apparatus for inputting three-dimensional data | |
US6730926B2 (en) | Sensing head and apparatus for determining the position and orientation of a target object | |
- EP0488392B1 (de) | Distance measuring apparatus | |
- JP2001012930A (ja) | Surface defect inspection apparatus | |
- JP3991501B2 (ja) | Three-dimensional input device | |
US20120056999A1 (en) | Image measuring device and image measuring method | |
- JP2859946B2 (ja) | Non-contact measuring device | |
US6411918B1 (en) | Method and apparatus for inputting three-dimensional data | |
- JP3324367B2 (ja) | Three-dimensional input camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): DE FR GB IT SE |
|
17P | Request for examination filed |
Effective date: 19871016 |
|
17Q | First examination report despatched |
Effective date: 19890126 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 19910212 |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: LUKOWSKI, FRANK, J., JR. Inventor name: HAMMILL, HARRY, B., III