Method and system for sorting a post item in a delivery bin according to identification information
FIELD OF TECHNOLOGY
The invention relates to a method and a system for sorting an object provided with identifier data, preferably a postal item, the sorting comprising both sorting into a rack and collecting from a rack by identifier data.
STATE OF THE ART
A sorting rack typically comprises delivery bins, into which the postal items are directed manually on the basis of data on an address tag. Each distribution path can be allocated an individual sorting rack with bins, into which postal items are sorted and from which postal items are collected for delivery. Preliminary postal delivery operations typically include steps for sorting postal items in a rack in accordance with address data, with sorting into the rack being performed by the street, road or number, and the postal items collected from the rack manually according to a delivery list. These steps are time consuming and require experienced staff. The work performance also requires printouts. Various partly automated work steps have been developed to facilitate sorting operations.
Prior art systems and methods for sorting postal items in a sorting rack equipped with bins have been depicted in US 6,881,890, for instance. A computer-controlled scanner reads a bar code on an envelope, and then the corresponding address is searched in the database and information about the bin number corresponding to the address is transmitted to an input/output circuit. The input/output circuit is connected as an integrated part of a circuitry including an input line from an infrared detector in each bin and an output line to a guide light provided in each bin. When the computer transmits the address number to the input/output circuit, it turns on the guiding light of the associated bin, allowing the sorter to put the postal item into this particular bin. When a postal item has been identified in a bin, the guide light of this bin is switched off and identification of the next postal item can be started. With a view to detection of erroneous sorting, the system described in US 6,881,890 comprises an output line of the input/output circuit connected to a
warning light provided in each bin. When the sorter has sorted a postal item into the wrong bin, a warning light in this bin gives an alarm, so that the sorter may remove the item from the erroneous bin and resort it into the correct bin indicated by the guide light. However, this system does not guarantee that the postal item is sorted into the correct bin if the bar code is incorrectly read in the identification step or not read at all. If the bar code is defective, torn or otherwise destroyed in the identification step, it cannot be read and the postal item will require resorting carried out completely by hand.
Prior art methods and systems require that the bar code associated with a postal item can be correctly read and interpreted for the correct delivery bin in a sorting rack to be allocated for the postal item. If the bar code is defective, the postal item will probably be directed to the wrong bin or will have to be rejected and transferred to manual sorting, for instance. The situation is the same if the bar code has been torn, distorted, bent, skewed, set upside down, partly destroyed or otherwise deteriorated. Bar codes that have been incorrectly interpreted or not read at all will delay both the sorting and the delivery of postal items and also increase the time, work steps and costs required by sorting.
SUMMARY OF THE INVENTION
The purpose of the invention is to eliminate the problems mentioned above and to provide a method and a system for sorting an object equipped with identifier data, in which an identification unit reads partial identifier data, completes the partial identifier data into corrected identifier data and compares the identifier data thus formed with sorting data stored in a database, the object being directed to the correct delivery bin on the basis of the comparison. The method and system of the invention markedly increase sorting automation, because the number of incorrectly interpreted identifier data and thus of wrongly sorted objects decreases appreciably in the sorting step. Accordingly, the step of collecting objects is enhanced, because the proportion of incorrectly sorted objects has been minimised in the sorting step. The collecting step is further enhanced by the fact that, whenever possible, the collecting area of the sorting rack is formed of those bins alone into which objects have been sorted, or that empty bins are not even examined. In accordance with the invention, sorting quality and efficiency are improved, thus reducing the time, work steps and costs required for sorting and speeding up delivery of the object.
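By way of illustration, the complementing of partially read identifier data into corrected identifier data can be sketched as a closest-match lookup against the sorting data stored in the database. The function names, the example identifiers and the use of a similarity cutoff are illustrative assumptions, not part of the disclosure:

```python
import difflib

# Hypothetical sorting data: identifier data -> delivery bin number.
SORTING_DATA = {"FI00120-0042": 3, "FI00130-0017": 7, "FI00140-0008": 12}

def complete_identifier(partial, sorting_data, cutoff=0.6):
    """Complement partially read identifier data into corrected identifier
    data by matching against the identifiers stored in the database."""
    if partial in sorting_data:  # fully read data needs no correction
        return partial
    matches = difflib.get_close_matches(partial, list(sorting_data), n=1, cutoff=cutoff)
    return matches[0] if matches else None

def bin_for(partial, sorting_data=SORTING_DATA):
    """Return the delivery bin indicated for the object, or None when the
    identifier data cannot be corrected and the object goes to manual sorting."""
    ident = complete_identifier(partial, sorting_data)
    return sorting_data.get(ident)
```

With such a scheme a read in which, for instance, a single character is misinterpreted (`bin_for("FI0012O-0042")`, with a letter O in place of a zero) is still complemented to the stored identifier and the object directed to the correct bin.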
Under one aspect, the invention provides a system for sorting at least one object equipped with identifier data, the system comprising a sorting rack including several delivery bins and associated guide means, which are connected with the controller unit of a computer and a database including sorting data communicating with the computer, the system comprising an identification unit, which is connected to the controller unit of the computer and the database including sorting data communicating with the computer, in order to read identifier data, the system being characterised by the fact that the identification unit comprises reading means for reading at least partial identifier data of the object, processing means for complementing the read partial identifier data into corrected identifier data, and comparative means for comparing the corrected identifier data with sorting data stored in the database, the sorting data serving as a basis for controlling the sorting of the object by means of the guide means.
In one embodiment of the invention, the sorting system comprises a switch connected to the controller unit of the computer and the guide means for setting the operating mode of the digitalised sorting rack into a mode for collecting from the rack, the collection list comprising the identifier data of the objects stored in a database, the collection list serving as a means for generating a control signal in the controller unit of the computer for transmission to the guide means of the delivery bins, and guide means for guiding the collection of the object under said control signal.
According to a second aspect, the invention provides a method for sorting at least one object equipped with identifier data in a sorting rack, which comprises a plurality of delivery bins, the method comprising steps for
- connecting the delivery bin to the controller unit of a computer,
- storing the sorting data in a database communicating with the computer,
- setting the object to be sorted in an identification unit connected with the controller unit of the computer for reading the identifier data,
the method being characterised by comprising steps for
- reading the at least partial identifier data of the object,
- comparing the read identifier data with the sorting data,
- complementing the read partial identifier data into corrected identifier data, and
- directing the object to the delivery bin on the basis of the comparison.
In another embodiment of the invention, the sorting method identifies and indicates the delivery bins for controlling the sorting rack and switches the mode of the digitalised sorting rack either into a mode for sorting into the rack or into a mode for collecting from the rack.
According to a third aspect, the invention provides a system for sorting an object for delivery, the system comprising a digitalised sorting rack with a plurality of delivery bins and associated guide means connected to the controller unit of a computer and a database equipped with sorting data communicating with the computer with a view to controlling the sorting for delivery, the system being characterised by the feature that said guide means are adapted to indicate by said sorting data the delivery bin presently in turn and to indicate discriminatingly the delivery bin next in turn with a view to controlling the sorting of the object for delivery.
According to a fourth aspect, the invention provides a method for sorting an object for delivery into a digitalised sorting rack comprising a plurality of delivery bins, the method comprising steps for
- connecting a delivery bin to the controller unit of a computer,
- storing the sorting data in a database communicating with the computer,
the method being characterised by comprising steps for
- indicating by said sorting data the delivery bin presently in turn and
- indicating by said sorting data discriminatingly the delivery bin next in turn for controlling the sorting of the object with a view to delivery.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is explained in greater detail below with reference to the accompanying drawings, of which
Figure 1a is a block diagram of the sorting system of one embodiment comprising a digitalised sorting rack,
Figure 1b is a block diagram of a second embodiment of a sorting system comprising a digitalised sorting rack,
Figure 2 is a block diagram of an identification unit of one embodiment,
Figure 3 is a block diagram of the identification unit of a second embodiment,
Figure 4 shows the user interface of an application of the identification unit of one embodiment,
Figure 5 is a flow chart of a method of one embodiment for sorting an object into a rack by the identifier data of the object,
Figure 6 is a flow chart of a method of a second embodiment for sorting an object into a rack by the identifier data of the object,
Figure 7 is a flow chart of a method of one embodiment for identifying an object for sorting, and
Figure 8 is a flow chart of a method of a second embodiment for identifying an object for sorting.
DETAILED DESCRIPTION OF THE INVENTION
Figure 1a shows an embodiment of the system of the invention for sorting an object equipped with identifier data. The sorting rack 10 comprises a plurality of delivery bins 11, into which the objects to be sorted are sorted by the desired sorting criteria, and from which the sorted objects are collected by the desired collecting criteria. In this application, the term "sorting" denotes both sorting the object into a rack, i.e. input, and collection of the object from the rack, i.e. output. The sorting of the object is controlled by a computer 30 and an indicator means 13 installed in each delivery bin and a recognition means 15 installed in connection with each array of delivery bins 112 consisting of a plurality of delivery bins. The array of delivery bins 112 is preferably a column of bins consisting of delivery bins 11, as shown with a dashed line 112 in figure 1a. In the system of the invention, sorting the object into the rack and collecting the object from the rack are mutually independent
operations. In other words, in the system of the invention, sorting into the rack is a separate independent function and collection from the rack is a separate independent function.
Identifier data implies data about the object to be sorted that are needed for carrying out the sorting, in other words, for transferring the object into the correct bin. The identifier data may be any data allowing the object to be sorted into the rack. Thus, for instance, during sorting of the object for delivery, the identifier data may consist of address data or a technical identifier, such as address identifier data, delivery point identifier data or any similar data for delivery control, so that the object does not require any actual address data at all. The identifier data may be provided in different forms in the object, such as in the form of alphanumerical signs, bar codes, a radio-frequency identifier, preferably as an RFID, an optical identifier or an electrical identifier. The identifier data may be printed out or programmed directly in the object, on a tag to be affixed to the object, a printed circuit board or any other substrate, or it may be integrated in the object in any similar manner known per se. The common term "substrate" will be used below to designate the various manners for connecting the identifier data to the object mentioned above.
As shown in figure 1a, a system of the invention comprising a digitalised sorting rack 10 is controlled by a computer 30 comprising at least one controller unit and a memory, and also by a switch SW 40 and an I/O controller I/O 50, which are connected both to the controller unit of the computer and to the rack. In one embodiment, the switch 40 is connected between the controller unit and the I/O controller 50 and the I/O controller is connected between the controller unit and the bins 11 of the rack. The switch SW has the function of switching the digitalised sorting rack either into a mode for sorting into the rack or into a mode for collecting from the rack. In the embodiment of figure 1a, a connection 131 is arranged from the I/O controller 50 to the indicator means 13 of each bin and a connection 151 to the recognition means 15 of each array of delivery bins 112. The connections 131, 151 can be carried out as wire communication lines consisting of output lines 131 and input lines 151. The lines 131, 151 preferably form a bus controlled from the controller unit of the computer. In a second embodiment, the I/O controller may, instead of communication lines, be in wireless communication with the indicator means 13 provided in each bin, communicating e.g. by radio or optical means with the recognition means 15 provided in each bin array, in which case the I/O controller and the indicator and recognition means are equipped with the transmission and reception means required for the communication in question.
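The control topology described above (controller unit, switch SW 40, I/O controller 50, output lines 131 and input lines 151) can be sketched in simplified form. The class names and the representation of the lines as boolean arrays are illustrative assumptions:

```python
from enum import Enum

class Mode(Enum):
    SORT_INTO_RACK = "sort"
    COLLECT_FROM_RACK = "collect"

class IOController:
    """Simplified I/O controller 50: output lines 131 drive the indicator
    means 13 of each bin, input lines 151 read the recognition means 15."""
    def __init__(self, n_bins):
        self.lights = [False] * n_bins   # output lines 131
        self.sensors = [False] * n_bins  # input lines 151

    def set_light(self, bin_no, on):
        self.lights[bin_no] = on

    def read_sensor(self, bin_no):
        return self.sensors[bin_no]

class DigitalisedRack:
    """The switch SW 40 sets the rack either in a mode for sorting into
    the rack or in a mode for collecting from the rack."""
    def __init__(self, n_bins):
        self.io = IOController(n_bins)
        self.mode = Mode.SORT_INTO_RACK

    def switch_mode(self, mode):
        self.mode = mode
        self.io.lights = [False] * len(self.io.lights)  # clear guide lights on mode change
```

Clearing the guide lights on a mode change is an assumption made for the sketch; the disclosure only requires that sorting into the rack and collecting from the rack are mutually independent operations.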
The sorting of the object is controlled by means of an indicator means 13 installed in each delivery bin 11, preferably a signal light or any other light-emitting component, and a recognition means 15, preferably a distance sensor, installed in connection with each array of delivery bins 112, preferably a column of bins. The signal light 13 indicates to the sorter into which bin 11 the object presently in turn shall be placed, and the distance sensor 15 identifies the input of the object into the bin, whereupon the signal light is turned off. Optionally, as the distance sensor 15 identifies the input of the object into a bin, the identifier data of the new object are read, and as a result the bin into which the new object shall be placed is indicated to the sorter, and the signal light of the preceding bin is switched off. Of course, the delivery bin indicated for the new object may be the same bin as the one indicated for the preceding object.
During the collecting step pertaining to the sorting, the signal light 13 indicates to the sorter the bin from which the object presently in turn shall be collected, and the distance sensor 15 identifies the collection of the object from the bin in the manner described in connection with figure 5. After this, the light signal of this particular bin is turned off and the bin of the object to be collected next is indicated by the related light signal being turned on. In one embodiment, the collecting step simultaneously uses two mutually different light signals of two bins, so that the light signal relating to the bin of the object to be collected next after the presently collected object indicates that this bin is in turn to be collected next after the bin presently in turn for collection. In other words, the light signals indicate the bin presently in turn to be collected and the bin next in turn to be collected, with the light signal of the former continuously turned on and the light signal of the latter blinking, for instance. When the distance sensor of the bin to be collected next identifies collection of the object from the bin, it transmits an acknowledge signal of successful collection to the computer 30 acting as the central processing unit. The computer then shifts the light arrangement one step forward: the light signal of the preceding, i.e. collected, bin goes out, the light signal of the subsequent bin, i.e. the one presently in turn to be collected, stops blinking and is turned on continuously, and the light signal of the bin subsequent to the bin presently in turn, i.e. of the bin in turn to be collected next, starts blinking.
Both the light signals mentioned above may also be lighted continuously, yet with a mutually different light or emitting clearly different signals.
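The two-light collecting scheme can be sketched as a small state update executed each time a distance sensor acknowledges a successful collection. Representing the light states as the strings "off", "steady" and "blink" is an illustrative assumption:

```python
def advance_collection(lights, order, pos):
    """Shift the light arrangement one step forward along the collection
    order: the light of the collected bin goes out, the bin now presently
    in turn gets a continuously lit signal, and the bin next in turn
    starts blinking. Returns the new position in the collection order."""
    if pos < len(order):
        lights[order[pos]] = "off"         # collected bin: light out
    if pos + 1 < len(order):
        lights[order[pos + 1]] = "steady"  # bin presently in turn
    if pos + 2 < len(order):
        lights[order[pos + 2]] = "blink"   # bin next in turn
    return pos + 1

# Hypothetical collection order over four bins; initially bin 5 is in
# turn to be collected and bin 9 is next in turn.
order = [5, 9, 2, 7]
lights = {b: "off" for b in order}
lights[order[0]], lights[order[1]] = "steady", "blink"
```

After one acknowledge signal, bin 5 goes dark, bin 9 switches from blinking to continuous light, and bin 2 starts blinking, exactly one step forward along the collection order.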
In accordance with figure 1a, the controller unit 30 of the computer is connected with an identifying unit 20 for at least partial reading and identification of the identifying data and a database DB 60, in which data required for sorting are stored, such as sorting data and collection lists. The data stored in the database can be retrieved from other data systems when the sorting starts and they can be changed and/or updated in real time, provided that the database is integrated in the data network. The identifying unit either includes a connected memory or uses the memory of the computer for storing the data it has identified. In addition, one or more applications SOV1, SOV2 70, 80 have been stored in the memory connected to the computer controller unit for performing various functions as desired, such as identification, processing, correction and/or comparison.
The digitalised sorting rack 10, which consists of a plurality of bin columns, can be divided into several rack sections, whose sorting functions can be selected as mutually different. The rack section comprises an array of bin columns 112, which is illustrated with a dashed line in figure 1a and with reference 114, and bin columns can be combined into independently acting rack sections 114 by programming or any other connections. The numbers of bin columns or bin rows included in the rack section can be freely selected. In the system of the invention, sorting the object into the rack and collecting the object from the rack are mutually independent operations. Each rack section of the sorting rack then comprises conductors connected over a separate switch SW 40 to the computer 30 from the I/O controller 50 to this rack section. The divided sorting rack allows the sorting functions to be determined by rack sections, so that the different sorting steps may proceed in parallel. Each switch SW has the function of switching this rack section either into a mode for sorting into the rack or into a mode for collecting from the rack.
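The division into rack sections may be sketched as follows. The grouping of bin columns by a fixed section size is an illustrative assumption, since the numbers of columns per section can be freely selected:

```python
class RackSection:
    """A rack section 114: a group of bin columns with its own switch SW,
    so that its sorting mode is independent of the other sections."""
    def __init__(self, columns, mode="sort"):
        self.columns = columns
        self.mode = mode  # "sort" (into the rack) or "collect" (from the rack)

def partition_rack(columns, section_size):
    """Combine bin columns into independently acting rack sections."""
    return [RackSection(columns[i:i + section_size])
            for i in range(0, len(columns), section_size)]
```

Different sections may then run different sorting steps in parallel, e.g. one section set in collecting mode while objects are still being sorted into the others.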
Figure 1b illustrates a system of a second embodiment of the invention. It differs from the embodiment in figure 1a only in that, besides the indicator means 13, each bin has an individual recognition means 15. A connection 131 is accordingly provided between the I/O controller 50 and the indicator means 13 of each bin and a connection 151 to the recognition means 15 of each bin 11. The connections 131, 151 can be carried out as wire communications consisting of output lines 131 and input lines 151. The connections 131, 151 preferably form a bus controlled from the controller unit of the computer. In a second embodiment, instead of lines, the I/O controller may be in wireless communication with the indicator means 13 and the recognition means 15 provided in each bin, e.g. over the radio or optically, in which case the I/O controller and the indicator and recognition means are equipped with the appropriate transmission and reception means required for communication.
Figure 2 is a block diagram of an identifying unit 20 of one embodiment. In accordance with the invention, an identifying unit 20 communicating with the sorting rack 10 comprises reading means 22, by means of which it reads the at least partial identifying data of the object 2 to be sorted, and a memory, in which it stores the identifying data it has read. The memory (not illustrated) may be a separate or integrated storage means in the reading means or a memory card, or the identifying unit may use the computer memory as its memory for storage of the data it has identified. In the identifying unit of figure 2 the camera KAM 22 reads, i.e. images in this case, the identifying data related to the object 2, which may be provided in the object as such or connected to a substrate 5, for instance. The substrate 5 may be a printout of alphanumeric signs, a bar code tape, an infrared printout or any similar substrate affixed to the object, in which the readable identifying data are generated. The camera 22 is connected to the controller unit and memory of the computer 30 and over this to the applications SOV1, SOV2 as shown in figure 1. The computer, for instance, can be equipped with an IEEE 1394 PCI card for connecting the camera. The identifying unit may further comprise a display 32, a keyboard 34 or a combination of these connected to the computer, such as a contact display screen, or any other data feed means, by means of which the applications SOV1, SOV2 can be controlled over their user interfaces. There may be one or more applications SOV1, SOV2 in use. The camera is preferably a digital camera. The camera resolution may be e.g. 768 x 1024 pixels and the pixel size may be e.g. 6.25 x 6.25 μm². The camera may have an imaging rate of e.g. 15 full-pixel pictures per second. The imaging distance, i.e. the distance between the front surface of the camera objective and the imaging area on the identification substrate, is e.g. 370 mm.
The object to be sorted is placed on the identification substrate 19 in connection with the identifying unit for reading of the identifying data, in a manner such that the identifying data of the object get onto the identification substrate, i.e. in this case onto the imaging area of the camera 22, with the identifying data facing the camera. This application generally uses the term imaging area to denote the identifying area. In figure 2, the camera is equipped with a lens 221, by means of which the imaging area of the camera can be adjusted as desired. A light source 24 is further mounted in the vicinity of the camera and oriented so as to efficiently illuminate the imaging area adjusted with the lens 221 of the camera 22. This is particularly useful in spaces where sorting is performed with irregular illumination. The light source 24 can be mounted in the same stand as the camera, in which it can be shifted and its light incident angle can be adjusted as well. A lens (not illustrated in the figure) can be mounted in front of the light source of preferably one or more LEDs, the lens allowing collection of light from one or more LEDs and focussing it efficiently on the imaging area. The focal length of the lens in front of the LEDs of the light source may be e.g. 50 mm and its diameter 25.4 mm. The LEDs have the additional advantage of allowing pulsation, e.g. at a frequency of 60 Hz, with the light appearing as continuously switched on, and of having a long service life and low power consumption. The light source 24 preferably comprises five LEDs disposed e.g. as a cross and adjusted so as to distinctly indicate the imaging area formed on the identification substrate, on which the camera and the lens are focused. In other words, the camera lens is focused on the identification substrate with the central LED of the five LEDs being directed to indicate the centre of the imaging area. In this manner, the LEDs of the light source 24 indicate the centre and the corners of the imaging area. However, the LEDs should be switched off when an image is taken in order to prevent their emission light from interfering with the identification of the identifying data.
However, imaging takes only about 10 ms, so that the substrate 5 including the identifying data can still be correctly positioned even though the LEDs are switched off for such a short period.
In one embodiment, a second light source 26 is mounted on the other side of the identification substrate with respect to the camera, perpendicularly to the camera lens, the light source illuminating the identification substrate from the side opposite to the camera and forming light points 261 on the identification substrate. The light points are formed on the identification substrate e.g. by drilling one or more holes, preferably five holes 261 in the shape of a cross, for instance, at the second light source 26. The second light source 26 preferably comprises five LEDs, which are embedded e.g. in the shape of a cross in the identification substrate 19, preferably in a light or transparent plastic sheet. By means of the light points pointed at the camera, the light detector (not illustrated in the figure) detects when an object to be imaged is in the imaging area, the object covering at least part of the light points
and preferably all of the light points, whereby the light received by the light detector from the second light source decreases or ceases. When the light detector of the camera no longer receives light from the second light source, the camera starts imaging the identifying data of the object, which should now be located in the imaging range on the identification substrate. In a second embodiment, the first and the second light source 24, 26, each preferably comprising five LEDs in cross shape, indicate the centre and the corners of the imaging area on the identification substrate. The first light source 24 and the second light source 26 are then disposed with respect to each other such that the camera pulsation is set to be synchronised into the same phase as the second light source and into the opposite phase relative to the first light source, so that the camera is activated to image the identifying data of the object on the identification substrate under the control of said pulsation.
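The triggering of imaging by the light detector can be sketched as follows. The threshold fraction and the sampled light levels are illustrative assumptions; the disclosure only requires that imaging starts when the light received from the second light source decreases or ceases:

```python
def should_trigger(received_level, baseline, threshold=0.2):
    """Imaging is triggered when the light received from the second light
    source 26 has decreased below a fraction of its unobstructed baseline,
    i.e. the object covers the light points on the identification substrate."""
    return received_level <= threshold * baseline

def capture_indices(levels, baseline=1.0):
    """Return the sample indices at which an image would be taken,
    imaging once per object and re-arming when the light returns."""
    triggers, armed = [], True
    for i, level in enumerate(levels):
        if armed and should_trigger(level, baseline):
            triggers.append(i)  # object in place: image the identifying data
            armed = False
        elif not should_trigger(level, baseline):
            armed = True        # object removed: ready for the next object
    return triggers
```

For a sequence of detector readings in which the light drops twice (two objects placed one after the other), the sketch produces exactly one imaging event per object.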
A third light source 28 is mounted in the vicinity of the camera and is oriented so as to efficiently illuminate the imaging area adjusted by the lens 221 of the camera 21 on the identification substrate 19. This is especially useful in spaces where sorting is performed in irregular light conditions. The light source 28 can also be mounted in the same stand as the camera, where it can be shifted in the horizontal plane and its incident angle can also be adjusted. This allows adjustment of the light incident angle so as to avoid mirror reflection from glossy samples. A lens (not shown in the figures) can be mounted in front of the light source 28 to allow collection of light from one or more LEDs and focussing it on the imaging area on the identification substrate. According to a third embodiment, the first light source 24, the second light source 26 and the third light source 28, each preferably comprising five e.g. cross-shaped LEDs, are mutually disposed so that the camera pulsation is set to be synchronised in the same phase as the third light source 28 and in the opposite phase relative to the first light source 24 and the second light source 26, so that the camera is activated to image the identifying data of the object on the identification substrate under the control of said pulsation.
In one embodiment, the person who performs the sorting may carry a reading means 22, preferably a camera, as a means of automated sorting. However, the person carrying out sorting should have both his hands free for handling the object. For this reason, the camera should be fixed to the sorter's shoulder, helmet or any other location leaving his hands free. The moment of imaging could be selected e.g. using a push button provided on a glove that the sorter is wearing. Data transmission
between the camera and the computer can occur by wireless means, e.g. over the radio.
Figure 3 shows a block diagram of an identifying unit 20 of a second embodiment. In accordance with the invention, an identifying unit 20 connected to a digitalised sorting rack 10 comprises reading means 21, by means of which it reads the at least partial identifying data of the object to be sorted 2, and a memory in which it stores the identifying data it has read. The memory (not illustrated) may be a separate or integrated memory means or memory card in the reading means, or the identifying unit may use the computer memory as its data storage for storage of data it has identified. In the identifying unit of figure 3, the receiver RF 21 reads, in other words, receives in this case, identifying data on the substrate 5 connected to the object 2. In this case, the substrate 5 comprises a transmitter means 23, which transmits receivable identifying data on e.g. radio frequency, in optical form or any similar form. The receiver 21 is preferably a radio receiver RF which receives the radio signal transmitted by the radio transmitter 23 of the substrate 5, preferably an RFID signal. The receiver 21 may also be an optical receiver, preferably an optical detector, which receives the optical signal transmitted by the optical transmitter 23 of the substrate 5. The receiver 21 is connected to the controller unit and memory of the computer 30 and over this to applications SOV1, SOV2 as illustrated in figure 1. The identifying unit may further comprise a display 32 connected to the computer, a keyboard 34 or a combination of these, such as a contact display, or any other data input means, allowing control of applications SOV1, SOV2 over their interfaces. The object to be sorted is placed on the identification substrate 19 connected to the identifying unit for reading of the identifying data. There may be one or more applications SOV1, SOV2 in use.
The embodiments of the digital sorting rack 10 illustrated in figures 1-3 comprise applications SOV1, SOV2, 70, 80, which are intended to be stored in the memory of the computer 30 and to perform various necessary functions, such as identification, processing, correction and/or comparison. It is also possible to connect a display 32, a keyboard 34 or a combination of these, such as a contact display or any other data input means, to the identifying unit 20, 21, 22 connected to the computer, these means allowing control of the applications SOV1, SOV2 over their interfaces. The processor speed and the storage capacity affect the imaging, image manipulation and text reading speed of the apparatus. A flat-panel display as such does not require much space. In addition to indication of the interfaces of the
applications, the display can be used to indicate to the user when an image has been generated for identification and to indicate the results of the identification.
Figure 4 exemplifies the interface 90 of an application of an identifying unit of one embodiment. The interfaces of these embodiments, which are visualised on the computer display 32 and controlled by the keyboard 34, typically comprise storage selections 92, processing and identification selections 94 and windows 96a, 96b for showing the identification results. The identified identifier data, i.e. the data found in the database, are displayed in the window 940. The interface additionally comprises signal lights 910 to guide the user and an ending push-button 920. The application together with its interface is started e.g. from the start menu of Windows or any other operating system, and then the actual sorting application opens in the computer memory. When the sorting application of one embodiment starts, a first, second and third light source 24, 26, 28 are simultaneously switched on under the control of the controller unit of the computer in the identifying unit 20 of the sorting rack. The first and the second light source indicate the centre and the corners of the imaging area and the pulsation of the identifying unit, preferably a digital camera, starts as described above.
The interface exemplified in figure 4 comprises three light signals, of which the first one 910a is switched on when the application is being started, the second light signal 910b is switched on when the initialisation is finished and the third light signal 910c is switched on when the object can be removed from the imaging area. The first light signal 910a "database being initialised" is lit if the user, when initialising the application, has clicked "yes" as a reply to the program's question whether there have been changes in the database since the previous use. As the initialisation of the database ends, the light signal 910a is switched off. The second light signal 910b "identification completed, put another object in the imaging area" indicates when the identification is completed and a new object can be placed in the imaging area. This light signal 910b is switched off when a new object is detected in the imaging area. The third light signal 910c indicates when the object can be removed from the imaging area. The light signal 910c is switched on immediately when an image of the object has been generated for processing. When the identification is completed and another object can be placed in the imaging area, the light signal 910c is switched off. The sorting application is ended e.g. with the aid of the STOP push-button 920. The application completes the identification that is going on and then stops the program. If the user wishes to continue the sorting after
the program has stopped, he can do so e.g. by pressing the arrow key at the left upper corner. This restarts the application.
Figure 5 is a flow chart of a method for sorting at least one object equipped with identifier data into a sorting rack comprising a plurality of delivery bins. First, the user of the digitalised sorting rack activates the rack 500 and sets the rack in the desired sorting mode 502. In this example, the user starts by selecting the mode "sorting into the rack" in step 502, and the rack is set in the selected mode, while sorting data 504 are loaded into the database, unless they are already provided there. Then the user grips the object to be identified and places it on the identification substrate, after which the at least partial identifier data are read and stored 506 in a memory connected to the identification unit 20. Next, the read identifier data are transferred from the identification unit to a first application stored in the computer memory, which compares the read identifier data with the sorting data 508 provided in the database. Unless the read identifier data comply with the sorting data, the second application stored in the computer memory complements the read incomplete identifier data to the correct identifier data 510, which correspond to the sorting data in the database. Then the controller unit of the computer indicates to the user the bin into which the object should be sorted 512, and the user places the object in the indicated bin 514. In step 516, the user decides whether to continue sorting into the rack or to stop. If the sorting into the rack is continued (NO), the user proceeds to step 506, where the identifier data of the new (following) object are read. If the sorting into the rack is ended (YES), the user proceeds to step 518, where he decides whether to continue the sorting at all (YES), returning to step 502, or whether to stop sorting (NO), proceeding to step 530. A shift from step 516 to step 518 usually means that all the objects of one batch have been sorted into the correct bins.
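The loop of steps 506-516 can be sketched in outline as follows. This is a minimal illustration only, not the claimed implementation: the sorting-data mapping, the function names and the substring-based completion are assumptions standing in for the correlation-based matching described later.

```python
# Hypothetical sketch of the "sorting into the rack" loop (steps 506-514).
# The sorting_data mapping and the substring-based completion in
# complement() are illustrative assumptions, not the patented matching.

def complement(partial_id, sorting_data):
    """Step 510: complete partial identifier data by finding a sorting-data
    entry that contains the read fragment."""
    for entry in sorting_data:
        if partial_id in entry:
            return entry
    return None

def sort_into_rack(objects, sorting_data):
    """Steps 506-514 for one batch; returns the indicated bin numbers."""
    indicated = []
    for partial_id in objects:                           # step 506: read data
        entry = partial_id if partial_id in sorting_data else None
        if entry is None:                                # step 508: no match
            entry = complement(partial_id, sorting_data)  # step 510
        indicated.append(sorting_data.get(entry))        # step 512: show bin
    return indicated

sorting_data = {"PERUSTIE 30 B 25": 7, "PERUSTIE 32 A 1": 8}
print(sort_into_rack(["PERUSTIE 30 B 25", "TIE 32"], sorting_data))  # [7, 8]
```

An object whose fragment matches no entry yields None, corresponding to the "unidentified" case discussed in connection with figure 4.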
As further illustrated in figure 5, the user may in step 502 instead select the mode "collecting from the rack", and the rack is set into the selected mode, while the collection list 520 is loaded into the database, unless it is already provided in the database.
Then, on the basis of the collection list, the user is shown the bin from which the next object is to be taken 522 and, at the same time, in a distinguishable manner, the bin 524 that is next in turn, i.e. the bin from which collection continues after the presently collected object. The object 526 is taken from the bin and the collection is continued until all the objects in the bin have been taken out through steps 528 and 526; in other words, all the objects present (sorted) in one single bin are collected from it. When
the first object is taken from the following bin, the recognition means connected to the bin identifies this particular collecting operation in step 530, and the collection control proceeds by one step through steps 522-528, unless the user wishes to stop the collection in step 532. At the same time as the recognition means connected to the bin identifies collection from the following bin in step 530, the mode of this particular bin is changed to that of a bin being collected. In step 532, the user decides whether to continue collecting from the rack or to stop. If collection from the rack is continued (NO), the user proceeds to step 522, in which a new (following) bin is indicated, from which the object is collected. If collection from the rack is stopped (YES), the user proceeds to step 518, in which he decides whether to continue sorting (YES) at all, returning to step 502, or to terminate sorting (NO), proceeding to step 534. A shift from step 532 to step 518 usually means that all the objects pertaining to one batch have been collected from the bins for delivery, for instance.
In the method of the invention, sorting an object into a rack and collecting it from the rack are mutually independent operations. As described above, method steps 500, 502, 520-532, 518 and 534 constitute a completely independently operating method for collecting an object from the rack on the basis of a collection list stored in a database. The method is applicable to the system of the invention, which comprises a digitalised sorting rack and a sorting rack section.
Figure 6 is a flow chart showing, step by step, a method of one embodiment when step 502 of figure 5 selects "sorting into the rack" as the sorting mode. In other words, the flow chart of figure 6 shows steps 502-518 of figure 5 in greater detail. In this embodiment, the object is preferably a postal item, to which at least partial identifier data, e.g. an address field, an address identifier or a delivery area identifier, has been connected. When the application has been activated 600 and the light signal "identification completed, put new object on the imaging area" has been switched on, the first object to be identified is placed on the imaging area on the identification substrate 602. When the arrival of the object in the imaging area has been detected, this light signal is switched off. The following step 604 comprises monitoring for the moment the object is immobilised: the user brings the centre of the address field onto the central LED light point and stops the movement of the object at this location. Step 606 comprises detection of the immobilisation of the object,
processing an image of the address field and switching on the light signal "remove object from imaging area", allowing the user to remove the object from the imaging area 608. The image of the address field of the object, preferably a postal item, is displayed in the first window of the display, and a corrected, e.g. straightened, image of the address field is shown in the second window, as illustrated in figure 4. The partial identifier data read in step 610 are pre-processed and compared with the address directory in the database in the manner explained below, and the identification results are shown on the display over the interface in the field "found address" (cf. window 940 in figure 4). In step 612, the delivery bin of the object is indicated. After this, the light signal "identification completed, put a new object in the imaging area" is switched on, signalling that a new object, preferably a postal item, can be placed in the imaging area on the identification substrate 614. Step 616 comprises a decision of whether to continue sorting or to stop. If the sorting is continued (NO), the user proceeds to step 604, where he reads the identifier data of the new (following) object. If the sorting is stopped (YES), the user proceeds to step 618. When wishing to stop sorting, the user presses e.g. the STOP push button (figure 4), after which the image manipulation in process is completed and the application is turned off.
Figure 7 is a flow chart of a method of one embodiment for identifying partial identifier data of an object by comparing the read partial identifier data with sorting data stored in the database and by complementing defective identifier data to corrected identifier data. The method described here relates to steps 508 and 510 illustrated in figure 5 and to steps 606, 610 and 614 illustrated in figure 6, which will be exemplified in greater detail below. The flow chart of figure 7 shows an embodiment of the identification method steps, by means of which imaging (reading) and identification (pre-processing, comparison) with the aid of the camera are automated. The described example assumes that the identifier data of the object, preferably a postal item, is an alphanumeric address field and that pre-processing is performed three times at the most. Further, reading is preferably text reading and identification is preferably text identification in this example.
When sorting is started 700 and the object, preferably a postal item, has been appropriately placed on the imaging area on the identification substrate, the identifier data of the object are imaged and stored 702. In step 704, the most recent image is compared with the preceding image, and if they are identical (YES), it is concluded that no new object has been introduced into the imaging area, in other words,
after two successive imaging operations, the same (preceding) object is still in the imaging area. When the image release is automated, e.g. by synchronising the camera with the light sources 24, 26 in figure 2, the camera generates successive images at given intervals, and the "difference image" of these is calculated. Two successive images can be taken at 100 ms intervals, for instance. By criteria determined on the difference image, one may conclude whether the two last images were identical. When the comparison in step 704 shows that the two last images were not identical (NO), a new object, preferably a postal item, has been brought into the imaging area. After automatic imaging, the image can be subjected to preliminary pre-processing, e.g. by modifying the image pixel values in the user interface (window 94 in figure 4) either to a lighter or a darker shade according to the selection. Then the image is subjected to post-processing 706, 708, 710, 712, and the user is informed of completed imaging by means of the light signal 910b (figure 4) of the user interface. The post-processing steps, which are explained in greater detail below, comprise image straightening 706, image pre-processing 708, reading corrected identifier data in the image 710, and the iteration cycles needed in steps 708 and 710 for trimming of the corrected identifier data in step 712. If, for one reason or another, one does not wish to continue the image post-processing when the correct address data are not found in the database, the user can interrupt the sorting in step 716 and proceed to step 702, or terminate sorting in step 718. As the end result of step 714, after the corrected identifier data of the object, preferably a postal item, have been read, the address complying best with the corrected identifier data is searched in the address database (sorting data) and the bin corresponding to this address is indicated to the user for sorting.
At the same time, the read (partial) identifier data, the identified address and an image of the address field of the object, preferably a postal item, are stored in the directory defined as identifier data.
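The difference-image test of step 704 can be sketched as follows. Plain nested lists stand in for grayscale camera frames here, and the threshold values are illustrative assumptions rather than values given in the description.

```python
# Sketch of the step-704 test: subtract two successive grayscale frames
# and decide from the "difference image" whether a new object appeared.
# threshold and min_changed are assumed values, not from the patent.

def is_new_object(prev, curr, threshold=10, min_changed=4):
    """Count pixels whose absolute difference exceeds the threshold;
    enough changed pixels means a new object is in the imaging area."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > threshold
    )
    return changed >= min_changed

frame1 = [[200] * 8 for _ in range(8)]     # empty, bright substrate
frame2 = [row[:] for row in frame1]
for r in range(2, 6):                      # a dark envelope appears
    for c in range(2, 6):
        frame2[r][c] = 40

print(is_new_object(frame1, frame1))  # False: frames identical
print(is_new_object(frame1, frame2))  # True: 16 pixels changed
```

With frames taken e.g. 100 ms apart, two identical results mean the preceding object is still in place, matching the YES branch of step 704.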
The directory in which the identification results are stored is written in the user interface (figure 4) in the field "identification result directory". There are three options under the text "store" for selecting what to store. The options are e.g.
"image", "unidentified" and "identification results". By selecting "image", the user stores an image of the object that has not been processed. By selecting
"identification results", the user stores the results of identification. By selecting "unidentified", the user stores the results and the images also when no address has been identified in the object. In this case, however, "image" and "identification results" should be selected for the image of unidentified objects and its
identification results to be stored. In the storage of the data and the image of objects whose address has not been identified, the file name will read "unidentified".
The straightening of the image in step 706 of figure 7 depends on the angular range selected for examination in the user interface. The determination of the angular range is explained below. If the angular range selected for examination is e.g. +30 to -30 degrees, the image is turned e.g. at an angle of -30 degrees and the pixels of the turned image are summed row-wise. This yields a vertical image profile, on which a Fourier transformation is calculated. The operation is performed in steps of 7.5 degrees, a total of 9 times, which covers the entire desired angular range. The Fourier transformation indicates the spatial frequencies in the profile. Examining a suitable number of spatial frequencies in the calculated Fourier transformations allows one to conclude the angle at which the partial identifier data text is horizontal in the image. When the text is in a straight position in the image, its profile comprises a large number of relatively high frequencies.
The following is a description of an example of performing image straightening in accordance with the read text in step 706. If the user interface selection is to examine 360 degrees, i.e. all the angles, the image is turned first at an angle of -42 degrees. The image pixels are summed separately both by rows and by columns, yielding an image profile both in the horizontal and in the vertical direction. Fourier transformations are calculated on both profiles. The angle is then changed in steps of 7 degrees, a total of 13 times. Since both the vertical and the horizontal profile are calculated on each image, the angular ranges examined are -42 to +42 degrees and +48 to +132 degrees. Since the gap between these angular ranges is smaller than the length of an individual step (48 - 42 = 6 < 7), one can conclude that the examined angular range is effectively -42 to +132 degrees. Considering also that 3 degrees can be added to the end points of the examined angular range, the examined range in fact covers half of all the conceivable angles. The 3 degrees can be added to the end points because this is less than half of the length of one step; in other words, with a text at an angle of 45 degrees, the algorithm interprets it as being at 42 degrees, i.e. with an error of 3 degrees. If the text is at an inclination of e.g. 3.5 degrees, the algorithm interprets it at an inclination of 0 or 7 degrees, i.e. with an error of 3.5 degrees. Since an addition of 3 degrees to the end points of the range does not impair the angular resolution of the algorithm, this addition can be made.
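The angular coverage reasoning above can be checked with a few lines of arithmetic; the step size and starting angle are the ones stated in the text.

```python
# Check of the coverage arithmetic: 13 rotations in 7-degree steps from
# -42 degrees, each yielding both a vertical and a horizontal profile,
# sample two angle sets whose gap (6 degrees) is below one step (7).

vertical = [-42 + 7 * k for k in range(13)]    # angles via vertical profiles
horizontal = [a + 90 for a in vertical]        # same rotations, horizontal

print(vertical[0], vertical[-1])               # -42 42
print(horizontal[0], horizontal[-1])           # 48 132
print(horizontal[0] - vertical[-1] < 7)        # True: ranges effectively merge
```

With the 3-degree tolerance at each end point, the merged range -42 to +132 degrees indeed spans -45 to +135 degrees, i.e. half of the full circle.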
One may further note that the text may also be upside down in these images; in other words, a text found to be at an inclination of 15 degrees could equally well be at an inclination of 195 degrees. The algorithm described above yields a value of the text orientation angle in the range -45 to +135 degrees, and when the image is positioned at such an angle, the text may be upside down. To eliminate this error, optical character recognition (OCR) is performed on the image and on the same image turned upside down, after which it is examined which of the results comprises the postal code and post office. If the image is upside down, the optical character recognition will not find any text that makes sense. This allows the conclusion of the angle at which the text in the image is straight.
The following is an exemplifying description of the control of the post-processing steps 706, 708, 710 and 712 from the user interface of the application, with reference to window 94 of the user interface in figure 4. The selection "angular range" determines the size of the angular range in which the inclination is examined. The selection "limit of reliable recognition" sets the minimum acceptable value for recognition reliability; values under this limit are classified as unidentified. The limit of reliable identification can be selected e.g. in the range 0.00 to 1.00, with a resolution of e.g. 0.05. The reliability value indicates the degree to which the corrected identifier data comply with the "found address" data in the database. If the correlation is total, the reliability value is 1, and the smaller the reliability value, the lower the correlation. The bin field indicates the bin number to which the address corresponds. The processing duration indicates the duration of the image pre-processing and of the comparison between the identification result and the address directory. In the field "postal code and post office" the user fills in the postal code and post office of the mail items being sorted. This information is needed in searching the street address of a postal item and in straightening the text of a postal item, if the selection 360 degrees has been made under "angular range". The selection "maximum number of image manipulations" determines the maximum number of image manipulations of one single image; the number can be selected e.g. in the range 1 to 3. The selection "darkness of image in first manipulation" determines the effect of the first pre-processing of the image. The higher the selected value, the higher the whiteness of the image. The darkness of the image in the first manipulation can be selected e.g. in the range 0.00 to 2.00, and the selection can be made e.g.
with a resolution of 0.1. Correspondingly, the selection "darkness of image in the third manipulation" determines the effect of the third pre-processing of the image. The higher the selected value, the higher the whiteness of the image. The darkness of the image in the third manipulation can be selected e.g. in the range 0.00 to 4.00, with the selection made e.g. with a resolution of 0.1.
Figure 8 is a flow chart of a method of a second embodiment for identifying partial identifier data of an object by comparing the read partial identifier data with sorting data stored in the database and by complementing defective identifier data to corrected identifier data. The method described below relates to steps 508 and 510 illustrated in figure 5 and to steps 606, 610 and 612 illustrated in figure 6, which will be exemplified in greater detail below. In the method of figure 8, the comparison uses the identification result obtained in optical character recognition as explained in conjunction with image straightening (step 710 in figure 7). When the comparison is started 800, this identification result is processed 802 and compared with sorting data stored in the database, such as an address directory. Then, in step 804, the identification result is modified by e.g. changing all its characters to capitals, all the 0 numbers to the letter O and all the 1 numbers and L letters to the letter I, and by eliminating the special characters and the space characters. The same operations have been performed on the address directory in the database, so that the address directory and the identification result are compatible. In addition, all the rows containing 2 or fewer characters 804 are deleted from the identification result. Next, the row having the postal code and post office 806 is searched in the read identifier data. Step 808 comprises a comparison of the identification result with the database, and if the identification result is adequate (YES), step 810 calculates the correlation between the row preceding the postal code and the street names in the database. The following is an exemplifying description of two different manners of calculating the correlation between two strings of characters.
Step 812 comprises a comparison to conclude whether the highest correlation is adequate, and if this is the case (YES), the user proceeds to step 816. Unless the identification result of step 808 or the highest correlation in step 812 is adequate (NO), the highest correlation between all the rows and the street names in the database is calculated in step 814. Step 816 comprises calculation of the correlation between the row where the street name was found and the potential addresses. Then the correlation between the row preceding the address and the names 818 of the persons living in the same street is calculated, yielding the highest name correlation. Step 820 comprises a comparison to conclude whether the highest name correlation is adequate, and if this is the case (YES), the
user proceeds to step 824. Unless the highest name correlation in step 820 is adequate (NO), the correlations between all the rows and the database names of the persons living in the same street are calculated in step 822. Finally, in step 824, the correct bin is assigned for the sorted object on the basis of the highest summed correlation, the sum correlation.
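The normalisation of step 804 can be sketched as follows. The substitutions are the ones named above (upper case, 0 to O, 1 and L to I, special and space characters removed, rows of two characters or fewer deleted); the regular expression is merely an assumed way of expressing them, and digits other than 0 and 1 are kept so that postal codes survive.

```python
import re

# Sketch of the step-804 normalisation of an OCR identification result.
# The character class [^A-Z2-9] is an assumption: it keeps letters and
# the digits 2-9 (0 and 1 have already been mapped to O and I).

def normalise(rows):
    out = []
    for row in rows:
        row = row.upper()
        row = row.replace("0", "O").replace("1", "I").replace("L", "I")
        row = re.sub(r"[^A-Z2-9]", "", row)    # drop specials and spaces
        if len(row) > 2:                       # delete rows of <= 2 chars
            out.append(row)
    return out

print(normalise(["Perustie 30 B 25", "co", "00330 Helsinki"]))
# ['PERUSTIE3OB25', 'OO33OHEISINKI']
```

Applying the same transformation to the address directory keeps both sides of the comparison compatible, as the description requires.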
The following is an example of the structure of the address directory stored as a file in the database. The program adds a bin number to the address directory, the number being the same for all of those living at the same address.
  1       2       3         4            5
00330;Perustie;30 B 25;Meikalainen;Liisa Riitta Maria
00330;Perustie;30 B 25;Meikalainen;Matti Sakari
00330;Perustie;30 B 25;Perusfirma Oy;
in which the numbered address fields denote:
1 = Postal code
2 = Street name
3 = The remaining address components
4 = Name 1, i.e. family name or company name
5 = Name 2, i.e. first names or blank (a blank in the case of a company name).
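Reading such a directory and adding the bin number could look as follows. The semicolon layout follows the example above, while the numbering scheme (bins assigned in order of first appearance of an address) is an assumption; the description only requires that all rows of the same address share one bin number.

```python
# Sketch of loading the address directory and appending a bin number;
# rows sharing fields 1-3 (the same address) receive the same bin.

def load_directory(lines):
    bins, entries = {}, []
    for line in lines:
        code, street, rest, name1, name2 = line.split(";")
        address = (code, street, rest)
        if address not in bins:            # first occurrence: new bin
            bins[address] = len(bins) + 1
        entries.append((code, street, rest, name1, name2, bins[address]))
    return entries

directory = [
    "00330;Perustie;30 B 25;Meikalainen;Liisa Riitta Maria",
    "00330;Perustie;30 B 25;Meikalainen;Matti Sakari",
    "00330;Perustie;30 B 25;Perusfirma Oy;",
]
for entry in load_directory(directory):
    print(entry[5], entry[3])   # all three rows share bin 1
```

The company row has an empty fifth field, matching the "blank in the case of a company name" convention above.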
The following is an exemplifying description of two substantially different manners of calculating the correlation between two character strings. The method in figure 8 uses both these calculating procedures in the calculation of the correlation between the two character strings, selecting the higher correlation value of the values thus calculated in two different manners.
Manner 1:
Character strings to be compared:
Character string 1: MATTI_MEIKALAINEN
Character string 2: MATTI_MAKELAINEN
Character string 2 is the result of the optical character recognition described above and character string 1 is the character string in the database with which the recognition result is compared. The underscore signifies a blank character in this context.
The method (1) for calculating the correlation between two character strings:
Character string 1 is glided over character string 2 and the number of coinciding identical characters in each situation is calculated. Initially the last character in character string 1 and the first character in character string 2 coincide, and then character string 1 is moved in steps of one character over character string 2.
1st step:

MATTI_MEIKALAINEN
                MATTI_MAKELAINEN

0 identical characters coincide

2nd step:

MATTI_MEIKALAINEN
 MATTI_MAKELAINEN

8 identical characters coincide (characters T, K, L, A, I, N, E and N)

Etc.
Results:
The number of coinciding hits among the glided characters is calculated at each step. The calculation of the correlation coefficient should take account of the ratio between the character string lengths and the number of hits.
Calculation of the correlation coefficient:

R = Nmax/(MJ1 + 0.5|MJ1 - MJ2|),

in which Nmax is the highest number of hits and MJ1 and MJ2 here denote the lengths of character string 1 and character string 2, respectively. In this exemplifying case, the correlation coefficient obtained is 8/(17 + 0.5) ≈ 0.46.
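Manner 1 can be sketched as follows; the gliding and the coefficient formula follow the description above, with MJ1 and MJ2 in the formula read as string lengths.

```python
# Sketch of manner 1: glide character string 1 over character string 2,
# count coinciding identical characters at every relative offset, and
# apply R = Nmax / (len1 + 0.5 * |len1 - len2|).

def glide_correlation(mj1, mj2):
    n_max = 0
    # offsets run from "first char of mj1 over last char of mj2" to
    # "last char of mj1 over first char of mj2", one character at a time
    for offset in range(-(len(mj2) - 1), len(mj1)):
        hits = sum(
            1
            for i, ch in enumerate(mj1)
            if 0 <= i - offset < len(mj2) and mj2[i - offset] == ch
        )
        n_max = max(n_max, hits)
    return n_max / (len(mj1) + 0.5 * abs(len(mj1) - len(mj2)))

r = glide_correlation("MATTI_MEIKALAINEN", "MATTI_MAKELAINEN")
print(round(r, 2))  # 0.46, i.e. Nmax = 8 over 17 + 0.5
```

The maximum of 8 hits occurs at the alignment shown in the 2nd step above, where the characters T, K, L, A, I, N, E and N coincide.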
Manner 2:
Character strings to be compared:
Character string 1: MATTI_MEIKALAINEN
Character string 2: MATTI_MAKELAINEN
Character string 2 is the result of the optical character recognition and character string 1 is the character string in the database with which the recognition result is compared.
Method (2) for calculating the correlation between two character strings:
The correlation between two character strings is examined by taking two successive characters at a time from character string 1 and by examining whether an identical character pair is found in character string 2.
Step 1
Two characters taken from character string 1: MA
Is this character pair found in character string 2: Yes
Step 2
Two characters taken from character string 1: AT
Is this character pair found in character string 2: Yes
Etc.
Results:
The calculation of the correlation coefficient takes account of the number of correct character pairs found in character string 2 and the lengths of the character strings. Calculation of the correlation coefficient:
r = N/(MJ1 - 1 + 0.5|MJ1 - MJ2|),

in which N is the number of found character pairs and MJ1 and MJ2 again denote the lengths of character string 1 and character string 2. The correlation coefficient obtained in this example is 11/(16 + 0.5) ≈ 0.67.
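Manner 2 can be sketched as follows, with MJ1 and MJ2 in the formula again read as string lengths; as stated above, the method of figure 8 then takes the higher of the two values produced by the two manners.

```python
# Sketch of manner 2: take successive character pairs from character
# string 1, count those occurring anywhere in character string 2, and
# apply r = N / (len1 - 1 + 0.5 * |len1 - len2|).

def pair_correlation(mj1, mj2):
    pairs_in_mj2 = {mj2[i:i + 2] for i in range(len(mj2) - 1)}
    n = sum(1 for i in range(len(mj1) - 1) if mj1[i:i + 2] in pairs_in_mj2)
    return n / (len(mj1) - 1 + 0.5 * abs(len(mj1) - len(mj2)))

r = pair_correlation("MATTI_MEIKALAINEN", "MATTI_MAKELAINEN")
print(round(r, 2))  # 0.67, i.e. N = 11 found pairs over 16 + 0.5
```

Of the 16 pairs in character string 1, eleven (MA, AT, TT, TI, I_, _M, LA, AI, IN, NE, EN) occur in character string 2, giving 11/16.5 ≈ 0.67, the higher of the two values in this example.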
The method steps illustrated in figures 5 - 8, or at least part of the method steps, can be carried out by a computer program or programs, which can be stored in the memory of computer 30 for the performance of various functions such as identification, processing, correction and/or comparison. The computer program is embodied as an encoded program product.