GB2622449A - Article processing apparatus, system and method therefor

Info

Publication number
GB2622449A
Authority
GB
United Kingdom
Prior art keywords
article
belt
image
processing means
processing device
Prior art date
Legal status
Pending
Application number
GB2216480.0A
Other versions
GB202216480D0 (en)
Inventor
Van Rijnberk Jeroen
Honoré Pas Reinier
De Groot Sebastiaan
Van Klaarbergen Swen
Albertus Van Der Tuuk Willem
Liefhebber Freek
Current Assignee
Sita BV
Original Assignee
Sita BV
Priority date
Filing date
Publication date
Application filed by Sita BV
Publication of GB202216480D0
Priority to PCT/EP2023/061406 (published as WO2023209234A1)
Priority to PCT/EP2023/061403 (published as WO2023209232A1)
Priority to PCT/EP2023/061400 (published as WO2023209230A1)
Publication of GB2622449A


Classifications

    • G06K 7/10861 - sensing of data fields affixed to objects or articles, e.g. coded labels, by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K 7/10722 - photodetector array or CCD scanning (fixed beam scanning)
    • G06K 7/10792 - special measures in relation to the object to be scanned
    • G06K 7/1096 - scanning device having more than one scanning window, e.g. two substantially orthogonally placed scanning windows for integration into a check-out counter of a super-market
    • G06K 7/1404 - methods for optical code recognition
    • G06T 7/0004 - industrial image inspection
    • G06T 7/10 - segmentation; edge detection
    • G06T 2207/30108 - industrial image inspection (indexing scheme for image analysis or image enhancement)
    • G06T 2207/30112 - baggage; luggage; suitcase (indexing scheme for image analysis or image enhancement)


Abstract

An image processing device comprises processing means configured to: a. receive an image of a region of interest comprising a first article 100 in response to the article being placed on a belt 70, the belt being configured to convey the article along a path; b. determine a plurality of coordinates in three-dimensional space associated with the region of interest; and c. determine from the plurality of coordinates whether a tub has been placed on the belt, whether the first article has an irregular shape, and/or whether a second article has been placed on the belt. The first article is determined to be conveyable along the path in response to a determination that a tub has been placed on the belt. The first article is determined not to be conveyable along the path in response to a determination that the first article has an irregular shape or that a second article has been placed on the belt. The coordinates are derived from point cloud data generated by a camera. The invention is useful for self-service luggage drop systems at airports and other transport hubs, and prevents irregularly shaped or sized luggage from interfering with conveyor operation by preventing such items from being conveyed.

Description

Intellectual Property Office Application No. GB2216480.0 RTM Date: 5 May 2023 The following terms are registered trade marks and should be read as such wherever they occur in this document:
JAVA
AZURE
KINECT
INTEL REALSENSE
MICROSOFT
BASLER
HALCON
MVTEC
Intellectual Property Office is an operating name of the Patent Office. www.gov.uk/ipo

ARTICLE PROCESSING APPARATUS, SYSTEM AND METHOD THEREFOR
FIELD OF THE INVENTION
This invention relates to a system for processing articles. In particular, but not exclusively, this invention relates to a system, apparatus, and method for processing articles such as an item of baggage or an item such as a parcel which may be processed at airports, seaports, railways, and other mass transport locations with an item handling system. The system or apparatus may be retrofitted to an existing system or apparatus which is already installed at a particular location.
BACKGROUND OF THE INVENTION
At transportation hubs, such as airports, items of luggage that are to be checked in need to be processed in a number of ways at the point of drop off. For example, said items of luggage will often have a bag tag attached, the bag tag holding information about the baggage journey in the form of radio-frequency identification (RFID) tags and/or visual barcodes. Said baggage journey may be linked, but not exclusively, to a passenger name record or passenger journey. Such tags need to be read at the point of drop off to retrieve this information. Furthermore, after having been dropped off, the item of luggage will often be conveyed along a path from the point of drop off to a destination where it will be sorted for loading onto a mode of transport, such as an aircraft. The item will often be conveyed by way of a conveyor belt and it is, therefore, important to assess whether the item is suitable to be conveyed in such a way. For example, items of luggage that are too round in shape may roll off the belt as they are being conveyed. Items that are irregular in shape may damage or get stuck on the belt.
Typically, for the above-described processing, a manual operator is required at the point of drop off to ensure that the bag tags are successfully read and that the item of luggage is suitable to be conveyed. Existing infrastructure at drop-off points reflects this. However, there has been a general move towards self-service processes at transportation hubs to make the passenger experience more efficient.
The inventors of the present invention have appreciated that there is a need for apparatus that can facilitate the processing of items of luggage at self-service drop-off points that can be retrofitted to existing infrastructure.
SUMMARY OF THE INVENTION
The invention is defined by the independent claims below to which reference should now be made. Optional features are set forth in the dependent claims.
According to an aspect of the present invention there is provided an image processing device comprising: processing means, wherein the processing means is configured to: a. receive an image of a region of interest comprising a first article in response to the article being placed on a belt, wherein the belt is configured to convey the article along a path between an origin and a destination; b. determine a plurality of coordinates in three-dimensional space associated with the region of interest; c. determine from the plurality of coordinates: i. whether a tub has been placed on the belt; ii. that the first article is conveyable along the path in response to a determination that a tub has been placed on the belt; and/or d. determine from the plurality of coordinates: i. whether the first article has an irregular shape; and/or ii. whether a second article has been placed on the belt; iii. that the first article is not conveyable along the path in response to a determination that the first article has an irregular shape or that a second article has been placed on the belt. This provides the advantage of assessing whether an article will be successfully conveyed along the path without the need for a manual operator, by identifying items which are irregular in shape such that they might damage the path as they are being conveyed or get stuck.
In some embodiments the processing means is further configured to determine, from the depth data associated with the image, the infrared data associated with the image and the plurality of coordinates, whether the first article has a substantially round shape. This provides the advantage of assessing whether an article will be successfully conveyed along the path without the need for a manual operator, by identifying items which are round and may roll off the path as they are being conveyed.
In some embodiments the processing means is configured to determine whether a tub has been placed on the belt by determining from the plurality of coordinates whether a height of the first article comprises an edge corresponding to a shape of a known tub. This provides the advantage of detecting exactly which tub, of a set of known tubs to which the transportation hub has access, is being used to convey the item.
In some embodiments the processing means is configured to determine whether a second article has been placed on the belt by determining whether a shape of the first article in the image comprises extensions which are likely to represent a second article. In some embodiments the processing means is configured to determine whether a second article has been placed on the belt by determining from the plurality of coordinates whether a section of the image represents a height profile that is different from that of the first article. This provides the advantage of preventing multiple items from being processed as a single item. This could lead to items being incorrectly classified as non-conveyable if the two items together form an irregular shape, for example.
In some embodiments the processing means is further configured to determine if the first article has a stable base upon determining that the first article has a round shape. In some embodiments the processing means is further configured to withdraw a conclusion that the first article has a round shape upon determining that the first article has a stable base. This provides the advantage of avoiding classifying items as non-conveyable if they have a stable base and would, therefore, not roll off the path as they are being conveyed.
In some embodiments the processing means is further configured to determine a length, width and height of the first article upon determining that the first article is conveyable along the path. This provides the advantage of determining the dimensions of the article. This may be useful when going on to optimise how items should be arranged for transportation. For example, how items should be arranged on an aircraft to optimise cargo space.
In some embodiments the processing means is further configured to determine that the first article should be repositioned on the belt in response to a determination that the whole of the first article is not within the region of interest on the belt. In some embodiments the processing means is further configured to determine that there is not an article on the belt in response to a determination that a dimension of the first article is below a minimum threshold. In some embodiments the processing means is further configured to determine a plurality of dimensions of the first article if it is not determined that the first article is not conveyable along the path. This provides the advantage of determining the dimensions of the article to be conveyed. This may be useful when going on to optimise how items should be arranged for transportation. For example, how items should be arranged on an aircraft to optimise cargo space.
In some embodiments the processing means is further configured to filter the plurality of coordinates in three-dimensional space associated with the first article. In some embodiments the processing means is configured to filter the plurality of coordinates by reducing the density of the data associated with the plurality of coordinates. This provides the advantage of increasing processing speed.
BRIEF DESCRIPTION OF THE DRAWINGS
An embodiment of the invention will now be described, by way of example only, and with reference to the accompanying drawings, in which:

Figure 1 is a diagram of a multi-lane article processing apparatus according to aspects of the present invention;
Figure 2 is a diagram of a multi-lane article processing apparatus configured to read RFID tags according to aspects of the present invention;
Figure 3 is a schematic diagram outlining how the antennas of the apparatus shown in Figure 2 are connected to other elements of the system according to aspects of the present invention;
Figure 4 is a flow chart outlining the steps performed by a processing means to determine whether an article is conveyable according to aspects of the present invention;
Figure 5 shows how outputs of a camera are combined to generate a combined scan according to aspects of the present invention;
Figure 6 is a flow chart outlining the steps performed by a processing means to segment an object from a combined scan according to aspects of the present invention;
Figure 7 is a flow chart outlining the steps performed by a processing means to estimate the background in a combined scan according to aspects of the present invention;
Figure 8 is a flow chart outlining the steps performed by a processing means to segment an object from a combined scan according to aspects of the present invention;
Figure 9 is a flow chart outlining the steps performed by a processing means to segment an object from a combined scan according to aspects of the present invention;
Figure 10 is a flow chart outlining the steps performed by a processing means to determine where on a belt an object is according to aspects of the present invention;
Figure 11 shows a series of example scans in which it has been determined if the belt is empty or not according to aspects of the present invention;
Figure 12 is a flow chart outlining the steps performed by a processing means to determine if a tub is on the belt according to aspects of the present invention;
Figure 13 shows an example segmented tub edge from a scan according to aspects of the present invention;
Figure 14 shows a series of example scans in which it has been determined if there is a tub on the belt or not according to aspects of the present invention;
Figure 15 is a flow chart outlining the steps performed by a processing means to find extended parts in a scan;
Figure 16 shows a series of example scans in which extended parts have been detected and categorised according to aspects of the present invention;
Figure 17 shows a decision tree outlining how extended parts are classified according to aspects of the present invention;
Figure 18 is a flow diagram outlining the steps performed by a processing means to detect holes in a scanned object according to aspects of the present invention;
Figure 19 is a flow diagram outlining the steps performed by a processing means to determine whether there are multiple items in a scan or items of irregular shape according to aspects of the present invention;
Figure 20 shows a series of example scans in which it has been determined if the scanned item is of an irregular shape or if there are multiple items on the belt according to aspects of the present invention;
Figure 21 is a schematic diagram of a round object on a belt according to aspects of the present invention;
Figure 22 shows the normals, radius and centre points for an example scan according to aspects of the present invention;
Figure 23 is a flow chart outlining the steps performed by a processing means to determine if an item on the belt is round and if it has a stable bottom according to aspects of the present invention;
Figure 24 shows a series of example scans in which a roundness metric has been determined and in which it has been determined if the item has a stable bottom according to aspects of the present invention;
Figure 25 shows an example scan in which a bounding box is used to determine the length and width of the scanned item according to aspects of the present invention;
Figure 26 shows an example scan in which a representation of the depth difference is used to determine the height of the scanned item according to aspects of the present invention;
Figure 27 is a set of tables outlining the success of the processing means in classifying scans of the belt environment according to aspects of the present invention;
Figure 28 is a flow chart outlining the steps performed by a processing means to determine if an item on the belt is conveyable according to aspects of the present invention;
Figure 29 is a set of tables outlining the success of the processing means in classifying scans of the belt environment according to aspects of the present invention;
Figure 30 is a diagram indicating the location of cameras on the arch according to aspects of the present invention; and
Figure 31 is a diagram indicating the fields of view of cameras on the arch according to aspects of the present invention.
Like features are denoted by like reference numerals.

DETAILED DESCRIPTION

The following exemplary description is based on a system, apparatus, and method for use in the aviation industry. However, it will be appreciated that the invention may find application outside the aviation industry and in any industry, such as the packaging or delivery industry, where a token or tag associated with an article or item of baggage is detected or read. Accordingly, the invention may find application in any industry where tags or tokens are used. Thus, embodiments of the invention find application in the travel industry in general, such as rail, coach and car, as well as for delivery and courier services where tokens may be used.
Additionally, the following embodiments described may be implemented using a C++ programming language using for example an OpenCV library. However, this is exemplary and other programming languages known to the skilled person may be used such as JAVA.
Arch structure

An embodiment of one aspect of the present invention will now be described with reference to Figure 1.
Figure 1 illustrates a system 10 for processing items of luggage. In this example, the system forms part of a self-service bag drop at an airport. The system comprises a plurality of lanes. In this example, there is a first lane 20 and a second lane 30. Each lane comprises an arch structure. In this example, each arch comprises a first sidewall 40, a second sidewall 50 and a panel 60 linking the first sidewall and the second sidewall. The panel forms the top of the arch structure and comprises processing means. Furthermore, in this example, the first lane and the second lane are adjacent to one another such that they share a second sidewall. The arch structure of each lane projects from a base 70. In this example the base is a conveyor belt defining a surface on which items of luggage should be placed for processing. The height of the arch is such that the panel is at a distance from the base and the item to be processed. Ideally, the distance between the panel and the base is between 1 m and 2.5 m. This ensures the panel does not obstruct the item from being placed on the belt. The arch comprises features for the processing of an item of luggage. Each of these features is discussed in more detail below.
RFID bag tag reading

An embodiment of one aspect of the present invention will now be described with reference to Figures 2 and 3.
Figure 2 illustrates a system 10 for reading a radio-frequency identification (RFID) tag 80 according to aspects of the present invention. Each lane 20, 30 of the system comprises a plurality of antennas. In this example, each lane comprises a first antenna A1, A4 and a second antenna A3, A2. The first antenna and the second antenna are rectangular in cross section and are positioned at each lane in such a way that they oppose one another. In this way, the first antenna and the second antenna form side walls. The first antenna is disposed on the interior surface of the first sidewall 40 of the arch and the second antenna is disposed on the interior surface of the second sidewall 50 of the arch. The side wall structure of the antennas defines an area 90 between the first antenna and the second antenna in which an item to be processed 100 can be positioned. In this example, the item to be processed is an item of luggage. In each lane, the conveyor belt forms a base 70 extending between the first antenna and the second antenna on which items can be placed for processing. The area is, as a result, partially enclosed, in that the item of luggage is still visible during processing.
An RFID tag 80 is attached to the item of luggage 100 to be processed. The RFID tag stores information related to the item to be processed. In this example, the RFID tag also stores information related to the passenger travelling with said item of luggage. In order to retrieve the information stored in the RFID tag, the system further comprises an interrogator 110.
The interrogator is coupled to each of the antennas A1, A2, A3, A4 in the system. The interrogator is configured to prompt each of the antennas to transmit a signal, referred to herein as a prompt signal 120. Said prompt signals are configured to prompt the RFID tag to transmit a signal in response. In other words, if the RFID tag is reached by a prompt signal, it sends out a signal of its own, referred to herein as a response signal 130. The response signal can subsequently be received by the relevant antenna, allowing the information stored in the RFID tag to be retrieved.
Referring back to the configuration of the first A1, A4 and second antennas A2, A3 associated with each lane 20, 30, having two opposing antennas surrounding the item of luggage 100 to be processed is significant. It is known that metal blocks and reflects RFID signals. Therefore, if an item of luggage to be processed contains a metal object between the RFID tag 80 and an antenna, the prompt signal from that antenna may not reach the tag, preventing the information held by the tag from being retrieved. Having an antenna on either side of the item of luggage to be processed is, therefore, advantageous. Such an antenna configuration results in the direction of travel of the prompt signals transmitted by the first antenna opposing the direction of travel of the prompt signals transmitted by the second antenna. This ensures there is an antenna whose prompt signal will not be shielded from powering the RFID tag by the metal object.
The system 10 comprises a number of features to ensure that it is able to correctly determine whether a bag tag 80 is attached to an item of luggage 100 in the first lane 20 of the system or the second lane 30 of the system. For example, the system comprises shielding between lanes. In this example, the system uses a layer of metal located between the first lane and the second lane, with metal being known to shield RFID signals. This prevents a prompt signal 120 from an antenna in the first lane from powering a bag tag in the second lane and vice versa. Furthermore, the system further comprises EMC or EMF absorbing paint. In this example, EMF or EMC absorbing paint is disposed at multiple locations of the system. Furthermore, in this example, the EMF or EMC absorbing paint is disposed between the first lane and the second lane. The EMF or EMC absorbing paint is configured to shield and absorb radio frequency signals. The paint absorbs signals from a bag tag or antenna that have been deflected by metal in the system. This further prevents the prompt signals from an antenna associated with one lane from powering a tag on a bag in the other lane. The EMC or EMF absorbing paint may, therefore, be disposed at any undesirable points of the system onto which signals are deflected.
The system 10 comprises further features to ensure that the antennas A1, A2 associated with a first lane 20 do not receive a response signal 130 from a bag tag 80 associated with a second lane 30, the system then erroneously determining that the tag and, therefore, the item 100 is associated with said second lane. For this purpose, the use of a single interrogator 110 for both lanes, as opposed to one for each lane, is significant. The interrogator prompts each of the plurality of antennas to transmit a prompt signal 120 in succession. In other words, only one antenna is enabled at a time, each for a number of milliseconds. The interrogator is configured to further introduce a small time interval between the prompt signal transmitted by each antenna. In this way, only one of the plurality of antennas transmits a prompt signal at a given time. The prompt signals transmitted by each antenna, therefore, do not interfere with one another. If, for example, an antenna of the first lane and an antenna of the second lane A3, A4 were to transmit prompt signals simultaneously, powering both a bag tag in the first lane and a bag tag in the second lane, both tags would transmit a response signal. The response signal from the bag tag in the first lane may, in such a situation, be picked up by the antenna of the second lane and vice versa. Such a situation is, in part, avoided by having only one antenna transmitting a prompt signal at any one time. Said prompt signal is prevented from powering a bag tag on the lane with which it is not associated due to the shielding between the first lane and the second lane discussed above. Furthermore, the inventors of the present invention found that using two interrogators (i.e. one for each lane) introduced phase differences between prompt signals. The result was either amplification of prompt signals, which would lead to reading RFID tags on the wrong lane, or reduction of prompt signals, which would lead to missing RFID tags.
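As an illustrative sketch only, the one-antenna-at-a-time multiplexing described above might be expressed as follows in C++; the Antenna type, its transmitPrompt method and the timing values are hypothetical, chosen for illustration rather than taken from the patent:

```cpp
#include <chrono>
#include <thread>
#include <vector>

// Hypothetical antenna interface; a real interrogator would drive hardware.
struct Antenna {
    int id;
    void transmitPrompt() { /* real interrogator command would go here */ }
};

// Enable one antenna at a time for a few milliseconds, with a small guard
// interval between prompts, so that prompt signals never overlap.
void pollAntennasInSuccession(std::vector<Antenna>& antennas) {
    using namespace std::chrono_literals;
    for (auto& antenna : antennas) {
        antenna.transmitPrompt();            // only this antenna is enabled
        std::this_thread::sleep_for(50ms);   // illustrative enable window
        std::this_thread::sleep_for(5ms);    // illustrative guard interval
    }
}
```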
The system is further configured such that bag tags 80 attached to items of luggage that are not on the conveyor belt 70 are not read. Specifically, the reading field of each antenna A1, A2, A3, A4 is configured such that it does not extend beyond a boundary 140. This boundary is defined as the front end of the conveyor belt of the lane with which the antenna in question is associated, the front end being the passenger-facing end of the belt. In this example, such a configuration is achieved using a software-based calibration tool. For each lane, an RFID tag is placed adjacent to the front end of its conveyor belt. The calibration tool is then used to reduce the strength of the prompt signal 120 of each antenna of said lane to just below the strength at which it powers the RFID tag. The result of this calibration process is that an RFID tag attached to an item of luggage 100 beyond the conveyor belts of the system will not be reached by a prompt signal such as to return a readable response signal 130. This avoids reading bag tags associated with items of luggage that are not yet meant to be processed, for example those belonging to passengers who are queuing to use the system.
Figure 3 schematically illustrates the architecture that facilitates determining with which lane a bag tag attached to an item of luggage is associated. As explained above, the plurality of antennas of the system A1, A2, A3, A4 are coupled to a single interrogator 110.
The interrogator is further coupled to a controller, the controller being configured to determine with which of the plurality of lanes the bag tag is associated. In this example, the controller is configured to determine whether the bag tag is attached to an item of luggage in the first lane of the system or the second lane of the system. In this example, the controller comprises a plurality of computing devices, each computing device being associated with one of the plurality of lanes. Specifically, a first computing device 150 is linked to the first lane and a second computing device 160 is linked to the second lane. In this example, the computing devices are located in the panel of the relevant arch structure. The interrogator is coupled to the first and second computing devices by way of a network switch 170 connected to the first computing device and the second computing device. In this way, upon an antenna picking up a response signal from an RFID tag, both computing devices receive telegram strings from the interrogator, said telegram strings comprising RFID tag information. The computing devices are then configured to identify by which antennas the RFID tag was read such that it can be determined with which lane the bag tag is associated. In other words, if the response signal was picked up by an antenna associated with the first lane (A1 or A2), the computing devices determine that the item of baggage with which the tag is associated is in the first lane. Corresponding logic applies to a response signal picked up by an antenna associated with the second lane (A3 or A4).
Referring back to Figure 2, the system 10 can be modified such that it accounts for the fact that RFID antenna prompt signals 120 can extend to the back of an antenna A1, A2, A3, A4. This leads to the antenna A2 of the first lane 20 that is adjacent to the second lane 30 being susceptible to powering a bag tag 80 in the second lane, thus reading a response signal 130 from the wrong lane. Alternatively, the antenna A3 of the second lane that is adjacent to the first lane may power a bag tag in the first lane. To compensate for this, the system can be modified such that in the event one of said antennas picks up a response signal, it will only be determined that the bag tag is associated with the first lane if the other antenna associated with the first lane A1 also picks up a response signal. Specifically, it will only be determined that a bag tag is associated with lane 1 if either antenna A1 alone or a combination of A1 and A2 picks up a response signal. A reading by solely A2 would not suffice. By similar logic, it will only be determined that a bag tag is associated with lane 2 if either antenna A4 alone or a combination of A3 and A4 picks up a response signal. A reading by solely A3 would not suffice.
Determining the conveyability and dimensions of an item of luggage - V3

An embodiment of one aspect of the present invention will now be described with reference to Figures 4 to 27.
The system 10 comprises an image processing device. Said image processing device comprises processing means configured to ultimately determine whether an item of luggage placed on a conveyor belt of the system is able to be conveyed by the belt between an origin and destination, and then subsequently determine the dimensions of the item of luggage. An item of luggage is deemed to not be conveyable if it is of a substantially round shape. An item of luggage is further deemed to not be conveyable if it is of a substantially irregular shape. In this way, round items which may roll off the belt and irregular items which may damage the belt or get stuck on the belt are prevented from being conveyed. In this example, the origin of the path along which the item of luggage is to be conveyed is an airport bag drop and the destination is an area in which items of luggage are sorted for loading onto a mode of transport.
The image processing device comprises a camera for capturing an image of the belt. If an item of luggage is placed on the belt, it is captured by the camera. In this example, the camera is configured to capture an image in response to weight on the belt being detected (i.e. due to an item being placed on the belt). The weight is detected using a scale coupled to the belt. The camera is configured to capture data representing colours of the image, depth data associated with the image, and infrared data associated with the image. In this example, the camera is an Azure Kinect camera with RGB, depth and infrared outputs. The camera generates depth data using a time of flight sensor. Light is transmitted to the belt and subsequently received by the sensor once reflected by items on the belt and/or the belt itself. From the phase shift between the transmitted and received light, the Euclidean distance to the belt and/or the items on the belt is calculated using the known focal length and optical centre of the camera, providing depth data. The Euclidean distance, d, is calculated using the following equation: d = (x^2 + y^2 + z^2)^(1/2), where x, y and z are respectively the distances in the x, y and z directions.
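For illustration only (the type and function names are not from the patent), the distance calculation above corresponds to the following C++:

```cpp
#include <cmath>

// 3D offsets from the camera's optical centre, in consistent units.
struct Offsets { double x, y, z; };

// Euclidean distance d = (x^2 + y^2 + z^2)^(1/2), as in the equation above.
double euclideanDistance(const Offsets& p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
}
```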
Figure 4 is a flowchart outlining the steps performed by the processing means upon receiving data from the camera. The processing means of system 10 is first configured to receive RGB, depth and infrared data from the camera and filter said data as summarised in block 180. The processing means is configured to generate an RGB point-cloud infrared image from said data, 'point cloud' referencing a set of coordinates in three-dimensional space representing the captured area of the belt and any items of luggage disposed on said area. In this example, the point cloud information is generated by transforming the Euclidean distances associated with the depth data into a set of X, Y and Z coordinates. The processing means is then further configured to project the RGB point cloud infrared image onto an RGB and depth image. This generates a two-dimensional image of the belt and any items on the belt, referred to herein as a combined scan. Such a two-dimensional image facilitates the image processing techniques employed by the system, discussed in detail below. In this example, the RGB point cloud is projected using a known pinhole model: x = f*(X/Z) + x0, y = f*(Y/Z) + y0, where f is the desired focal length for the projection and x0 and y0 are the desired centre coordinates of the image. The projected points are mapped to a grid (image). To avoid holes (i.e. where a pixel does not have a corresponding projected point), in this example, the focal length used is smaller than the actual focal length. For the mapping of points to a grid, linear interpolation is used to smooth the result. The infrared image is also corrected for lens distortion. For this remapping, the same desired focal length is used as with the projection of the point cloud. The images used to create the combined scan are also smoothed using a weighted average filter, the weighting being determined by the value of the alpha channel. They are further scaled, translated and cropped such that they correspond to only a region of interest on the belt. This region of interest is where an item of luggage is to be placed for processing. Figure 5 shows an example camera output 190 and resulting combined scan 200 of an item of luggage 100 on a belt 70.
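A minimal sketch of such a pinhole projection, assuming OpenCV (which the description names as an example library); the function name and the keep-nearest-point rule are assumptions, and the linear interpolation and lens-distortion correction steps described above are omitted:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Project a point cloud to a depth image via u = f*(X/Z) + x0, v = f*(Y/Z) + y0.
cv::Mat projectPointCloud(const std::vector<cv::Point3f>& cloud,
                          float f, float x0, float y0,
                          int width, int height) {
    cv::Mat depth(height, width, CV_32F, cv::Scalar(0.0f)); // 0 = no data
    for (const auto& p : cloud) {
        if (p.z <= 0.0f) continue;                      // behind the camera
        int u = static_cast<int>(f * (p.x / p.z) + x0);
        int v = static_cast<int>(f * (p.y / p.z) + y0);
        if (u < 0 || u >= width || v < 0 || v >= height) continue;
        float& d = depth.at<float>(v, u);
        if (d == 0.0f || p.z < d) d = p.z;              // keep nearest point
    }
    return depth;
}
```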
Referring again to the flowchart of Figure 4, before an item of luggage is placed on the belt, the system is configured to generate a scan of the belt when it is empty (i.e. with no items on it), and the belt is extracted from the scan (step 210). This is ultimately used to segment the item of luggage from the belt in the combined scan. In this example, generating the scan of the belt when it is empty forms part of a calibration process. A single scan is generated, which is then used for all following scans and their associated segmentation process. Once an item of luggage has been placed on the belt, a combined scan is generated of the belt and the item (step 220). Using both the scan of the empty belt and the combined scan, the processing means is configured to segment a portion of the combined scan that represents the item on the belt (step 230). Figure 6 is a flowchart outlining part of the segmentation process. At step 240, from a filtered scan of the empty belt 250, what constitutes background in the image is estimated. Then, at step 260, the estimated background 270 and the combined scan 200 of an item on the belt are combined such as to generate a scan 280 that represents the difference between them. Thresholds are then applied to the difference scan. In this example, a first threshold T1 and a second threshold T2 are applied. T1 is the stricter threshold, the result of which retains only pixels 290 that are highly likely to correspond to the item. T2, in contrast, retains pixels 300 that have a medium likeliness of corresponding to the item. Both resulting segmented images are combined (step 310). The result of this is taken to be the segmented object 320 on the belt from the combined scan.
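One plausible OpenCV implementation of this dual-threshold combination, offered as a hedged sketch rather than the patented method: a medium-likeliness component is kept only if it overlaps at least one high-likeliness pixel (a hysteresis-style rule, which is an assumption here, as are the function names):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Segment the item from the difference between the empty-belt scan and the
// combined scan, using a strict threshold t1 and a looser threshold t2.
cv::Mat segmentItem(const cv::Mat& emptyBeltDepth, const cv::Mat& scanDepth,
                    double t1, double t2) {
    CV_Assert(t1 > t2);                            // t1 is the stricter one
    cv::Mat diff, high, medium;
    cv::absdiff(scanDepth, emptyBeltDepth, diff);  // difference scan
    cv::threshold(diff, high, t1, 255, cv::THRESH_BINARY);
    cv::threshold(diff, medium, t2, 255, cv::THRESH_BINARY);
    high.convertTo(high, CV_8U);
    medium.convertTo(medium, CV_8U);

    // Keep a medium-likeliness component only if it contains a high pixel.
    cv::Mat labels;
    int n = cv::connectedComponents(medium, labels);
    std::vector<bool> keep(static_cast<size_t>(n), false);
    for (int y = 0; y < high.rows; ++y)
        for (int x = 0; x < high.cols; ++x)
            if (high.at<uchar>(y, x))
                keep[static_cast<size_t>(labels.at<int>(y, x))] = true;

    cv::Mat segmented = cv::Mat::zeros(medium.size(), CV_8U);
    for (int y = 0; y < segmented.rows; ++y)
        for (int x = 0; x < segmented.cols; ++x) {
            int lbl = labels.at<int>(y, x);
            if (lbl > 0 && keep[static_cast<size_t>(lbl)])
                segmented.at<uchar>(y, x) = 255;
        }
    return segmented;
}
```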
Figure 7 is a flowchart detailing how, in this example, the background in the empty belt scan is estimated (Figure 6, step 240). At step 330, valid pixels are segmented from the depth image. At step 340, the holes in the resulting image are filled. In this example, this is done using a distance transform to find the closest valid pixel for each invalid pixel, and replacing the invalid pixel with the value of that closest valid pixel. At step 350, the resulting filled image is smoothed. In this example, this is done using a bilateral filter. The result is a smoothed height map of the background 360. The comparison between the raw depth and the filtered depth is shown in graph 370.
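A hedged OpenCV sketch of this fill-and-smooth step, assuming a CV_32F depth image in which 0 marks an invalid pixel; the filter parameters are illustrative:

```cpp
#include <opencv2/opencv.hpp>
#include <unordered_map>

// Fill each invalid depth pixel with the value of the nearest valid pixel
// (via a labelled distance transform), then smooth with a bilateral filter.
cv::Mat estimateBackground(const cv::Mat& rawDepth) { // CV_32F, 0 = invalid
    cv::Mat invalid = (rawDepth == 0.0f);             // CV_8U, 255 = invalid
    cv::Mat dist, labels;
    // Each pixel receives the label of its nearest valid (zero-mask) pixel.
    cv::distanceTransform(invalid, dist, labels, cv::DIST_L2, 3,
                          cv::DIST_LABEL_PIXEL);
    // Record the depth value found at each valid pixel's label.
    std::unordered_map<int, float> labelToDepth;
    for (int y = 0; y < rawDepth.rows; ++y)
        for (int x = 0; x < rawDepth.cols; ++x)
            if (!invalid.at<uchar>(y, x))
                labelToDepth[labels.at<int>(y, x)] = rawDepth.at<float>(y, x);
    // Replace invalid pixels with their nearest valid neighbour's depth.
    cv::Mat filled = rawDepth.clone();
    for (int y = 0; y < filled.rows; ++y)
        for (int x = 0; x < filled.cols; ++x)
            if (invalid.at<uchar>(y, x))
                filled.at<float>(y, x) = labelToDepth[labels.at<int>(y, x)];
    cv::Mat smoothed;                              // edge-preserving smoothing
    cv::bilateralFilter(filled, smoothed, 9, 50.0, 50.0);
    return smoothed;
}
```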
The processing means is further configured to detect the belt area during segmentation.
This is because more sensitive segmentation is needed above the belt (where an item of luggage would be placed). Furthermore, by detecting the belt area it can be determined if the item is on, before, or after the belt. In this example, the belt is characterized as having low local variance in depth and low local variance in infrared intensity. Using this, the belt area is found by segmenting pixels with said low local variance in depth and infrared intensity. The connected components of the resulting segmentation are then calculated and the object in the centre of the image selected. Specifically, in this example, the segmentation thresholds for infrared intensity are 7 for the belt, and 20 for no belt. The thresholds for depth are 50mm for the belt and 75mm for no belt.
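The local-variance test might be sketched as follows, assuming the quoted thresholds apply to the local standard deviation over a small window of CV_32F inputs; the window size and that interpretation are assumptions:

```cpp
#include <opencv2/opencv.hpp>

// Local standard deviation over a k x k window: sqrt(E[X^2] - E[X]^2).
cv::Mat localStdDev(const cv::Mat& img32f, int k = 5) {
    cv::Mat mean, meanSq, var, sd;
    cv::boxFilter(img32f, mean, CV_32F, cv::Size(k, k));
    cv::boxFilter(img32f.mul(img32f), meanSq, CV_32F, cv::Size(k, k));
    var = meanSq - mean.mul(mean);
    var = cv::max(var, 0.0f);       // guard against negative rounding error
    cv::sqrt(var, sd);
    return sd;
}

// Belt pixels show low local variation in both infrared intensity and depth;
// the connected component in the image centre would then be selected.
cv::Mat detectBeltArea(const cv::Mat& depthMm, const cv::Mat& infrared) {
    cv::Mat lowIr = localStdDev(infrared) < 7.0f;    // belt threshold (IR)
    cv::Mat lowDepth = localStdDev(depthMm) < 50.0f; // belt threshold (mm)
    cv::Mat belt;
    cv::bitwise_and(lowIr, lowDepth, belt);
    return belt;
}
```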
Figure 8 is a flowchart detailing how the two thresholds T1 and T2 are applied in Figure 6, alongside infrared thresholds Tx and Ty. The value of Ty can be configured to deal with belts that do not have a flat surface. Figure 9 is then a flowchart detailing how the high-likeliness and medium-likeliness images are combined at Figure 6, step 310. The result of the segmentation and the actual item on the belt are shown at step 380.
Referring back to the flowchart of Figure 4, the result of the processes outlined in block 180 is referred to herein as filtered data. In this example, this includes the segmented object and the combined scan. At block 390, the processing means is further configured to derive a plurality of metrics from the filtered data, each of which is discussed in more detail below.
Firstly, the processing means is configured to determine whether the belt is empty (Figure 4, step 400). This is done by counting the number of segmented objects in the image which have sufficient height such that they are likely to be items of luggage. The height of an object is determined by the difference in depth across the segmented object. Specifically, in this example, these are segmented objects with a height of over 25 mm. Once the objects are counted (taken to be the connected components resulting from identifying the segmented objects with a height over 25 mm), the processing means is configured to determine whether these objects are after the belt, before the belt or above the belt. These regions are depicted in Figure 10, which is a flowchart outlining this process. In this example, an object is categorised as being above the belt 410, before the belt 420 or after the belt 430 if the majority of the pixels it comprises are positioned in the relevant area (step 440). If no object is detected above the belt, it is determined that the belt is empty. This is set as a conclusion. Figures 11A to 11C show three example scans. In Figure 11A, an object of sufficient height is detected above the belt. The belt is not concluded to be empty. In Figure 11B, the objects are of insufficient height to reasonably be items of luggage. The belt is concluded to be empty. In Figure 11C, no objects are detected above the belt, before the belt, or after the belt. The belt is concluded to be empty.
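As a simplified sketch of this count (the small minimum-area filter is an assumption added for robustness, and the region assignment is taken as a precomputed mask):

```cpp
#include <opencv2/opencv.hpp>

// The belt is considered empty when no connected component above the belt
// area exceeds 25 mm in height.
bool beltIsEmpty(const cv::Mat& heightAboveBeltMm, // CV_32F
                 const cv::Mat& aboveBeltMask) {   // CV_8U region mask
    cv::Mat tall = (heightAboveBeltMm > 25.0f);    // pixels over 25 mm
    cv::Mat tallAboveBelt;
    cv::bitwise_and(tall, aboveBeltMask, tallAboveBelt);
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(tallAboveBelt, labels, stats,
                                             centroids);
    for (int i = 1; i < n; ++i)                      // label 0 is background
        if (stats.at<int>(i, cv::CC_STAT_AREA) > 50) // ignore tiny specks
            return false;                            // object above the belt
    return true;
}
```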
Once it has been determined whether or not the belt is empty, the processing means is further configured to determine whether a tub has been placed on the belt (Figure 4, step 450), a tub being a container configured to hold an item of luggage for conveying said item along a path. Structurally, a tub comprises a base and at least one sidewall projecting from the base, the at least one sidewall extending around the perimeter of the base. In this way, an item of luggage that is placed on the base of the tub is prevented from moving beyond boundaries established by the sidewalls. A tub may have a rectangular base and may comprise four sidewalls, each sidewall extending along an edge of the rectangular base. Items of irregular shape or substantially round shape are considered conveyable when placed in a tub.
This is achieved by looking for a height profile corresponding to the edge of a known tub in the height data of the segmented scan. An example of this process is outlined in the flowchart of Figure 12. First, the segmented object is further segmented such that only a portion having height data within an acceptable range is retained (step 460). Figure 13 shows such a portion 470. Said range is within the bounds of that of known tub edges.
Referring back to Figure 12, a bounding box is then fit to the further segmented portion (step 480). Then, at step 490, 8 rectangles are fit to the bounding box, each having a corner matching the found bounding box. At step 500, for each rectangle, the length over which the edge is found is calculated, and the best match is selected (step 510). It is then calculated how likely it is that said rectangle corresponds to the profile of a known tub. If the likeliness metric exceeds a threshold, a tub is concluded to have been detected. In this example, the threshold is 55%. Figures 14A and 14B show two example scans. In Figure 14A, an edge that matches the profile of a known tub 520 is found. It is concluded that a tub is detected. In Figure 14B, no edge that matches the profile of a known tub is found. It is concluded that no tub is detected.
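A hedged sketch of the first two steps only (segmenting the height band expected for a tub edge and fitting a box to it); the height band values are assumptions, and the eight-rectangle matching and 55% likeliness test described above are not reproduced:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Retain pixels whose height lies within the band expected for a known tub
// edge, then fit a rotated bounding box to the retained pixels.
cv::RotatedRect fitTubEdgeBox(const cv::Mat& heightMm, // CV_32F
                              float edgeMin = 60.0f,   // illustrative band
                              float edgeMax = 120.0f) {
    cv::Mat band = (heightMm > edgeMin) & (heightMm < edgeMax);
    std::vector<cv::Point> pts;
    cv::findNonZero(band, pts);
    if (pts.empty()) return cv::RotatedRect();    // nothing tub-like found
    return cv::minAreaRect(pts);                  // candidate tub-edge box
}
```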
Once it has been determined whether or not there is a tub on the belt, the processing means is further configured to detect what are referred to herein as extended parts of the scan silhouette (Figure 4, step 530). That is, extensions to the general profile of the segmented object. Figure 15 is a flowchart outlining an example of how this is done. In this example, relatively thin parts of the segmented image are isolated (step 540). These are considered to potentially constitute extended parts. In this example, the thin parts are isolated by way of a morphological opening (also known as rounding) of the segmented object. The rounding kernels 550 are illustrated in Figure 15. Element 560 shows a scan of an item on a belt. Element 570 shows the resulting segmented object and Element 580 shows the segmented object of Element 570 post rounding. By subtracting the rounded object image from the segmented object image (step 590), the thin parts are isolated. The result of this subtraction is shown in Element 600. Parts that are too small to be considered extended parts are removed (step 610), leaving only those which are likely to correspond to an extended part of the object on the belt. It is further determined if the remaining parts are connected to the main object (step 620). The length of each remaining part is then computed (step 630), with those that are not of sufficient length to correspond to an extended part of the imaged item on the belt being removed. The result of this process on the subtracted image of Element 600 is shown in Element 640.
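An OpenCV sketch of the open-and-subtract step, with an illustrative kernel size and minimum-area filter (both assumptions):

```cpp
#include <opencv2/opencv.hpp>

// Morphological opening removes thin protrusions; subtracting the opened
// mask from the original leaves only those thin, potentially extended parts.
cv::Mat isolateExtendedParts(const cv::Mat& segmented) { // CV_8U mask
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE,
                                               cv::Size(15, 15));
    cv::Mat opened, thinParts;
    cv::morphologyEx(segmented, opened, cv::MORPH_OPEN, kernel);
    cv::subtract(segmented, opened, thinParts);
    // Remove parts too small to be genuine extended parts.
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(thinParts, labels, stats,
                                             centroids);
    cv::Mat result = cv::Mat::zeros(thinParts.size(), CV_8U);
    for (int i = 1; i < n; ++i)
        if (stats.at<int>(i, cv::CC_STAT_AREA) >= 30)
            result.setTo(255, labels == i);
    return result;
}
```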
Once the extended parts have been isolated, the processing means is further configured to categorise each of the extended parts. In this example, the categories are: bag tag, wheel, separate object and outside belt. An extended part is categorised as a bag tag if its corresponding RGB data suggests it is white (the standard colour of known tags) and if its infrared intensity is within a range considered typical for an imaged bag tag. Specifically, this range is an infrared intensity of between 30 and 125. Figure 16A shows an example scan which resulted in a conclusion of a bag tag being detected. An extended part is categorised as a wheel if the object is likely to correspond to a suitcase, the Z coordinates of the part are above a value expected for the wheel of known items of luggage (i.e. suitcases), and its dimensions correspond to those of known suitcase wheels. In this example, an object is deemed likely to be a suitcase if a rectangle fitted to its top surface corresponds to the profile of known suitcases, and the extended parts must have Z > 100 mm to correspond to what is expected of a known wheel. Figure 16B shows an example scan which resulted in a conclusion of wheels being detected. In this example, an extended part is categorised as a separate object if its distance from the main object is larger than 5 pixels, the height difference between the extended part and the main object is over 300 mm, the extended part is large compared to the width of its connection with the main object and it meets a minimum size threshold. Figure 16C shows an example scan which resulted in a conclusion of a separate object being detected. An extended part is categorised as outside of the belt if its distance from the main object is larger than 15 pixels, the height difference between the extended part and the main object is over 300 mm and the majority of its pixels are located before or after the belt (see Figure 10). Figure 16D shows an example scan which resulted in a conclusion of a separate object outside of the belt being detected. Figure 17 is a decision tree summarising how the extended parts are classified.
At this stage, the processing means is further configured to detect holes in the segmented object. Figure 18 is a flowchart outlining an example of said process. Holes can be indicative of an item with straps, a backpack for example, or an item that is not conveyable. To detect holes, holes in the segmented object are filled (step 650). An example item of luggage on a belt is shown in Element 660. Element 670 then shows the resulting segmented object and Element 680 shows the segmented object with holes having been filled (referred to herein as a filled object). Using the combined scan, what is potentially background (i.e. not the segmented item) is detected (step 690). In this example, the background is identified by looking for regions in which the infrared intensity is similar and height difference is small. The image comprising the segmented object and the image comprising the filled object are then combined (step 700). The result is an image comprising just the holes. Element 710 shows the detected holes overlaid on the image of Element 660.
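A simplified flood-fill sketch of the hole-detection idea; it assumes the top-left pixel is background and omits the infrared and height cross-checks described above:

```cpp
#include <opencv2/opencv.hpp>

// Flood-fill the background from the border; pixels the fill cannot reach
// are enclosed by the object, i.e. holes.
cv::Mat detectHoles(const cv::Mat& segmented) {   // CV_8U, object = 255
    cv::Mat flood = segmented.clone();
    cv::floodFill(flood, cv::Point(0, 0), cv::Scalar(255)); // assumes the
                                                  // corner is background
    cv::Mat holes;
    cv::bitwise_not(flood, holes);                // unreached pixels = holes
    return holes;
}
```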
At Figure 4, step 720, the processing means is configured to determine whether the scanned object is of an irregular shape or if multiple objects have been scanned. To do this, the height data is used. Figure 19 is a flowchart outlining an example of such a process. Areas of similar height are clustered together (step 730). In this example, this is done iteratively by starting at the point of largest height and using known flood fill algorithms. Element 740 shows an example scan of two items on a belt and Element 750 shows the result of using the flood fill algorithms on the corresponding height data. At step 760, the local height variance in the data is calculated. This allows potential boundaries between different objects to be identified. Element 770 shows the boundaries identified in this way for the scan of Element 740. The processing means is then configured to calculate the properties of each identified separate area (step 780). Said areas which have a sufficiently large height or which are not narrow in shape are then filtered from the image (step 790). These constitute areas which are likely to correspond to separate items on the belt or which are likely to be indicative of an item with an irregular shape. If only one area is filtered, it is concluded that there are not multiple objects (step 800). If more than one area is filtered and the areas are of comparable size, it is concluded that there are multiple objects (step 810). If more than one area is filtered and the areas are not of a comparable size, it is concluded that the object is of an irregular shape (step 820). In this way, what constitutes an irregular shape is defined by the algorithm itself.
In this example, three areas of comparable size are filtered. It is, as a result, concluded that multiple objects are on the belt. An example of a scan which resulted in an irregular shape conclusion is shown in Figure 20A. In this example, the item which is deemed to have an irregular shape is a car seat for infants. An example of a scan which resulted in neither a conclusion of irregular shape nor of multiple objects is shown in Figure 20B. An example of a scan which resulted in a conclusion of an irregular shape due to two neighbouring items is shown in Figure 20C.
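The final decision rule above can be written compactly; the 2:1 "comparable size" ratio below is an assumption, since the description leaves the comparability test to the algorithm itself:

```cpp
#include <algorithm>
#include <vector>

enum class ScanConclusion { SingleObject, MultipleObjects, IrregularShape };

// One filtered area: a single object. Several areas of comparable size:
// multiple objects. Several areas of differing size: an irregular shape.
ScanConclusion classifyFilteredAreas(const std::vector<double>& areaSizes) {
    if (areaSizes.size() <= 1) return ScanConclusion::SingleObject;
    auto [mn, mx] = std::minmax_element(areaSizes.begin(), areaSizes.end());
    if (*mx <= 2.0 * *mn) return ScanConclusion::MultipleObjects;
    return ScanConclusion::IrregularShape;
}
```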
At Figure 4, step 830, the processing means is configured to determine the roundness of the segmented object. This is achieved using the point cloud data. For each data point, the X, Y and Z coordinates are used to calculate the normal ([Nx, Ny, Nz]) of said point. Based on the normal, each pixel is assigned a possible centre point ([Cx, Cy, Cz]) and radius (R), under the assumption that the object is round. For illustration, an example data point is shown in Figure 21 with its resulting normal, radius and centre point. In this example, the normals are calculated as follows:

dZdx = (Z[x+1, y] - Z[x-1, y]) / 2
dXdx = (X[x+1, y] - X[x-1, y]) / 2
dZdy = (Z[x, y+1] - Z[x, y-1]) / 2
dYdy = (Y[x, y+1] - Y[x, y-1]) / 2
N1 = dZdx / dXdx
N2 = dZdy / dYdy
Nz = 1 / sqrt(1 + N1*N1 + N2*N2)
Nx = N1 * Nz
Ny = N2 * Nz

For each point of a round object on the belt:

X = Cx + R*Nx
Y = Cy + R*Ny
Z = Cz + R*Nz

For a round object resting on the belt, Cz = R, so Z = R + R*Nz. The radius is then calculated from Z for every pixel:

R = Z / (1 + Nz)

and the centre is calculated for every pixel:

Cx = X - R*Nx
Cy = Y - R*Ny
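A direct C++ transcription of these per-pixel formulas (the accessor layout and function name are illustrative; boundary pixels and near-vertical surfaces, where dXdx or dYdy approach zero, would need guarding in practice):

```cpp
#include <cmath>

struct RoundnessEstimate { double nx, ny, nz, radius, cx, cy; };

// X, Y, Z are row-major coordinate maps of width w; (x, y) is an interior pixel.
RoundnessEstimate estimateAtPixel(const float* X, const float* Y,
                                  const float* Z, int w, int x, int y) {
    auto at = [w](const float* m, int px, int py) { return m[py * w + px]; };
    double dZdx = (at(Z, x + 1, y) - at(Z, x - 1, y)) / 2.0;
    double dXdx = (at(X, x + 1, y) - at(X, x - 1, y)) / 2.0;
    double dZdy = (at(Z, x, y + 1) - at(Z, x, y - 1)) / 2.0;
    double dYdy = (at(Y, x, y + 1) - at(Y, x, y - 1)) / 2.0;
    double n1 = dZdx / dXdx;
    double n2 = dZdy / dYdy;
    double nz = 1.0 / std::sqrt(1.0 + n1 * n1 + n2 * n2);
    double nx = n1 * nz;
    double ny = n2 * nz;
    // For a round object resting on the belt, Cz = R, so Z = R + R*Nz.
    double r = at(Z, x, y) / (1.0 + nz);
    return {nx, ny, nz, r,
            at(X, x, y) - r * nx,    // Cx = X - R*Nx
            at(Y, x, y) - r * ny};   // Cy = Y - R*Ny
}
```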
Figure 22A shows an item on a belt, Figure 22B shows the resulting height/depth data for the item and Figure 22C shows the resulting calculated normals in the x direction and y direction. Figure 22D shows the median radius calculated for the item. In this example, the median radius is 87mm. Figure 22E shows the x-coordinate of the centre point (constant).
In this example the median x-coordinate of the centre point is 18mm. Figure 22F then shows the median y-coordinate of the centre point. In this example the median y-coordinate is 7mm.
Figure 23 is a flowchart outlining an example process for determining whether an imaged item is round from the segmented object and the image comprising the X, Y, Z data. The normals, radii and centre points are calculated as outlined above from the X, Y, Z data (steps 840 and 850). From the segmented object, the most likely radius and centre point for the object, under the assumption it is round, are calculated (step 860). In this example, this is done using principal component analysis to find the least dominant direction. This approach is also valid for spherical and cylindrical shapes. At step 870, pixels sharing a similar radius and centre point are further segmented from the image, and the ratio of pixels belonging to the 'round' object is calculated (step 880). In this example, this is given as a percentage. This provides an indication of how round the object is. If the ratio is sufficiently high, it is concluded that the segmented object is of a round shape. The processing means is further configured to determine whether a round object has a stable base, as in the case of a fortune teller's sphere, for example. In this example, this is done by identifying points that exist at a small height away from the identified round shape (step 890). If a sufficient number of points are found to exist in said area (i.e. over a threshold T), an example of which is shown in Element 900, the object is marked as having a stable base (step 910). The round conclusion is, as a result, withdrawn. Figure 24A shows a scanned item resulting in a roundness score of 81%. Figure 24B shows a scanned item resulting in a roundness score of 53%, but that is further found to have a stable base. Figure 24C shows a scanned item resulting in a roundness score of 22%. Figure 24D shows an example scanned item resulting in a roundness score of 54%.
Referring again to the flowchart of Figure 4, the above-described processes of block 390 result in a plurality of metrics. Said metrics are then used to draw a number of conclusions about the scanned belt environment. These are outlined in block 920. At step 930, if no object was detected on the belt area at step 400, it is concluded that there is no object. If this is the conclusion, the further steps of 940 to 970 are not carried out and this is the result of the scan of the belt environment. At step 940, if a tub is detected at step 450, this is set as the overall conclusion. If this is the conclusion, the further steps 950 to 970 are not carried out and this is the result of the scan of the belt environment. At step 950, if multiple objects are detected due to extended parts being marked as separate objects or the height data suggesting multiple objects, it is concluded that there are multiple objects on the belt.
If this is the conclusion, the further steps 960 to 970 are not carried out and this is the result of the scan of the belt environment. At step 960, if an irregular shape was detected due to extended parts, holes or the height data, it is concluded that the item on the belt is of an irregular shape. If this is the conclusion, the further step 970 is not carried out and this is the result of the scan of the belt environment. At step 970, if at step 830 it was determined that the object is round, this is set as the overall conclusion and this is the result of the scan of the belt environment. Ultimately, at step 980, if it is concluded that the scanned item is of an irregular shape or is round, the item is considered non-conveyable. If at step 990 none of the previous steps of block 920 results in an overall conclusion, it is determined that the segmented object represents an item on the belt that is conveyable.
If a scanned item is found to be conveyable, the processing means is further configured to determine the dimensions of the item. In this example, the length and width of the item are determined from the X, Y coordinates of a bounding box around the item. Figure 25 shows an example bounding box 1000 around a bag 100 in a scan of the belt environment. To determine the height of the object, an image representing the depth difference between the segmented item on the belt and the belt is computed. Noise is then removed from the image. An example of such an image is shown in Figure 26. The height of the bag 100 is then taken to be the maximum depth difference in the image.
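By way of illustration only, this measurement may be sketched using the OpenCvSharp4 wrapper referred to later in this document; the pixel-to-millimetre scale factor is assumed to be known from calibration, the depth-difference image is assumed to be in millimetres, and the method name is illustrative:

using OpenCvSharp;

static class ItemDimensions
{
    // Length and width from a bounding box around the segmented item; height
    // from the maximum belt-to-item depth difference in the (denoised) image.
    public static (double Length, double Width, double Height)
        Measure(Mat segmentedMask, Mat depthDifference, double scale)
    {
        // Bounding box of the non-zero (item) pixels in the segmentation mask.
        Rect box = Cv2.BoundingRect(segmentedMask);

        // The height of the item is taken as the maximum depth difference.
        Cv2.MinMaxLoc(depthDifference, out double _, out double maxDiff);

        return (box.Width * scale, box.Height * scale, maxDiff);
    }
}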
A set of tables summarising the accuracy of the algorithm summarised in Figure 4 is shown in Figure 27. Table A shows how many scans resulted in correct and incorrect conclusions by the algorithm (no object, conveyable (OK), multiple objects, non-conveyable, tub detected). Table B then summarises the total correct and incorrect conclusions in Table A.

Determining the conveyability and dimensions of an item of luggage - V2

An embodiment of one aspect of the present invention will now be described referring to Figures 28 to 29.
System 10 comprises a processing device configured to determine whether an item of luggage placed on the belt is conveyable by the system along a path between an origin and a destination. In this example, the origin is a self-service bag drop and the destination is an area in which luggage is sorted for loading onto a mode of transportation such as an aircraft. The processing device comprises processing means. Said processing means is configured to receive data from a camera. In this example, the camera is configured to generate point cloud data representing a portion of the belt of the system and any articles that are on the belt. That is, a plurality of coordinates in three-dimensional space corresponding to said belt environment. In this example, the camera is an Intel RealSense D435 stereoscopic camera. In this example, the camera is configured to capture an image in response to a command by a common use self-service app. In other examples, the camera is configured to capture an image in response to weight on the belt being detected (i.e. due to an item being placed on the belt). The weight is detected using a scale coupled to the belt. The processing means is then configured to determine whether or not the item is conveyable, among other conclusions discussed in more detail below, using the point cloud data. In this example, the algorithm used by the processing means (see Figure 28) is a PointCloud-based algorithm. It is written in C#, and uses only Microsoft .NET libraries: System.Numerics.Vectors (https://www.nuget.org/packages/System.Numerics.Vectors). Other programming languages may be used, and may also use Microsoft .NET libraries.
Figure 28 is a flowchart outlining the steps performed by the processing means in response to receiving point-cloud data from the camera after an item of luggage has been placed on the belt. Before the process outlined in Figure 28 commences, the system is first calibrated. During said calibration stage, the belt is imaged when it is empty. This generates point cloud data representing the empty belt environment. Furthermore, during this calibration stage, a region of interest on the belt is defined. This region of interest corresponds to where on the belt items of luggage are to be placed for processing. Once calibrated, the system is ready for an item of luggage to be placed on the belt for processing.
Referring now to Figure 28, in a first step, the point cloud data received is filtered in a number of ways, summarised within block 1010. At step 1020 the point cloud data received is subtracted from the point cloud data representing the empty belt environment obtained during the calibration stage. At step 1030, a reduction filter is applied to the point cloud data. The filter reduces the density of the point cloud data. This increases the speed of processing performed by the system. At step 1040, the point cloud data is adjusted such that it is centred on the belt. This is done because the camera is configured to generate the point cloud data such that it is centred on the camera. In this example, this is done by converting the zero point of the z axis from the camera being z=0 to the belt being z=0. Furthermore, at step 1040, the point cloud is rotated such that the belt is level in the point cloud. In this example, the amount of rotation required is calculated during the calibration stage discussed above. At step 1050, all point cloud data outside of the region of interest is removed. This leaves just point cloud data representing the region of interest on the belt environment, where items are expected to be placed for processing. In this example, at step 1060, a filter is applied to reduce the effect of interpolation on the point cloud data, as the Intel RealSense D435 camera is configured to interpolate the point cloud. The result of steps 1020 to 1060 within block 1010 is referred to herein as the filtered point cloud.
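By way of illustration only, the filtering steps of block 1010 may be sketched in C# using the System.Numerics types named above; the method names, the tolerance and the brute-force nearest-point background test are illustrative simplifications rather than the exact implementation:

using System.Collections.Generic;
using System.Numerics;

static class CloudFilters
{
    // Background subtraction: drop points lying within 'tolerance' of the
    // empty-belt calibration cloud (illustrative nearest-point test only).
    public static List<Vector3> SubtractBackground(
        List<Vector3> cloud, List<Vector3> emptyBelt, float tolerance)
    {
        var result = new List<Vector3>();
        foreach (var p in cloud)
        {
            bool nearBackground = false;
            foreach (var b in emptyBelt)
            {
                if (Vector3.Distance(p, b) < tolerance) { nearBackground = true; break; }
            }
            if (!nearBackground) result.Add(p);
        }
        return result;
    }

    // Density reduction: keep every n-th point to speed up later processing.
    public static List<Vector3> Reduce(List<Vector3> cloud, int n)
    {
        var result = new List<Vector3>();
        for (int i = 0; i < cloud.Count; i += n) result.Add(cloud[i]);
        return result;
    }

    // Re-centre so that the belt surface is at z = 0, then crop to the region
    // of interest; beltZ and the ROI bounds come from the calibration stage.
    public static List<Vector3> CentreAndCrop(
        List<Vector3> cloud, float beltZ, Vector3 roiMin, Vector3 roiMax)
    {
        var result = new List<Vector3>();
        foreach (var p in cloud)
        {
            var q = new Vector3(p.X, p.Y, p.Z - beltZ);
            if (q.X >= roiMin.X && q.X <= roiMax.X &&
                q.Y >= roiMin.Y && q.Y <= roiMax.Y &&
                q.Z >= roiMin.Z && q.Z <= roiMax.Z)
            {
                result.Add(q);
            }
        }
        return result;
    }
}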
Referring still to the flowchart of Figure 28, in a second step, the processing means is configured to derive a plurality of metrics from the filtered point cloud. These metrics are summarised within block 1070. At step 1080, the filtered point cloud is separated into clusters. This identifies separate objects in the point cloud data. In this example, a known k-nearest neighbour algorithm is used to determine which points in the filtered point cloud are close enough to one another to form a cluster. These clusters are then converted into separate objects in the point cloud. As the point cloud represents three-dimensional space, the surface area and dimensions of said objects can be determined by the processing means. At step 1090, it is determined which of the separate objects are sufficiently close such that they collide with one another. Those that do collide with one another are marked as belonging to the same group. In this example, whether separate objects collide is based on a convex hull around one of the objects. At step 1100, metrics that will subsequently be used to determine whether the imaged item on the belt is of an irregular shape are calculated, with the definition of irregular shape being defined by the algorithm itself as discussed below. In this example, said metric is the number of points in the filtered point cloud that are considered noise. Whether a point is noise is determined by, again, using a known k-nearest neighbour algorithm. If a point cannot be linked to a cluster, it is considered noise. In this example, the configuration values used for the k-nearest neighbour algorithm are stricter than those used to cluster the filtered point cloud at step 1080. Resulting from the steps 1080 to 1100 within block 1070 are metrics that are subsequently used to determine whether the imaged item is conveyable, among other conclusions.
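By way of illustration only, the noise test may be sketched in C# as a simple density check in the spirit of the k-nearest-neighbour step described above; the brute-force search and the parameter names are illustrative simplifications:

using System.Collections.Generic;
using System.Numerics;

static class CloudClustering
{
    // A point is treated as noise if fewer than k other points lie within
    // 'radius' of it; remaining points are treated as clusterable. The values
    // of k and radius stand in for the (stricter) configuration values the
    // algorithm uses at this step.
    public static (List<Vector3> Clustered, List<Vector3> Noise) Split(
        List<Vector3> cloud, int k, float radius)
    {
        var clustered = new List<Vector3>();
        var noise = new List<Vector3>();
        for (int i = 0; i < cloud.Count; i++)
        {
            int neighbours = 0;
            for (int j = 0; j < cloud.Count; j++)
            {
                if (i != j && Vector3.Distance(cloud[i], cloud[j]) <= radius)
                {
                    neighbours++;
                    if (neighbours >= k) break;
                }
            }
            (neighbours >= k ? clustered : noise).Add(cloud[i]);
        }
        return (clustered, noise);
    }
}

The ratio of the noise list to the total cloud size is then the smoothness metric evaluated at step 1160 below.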
Referring still to the flowchart of Figure 28, in a third step, the processing means is configured to draw a number of conclusions about the imaged belt environment and the imaged item on the belt. These conclusions are summarised within block 1110. At step 1120, it is determined whether or not there are objects on the belt. In this example, if the clustered objects of step 1080 are not sufficiently large, it is determined that there are no objects on the belt (i.e. the belt is empty). If this is the conclusion, the further steps of block 1110 are not carried out and this is the result of the scan of the belt environment. At step 1130, it is determined whether there are multiple objects on the belt. In this example, if a plurality of the clustered objects of step 1080 are sufficiently large and do not belong to the same group at step 1090, it is determined that there are multiple objects on the belt (for example, a first item of luggage and a second item of luggage). If this is the conclusion, the further steps of block 1110 are not carried out and this is the result of the scan of the belt environment. At step 1140, it is determined whether the entirety of the item of luggage placed on the belt is within the region of interest determined at the calibration stage. If the item is on the edge of the region of interest (i.e. too far in front of the region or too far behind the region), it is determined that the item needs to be repositioned. If this is the conclusion, the further steps of block 1110 are not carried out and this is the result of the scan of the belt environment. At step 1150, it is determined whether an item on the belt corresponds to a tub. In this example, a tub is identified from the filtered point cloud if the cloud comprises a profile that corresponds to that of the edge of a known tub. If so, it is determined that a tub is on the belt. If this is the conclusion, the further steps of block 1110 are not carried out and this is the result of the scan of the belt environment. At step 1160, it is determined whether the scanned item is smooth. In this example, this is determined by whether the ratio of noise points to total cloud points (see step 1100) exceeds a threshold. If the threshold is exceeded, it is determined that the scanned item is non-conveyable due to its irregular shape. If this is the conclusion, the further steps of block 1110 are not carried out and this is the result of the scan of the belt environment. At step 1170, it is determined from the filtered point cloud how many height peaks there are in the cloud. If this amount exceeds a certain number, it is concluded that the scanned item is non-conveyable due to its irregular shape. If this is the conclusion, the further steps of block 1110 are not carried out and this is the result of the scan of the belt environment. At step 1180, it is determined whether the grouped point clouds from step 1090 are of sufficient height and surface area to correspond to an item on the belt (i.e. whether dimensions of the scanned objects exceed a minimum threshold). If not, it is determined that there is no object on the belt. If this is the conclusion, the further steps of block 1110 are not carried out and this is the result of the scan of the belt environment. At step 1190, it is determined whether the shape of the filtered point cloud is of a spherical or cylindrical shape. If so, it is determined that the item on the belt is non-conveyable due to it having a spherical or cylindrical shape.
If this is the conclusion, the further steps of block 1110 are not carried out and this is the result of the scan of the belt environment. At step 1200, it is determined if the item has straps. This is based on the number of separate small objects within the group of the biggest objects. If it is determined that the item comprises straps, the scanned item is deemed non-conveyable; the further steps of block 1110 are not carried out and this is the result of the scan of the belt environment. Finally, at step 1210, it is determined whether the scanned item is conveyable. In this example, if none of steps 1120 to 1200 draws a conclusion, the filtered point cloud is considered to correspond to a conveyable item on the belt. This is then the result of the scan of the belt environment. If found to be conveyable, the processing means is further configured to determine the dimensions of the item on the belt. In this example, this is done by taking the dimensions of the largest of the separate objects on the belt from step 1080.
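By way of illustration only, the ordered, short-circuiting evaluation of block 1110 may be sketched in C# as follows; the enum members and predicate parameters are illustrative stand-ins for the metrics described above, not part of the described system:

enum ScanResult
{
    NoObject, MultipleObjects, RepositionRequired, TubDetected,
    IrregularShape, RoundShape, Straps, Conveyable
}

static class Conclusions
{
    // The first check that fires determines the result of the scan; later
    // checks are not carried out, mirroring the early exits described above.
    public static ScanResult Evaluate(
        bool noObject, bool multipleObjects, bool outsideRegionOfInterest,
        bool tubDetected, bool tooNoisy, bool tooManyHeightPeaks,
        bool belowMinimumSize, bool sphericalOrCylindrical, bool hasStraps)
    {
        if (noObject) return ScanResult.NoObject;
        if (multipleObjects) return ScanResult.MultipleObjects;
        if (outsideRegionOfInterest) return ScanResult.RepositionRequired;
        if (tubDetected) return ScanResult.TubDetected;
        if (tooNoisy || tooManyHeightPeaks) return ScanResult.IrregularShape;
        if (belowMinimumSize) return ScanResult.NoObject;
        if (sphericalOrCylindrical) return ScanResult.RoundShape;
        if (hasStraps) return ScanResult.Straps;
        return ScanResult.Conveyable;
    }
}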
A set of tables outlining the accuracy of the algorithm summarised in Figure 28 is shown in Figure 29. Table A shows how many scans resulted in correct and incorrect conclusions by the algorithm (no object, conveyable (OK), multiple objects, non-conveyable, tub detected, repositioning required). Table B then summarises the total correct and incorrect conclusions in Table A.

Barcode Scanning

An embodiment of one aspect of the present invention will now be described with reference to Figures 30 to 31.
The system 10, as shown in Figure 30, comprises a subsystem configured to read a barcode on a bag tag attached to an item of luggage. In this example, the barcode is a visual barcode. To read the barcode, the system comprises a plurality of cameras 1220A-D. In this example, the cameras are Basler cameras. Figure 30 shows where on an arch structure of the system the cameras are located. In this example, there are four cameras.
Figure 31 then illustrates the field of view of said cameras 1230A-D. Referring back to Figure 30, a first of the plurality of cameras 1220A is located on the first sidewall 40 of the arch and a second of the plurality of cameras 1220B is located on a second sidewall 50 of the arch. A third of the plurality of cameras 1220C is located on the overhead panel 60 of the arch. Finally, a fourth of the plurality of cameras 1220D is also located on the second sidewall; however, it is positioned at a distance from the second of the plurality of cameras.
In other words, the fourth of the plurality of cameras is offset from the second of the plurality of cameras. Figure 31 shows the resulting fields of view of the first, second, third and fourth cameras. All fields of view are directed towards the belt 70, where an item of luggage is to be placed for processing. However, with the different positions of the cameras, the field of view of each camera is directed towards the belt from a different angle. This allows the system to look at the bag and, therefore, the associated bag tag from multiple angles, increasing the likelihood that a barcode will be read regardless of how the item of luggage is placed on the belt by the passenger, for example. In other words, the read rate is increased.
The system further comprises processing means configured to read said barcodes and retrieve information associated with the barcodes. In this example, the software used to extract barcode information from images of the barcodes is developed by Basler. Specifically, the software is Halcon (from MVTec).
The structure of the arch and the positioning of each of the plurality of cameras 1220A-D with reference to said structure is described in more detail below, with reference to figures 1 and 30 of the drawings.
The first sidewall 40 comprises a first elongated portion 1240. The first elongated portion projects from the overhead panel 60 in a direction towards the belt 70. Said first elongated portion projects substantially perpendicularly from the overhead panel. The first elongated portion comprises a proximal end 1250 and a distal end 1260. Said proximal end is positioned adjacent to the overhead panel. Said distal end is the end opposing the proximal end. The first of the plurality of cameras 1220A is coupled to the first elongated portion. In this example, the first of the plurality of cameras 1220A is coupled adjacent to the distal end 1260 of the first elongated portion.
The second sidewall 50 comprises a second elongated portion 1270. The second elongated portion projects from the overhead panel 60 in a direction towards the belt 70.
Said second elongated portion 1270 projects substantially perpendicularly from the overhead panel. The second elongated portion comprises a proximal end 1280 and a distal end 1290. Said proximal end is positioned adjacent to the overhead panel. Said distal end is the end opposing the proximal end. The second of the plurality of cameras 1220B is coupled adjacent to the distal end 1290 of the second elongated portion 1270.
The second sidewall 50 further comprises a third elongated portion 1300. Said third elongated portion projects substantially perpendicularly from the overhead panel 60. The third elongated portion further extends in a direction substantially perpendicular to the direction in which the second elongated portion 1270 projects. In this way, the third elongated portion extends in a direction which is substantially parallel to the belt 70. In this example, a length of the third elongated portion is comparable to a length of the overhead panel. The third elongated portion has a proximal end 1310 and a distal end 1320. The proximal end is defined as that which lies adjacent to the overhead panel. The distal end is that opposing the proximal end.
The second sidewall 50 further comprises a fourth elongated portion 1330. Said fourth elongated portion projects from the distal end 1320 of the third elongated portion 1300 towards the belt 70. In this example, the fourth elongated portion 1330 projects substantially perpendicularly to the third elongated portion 1300 such that a corner is formed between the third 1300 and fourth elongated portions 1330. The fourth of the plurality of cameras 1220D is coupled adjacent to said corner and, therefore, the distal end of the third elongated portion.
A fifth of the plurality of cameras 2 may also be located on the overhead panel 60 of the arch. Usually, the fifth of the plurality of cameras is a camera or scanner which sends data to a processor which is configured to determine a volume of an article or bag.
For example, the processor may be configured to check or determine, from the data received from the fifth (volume) camera, any one or more of:
* whether the bounding box dimensions of the baggage are in line with the agreed and predetermined dimensions;
* the number of articles (multiple articles are rejected);
* the conveyability of the article, whether a tub is required, or whether a different orientation of the bag is required;
* whether a tub is present.
A minimal illustrative sketch of the first of these checks is given below.
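By way of illustration only, the bounding-box check may be sketched in C# as follows; the default dimension values are illustrative placeholders rather than the agreed and predetermined dimensions, which are operator-specific:

static class VolumeChecks
{
    // Checks a bounding box (in millimetres) against maximum agreed
    // dimensions; the defaults below are placeholders, not agreed values.
    public static bool WithinAgreedDimensions(
        double length, double width, double height,
        double maxLength = 900, double maxWidth = 750, double maxHeight = 430)
    {
        return length <= maxLength && width <= maxWidth && height <= maxHeight;
    }
}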
The fifth of the plurality of cameras 2 is usually offset from the third of the plurality of cameras 1220C. In the specific embodiment of figure 1, the fifth of the plurality of cameras is offset from the third of the plurality of cameras 1220C by a distance substantially equal to a length of the overhead panel 60. The fifth and third of the plurality of cameras may be offset in a direction substantially parallel to the belt.
Figure 1 shows a group of cameras, labelled 3, which comprises the first 1220A, second 1220B, third 1220C and fourth 1220D of the plurality of cameras. Usually, the group of cameras 3 are cameras which are configured to scan barcodes. For example, one-dimensional or two-dimensional barcodes may be scanned, such as those which are usually printed on bag tags. In a specific example, a barcode scanning camera may be used that delivers 5 frames per second at 20 MP resolution. However, other barcode cameras will be known to the skilled person. The cameras may be configured to scan Interleaved 2 of 5 (ITF) barcodes.
In figure 1, an RFID sensor 4 may be embedded in each of the first sidewall 40 and the second sidewall 50.
As shown in figure 1, an intrusion sensor may also be provided towards one edge of the overhead panel 60. Usually, the intrusion sensor is an infrared intrusion sensor which is configured to detect objects which are larger than a predetermined size, for example larger than 5 x 5 x 5 cm. The intrusion sensor sends a signal to an article handling system comprising a processor, which executes instructions to signal a belt driving mechanism to stop the belt movement in response to intrusion detection.
The overhead panel 60 links the first sidewall 40 and the second sidewall 50. The third of the plurality of cameras 1220C is coupled to said overhead panel. In this example, the panel comprises a substantially cuboid shape. The substantially cuboid shape has a face 1340 opposing the belt 70. The third of the plurality of cameras is coupled to said face.
Specifically, in this example, the third of the plurality of cameras is positioned substantially in the centre of said face of the panel.
The first elongated portion 1240 of the first sidewall and the second elongated portion 1270 of the second sidewall are of comparable length. Said length is less than a distance defined from the face 1340 of the panel 60 to the belt 70. In this example, said length is approximately half the distance from the face of the panel to the belt. The fourth elongated portion 1330 of the second sidewall, on the other hand, has a length comparable to the distance from the face of the panel to the belt. As a result of the structure of the first and second sidewalls, the first of the plurality of cameras 1220A and the second of the plurality of cameras 1220B are positioned in a same first plane. In this example, said first plane extends substantially parallel to the belt. The third of the plurality of cameras 1220C and the fourth of the plurality of cameras 1220D are positioned in a same second plane. Said second plane extends substantially parallel to the belt in this example. As a result of the structure of the first and second sidewalls, a distance between the first plane and the belt is smaller than a distance between the second plane and the belt.
In this example, each of the first, second and fourth of the plurality of cameras 1220A,B,D is coupled to a protruding portion 1350A,B,D of the first and second sidewalls 40, 50. Each of the protruding portions has a substantially trapezoid shape, the trapezoid shape comprising at least one surface that diagonally projects from the sidewall of which it forms part. In this example, each of the first, second and fourth of the plurality of cameras is coupled to such a surface such that its field of view is directed towards the belt 70. In this example, the first of the plurality of cameras is coupled to a first protruding portion 1350A at the distal end 1260 of the first elongated portion 1240. The second of the plurality of cameras is coupled to a second protruding portion 1350B at the distal end 1290 of the second elongated portion 1270. The fourth of the plurality of cameras is coupled to a third protruding portion 1350D at the distal end 1320 of the third elongated portion 1300. Alternative configurations are, however, possible.
General

The device may comprise a computer processor running one or more server processes for communicating with client devices. The server processes comprise computer readable program instructions for carrying out the operations of the present invention. The computer readable program instructions may be source code or object code written in any combination of suitable programming languages, including procedural programming languages such as C, object-oriented programming languages such as C#, C++ or Java, scripting languages, assembly languages, machine code instructions, instruction-set architecture (ISA) instructions, and state-setting data.
According to specific embodiments, the V3 algorithm, described on pages 15 to 24, may use OpenCV to execute one or more of the processing steps previously described. Of course, embodiments of the invention may use other libraries, and so the skilled person will appreciate that the scope of the claims should not be limited to OpenCV.
OpenCV (Open Source Computer Vision Library) is a C++ library that converts images into Matrix objects, on which mutations and analysis can be applied. Embodiments may use the C# wrapper around this library: OpenCvSharp4, version 4.6.0.20220608.
Specific embodiments may use the following functions:

Operations on arrays (https://docs.opencv.org/4.6.0/d2/de8/group__core__array.html):
Mean, Sqrt, Multiply, BitwiseAnd, BitwiseOr, BitwiseNot, Merge, MinMaxLoc, CountNonZero, Absdiff

Geometric image transformations (https://docs.opencv.org/4.6.0/da/d54/group__imgproc__transform.html):
WarpAffine

Camera calibration and 3D reconstruction (https://docs.opencv.org/4.6.0/d9/d0c/group__calib3d.html):
Undistort, ReprojectImageTo3D, ProjectPoints

Image filtering (https://docs.opencv.org/4.6.0/d4/d86/group__imgproc__filter.html):
MorphologyEx (Erode, Dilate, Open, Close), BilateralFilter, Filter2D, GetStructuringElement

Drawing functions (https://docs.opencv.org/4.6.0/d6/d6e/group__imgproc__draw.html):
FillPoly, DrawContours

Miscellaneous image transformations (https://docs.opencv.org/4.6.0/d7/d1b/group__imgproc__misc.html):
FloodFill, Threshold, DistanceTransform

Structural analysis and shape descriptors (https://docs.opencv.org/4.6.0/d3/dc0/group__imgproc__shape.html):
BoundingRect, MinAreaRect, ContourArea, ConvexHull, FindContours, ConnectedComponentsWithStats

Histograms (https://docs.opencv.org/4.6.0/d6/dc7/group__imgproc__hist.html):
CalcHist

Morphology operations from the OpenCV documentation may also be used: https://docs.opencv.org/4.x/d9/d61/tutorial_py_morphological_ops.html

Accordingly, it will be appreciated that in specific embodiments, the detection algorithm uses combinations of these functions to perform the desired analysis.
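By way of illustration only, a combination of some of the listed functions may be sketched with OpenCvSharp4 as follows; the threshold value and the sequencing are illustrative and do not represent the detection algorithm itself:

using OpenCvSharp;

static class DetectionSteps
{
    // Threshold an 8-bit, single-channel depth-difference image, find the
    // contours of the resulting blobs, and compute a bounding rectangle and
    // area for each candidate object.
    public static void FindCandidates(Mat depthDifference)
    {
        using var mask = new Mat();
        Cv2.Threshold(depthDifference, mask, 10, 255, ThresholdTypes.Binary);

        Cv2.FindContours(mask, out Point[][] contours, out _,
            RetrievalModes.External, ContourApproximationModes.ApproxSimple);

        foreach (var contour in contours)
        {
            Rect box = Cv2.BoundingRect(contour);
            double area = Cv2.ContourArea(contour);
            if (area > 100 && box.Width > 10 && box.Height > 10)
            {
                // A candidate object: further metrics would be derived here.
            }
        }
    }
}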
The wired or wireless communication networks described above may be a public, private, wired or wireless network. The communications network may include one or more of a local area network (LAN), a wide area network (WAN), the Internet, a mobile telephony communication system, or a satellite communication system. The communications network may comprise any suitable infrastructure, including copper cables, optical cables or fibres, routers, firewalls, switches, gateway computers and edge servers.
The system described above may comprise a Graphical User Interface. Embodiments of the invention may include an on-screen graphical user interface. The user interface may be provided, for example, in the form of a widget embedded in a web site, as an application for a device, or on a dedicated landing web page. Computer readable program instructions for implementing the graphical user interface may be downloaded to the client device from a computer readable storage medium via a network, for example, the Internet, a local area network (LAN), a wide area network (WAN) and/or a wireless network. The instructions may be stored in a computer readable storage medium within the client device.
As will be appreciated by one of skill in the art, the invention described herein may be embodied in whole or in part as a method, a data processing system, or a computer program product including computer readable instructions. Accordingly, the invention may take the form of an entirely hardware embodiment or an embodiment combining software, hardware and any other suitable approach or apparatus.
The computer readable program instructions may be stored on a non-transitory, tangible computer readable medium. The computer readable storage medium may include one or more of an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk.
Exemplary embodiments of the invention may be implemented as a circuit board which may include a CPU, a bus, RAM, flash memory, one or more ports for operation of connected I/O apparatus such as printers, display, keypads, sensors and cameras, ROM, a communications sub-system such as a modem, and communications media.

Claims (46)

  1. An image processing device comprising: processing means, wherein the processing means is configured to: a. receive an image of a region of interest comprising a first article in response to the article being placed on a belt, wherein the belt is configured to convey the article along a path between an origin and a destination; b. determine a plurality of coordinates in three-dimensional space associated with the region of interest; c. determine from the plurality of coordinates: i. whether a tub has been placed on the belt; ii. that the first article is conveyable along the path in response to a determination that a tub has been placed on the belt; and/or d. determine from the plurality of coordinates: i. whether the first article has an irregular shape; and/or ii. whether a second article has been placed on the belt; iii. that the first article is not conveyable along the path in response to a determination that the first article has an irregular shape or that a second article has been placed on the belt.
  2. The image processing device of claim 1 wherein the image comprises depth data associated with the image.
  3. The image processing device of claim 2 wherein the image further comprises infrared data associated with the image.
  4. The image processing device of claim 3 wherein the processing means is further configured to determine, from the depth data associated with the image, the infrared data associated with the image and the plurality of coordinates, whether the first article has a substantially round shape.
  5. The image processing device of claim 4 wherein the processing means is further configured to determine that the first article is not conveyable along the path in response to a determination that the first article has a substantially round shape.
  6. The image processing device of claim 3 wherein the processing means is further configured to determine, from the depth data associated with the image and the infrared data associated with the image, whether the first article has an irregular shape.
  7. The image processing device of claim 3 wherein the image further comprises data representing colours of the image.
  8. The image processing device of claim 7 wherein the processing means is further configured to determine, from the data representing colours of the image, the depth data associated with the image and the infrared data associated with the image, whether a second article has been placed on the belt.
  9. The image processing device of claim 5 wherein the processing means is further configured to determine that the article is conveyable along the path in response to a determination that the first article does not have a substantially round shape or an irregular shape, and that a second article has not been placed on the belt.
  10. The image processing device of claim 3 wherein the processing means is further configured to receive an image of an empty belt and determine, from the depth data associated with the image and the infrared data associated with the image, that the belt is empty.
  11. The image processing device of claim 7 wherein the processing means is further configured to generate an image of the first article by combining the depth data associated with the image, the infrared data associated with the image, the data representing colours of the image and the plurality of coordinates.
  12. The image processing device of claim 1 wherein the processing means is further configured to segment a first portion of the image representing the first article from a second portion of the image.
  13. The image processing device of claim 1 wherein the processing means is configured to determine whether a tub has been placed on the belt by determining from the plurality of coordinates whether a height of the first article comprises an edge corresponding to a shape of a known tub.
  14. The image processing device of claim 1 wherein the processing means is configured to determine whether a second article has been placed on the belt by determining from the plurality of coordinates whether a section of the image represents a height profile that is different from that of the first article.
  15. The image processing device of claim 8 wherein the processing means is configured to determine whether a second article has been placed on the belt by determining whether a shape of the first article in the image comprises extensions which are likely to represent a second article.
  16. The image processing device of claim 1 wherein the processing means is configured to determine whether the first article has an irregular shape by determining from the plurality of coordinates whether a height profile of the first article corresponds to that of an irregular shape.
  17. The image processing device of claim 4 wherein the processing means is configured to determine whether the first article has a substantially round shape by determining whether a plurality of normals associated with the plurality of coordinates correspond to a same point.
  18. The image processing device of claim 4 wherein the processing means is further configured to determine if the first article has a stable base.
  19. The image processing device of claim 18 wherein the processing means is further configured to withdraw a conclusion that the first article has a substantially round shape in response to determining that the first article has a stable base.
  20. The image processing device of claim 9 wherein the processing means is further configured to determine a length, width and height of the first article upon determining that the first article is conveyable along the path.
  21. The image processing device of claim 20 wherein the processing means is configured to determine the height of the first article using the depth data associated with the image by determining a difference between a depth of the first article and a depth of the belt.
  22. The image processing device of claim 20 wherein the processing means is configured to determine the width and length of the first article by fitting a rectangle such that it bounds a portion of the image representing the first article and determining a length and width of the rectangle.
  23. The image processing device of claim 1 wherein the processing means is further configured to determine, from the plurality of coordinates, whether the whole of the first article is not within a region of interest on the belt, the processing means being configured to process articles only within this region of interest.
  24. The image processing device of claim 1 wherein the processing means is further configured to determine, from the plurality of coordinates, whether a dimension of the first article is below a minimum threshold.
  25. The image processing device of claim 1 wherein the processing means is further configured to determine, from the plurality of coordinates, whether the first article comprises straps.
  26. The image processing device of claim 1 wherein the processing means is further configured to determine that the first article is not conveyable along the path in response to a determination that a dimension of the first article is below a minimum threshold, or that the first article comprises straps, or that the whole of the first article is not within a region of interest.
  27. The image processing device of claim 23 wherein the processing means is further configured to determine that the first article should be repositioned on the belt in response to a determination that the whole of the first article is not within the region of interest.
  28. The processing device of claim 24 wherein the processing means is further configured to determine that there is not an article on the belt in response to a determination that the dimension of the first article is below the minimum threshold.
  29. The processing device of claim 24 wherein the processing means is further configured to determine a plurality of dimensions of the first article if it is not determined that the first article is not conveyable along the path.
  30. The processing device of claim 1 wherein the processing means is further configured to filter the plurality of coordinates.
  31. The processing device of claim 30 wherein the processing means is configured to filter the plurality of coordinates by reducing the density of the data associated with the plurality of coordinates.
  32. The processing device of claim 30 wherein the processing means is configured to filter the plurality of coordinates by adjusting the plurality of coordinates such that they are centred on the belt.
  33. The processing device of claim 30 wherein the processing means is configured to filter the plurality of coordinates by removing any coordinates outside of a region of interest.
  34. The processing device of claim 30 wherein the processing means is configured to filter the plurality of coordinates by subtracting a plurality of coordinates corresponding to an empty belt from the plurality of coordinates associated with the region of interest.
  35. The processing device of claim 24 wherein the processing means is configured to determine whether a dimension of the first article is below the minimum threshold and whether a second article has been placed on the belt by forming clusters in the plurality of coordinates.
  36. The processing device of claim 35 wherein the processing means is configured to determine whether a second article has been placed on the belt and whether a dimension of the first article is below the minimum threshold by grouping the clusters.
  37. The processing device of claim 1 wherein the processing means is configured to determine whether the first article is of an irregular shape by clustering the plurality of coordinates and identifying coordinates which cannot be linked to a cluster.
  38. The processing device of any preceding claim wherein the device is configured to generate a signal configured to trigger a belt or conveyor of an article handling system in response to a determination that the first article is conveyable along the path.
  39. The processing device of any preceding claim wherein the device is configured to generate an alert signal for an agent of an article handling system in response to a determination that the first article is not conveyable along the path.
  40. An article handling system comprising: a camera (2); a belt or conveyor for conveying an article; wherein the camera has a field of view directed towards the belt or conveyor; and further comprising the processing device of any one of claims 1 to 39.
  41. The article handling system of claim 40 wherein the camera is configured to output data associated with an image captured by the camera.
  42. The article handling system of claim 41 wherein the data comprises depth data associated with the image and/or infrared data associated with the image and/or data representing colours of the image.
  43. The article handling system of claim 40 further comprising a sensor configured to detect articles which exceed a predetermined size threshold.
  44. The article handling system of claim 43 wherein the sensor is an infrared intrusion sensor.
  45. A method of determining whether an article is conveyable along a path, the method comprising: a. receiving an image of a region of interest comprising a first article in response to the article being placed on a belt, wherein the belt is configured to convey the article along a path between an origin and a destination; b. determining a plurality of coordinates in three-dimensional space associated with the region of interest; c. determining from the plurality of coordinates: i. whether a tub has been placed on the belt; ii. that the first article is conveyable along the path in response to a determination that a tub has been placed on the belt; and/or d. determining from the plurality of coordinates: i. whether the first article has an irregular shape; and/or ii. whether a second article has been placed on the belt; iii. that the first article is not conveyable along the path in response to a determination that the first article has an irregular shape or that a second article has been placed on the belt.
  46. A computer program product which, when executed, undertakes the method of claim 45.
GB2216480.0A 2022-04-29 2022-11-04 Article processing apparatus, system and method therefor Pending GB2622449A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/EP2023/061406 WO2023209234A1 (en) 2022-04-29 2023-04-28 Article processing apparatus, system and method therefor
PCT/EP2023/061403 WO2023209232A1 (en) 2022-04-29 2023-04-28 Article processing apparatus, system and method therefor
PCT/EP2023/061400 WO2023209230A1 (en) 2022-04-29 2023-04-28 Article processing apparatus, system and method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB2206350.7A GB202206350D0 (en) 2022-04-29 2022-04-29 Article processing apparatus, system and method therefor

Publications (2)

Publication Number Publication Date
GB202216480D0 GB202216480D0 (en) 2022-12-21
GB2622449A true GB2622449A (en) 2024-03-20

Family

ID=81943953

Family Applications (4)

Application Number Title Priority Date Filing Date
GBGB2206350.7A Ceased GB202206350D0 (en) 2022-04-29 2022-04-29 Article processing apparatus, system and method therefor
GB2216475.0A Pending GB2622448A (en) 2022-04-29 2022-11-04 Article processing apparatus, system and method therefor
GB2216480.0A Pending GB2622449A (en) 2022-04-29 2022-11-04 Article processing apparatus, system and method therefor
GB2216482.6A Pending GB2628522A (en) 2022-04-29 2022-11-04 Article processing apparatus, system and method therefor

Family Applications Before (2)

Application Number Title Priority Date Filing Date
GBGB2206350.7A Ceased GB202206350D0 (en) 2022-04-29 2022-04-29 Article processing apparatus, system and method therefor
GB2216475.0A Pending GB2622448A (en) 2022-04-29 2022-11-04 Article processing apparatus, system and method therefor

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB2216482.6A Pending GB2628522A (en) 2022-04-29 2022-11-04 Article processing apparatus, system and method therefor

Country Status (1)

Country Link
GB (4) GB202206350D0 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018104859A1 (en) * 2016-12-05 2018-06-14 Mectho S.R.L. Station for accepting items and process of accepting these latter
CN109102227A (en) * 2018-08-08 2018-12-28 天津航大航空设备有限公司 Luggage category detection method, self-help luggage equipment and storage medium
CN111353985A (en) * 2020-03-02 2020-06-30 电子科技大学 Airport self-service consignment luggage detection method based on depth camera
CN111598063A (en) * 2020-07-22 2020-08-28 北京纳兰德科技股份有限公司 Luggage category determination method and device
CN111783569A (en) * 2020-06-17 2020-10-16 天津万维智造技术有限公司 Luggage specification detection and personal bag information binding method of self-service consignment system
CN111899258A (en) * 2020-08-20 2020-11-06 广东机场白云信息科技有限公司 Self-service consignment luggage specification detection method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6967579B1 (en) * 2004-03-05 2005-11-22 Single Chip Systems Corporation Radio frequency identification for advanced security screening and sortation of baggage
CA2560430C (en) * 2005-09-21 2015-05-19 Mark Iv Industries Corp. Adaptive channel bandwith in an electronic toll collection system
JP4219942B2 (en) * 2006-05-19 2009-02-04 株式会社日立製作所 RFID system
FR2929605B1 (en) * 2008-04-04 2010-08-27 Ier METHOD AND SYSTEM FOR AUTOMATIC REMOVAL OF OBJECTS FOR THE TRANSPORT OF THESE OBJECTS
US8471683B2 (en) * 2010-06-09 2013-06-25 3M Innovative Properties Company Multilane vehicle tracking system
BR112016013477A2 (en) * 2013-12-13 2017-08-08 3M Innovative Properties Company ADJACENT RADIO FREQUENCY IDENTIFICATION (RFID) READER INTERFERENCE MITIGATION
US9830484B1 (en) * 2014-06-25 2017-11-28 Amazon Technologies, Inc. Tracking locations and conditions of objects based on RFID signals
CN107679438A (en) * 2017-10-13 2018-02-09 李志毅 A kind of ultrahigh-frequency tag reading device and method with image identification function
WO2019126666A1 (en) * 2017-12-22 2019-06-27 Datalogic Usa Inc. Data collection device with anti-microbial illumination
FR3081354B1 (en) * 2018-05-24 2021-06-11 Solystic LUGGAGE SORTING PROCESS IN AN AIRPORT
US11048890B2 (en) * 2019-01-11 2021-06-29 Nec Corporation Walk-through checkout station


Also Published As

Publication number Publication date
GB202206350D0 (en) 2022-06-15
GB202216480D0 (en) 2022-12-21
GB2628522A (en) 2024-10-02
GB202216475D0 (en) 2022-12-21
GB2622448A (en) 2024-03-20
GB202216482D0 (en) 2022-12-21

Similar Documents

Publication Publication Date Title
CN111461107B (en) Material handling method, apparatus and system for identifying a region of interest
EP3008666B1 (en) Image based object classification
US20200134857A1 (en) Determining positions and orientations of objects
US20140037159A1 (en) Apparatus and method for analyzing lesions in medical image
WO2016107474A1 (en) Vehicle checking method and system
US20150269740A1 (en) Image Processor Configured for Efficient Estimation and Elimination of Foreground Information in Images
CN111507390A (en) Storage box body identification and positioning method based on contour features
US10546198B2 (en) Method for the computer-aided recognition of a transport container being empty and device for the computer-aided recognition of a transport container being empty
KR101968024B1 (en) System and Method for Recognizing Double Loading of Baggage
WO2018104859A1 (en) Station for accepting items and process of accepting these latter
CN114693946A (en) Image anomaly detection method and device, computer equipment and storage medium
EP3349049B1 (en) Inspection devices and methods for inspecting a container
GB2622449A (en) Article processing apparatus, system and method therefor
WO2023209232A1 (en) Article processing apparatus, system and method therefor
US11062440B2 (en) Detection of irregularities using registration
JP2008225668A (en) Image processor
US11666948B2 (en) Projection instruction device, parcel sorting system, and projection instruction method
KR102457712B1 (en) System and Method for Recognizing Double Loading of Baggage
EP4177694A1 (en) Obstacle detection device and obstacle detection method
US7899245B2 (en) Morphological based segmenter
CN117576414B (en) Method, apparatus and storage medium for pit detection in ore image segmentation
CN114359023A (en) Method, equipment and system for dispatching picture stream to center based on complexity
CN117237962A (en) Vehicle identification method, system and identification equipment
JP2022010961A (en) Detection apparatus and detection program
KR20240076535A (en) Baggage location service system using baggage tage and baggage picture and method thereof