WO2018029361A1 - Improving the quality and power of baggage inspection systems - Google Patents

Improving the quality and power of baggage inspection systems

Info

Publication number
WO2018029361A1
Authority
WO
WIPO (PCT)
Prior art keywords
inspection
operator
image
computer system
images
Prior art date
Application number
PCT/EP2017/070500
Other languages
French (fr)
Inventor
Philip Shaw
Peter Heiland
Michael Torok
Hans-Jürgen MAAS
Original Assignee
Jec Capital Partners, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jec Capital Partners, Llc filed Critical Jec Capital Partners, Llc
Publication of WO2018029361A1 publication Critical patent/WO2018029361A1/en

Classifications

    • G01V5/20
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour

Definitions

  • the invention relates to improving the quality and power of visual inspection systems, such as for baggage. It relates particularly, but not exclusively, to supporting and qualifying human operators of those systems.
  • Baggage inspection systems can be generally described by a scanner that scans articles of baggage on a moving belt by means of x-ray sources and provides an image of all the items within the baggage.
  • the image is displayed to an operator, e.g. on a screen.
  • the system uses radiation sources with different power to be able to scan materials with different absorption levels.
  • the measurement reveals the outer shape of the various items and classifies them into subsets of items of certain groups of material. Items assigned to the same group of material, like organic materials, non-organic or metallic items, are represented in the image with the same color. Displaying the groups of materials by different colors helps the operator to quickly assess what items are in the article of baggage and whether or not there are items which need further inspection by a second human operator.
  • There might also be different types or means of image processing tools that can be applied to the scanned image and used by the operator. For instance, a simple example is contrast enhancement or noise reduction, which can help the operator. Further processing of supplementary images that may originate from screening the baggage with different power levels can also help to reveal items and display them even if their absorption is overlaid by the absorption from other items.
  • a capability of the inspection systems may be that the operator can switch between a set of different presentation types of the same result (in terms of different image processing means) or between results from different power levels and corresponding processing.
  • embodiments of the invention provide improved baggage inspection systems, and techniques for training operators of such systems.
  • the systems are typically located at airports, but systems for other areas of operation and checkpoints are also considered.
  • images of threat items are superimposed onto the image of screened baggage. This is a method which is used to train operators and also to certify them. Libraries of images of threat items to superimpose into scanned images are used for this purpose. The number of such superimpositions into images, as well as the frequency with which those items are superimposed, represents variables which allow for setting up training strategies but might also be used to certify operators for different security levels.
  • the library includes images of threat items with different orientation. By superimposing them into the scanned image, the operator learns to identify for instance not only a knife from its side view but also from its head view.
  • Another aspect of the invention is to use the results of the inspection process and to populate an image library that provides a database for additional technologies that may be used to support the manual inspection process afterwards.
  • a third aspect of the invention describes a solution of a cloud inspection system.
  • One part of the system allows for training personnel offline and for sufficiently qualifying them so that they can be appointed as staff members.
  • the second part of the system allows the number and the respective qualifications of remotely working operators to be flexibly configured and determined in consideration of individual needs at the inspection systems. All aspects of the invention described here are intended to be generally combinable.
  • operator gaze trajectories are measured to evaluate/analyze the actual inspection process. This may use additional cameras, for example with face/gaze and other tracking technologies.
  • Such a technique has the advantage of determining the actual behavior of an operator.
  • a relationship between the operator's attention towards the screen and external disturbances/working environment can be determined.
  • a relationship between operator scan decisions (negative/positive) and gaze distribution/gaze pattern can be determined.
  • a dependence between operator performance and attention over time, and within shift time, can be determined.
  • inspection trajectory is measured to quantify and qualify each individual inspection process.
  • Eye-tracking technologies may be used to measure an operator's gaze trajectory during an image inspection process.
  • Such a technique has the advantage of providing a detailed insight into each actual inspection process, for example: (i) did the operator inspect all areas of the image and to what extent? (ii) what eye movement behaviors are indicative of a true positive/negative decision, or a false negative?
  • Such a technique also has the advantage of providing direct feedback to the operator if the inspection process was not accomplished according to specified guidelines.
  • Eye-tracking technology may be used to measure the trajectory of the image inspection process and to calculate operator/group/benchmark metrics.
  • Such a technique has the advantage of providing a detailed insight into each operator's performance. Performance can be benchmarked against other operators to detect anomalies (fatigue, need for training etc.). Insight into the "inspection pattern" related to false positives and false negatives may be given.
  • Such a technique also has the advantage of deriving new training methodologies to address trending inspection problems.
  • a computer vision machine learning algorithm is trained to recognize key visual features that contribute to correctly escalated scan subjects.
  • Eye-tracking/gaze tracking data from a set of image assessments for which the true assessment is known may be used (e.g. test images, real data on images of baggage items that were escalated to a physical inspection).
  • Such a technique has the advantage of using dwell time, gaze behaviors and other metrics to identify areas of an image that are of interest to a human operator when making an inspection decision, and use these as training data for the machine learning algorithm.
  • Such a technique also has the advantage that a computer vision system trained in this way will be able to assist human operators on future real-world scans for which the results are not yet known.
  • machine learning technologies are used to deliver a semi-automated inspection process.
  • a machine learning algorithm may be trained to provide first-line evaluation of scanner images. Real-time human-operator behavior and decisions may be used to incrementally improve machine learning accuracy.
  • This technique has the advantage that a large quantity of training material that arises from day to day scanning by human operators is ideal for effective ML training.
  • This technique also has the advantage that, dependent on the confidence status of the techniques, ML-assisted inspection can be used to support the human operator inspection process and to pre-emptively escalate items that need more detailed inspection.
  • the technique has the further advantage, when using a distributed scanning team, that images that the machine determines to be suspect may be directed to more qualified human operators and/or a large number of operators overall to ensure a high-confidence decision.
  • a network of remote operators can provide an easily scalable inspection capability. Aggregation of multiple operator decisions for a single item of baggage may be used. Delivery of scanning images securely to remote/offsite operators for real-time inspection may be used. There are multiple advantages associated with this technique. The accuracy of each aggregated scanning decision may be increased. An operator's performance value may be determined, such as the closeness to consensus decision. If the individual performance value goes below a threshold, the operator's decision can be re-weighted or even excluded - the operator may need a break or additional training. The working environment of operators is improved. Throughput with positive impact on capex and opex is improved.
  • the effectiveness of a distributed scanning team is improved and a configurable level of scanning confidence delivered.
  • This may use determination of individual operator performance profile.
  • This may use an automated scanning team assembly to provide appropriate capacity and desirable skill level to meet particular scanning needs.
  • the technique allows, e.g., for the possibility to define desirable scanning profiles per flight, per airline, per passenger demographic or threat potential or at any other level of granularity.
  • a combination of a distributed scanning team with computer assisted inspection methods may be included in a network of operators.
  • This technique has the advantage of providing assistance for individual operators during the inspection process.
  • This technique also has the advantage of the ability to analyze assisted operators and non-assisted operators to provide new insight into the inspection process and further input into the ML process.
  • the proficiency of an operator is improved by providing real-time training.
  • a remote operator may be provided with real-time scanning images to assess in training mode. The operator responses may be compared to the consensus decisions from live operators to determine a level of proficiency.
  • This technique has the advantage of the provision of training and certification capabilities "without overhead” - results can be taken for the actual inspection as well as solely for training purposes.
  • This technique also has the ability to scale the process according to individual needs: opex and capex burdens can be improved.
  • a first camera can be used to monitor the attention of the operator to the screen thereby giving an indication of operator attention
  • a second camera can be used to monitor the environment in which the operator operates, thereby giving an indication of operator disturbance.
  • Graphs of operator attention and operator disturbance can be plotted, which can show where there is a correlation between disturbances and lack of attention, or where lack of attention has occurred without there being any disturbance.
  • a graph of accuracy and throughput over time can also be created, and can be compared with the graph of operator attention and operator disturbance to see correlations between, e.g., low accuracy and/or throughput, and operator disturbance and/or accuracy.
  • Operator eye-tracking may be monitored to determine an alert corresponding to insufficient operator attention. For example, if the eye-tracking indicates that an operator has inspected below a certain percentage of the baggage area displayed, or a number of items less than the number of items displayed for the baggage, an alert may be flagged to indicate that the operator's check is insufficient.
  • An operator's performance may be analysed over time, for example over an operator shift.
  • An indication of the percentage of the area of baggage which is checked over time, a percentage of the number of items in baggage which are checked over time, a percentage (or number) of threatening items which are checked over time, and the average time spent checking an item of baggage may all be accumulated for an operator.
  • a benchmark or threshold may be applied to one or all parameters, with the performance dropping below the benchmark indicating a potential alert condition.
  • Baggage inspection may be achieved in association with machine learning.
  • a machine is trained based on an assessment of captured images over time. These captured images are used to assist subsequent baggage inspection.
  • the system may take the database of results of operators' inspections, which includes images of such items that were correctly inspected as being threat items (true positive results). This information may be stored as metadata for such images in the database. Those images of items are used to train machine learning so that the appearance of such items within subsequent baggage inspections can be automatically detected and, e.g., highlighted to assist the operator's baggage inspection.
  • Network solutions are provided. By providing a plurality of operators remotely inspecting baggage images, the number of items per minute and the performance value may be increased, whilst the number of false positives and false negatives can be decreased, based on an aggregated assessment.
  • the system may provide individual and collective consensus. Where multiple operators inspect the same images, the analysis of individual operators may be weighted, for example based on an assessment of their individual performance. This may be to weight their analysis to be more predominant within the group, or at the other extreme to eliminate their analysis from the group.
  • the monitoring of an individual performance within a group may be used to determine when an individual operator should take a break, for example as their individual analysis is farthest away from the group consensus.
  • the system may combine individual operator inspection, remotely working operators, as well as autonomous inspection based on machine learning image inspection capabilities.
  • the system may provide to facilitate training within a group-based system. Where multiple operators monitor baggage, the identification of an individual analysis as being farthest away from the group consensus can be used to indicate a need for training of that individual operator within the group.
  • the system may also provide the capability to incorporate training for an operator in particular for uncertified operators (trainees) into the real processes of baggage inspection rather than only into processes set up for training.
  • the capability to determine real-time data, performance values and consensus decisions for operators that are certified and in charge of inspecting baggage allows for training in a real environment without overhead effort: uncertified operators can be measured, and their individual performance determined and improved, until a certification can be issued.
  • Figure 1 is an example schematic diagram of a baggage inspection system
  • Figure 2 is an example flow chart which shows a process operating in a typical security context
  • Figure 3 is an example schema showing how the first operator operates;
  • Figure 4 illustrates an example system implementation for use by a first operator;
  • Figure 5 illustrates an example of a scanned image;
  • Figure 6 illustrates an example system implementation for use by a second operator
  • Figure 7 shows an example possible schema for an image database
  • Figure 8 shows an example system
  • Figure 9 illustrates an example scan
  • Figure 10 illustrates an example cloud-based baggage inspection system
  • FIG. 1 is a schematic diagram of a baggage inspection system.
  • a scanner 2 scans sequentially presented articles 4 of baggage which are arranged in a queue on a moving belt 6.
  • the scanner displays an image 8 of the article of baggage being scanned on a screen 12 which is visible to a human operator 14.
  • the human operator inspects the image and seeks to identify items of interest in the image which he believes need further inspection according to specification or assessment criteria, which ultimately are items that might constitute a threat.
  • the system may use radiation sources with different power to be able to scan materials with different absorption levels. Different materials may be presented in different colours, with the same material being presented in the same colour for different items (see, for example, 10b and 10c).
  • a scan gives rise to an image.
  • a scan may give rise to a set of images.
  • a set of images may show, for example, different perspectives of an article or item.
  • FIG. 2 is a flow chart which shows a process operating in a typical security context.
  • a new bag arrives at step S2.
  • an X-ray scanner creates a scanned image 8.
  • At step S6 the operator 14 indicates whether or not he has identified any suspicious items. If he has, the bag is passed to a second operator for manual inspection at step S8. If he has not, the system is set ready for the next bag at step S10. It will readily be appreciated that the X-ray scan and the manual inspection may be carried out in different time frames and with different queues. That is, a number of bags may be scanned while a number of bags are queued for the second operator for manual inspection. After step S8, the system is set ready for the first operator for the next bag at step S10.
  • the operator who is in charge of screening the scanned image is further referenced as the operator or the first operator, whereas the operator who is in charge of accomplishing a physical inspection of a baggage item that was separated by the first operator is further referenced as the second operator.
  • the first operator 14 is required to assess whether or not an article of baggage 4 is to be passed for further physical inspection. That might be along a set of criteria defined at an on-site management level, i.e. there might be various security levels that have to be applied to the inspection and consequently considered by the operator.
  • a basic requirement for the operator is of course to sort out any article of baggage based on identifying items in the image that are clearly defined threat items. Additionally the operator may be generally required to sort out any baggage that appears to contain liquids. To identify liquids the operator may search for items that are displayed as organic items with a certain size and shape.
  • Figure 3 is a schema showing how the first operator operates with assessment criteria and scanned images to generate potential warnings by tagging an article for further inspection. Tagging an article can encompass identifying the suspected threat item(s) within the image.
  • Enhanced security levels may be scaled by the level of certainty (or better, uncertainty) as to the determination of the items within the baggage. Enhanced security levels may also be scaled by widening the specification of items that need further inspection, consequently by increasing the number of items to be detected within the baggage. For instance an enhanced security level may require the operator to search for metallic items that have a certain size and tag those articles of baggage as well. This may be due to the risk that a metallic item is capable of shielding other items from being inspected, e.g. a metallic environment can shield smaller non-metallic items.
  • One variation or a subset of such enhanced security levels may be defined by the size of metallic items that an operator identifies within the article of baggage.
  • Another enhancement of security level may be to require the operator to sort out baggage items that contain any type of non-metallic items that cannot clearly be identified.
  • Another security level may require the operator to search for electrical items so that they can be physically checked by the second operator. Smaller smart devices or power packs, as well as electrical items for personal body care within a baggage, may be examples of items that are required to be subject to further inspection.
  • the various results of the inspection of the first operator are defined herein. If the operator identifies one or more items within the baggage that need further physical inspection, e.g. by the second operator, then this is described as a group of positive inspections.
  • the second operator confirms that the identified item represents in fact an item that the first operator was required to pick out in accordance with the applied security level (whether or not the item is a threat item).
  • This case represents a true positive or, for reasons of simplicity, a positive inspection. If the physical inspection reveals that the first operator identified an item incorrectly, i.e. the identified item does not represent an item that was required to be picked out by the operator in accordance with the applied security level, then this is further referenced as a false positive. Accordingly, if the operator decides to pass through the article of baggage because he does not identify any item to be further inspected, then this is described as a group of negative inspections.
  • a "non performer” is therefore an operator who often picks up on items as potentially needing further inspection but which actually, after further inspection according to the applied security level, are assessed to be items that would not have needed further inspection - someone who induces many false positives. Similarly and actually even worse from a security perspective, a non-performing operator also induces false negatives.
  • Embodiments of the invention provided herein can target the current performance of an operator, as compared with known training techniques which only reveal knowledge about a general level of performance and quality of the operator.
  • With the training technique outlined in the Background section there is no further and detailed analysis accomplished and applied that indicates why some operators prove to be performers and some to be non-performers - in particular performing or non-performing on a comparable image.
  • aspects of the invention assist the operator in preventing false positives and consequently improve the efficiency of the inspection process.
  • One aspect of the invention provides an automated analysis of a manual inspection process to provide feedback for improving the manual inspection process.
  • This aspect of the invention provides a solution which allows for a continuously running verification of whether the operator performs within a defined and acceptable quality level.
  • a visual analysis of the operator's inspection performance is introduced.
  • a camera system is introduced that has a camera 16 to capture the operator 14 while screening the scanned images, in particular the face of the operator.
  • the system also comprises a computer 18 acting as an image processor and data store. With such a system it can be analyzed whether or not the operator is in fact inspecting and examining the image of the scanned baggage.
  • the computer can enable face tracking and calculation processes 20 so that the image of the operator 14 captured by the camera system can be used to generate some clear outputs relating to the operator's engagement with his inspection task:
  • the system can record a trajectory of the operator's inspection of the scanned image 8 and can do an overlay with the scanned image itself.
  • the system can analyze what the operator did before he identified an item for further inspection. As described above such a system may be used for analyzing what the operator is actually doing during the inspection.
  • This knowledge can then also be applied in a feedback mode 30 to support the continuously running inspection process.
  • FIG. 5 shows a scanned image 8 with an area 80 inspected as determined by eye tracking and a feedback score 82 based on the proportion of the full image inspected.
  • the scanned image does not have to be divided into fixed segments.
  • separated items may be already identifiable.
  • the trajectory can be matched against those items and the system can analyze whether all items are inspected properly.
  • the system can match the trajectory of the operator's inspection process and its timeline against those items. This makes it possible to ensure that the operator in fact spent the needed time and the defined level of attention, respectively, on those items.
  • the system might allow for flagging up if the quality falls below a certain threshold, e.g. requiring the operator to screen the image or the respective segment again, but also suggesting that the operator be replaced.
  • Another aspect of the invention enables the results of a manual inspection process to optionally be used for allowing an automated inspection and/or improving an automated inspection.
  • the current process within the baggage security check includes the step of identifying the item within a scanned image that needs further physical inspection. This is done through an operator interface 13 which can be a processor or interact with a processor associated with the display 12. Consequently that piece of baggage gets tagged to be separated and is led to a second operator 40 ( Figure 6) to check and confirm where appropriate the result of the inspection of the first operator.
  • Existing inspection systems may therefore sometimes provide a screen 42 for the second operator on which an image 8a of the scanned baggage (corresponding to the scan 8) is displayed and the item 10c identified by the first operator is indicated (e.g. by highlighting/marking/tagging, etc.). That gives some sort of guidance to the second operator who has to do the physical inspection of the article of baggage 4 and helps to increase efficiency of that part of the process.
  • the second operator's inspection allows the confirmation of whether the inspection of the first operator is a true positive or a false positive. Consequently it can be used to provide that kind of feedback which initially allows for a correct analysis of the first inspection process as outlined above, and which is needed to assess whether the first inspection was accomplished correctly or not.
  • the system can be used to establish a database 46 of images that can be used for further purposes:
  • the images are stored together with the information whether they represented a false positive or true positive when inspected by the first operator.
  • This information is entered into a computer 48 via input means 50.
  • the scan is received from the scanner 2 or the display 42. Consequently the database 46 is enriched and reveals images for both categories.
  • the database 46 of images is established and can be used to support the inspection process but also the training and certification process of operators.
  • items within an article of baggage can appear in various ways. Although the screening process should be able to identify the respective material in a reliable manner, the fact that items within a baggage are covered by other items, or in particular that threat items may be willfully shielded against reliable detection, is an issue to address.
  • the invention here allows not only a large database to be established from all the results of scanned baggage; it also links those images to the result of the subsequent physical inspection and to the categories of false positives and true positives.
  • Metadata of such images can be generated that allows for further processing and usage of the images. This can be information about the item itself, and also information about how the visual inspection process of the first operator was accomplished so that he finally decided to tag the item for further inspection (related to an applied security level). With such information, further categories and training strategies can be established and scheduled. For instance items that repeatedly appear in a similar fashion within a scanned image leading to false positives if they were inspected quite quickly may be a candidate for taking a bit more time to be inspected.
  • Figure 7 shows a possible schema for an image database (an illustrative sketch of such a record layout is given after this list). Histograms of inspection data (e.g. time) versus images of tagged items can be used to improve the quality of the data and the respective metadata. Information about user behavior that shows switching between different image presentations and/or applying different types of image processing means before an item is tagged for further inspection is also of interest.
  • Such a database of images and related metadata can be used to improve image recognition systems that may support the first inspection process as they form an ideal ensemble of data for machine learning technologies applied to such image recognition systems.
  • Figure 8 shows a system in which an automated visual inspection module 86 is applied to the inspection process to run autonomously as an additional step in parallel with a human visual inspection.
  • the automated inspection process in module 86 identifies an item as potentially to be tagged for further inspection and displays it to the human operator on screen 12 requiring him to analyze the image with a high degree of attention.
  • the automated and human types of inspection are run independently from each other to qualify both processes. If the automated and the human visual inspection show different results, as compared by a comparison module 90, then the baggage will be separated and the second operator's physical inspection finally shows which type of inspection was correct (a minimal sketch of such a comparison step is given after this list). This may help to improve both the database and machine learning application of the image recognition as well as the quality and performance of the first operator.
  • Figure 9 shows a scan 8 where the marked area 130 denotes an area that attracted more human inspection time and attention (based on the camera/gaze tracking/eye tracking information) than the automated image recognition module had expected.
  • the automated system may have expected various areas to receive particular amounts of attention. If, after human operators have done their analysis and the results are fed back into the machine learning training process, the actual amounts of attention differ, then the algorithms can use this information to improve their understanding. By such machine learning technologies the automated image recognition can learn from the data of the human inspection that an item with that shape and material has received more attention, and was potentially tagged by the inspector accordingly.
  • a third aspect of the invention provides a cloud-based baggage inspection system.
  • the third aspect of the invention also benefits from the library (database 46) of images of items that were tagged for further inspection, true positives and false positives, and their respective metadata.
  • a cloud-based baggage inspection system as shown in Figure 10 having a server 100 including a training and certification module 102 as well as an inspection module 104 for remotely connecting to a baggage inspection system.
  • the inspection module could alternatively be in a different server/location than the training module.
  • the purpose of the training and certification module 102 is to provide an offline training, qualifying and certification process for personnel that can later be deployed as remotely working operators 106a, b, c, ... N forming a cloud-based baggage inspection system.
  • Offline means here that the user of the system that is trained and/or certificated need not be connected to a real baggage inspection system.
  • the software rather simulates a baggage inspection system.
  • the software may be a Software-as-a-Service (SaaS) system. That means that the user logs into the software which runs on a server at a datacenter for instance. The user has to subscribe to the system in advance so that the software collates mandatory information and can create a user profile.
  • the software might have interfaces to external databases, like governmental databases, to retrieve and/or check security-relevant information about the user. This might be necessary to accomplish an approval process for the user and to allow him, after a positive approval, to access the system.
  • the software might have an interface that is able to demand and retrieve certificates of conduct or police clearance certificates.
  • a training and qualification process starts.
  • the library 46 described by the second aspect of the invention is in particular suited to be used for that purpose as metadata of all the images are known.
  • the software knows what images were displayed to the user and therefore what images were used for training and how the user performed on the training material.
  • the system also knows what kind of inspection levels the user has already worked successfully on. As described above, there are different levels of how difficult it is to detect an item and also different security levels that can be applied to an inspection process. Such variation can be purposely applied to the user's training so that the level of performance can be determined and recorded accordingly.
  • the training and the certification process of a user can be accomplished by considering the same conditions that a standard human operator of a "real" inspection system will face during his work, i.e. the time and frequency of the images that are presented to the user can be configured and changed accordingly. By starting without time limits when screening the image a user can gain skills and experience. Direct feedback on the user's decision and clarification of whether the user has made a correct or incorrect decision supports the training process. Variation and enhancement of the level of difficulty can be applied so that the user passes through the qualification process.
  • a training strategy can be developed to allow for efficient education.
  • the system is therefore able to configure and schedule respective qualification and certification processes.
  • Knowledge about how the inspection was accomplished leading to false positives and true positives can also be used as well as the respective knowledge leading to false negatives and true negatives.
  • All this information can be stored in a user profile to record the status of the qualification of the user at any given time.
  • Once a user has been approved as being qualified he may be allowed to subscribe to the second module, which is the cloud-based baggage inspection system module 104.
  • a second subscription might be necessary to reflect individual needs of the inspection system provider.
  • the subscription may allow the user to choose certain time periods for which he intends to offer his work.
  • the system can proactively call the user and require his service.
  • the invention here can be based on a theoretically unlimited number of operators.
  • the system can configure not only the amount of operators which are connected and which have to inspect the images, it can also control the qualification of the ensemble of operators which are appointed to the individual inspection system.
  • the results 108a, b, N of the pool of operators can be collated and analyzed by an aggregation server 110, and the summarized decision used to determine if an article is to be tagged for physical inspection. It is acknowledged that theoretically the cloud inspection system with a large amount of remotely working operators generally allows for statistical evaluation.
  • Such systems are in a first instance suited to support the more-eyes-principle and therefore are in particular suited to enhance and improve the quality of the inspection process:
  • the likelihood of missing items to be tagged according to a security level can dramatically be reduced, i.e. the rate of false negatives can be minimized if not almost excluded.
  • a standard system with one operator is in general not able to detect false negatives which is a real weakness of the system.
  • the invention here though is able to detect false negatives, as many operators accomplish the inspection so that false negatives of individuals are more likely to be detected.
  • the rate of false negatives of an individual operator can therefore be determined not only from training material but also during real inspection.
  • the system allows for further evaluation and in particular for an evaluation of the performance of each single operator - more or less in real-time: If the system detects that one individual operator currently induces false negatives (because the system knows from the other operator's analysis that the image includes an item representing a true positive), then the system can directly react and replace the operator.
  • the level of consistency throughout the ensemble of operators can be used to analyze how successful the training and the certification process is. If the evaluation shows an increased level of inconsistent results, the system can define and apply respective actions.
  • the operation of the inspection system can be managed: for instance, for a given security level a number of 10 operators may be mandatory to operate the system. With such a number of operators the belt may have a standard speed to allow for one baggage article every 10 seconds. By increasing the number of operators the speed of the belt can be increased accordingly, as the number of inspections scales and a defined confidence value can be maintained correspondingly. With the ability to have a much larger amount of inspection results, and with the ability to sort out baggage items that might be candidates for further physical inspection, the process becomes very uniform and seamless.
  • Such a cloud-based inspection system may need particular safety requirements: the operators should not know which inspection system they currently operate, and an ongoing monitoring process of how the operator performs may be applied. Only with evenly working and performing users of the system can the cloud-based inspection reveal its power; i.e. slow users create bottlenecks for fast users. Consequently an ongoing selection process of the operators/users assigned to one system could be applied and considered for the final inspection result. If one individual user simply does not perform, that user would only lower the quality and confidence values of the inspection, rather than contributing to it.
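
Purely as an illustrative sketch of the image database 46 discussed with reference to Figure 7 (the field names, the use of SQLite and all example values are assumptions, not part of the disclosure), each tagged item could be stored together with its inspection outcome and metadata about how the first operator reached the decision:

```python
# Hypothetical record layout for the image database 46: scanned image plus
# inspection outcome and metadata about how the tagging decision was reached.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE inspections (
        image_id              TEXT PRIMARY KEY,
        image_path            TEXT,     -- scanned image of the tagged item
        outcome               TEXT,     -- 'true_positive' or 'false_positive'
        security_level        TEXT,
        inspection_time_s     REAL,     -- dwell time before the item was tagged
        presentation_switches INTEGER   -- image-processing views used before tagging
    )
""")
conn.execute(
    "INSERT INTO inspections VALUES (?, ?, ?, ?, ?, ?)",
    ("scan_0042_item_3", "scan_0042_item_3.png", "false_positive", "enhanced", 4.2, 3),
)
# e.g. items that were tagged very quickly and later turned out to be false positives
rows = conn.execute(
    "SELECT image_id FROM inspections "
    "WHERE outcome = 'false_positive' AND inspection_time_s < 5"
).fetchall()
print(rows)
```

Records of this kind, grouped by outcome, are also the form of labelled data that machine learning technologies applied to the image recognition system could consume.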
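As a minimal sketch of the comparison step of Figure 8 (the function name and the decision encoding are assumptions), the comparison module 90 could escalate any disagreement between the automated and the human inspection to the second operator's physical inspection:

```python
# Hypothetical comparison step: automated and human inspections run
# independently; any disagreement is resolved by the physical inspection.
def compare(automated_tag: bool, human_tag: bool) -> str:
    if automated_tag == human_tag:
        return "tag" if human_tag else "pass"
    # disagreement: the physical result later feeds back into ML training
    # and into the assessment of the first operator
    return "escalate_to_second_operator"

print(compare(automated_tag=True, human_tag=True))    # -> tag
print(compare(automated_tag=False, human_tag=True))   # -> escalate_to_second_operator
```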

Abstract

There is disclosed a computer system for monitoring an operator when inspecting displayed images captured of sequentially presented physical articles, the system comprising: monitoring means configured to capture the visual engagement behaviour of a first operator engaged in inspecting the displayed image to generate an inspection outcome, wherein the inspection outcome causes selected ones of the physical articles to be tagged for a second inspection procedure based on identifying an item of interest in the image; and computer storage means configured to store inspection data defining the visual engagement behaviour in association with an image identifier which identifies the displayed image and tagging data which indicates whether the article was tagged for further inspection.

Description

Improving the quality and power of baggage inspection systems
The invention relates to improving the quality and power of visual inspection systems, such as for baggage. It relates particularly, but not exclusively, to supporting and qualifying human operators of those systems.
Background of the invention
Baggage inspection systems can be generally described by a scanner that scans articles of baggage on a moving belt by means of x-ray sources and provides an image of all the items within the baggage. The image is displayed to an operator, e.g. on a screen. The system uses radiation sources with different power to be able to scan materials with different absorption levels. The measurement reveals the outer shape of the various items and classifies them into subsets of items of certain groups of material. Items assigned to the same group of material, like organic materials, non-organic or metallic items, are represented in the image with the same color. Displaying the groups of materials by different colors helps the operator to quickly assess what items are in the article of baggage and whether or not there are items which need further inspection by a second human operator.
There might also be different types or means of image processing tools that can be applied to the scanned image and used by the operator. For instance, a simple example is contrast enhancement or noise reduction, which can help the operator. Further processing of supplementary images that may originate from screening the baggage with different power levels can also help to reveal items and display them even if their absorption is overlaid by the absorption from other items.
A capability of the inspection systems may be that the operator can switch between a set of different presentation types of the same result (in terms of different image processing means) or between results from different power levels and corresponding processing.
The purpose of an effective inspection system is to detect threat items within the baggage. It is therefore the task of the human operator to inspect the scanned images provided by the system and to assess whether or not the baggage contains items which need further physical inspection. If the operator decides so for a certain article of baggage, it gets separated from the queue and is handed over to a second operator who accomplishes a second physical inspection. In the following description, embodiments of the invention provide improved baggage inspection systems, and techniques for training operators of such systems. The systems are typically located at airports, but systems for other areas of operation and checkpoints are also considered.
According to one known training technique, images of threat items are superimposed onto the image of screened baggage. This is a method which is used to train operators and also to certify them. Libraries of images of threat items to superimpose into scanned images are used for this purpose. The number of such superimpositions into images, as well as the frequency with which those items are superimposed, represents variables which allow for setting up training strategies but might also be used to certify operators for different security levels. The library includes images of threat items with different orientations. By superimposing them into the scanned image, the operator learns to identify, for instance, not only a knife from its side view but also from its head view.
Summary of the Invention
It is the aim of aspects of the invention to provide a solution that helps to determine, highly objectively and by automated means, whether and how an operator is accomplishing the task of inspecting scanned images of baggage items.
Another aspect of the invention is to use the results of the inspection process and to populate an image library that provides a database for additional technologies that may be used to support the manual inspection process afterwards.
A third aspect of the invention describes a solution of a cloud inspection system. One part of the system allows for training personnel offline and for sufficiently qualifying them so that they can be appointed as staff members. The second part of the system allows the number and the respective qualifications of remotely working operators to be flexibly configured and determined in consideration of individual needs at the inspection systems. All aspects of the invention described here are intended to be generally combinable.
In one technique for allowing for analysis of operator behavior, operator gaze trajectories are measured to evaluate/analyze the actual inspection process. This may use additional cameras, for example with face/gaze and other tracking technologies. Such a technique has the advantage of determining the actual behavior of an operator. A relationship between the operator's attention towards the screen and external disturbances/working environment can be determined. A relationship between operator scan decisions (negative/positive) and gaze distribution/gaze pattern can be determined. A dependence between operator performance and attention over time, and within shift time, can be determined.
In one technique for allowing for analysis of operator behavior, inspection trajectory is measured to quantify and qualify each individual inspection process. Eye-tracking technologies may be used to measure an operator's gaze trajectory during an image inspection process. Such a technique has the advantage of providing a detailed insight into each actual inspection process, for example: (i) did the operator inspect all areas of the image and to what extent? (ii) what eye movement behaviors are indicative of a true positive/negative decision, or a false negative? Such a technique also has the advantage of providing direct feedback to the operator if the inspection process was not accomplished according to specified guidelines.
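Purely as an illustrative sketch of how such an evaluation could be implemented (the function and variable names, the fixation radius and the 70% threshold are assumptions, not part of the disclosure), the fraction of the scanned image covered by a measured gaze trajectory can be estimated and the inspection flagged when coverage is too low:
```python
# Illustrative coverage check: how much of the scanned image did the
# operator's gaze trajectory actually cover?
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class Fixation:
    x: int            # pixel column of the gaze point on the scanned image
    y: int            # pixel row of the gaze point
    duration_ms: float


def coverage_score(fixations: List[Fixation],
                   image_shape: Tuple[int, int],
                   radius: int) -> float:
    """Fraction of image pixels lying within `radius` of any fixation."""
    h, w = image_shape
    covered = np.zeros((h, w), dtype=bool)
    yy, xx = np.mgrid[0:h, 0:w]
    for f in fixations:
        covered |= (xx - f.x) ** 2 + (yy - f.y) ** 2 <= radius ** 2
    return float(covered.mean())


# Example: a 600x800 scan, two fixations, assumed alert threshold of 70% coverage.
fixations = [Fixation(100, 150, 420.0), Fixation(500, 300, 610.0)]
score = coverage_score(fixations, (600, 800), radius=40)
if score < 0.70:
    print(f"insufficient inspection coverage: {score:.0%}")
```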
In one technique for allowing for analysis of operator behavior, inspection trajectory is measured to determine qualitatively and quantitatively the individual operator's performance. Eye-tracking technology may be used to measure the trajectory of the image inspection process and to calculate operator/group/benchmark metrics. Such a technique has the advantage of providing a detailed insight into each operator's performance. Performance can be benchmarked against other operators to detect anomalies (fatigue, need for training etc.). Insight into the "inspection pattern" related to false positives and false negatives may be given. Such a technique also has the advantage of deriving new training methodologies to address trending inspection problems.
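A minimal sketch of how one operator's metric could be benchmarked against the rest of the team to detect anomalies is given below; the chosen metric, the example values and the two-sigma threshold are assumptions:
```python
# Illustrative benchmark: flag an operator whose dwell time deviates
# strongly from the rest of the group (possible fatigue or training need).
import statistics

team_dwell_time_s = {          # mean dwell time per image, by operator
    "op_a": 6.1, "op_b": 5.8, "op_c": 9.7, "op_d": 6.4, "op_e": 5.9,
}

def anomalous(operator: str, metrics: dict[str, float], n_sigma: float = 2.0) -> bool:
    others = [v for k, v in metrics.items() if k != operator]
    mean = statistics.mean(others)
    stdev = statistics.stdev(others)
    return abs(metrics[operator] - mean) > n_sigma * stdev

for op in team_dwell_time_s:
    if anomalous(op, team_dwell_time_s):
        print(f"{op}: dwell time deviates from the group benchmark")
```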
In one technique of a computer assisted inspection method, a computer vision machine learning algorithm is trained to recognize key visual features that contribute to correctly escalated scan subjects. Eye-tracking/gaze tracking data from a set of image assessments for which the true assessment is known may be used (e.g. test images, real data on images of baggage items that were escalated to a physical inspection). Such a technique has the advantage of using dwell time, gaze behaviors and other metrics to identify areas of an image that are of interest to a human operator when making an inspection decision, and of using these as training data for the machine learning algorithm. Such a technique also has the advantage that a computer vision system trained in this way will be able to assist human operators on future real-world scans for which the results are not yet known.
In one technique of a computer assisted inspection method, machine learning technologies are used to deliver a semi-automated inspection process. A machine learning algorithm may be trained to provide first-line evaluation of scanner images. Real-time human-operator behavior and decisions may be used to incrementally improve machine learning accuracy. This technique has the advantage that a large quantity of training material that arises from day-to-day scanning by human operators is ideal for effective ML training. This technique also has the advantage that, dependent on the confidence status of the techniques, ML-assisted inspection can be used to support the human operator inspection process and to pre-emptively escalate items that need more detailed inspection. The technique has the further advantage, when using a distributed scanning team, that images that the machine determines to be suspect may be directed to more qualified human operators and/or a large number of operators overall to ensure a high-confidence decision.
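The following sketch illustrates, under strong simplifying assumptions, how gaze data from assessments with known outcomes might be turned into training samples for a first-line classifier; scikit-learn is used purely as an example library, and the feature extraction, names, dwell points and random placeholder images are assumptions:
```python
# Illustrative pipeline: dwell regions from escalated scans with known
# physical-inspection outcomes become labelled training samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def region_features(image: np.ndarray, x: int, y: int, size: int = 32) -> np.ndarray:
    """Mean and standard deviation of the image patch the operator dwelt on."""
    patch = image[max(0, y - size):y + size, max(0, x - size):x + size]
    return np.array([patch.mean(), patch.std()])

# each record: (scan image, dwell point (x, y), true assessment after physical check)
records = [
    (np.random.rand(600, 800), (120, 340), 1),   # escalated and confirmed
    (np.random.rand(600, 800), (400, 100), 0),   # escalated but cleared
    (np.random.rand(600, 800), (250, 500), 1),
    (np.random.rand(600, 800), (700, 60), 0),
]

X = np.stack([region_features(img, x, y) for img, (x, y), _ in records])
y = np.array([label for _, _, label in records])

model = LogisticRegression().fit(X, y)          # hypothetical first-line evaluator
print(model.predict(region_features(records[0][0], 120, 340).reshape(1, -1)))
```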
In one technique which preferably is for handling threat responsiveness through a network, a network of remote operators can provide an easily scalable inspection capability. Aggregation of multiple operator decisions for a single item of baggage may be used. Delivery of scanning images securely to remote/offsite operators for real-time inspection may be used. There are multiple advantages associated with this technique. The accuracy of each aggregated scanning decision may be increased. An operator's performance value may be determined, such as the closeness to consensus decision. If the individual performance value goes below a threshold, the operator's decision can be re-weighted or even excluded - the operator may need a break or additional training. The working environment of operators is improved. Throughput with positive impact on capex and opex is improved.
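A minimal sketch of such an aggregation, in which each operator's decision is weighted by an assumed performance value and low-performing operators are excluded, could look as follows (all names, weights, the threshold and the decision encoding, 1 = tag for physical inspection, are assumptions):
```python
# Illustrative weighted consensus over several remote operators for one bag.
def aggregate(decisions: dict[str, int],
              performance: dict[str, float],
              min_performance: float = 0.5) -> int:
    # exclude operators whose performance value has dropped below the threshold
    active = {op: d for op, d in decisions.items()
              if performance.get(op, 0.0) >= min_performance}
    tag_weight = sum(performance[op] for op, d in active.items() if d == 1)
    pass_weight = sum(performance[op] for op, d in active.items() if d == 0)
    return 1 if tag_weight >= pass_weight else 0

decisions   = {"op_a": 1, "op_b": 0, "op_c": 1, "op_d": 1}
performance = {"op_a": 0.9, "op_b": 0.8, "op_c": 0.3, "op_d": 0.7}  # op_c excluded
print(aggregate(decisions, performance))   # -> 1: tag for physical inspection
```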
In one technique which preferably is for handling threat responsiveness through a network, the effectiveness of a distributed scanning team is improved and a configurable level of scanning confidence delivered. This may use determination of individual operator performance profile. This may use an automated scanning team assembly to provide appropriate capacity and desirable skill level to meet particular scanning needs. There are multiple advantages associated with this technique. With individual performance values, the likely scanning confidence value and capability vector of a particular selection of operators, formed into an ad-hoc team, can be calculated. Using distributed teams means that such groups can be assembled quickly as demand dictates making scanning activities more efficient and flexible. The technique allows, e.g., for the possibility to define desirable scanning profiles per flight, per airline, per passenger demographic or threat potential or at any other level of granularity.
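As an illustrative sketch, an ad-hoc team could be assembled greedily until an assumed detection model reaches the required confidence; the accuracy figures, names and the independence assumption behind the confidence formula are illustrative only:
```python
# Illustrative ad-hoc team assembly against a required confidence level.
def team_confidence(accuracies: list[float]) -> float:
    miss = 1.0
    for a in accuracies:
        miss *= (1.0 - a)           # probability that every operator misses an item
    return 1.0 - miss

def assemble_team(pool: dict[str, float], required: float) -> list[str]:
    team, accs = [], []
    # greedily add the most accurate available operators first
    for op, acc in sorted(pool.items(), key=lambda kv: kv[1], reverse=True):
        team.append(op)
        accs.append(acc)
        if team_confidence(accs) >= required:
            return team
    raise ValueError("pool cannot reach the required confidence")

pool = {"op_a": 0.90, "op_b": 0.85, "op_c": 0.70, "op_d": 0.60}
print(assemble_team(pool, required=0.995))   # e.g. a high-security flight profile
```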
In one technique which preferably is for handling threat responsiveness through a network, a distributed scanning team is combined with computer-assisted inspection methods. Computer-assisted inspection methods may be included in a network of operators. This technique has the advantage of providing assistance for individual operators during the inspection process. This technique also has the advantage of the ability to analyze assisted operators and non-assisted operators to provide new insight into the inspection process and further input into the ML process.
In one technique which preferably is for handling threat responsiveness through a network, the proficiency of an operator is improved by providing real-time training. A remote operator may be provided with real-time scanning images to assess in training mode. The operator responses may be compared to the consensus decisions from live operators to determine a level of proficiency. This technique has the advantage of the provision of training and certification capabilities "without overhead" - results can be taken for the actual inspection as well as solely for training purposes. This technique also has the ability to scale the process according to individual needs: opex and capex burdens can be improved.
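A minimal sketch of how a trainee's proficiency could be expressed as agreement with the live consensus decisions is shown below; the identifiers and the decision encoding are assumptions:
```python
# Illustrative proficiency measure: fraction of shared images on which the
# trainee agrees with the consensus of the certified live operators.
def proficiency(trainee: dict[str, int], consensus: dict[str, int]) -> float:
    shared = [img for img in trainee if img in consensus]
    if not shared:
        return 0.0
    agreed = sum(1 for img in shared if trainee[img] == consensus[img])
    return agreed / len(shared)

consensus = {"bag_001": 0, "bag_002": 1, "bag_003": 0, "bag_004": 1}
trainee   = {"bag_001": 0, "bag_002": 1, "bag_003": 1, "bag_004": 1}
print(f"proficiency {proficiency(trainee, consensus):.0%}")   # 75%, below an assumed bar
```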
In an inspection monitoring arrangement a first camera can be used to monitor the attention of the operator to the screen, thereby giving an indication of operator attention, and a second camera can be used to monitor the environment in which the operator operates, thereby giving an indication of operator disturbance. Graphs of operator attention and operator disturbance can be plotted, which can show where there is a correlation between disturbances and lack of attention, or where lack of attention has occurred without there being any disturbance. A graph of accuracy and throughput over time can also be created, and can be compared with the graph of operator attention and operator disturbance to see correlations between, e.g., low accuracy and/or throughput, and operator disturbance and/or accuracy. Operator eye-tracking may be monitored to determine an alert corresponding to insufficient operator attention. For example, if the eye-tracking indicates that an operator has inspected below a certain percentage of the baggage area displayed, or a number of items less than the number of items displayed for the baggage, an alert may be flagged to indicate that the operator's check is insufficient.
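Purely as an illustration, per-minute attention values from the first camera could be correlated with disturbance events from the second camera, and low-attention periods without any disturbance flagged separately; the sample values, the 0.8 attention floor and the use of a simple Pearson correlation are assumptions:
```python
# Illustrative correlation of operator attention with observed disturbances.
import numpy as np

attention   = np.array([0.95, 0.92, 0.60, 0.55, 0.90, 0.70])  # fraction of time on screen
disturbance = np.array([0,    0,    1,    1,    0,    0])     # disturbance present?

r = np.corrcoef(attention, disturbance)[0, 1]
print(f"attention/disturbance correlation: {r:+.2f}")

# minutes where attention dropped although no disturbance was observed
unexplained = [i for i, (a, d) in enumerate(zip(attention, disturbance))
               if a < 0.8 and d == 0]
print("unexplained low-attention minutes:", unexplained)
```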
An operator's performance may be analysed over time, for example over an operator shift. An indication of the percentage of the area of baggage which is checked over time, a percentage of the number of items in baggage which are checked over time, a percentage (or number) of threatening items which are checked over time, and the average time spent checking an item of baggage may all be accumulated for an operator. A benchmark or threshold may be applied to one or all parameters, with the performance dropping below the benchmark indicating a potential alert condition.
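The following sketch shows one way such figures might be accumulated over a shift and compared against benchmarks; the metric names, benchmark values and example figures are assumptions:
```python
# Illustrative shift accumulator: alert when a running average drops
# below its benchmark.
from statistics import mean

benchmarks = {"area_checked": 0.85, "items_checked": 0.90}
shift_log: dict[str, list[float]] = {"area_checked": [], "items_checked": []}

def record(area_checked: float, items_checked: float) -> list[str]:
    shift_log["area_checked"].append(area_checked)
    shift_log["items_checked"].append(items_checked)
    return [m for m, values in shift_log.items() if mean(values) < benchmarks[m]]

record(0.95, 0.97)
record(0.90, 0.93)
alerts = record(0.55, 0.60)          # attention dropping late in the shift
print("below benchmark:", alerts)    # -> ['area_checked', 'items_checked']
```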
Baggage inspection may be achieved in association with machine learning. A machine is trained based on an assessment of captured images over time. These captured images are used to assist subsequent baggage inspection. The system may take the database of results of operators' inspections, which includes images of such items that were correctly inspected as being threat items (true positive results). This information may be stored as metadata for such images in the database. Those images of items are used to train machine learning so that the appearance of such items within subsequent baggage inspections can be automatically detected and, e.g., highlighted to assist the operator's baggage inspection.
Network solutions are provided. By providing a plurality of operators remotely inspecting baggage images, the number of items per minute and the performance value may be increased, whilst the number of false positives and false negatives can be decreased, based on an aggregated assessment. The system may provide individual and collective consensus. Where multiple operators inspect the same images, the analysis of individual operators may be weighted, for example based on an assessment of their individual performance. This may be to weight their analysis to be more predominant within the group, or at the other extreme to eliminate their analysis from the group. The monitoring of an individual performance within a group may be used to determine when an individual operator should take a break, for example when their individual analysis is farthest away from the group consensus.
The proposed solution enables an improved and detailed determination of operator performance as well as of individual capabilities. For example, one operator may work very accurately on certain materials such as fluids or electrical items within a baggage item. Others might show outstanding throughput at a given accuracy level, etc. This can be further used to configure applicable inspection requirements. For instance a certain airline may always require operators with the best accuracy level, whereas it accepts a lower throughput. It can also be used if the overall traffic at the airport increases, so that a need arises for operators with high throughput values.
The system may combine individual operator inspection, remotely working operators, as well as autonomous inspection based on machine learning image inspection capabilities.
The system may also facilitate training within a group-based system. Where multiple operators monitor baggage, the identification of an individual analysis as being farthest away from the group consensus can be used to indicate a need for training of that individual operator within the group. The system may also provide the capability to incorporate training for an operator, in particular for uncertified operators (trainees), into the real processes of baggage inspection rather than only into processes set up for training. The capability to determine real-time data, performance values and consensus decisions for operators that are certified and in charge of inspecting baggage allows training in a real environment without overhead effort, to measure uncertified operators and consequently to determine and improve their individual performance until a certification can be issued.
Brief Description of the Drawings:
The invention is now described by way of example with reference to the accompanying drawings, in which:
Figure 1 is an example schematic diagram of a baggage inspection system;
Figure 2 is an example flow chart which shows a process operating in a typical security context;
Figure 3 is an example schema showing how the first operator operates;
Figure 4 illustrates an example system implementation for use by a first operator;
Figure 5 illustrates an example of a scanned image;
Figure 6 illustrates an example system implementation for use by a second operator;
Figure 7 shows an example possible schema for an image database;
Figure 8 shows an example system;
Figure 9 illustrates an example scan; and
Figure 10 illustrates an example cloud-based baggage inspection system.
Figure 1 is a schematic diagram of a baggage inspection system. A scanner 2 scans sequentially presented articles 4 of baggage which are arranged in a queue on a moving belt 6. The scanner displays an image 8 of the article of baggage being scanned on a screen 12 which is visible to a human operator 14. The human operator inspects the image and seeks to identify items of interest in the image which he believes need further inspection according to specification or assessment criteria, which ultimately are items that might constitute a threat. As already mentioned, the system may use radiation sources with different power to be able to scan materials with different absorption levels. Different materials may be presented in different colours, with the same material being presented in the same colour for different items (see, for example, 10b and 10c). In the above it is described that a scan gives rise to an image. In implementations a scan may give rise to a set of images. A set of images may show, for example, different perspectives of an article or item.
The purpose of the inspection is to detect threat items within the article of baggage 4. Figure 2 is a flow chart which shows a process operating in a typical security context. A new bag arrives at step S2. At step S4 an X-ray scanner creates a scanned image 8.
At step S6 the operator 14 indicates whether or not he has identified any suspicious items. If he has, the bag is passed to a second operator for manual inspection at step S8. If he has not, the system is set ready for the next bag at step S10. It will readily be appreciated that the X-ray scan and the manual inspection may be carried out in different time frames and with different queues. That is, a number of bags may be scanned while a number of bags are queued for the second operator for manual inspection. After step S8, the system is set ready for the first operator for the next bag at step S10.
For definition purposes the operator who is in charge of screening the scanned image is further referenced as the operator or the first operator, whereas the operator who is in charge of accomplishing a physical inspection of a baggage item that was separated by the first operator is further referenced as the second operator.
A matrix of results is defined which is used as a reference in the following description. The first operator 14 is required to assess whether or not an article of baggage 4 is to be passed for further physical inspection. That might be along a set of criteria defined at an on-site management level, i.e. there might be various security levels that have to be applied to the inspection and consequently considered by the operator. A basic requirement for the operator is of course to sort out any article of baggage based on identifying items in the image that are clearly defined threat items. Additionally the operator may be generally required to sort out any baggage that appears to contain liquids. To identify liquids the operator may search for items that are displayed as organic items with a certain size and shape. Figure 3 is a schema showing how the first operator operates with assessment criteria and scanned images to generate potential warnings by tagging an article for further inspection. Tagging an article can encompass identifying the suspected threat item(s) within the image. Enhanced security levels may be scaled by the level of certainty (or rather, uncertainty) as to the determination of the items within the baggage. Enhanced security levels may also be scaled by widening the specification of items that need further inspection, and consequently by increasing the number of items to be detected within the baggage. For instance an enhanced security level may require the operator to search for metallic items that have a certain size and tag those articles of baggage as well. This may be due to the risk that a metallic item is capable of shielding other items from being inspected, e.g. a metallic environment can shield smaller non-metallic items.
One variation or a subset of such enhanced security levels may be defined by the size of metallic items that an operator identifies within the article of baggage.
Another enhancement of the security level may be to require the operator to sort out baggage items that contain any type of non-metallic items that cannot clearly be identified. Another security level may require the operator to search for electrical items so that they can be physically checked by the second operator. Smaller smart devices or power packs, as well as electrical items for personal body care within a baggage item, may be examples of items that are required to be the subject of further inspection.
Here, it is the intention to acknowledge that the operator of the inspection system is required to search for and identify items that need further inspection, and that there may be different sets of criteria to be applied. Those different sets represent different security levels of the inspection.
For reference purposes the various results of the inspection of the first operator are defined herein. If the operator identifies one or more items within the baggage that need further physical inspection, e.g. by the second operator, then this is described as a group of positive inspections.
One can distinguish the first case, in which the second operator confirms that the identified item in fact represents an item that the first operator was required to pick out in accordance with the applied security level (whether or not the item is a threat item). This case represents a true positive or, for reasons of simplicity, a positive inspection. If the physical inspection reveals that the first operator identified an item incorrectly, i.e. the identified item does not represent an item that was required to be picked out by the operator in accordance with the applied security level, then this is further referenced as a false positive. Accordingly, if the operator decides to pass through the article of baggage because he does not identify any item to be further inspected, then this is described as a group of negative inspections.
Consequently here a case is referenced as a false negative in which the operator does not identify an item which actually needed further inspection (according to the applied security level) and which was indeed required to be picked out.
Finally an inspection that is correctly passed through as it does not contain any item that needs further inspection according to the security level will be further referenced as a true negative or for reasons of simplicity a negative.
For reasons of clarity it is mentioned that this terminology is applied to each individual item and its respective inspection, whether or not multiple items are within one baggage article. Similarly, Table 1 below just refers to one item within a bag and the respective responses from the first and second operators on this item.
With this description one can distinguish also between the operators that do and do not successfully accomplish their task:
A "non performer" is therefore an operator who often picks up on items as potentially needing further inspection but which actually, after further inspection according to the applied security level, are assessed to be items that would not have needed further inspection - someone who induces many false positives. Similarly and actually even worse from a security perspective, a non-performing operator also induces false negatives.
Consequently a performer is an operator who has a high rate of true negatives as well as true positives - he accomplishes his task successfully in accordance with the individually applied security level.
TABLE 1 (reproduced in the original publication as images imgf000014_0001 and imgf000015_0001; not recoverable as text): for a single item within a bag, the table relates the first operator's response (item tagged or not tagged) and the second operator's physical inspection result to the four categories defined above - true positive, false positive, false negative and true negative.
These definitions assume that a physical inspection by the second operator is always definitive - i.e. the second operator is never wrong. Note that the breakdown between true and false negatives for "real baggage" cannot be measured, as the confirmation from the second operator does not exist. The performance rate on false negatives in a standard system can only be determined by training material.
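For illustration only, the four result categories defined above could be counted from inspection records along the following lines; all names are assumptions, and false negatives are only observable where the true status of an item is known, e.g. from injected training material.

```python
# Illustrative sketch only: derives the per-operator result counts defined above
# from records of (first operator tagged the item?, item actually needed further
# inspection?). Names are hypothetical.
from collections import Counter


def classify(tagged_by_first: bool, needs_inspection: bool) -> str:
    if tagged_by_first and needs_inspection:
        return "true_positive"
    if tagged_by_first and not needs_inspection:
        return "false_positive"
    if not tagged_by_first and needs_inspection:
        return "false_negative"
    return "true_negative"


def operator_rates(records: list[tuple[bool, bool]]) -> Counter:
    return Counter(classify(tagged, needed) for tagged, needed in records)


if __name__ == "__main__":
    # (first operator tagged the item, item actually needed further inspection)
    records = [(True, True), (True, False), (False, False), (False, True)]
    print(operator_rates(records))
```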
This terminology already indicates that the qualification and experience of the operator are highly critical to fulfilling his duty, and in particular that the current condition of an operator is a major factor in how well the task can be accomplished - whether he/she is a performer or a non-performer. This relates both to the duty to detect and identify any item within a bag that needs further inspection according to the security level, and to the duty to prevent false positives as far as possible, since false positives increase the effort and delay the security check in general. In particular with different security levels and situations, e.g. with a high number of passengers and therefore items to be checked, this becomes more and more significant.
Embodiments of the invention provided herein can target the current performance of an operator, as compared with known training techniques which only reveal something like a general level of performance and quality of the operator. In the training technique outlined in the Background section, no further detailed analysis is accomplished and applied that indicates why some operators prove to be performers and some to be non-performers - in particular performing or non-performing on a comparable image.
It is highly beneficial to provide a solution that helps to understand what is needed to recognize items within an article of baggage as items that need further inspection according to the various sets of security levels. By understanding what a performer did to achieve such a rate of true positives as well as true negatives, one can support the training process of new operators as well as the ongoing training strategies of the existing staff. In addition, by tracking against those criteria one can monitor the actual and current performance of an operator. It is clear that false positives are recognized and captured, as the second operator who accomplishes the physical inspection directly reveals that information - a high rate of false positives therefore lowers the overall efficiency of the inspection process at a certain location.
But without further means it is not possible to assess what rate of false negatives an operator exhibits during his shift - a situation which is worse from a security perspective than the effect of false positives. It is possible to include training material into the queue of images that an operator has to inspect to test whether the operator does pick up those training items - but this is actually only a passive test rather than an active process to prove the actual and current performance of the operator on each image.
It is the aim of two aspects of the invention to solve this problem and to provide means for analyzing and supporting the inspection process of an operator on each image. This can be applied to ensure a minimized rate of false negatives and therefore to support and improve quality throughout the various safety levels.
Simultaneously, aspects of the invention assist the operator in preventing false positives and consequently improve the efficiency of the inspection process.
One aspect of the invention provides an automated analysis of a manual inspection process to provide feedback for improving the manual inspection process.
This aspect of the invention provides a solution which allows for a continuously running verification of whether the operator performs within a defined and acceptable quality level. To achieve this a visual analysis of the operator's inspection performance is introduced. Depending on the visual analysis tool used, different information can be obtained. In a first embodiment shown in Figure 4, a camera system is introduced that has a camera 16 to capture the operator 14 while screening the scanned images, in particular the face of the operator. The system also comprises a computer 18 acting as an image processor and data store. With such a system it can be analyzed whether or not the operator is in fact inspecting and examining the image of the scanned baggage. The computer can enable face tracking and calculation processes 20 so that the image of the operator 14 captured by the camera system can be used to generate some clear outputs relating to the operator's engagement with his inspection task:
+ Did the operator look at the screen. + By synchronizing the timelines of the scanned image 8 appearing on the operator's screen 12 of the inspection systems together with the camera image showing the operator, a solution can determine for what time the operator has focused on the scanned image 8. By combining the camera system with further technologies such as gaze tracking 22 a more sophisticated and precise analysis of what the operator has inspected becomes available.
By introducing further visual analysis tools like eye tracking 24, further steps can be made in sophistication of analysis of the operator's engagement behavior.
By tracking the operator's eyes the system can record a trajectory of the operator's inspection of the scanned image 8 and can do an overlay with the scanned image itself.
Analyzing the result of how the trajectory covers the scanned image, the system can generate further outputs like:
+ What items within the baggage were inspected.
+ How long did an operator review the various items. + Did the operator inspect all parts of the image.
Taking further information into consideration, like the timeline of the trajectory in combination with operator's behavior related to switching between different image presentations and the usage of image processing technologies, the system can analyze what the operator did before he identified an item for further inspection. As described above such a system may be used for analyzing what the operator is actually doing during the inspection.
This knowledge can then also be applied in a feedback mode 30 to support the continuously running inspection process.
That means that, in combination with the ongoing analysis, immediate feedback can be given to the inspection as to whether or not the operator fulfills defined criteria, which are stored in a criteria store 32. + Did the operator observe all segments of the scanned image. The trajectory analysis can make such a statement and send a message 28 accordingly. A segment that was not inspected at all, or not properly, may be tagged on the display 12 and the operator may be required to inspect that segment. Figure 5 shows a scanned image 8 with an area 80 inspected, as determined by eye tracking, and a feedback score 82 based on the proportion of the full image inspected (a simplified sketch of such a coverage score is given after this list).
+ The scanned image does not have to be divided into fixed segments. With solutions that already recognize areas of the same material, together with image recognition technologies, separated items may be already identifiable. The trajectory can be matched against those items and the system can analyze whether all items are inspected properly.
+ Inspecting an item or a segment properly is also related to the fact that some items may be more difficult to check than others. This is because items that appear with different orientations within an image may be sometimes easy to identify and sometimes much more difficult. Also, non-metal items may usually be much more difficult to identify than metal items and therefore need a higher degree of attention.
The system can match the trajectory of the operator's inspection process and its timeline against those items. This makes it possible to ensure that the operator in fact spent the needed time and the defined level of attention, respectively, on those items.
+ By monitoring how the operator performs against those criteria, the system may flag up when the quality falls below a certain threshold, e.g. requiring the operator to screen the image or the respective segment again, or even suggesting that the operator be replaced.
+ The variations of the capabilities of each operator during his shift can be recognized and consequently used to adapt and define pauses and interruptions in a personalized fashion.
+ This likewise allows adaptation to periods of high passenger volume. Similarly, periods of higher strain can be acknowledged and managed proactively.
+ Different levels of security can be applied and the analysis of the corresponding operator's behavior can be accomplished: Changing or enhancing the criteria may be possible. As the system can determine whether or not the operator does perform against those enhanced criteria, the solution can allow for raising a security level.
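The coverage-based feedback score mentioned in the list above (cf. the score 82 of Figure 5) could, purely as an illustrative sketch with hypothetical segment geometry and names, be derived from the eye-tracking trajectory as follows.

```python
# Illustrative sketch only: derives a coverage-style feedback score from an
# eye-tracking trajectory overlaid on the scanned image, and reports segments
# that received no fixation. Segment geometry and all names are hypothetical.
from dataclasses import dataclass


@dataclass
class Segment:
    name: str
    x0: int
    y0: int
    x1: int
    y1: int

    def contains(self, x: int, y: int) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1


def coverage_feedback(fixations: list[tuple[int, int]],
                      segments: list[Segment]):
    inspected = {s.name for s in segments
                 if any(s.contains(x, y) for x, y in fixations)}
    missed = [s.name for s in segments if s.name not in inspected]
    score = round(100 * len(inspected) / len(segments)) if segments else 0
    return score, missed


if __name__ == "__main__":
    segments = [Segment("top-left", 0, 0, 99, 99),
                Segment("top-right", 100, 0, 199, 99),
                Segment("bottom", 0, 100, 199, 199)]
    fixations = [(20, 30), (150, 40)]              # gaze never reached "bottom"
    print(coverage_feedback(fixations, segments))  # (67, ['bottom'])
```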
On top of this, the collation and compilation of such inspection data can be used to establish operator profiles which are held in an operator profiles store 34.
+ Statements of the average and peak performance capabilities can be analyzed and levels of performance can be consequently certified.
+ Knowledge about noticeable deviations between the defined criteria and the actual performance values can be used to set up an individual training strategy.
+ Additional evaluation of how an operator performed against training material also allows for analyzing what might be the origin of false negatives and/or what might be the reason why one operator induced a false negative whereas another operator successfully and correctly tagged an item when they inspected the same image.
+ All this can be evaluated and applied on different security levels so that deep and detailed knowledge can be established about how an operator can successfully perform on different security levels.
+ Finally experts for different security levels and/or for different types of items to be tagged during inspection can be evaluated and consequently certified.
Another aspect of the invention enables the results of a manual inspection process to optionally be used for enabling an automated inspection and/or improving an automated inspection.
As mentioned above a second aspect of the invention deals with a further step within the inspection process:
As described above the current process within the baggage security check includes the step of identifying the item within a scanned image that needs further physical inspection. This is done through an operator interface 13 which can be a processor or interact with a processor associated with the display 12. Consequently that piece of baggage gets tagged to be separated and is led to a second operator 40 (Figure 6) to check and confirm where appropriate the result of the inspection of the first operator. Existing inspection systems may therefore sometimes provide a screen 42 for the second operator on which an image 8a of the scanned baggage (corresponding to the scan 8) is displayed and the item 10c identified by the first operator is indicated (e.g. by highlighting/marking/tagging, etc.). That gives some sort of guidance to the second operator who has to do the physical inspection of the article of baggage 4 and helps to increase efficiency of that part of the process.
As already indicated and described by the first aspect of the invention, the second operator's inspection allows the confirmation of whether the inspection of the first operator is a true positive or a false positive. Consequently it can be used to provide the kind of feedback which initially allows for a correct analysis of the first inspection process as outlined above, and which is needed to assess whether the first inspection was accomplished correctly or not.
It can also be used for the benefit of an additional feature: By tagging an item that needs further inspection, in particular according to an applied security level, and checking it by the following physical inspection process, the system can be used to establish a database 46 of images that can be used for further purposes:
The images are stored together with the information whether they represented a false positive or true positive when inspected by the first operator. This information is entered into a computer 48 via input means 50. The scan is received from the scanner 2 or the display 42. Consequently the database 46 is enriched and reveals images for both categories.
Gradually the database 46 of images is established and can be used to support the inspection process but also the training and certification process of operators.
It is noted that items within an article of baggage can appear in various ways. Although the screening process should be able to identify the respective material in a reliable manner, the fact that items within a baggage article may be covered by other items, or that threat items in particular may be willfully shielded against reliable detection, is an issue to address.
This problem is known and included partly in training images already. Also the variation of the orientation is an issue that hinders the operator's inspection. A knife that appears by its side view is obviously easy to identify compared to a perspective of the item that just shows the head of the knife.
To improve data for training, the invention here not only allows a large database to be established from all the results of scanned baggage, it also links those images to the result of the subsequent physical inspection and to the categories of false positives and true positives.
Metadata of such images can be generated that allows for further processing and usage of the images. This can be information about the item itself, and also information about how the visual inspection process of the first operator was accomplished so that he finally decided to tag the item for further inspection (related to an applied security level). With such information, further categories and training strategies can be established and scheduled. For instance, items that repeatedly appear in a similar fashion within a scanned image and lead to false positives when they are inspected quite quickly may be candidates for spending a little more time on inspection. Figure 7 shows a possible schema for an image database. Histograms of inspection data (e.g. time) versus images of tagged items can be used to improve the quality of the data and the respective metadata. Information about a user behavior that shows switching between different image presentations and/or applying different types of image processing means before an item is tagged for further inspection is also of interest.
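As an illustrative sketch only, one possible relational layout for the image database 46 is shown below; the table and column names are assumptions and are not intended to reproduce the schema of Figure 7.

```python
# Illustrative sketch only: a possible relational layout storing each scanned
# image together with the first operator's tagging decision, the second
# operator's confirmation (true/false positive) and inspection metadata such
# as viewing time. All table and column names are hypothetical.
import sqlite3

schema = """
CREATE TABLE IF NOT EXISTS inspections (
    image_id              TEXT PRIMARY KEY,
    scan_blob             BLOB,      -- the scanned image itself
    security_level        INTEGER,
    tagged                INTEGER,   -- 1 if the first operator tagged the item
    confirmed             INTEGER,   -- 1 = true positive, 0 = false positive
    viewing_time_s        REAL,      -- time the first operator spent on the item
    presentation_switches INTEGER    -- image-presentation changes before tagging
);
"""

if __name__ == "__main__":
    con = sqlite3.connect(":memory:")
    con.execute(schema)
    con.execute("INSERT INTO inspections VALUES (?,?,?,?,?,?,?)",
                ("img-001", b"", 2, 1, 0, 4.2, 3))   # an example false positive
    print(con.execute("SELECT image_id, confirmed FROM inspections").fetchall())
```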
Such a database of images and related metadata can be used to improve image recognition systems that may support the first inspection process as they form an ideal ensemble of data for machine learning technologies applied to such image recognition systems.
It is known that machine learning technologies are as successful as the training material that feeds them. With an increased amount of images of items to be tagged in a whole set of appearances within a scanned image, and respective metadata about the underlying visual inspection process, an automated visual inspection process becomes more and more precise.
Similarly, the information about whether an image represents a true positive or a false positive is of particular value for an automated image recognition system which is supported by machine learning technologies.
Figure 8 shows a system in which an automated visual inspection module 86 is applied to the inspection process to run autonomously as an additional step in parallel with a human visual inspection.
For example, the automated inspection process in module 86 identifies an item as potentially to be tagged for further inspection and displays it to the human operator on screen 12, requiring him to analyze the image with a high degree of attention. In another configuration the automated and human types of inspection are run independently from each other to qualify both processes. If the automated and the human visual inspection show different results as compared by a comparison module 90, then the baggage will be separated and the second operator's physical process finally shows which type of inspection was correct. This may help to improve both the database and the machine learning application of the image recognition, as well as the quality and performance of the first operator. Figure 9 shows a scan 8 where the marked area 130 denotes an area that attracted more human inspection time and attention (based on the camera/gaze tracking/eye tracking information) than the automated image recognition module had expected. The automated system may have expected various areas to receive particular amounts of attention. If, after human operators have done their analysis and the results are fed back into the machine learning training process, the actual amounts of attention differ, then the algorithms can use this information to improve their understanding. By such machine learning technologies the automated image recognition can learn from the data of the human inspection that an item with that shape and material has received more attention, and potentially was tagged by the inspector accordingly.
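A simplified sketch of the comparison performed by module 90 is given below; the routing of disagreements to the second operator and the record returned for machine learning are illustrative assumptions, not a definitive implementation, and all names are hypothetical.

```python
# Illustrative sketch only: runs the automated and the human inspection outcome
# side by side, routes disagreements to the second operator's physical check,
# and returns a record that can be fed back into the machine learning training
# data. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ComparisonResult:
    image_id: str
    automated_tagged: bool
    human_tagged: bool
    needs_physical_check: bool
    physical_result: Optional[bool] = None   # filled in by the second operator


def compare(image_id: str, automated_tagged: bool, human_tagged: bool,
            physical_check: Callable[[str], bool]) -> ComparisonResult:
    result = ComparisonResult(
        image_id, automated_tagged, human_tagged,
        needs_physical_check=(automated_tagged != human_tagged))
    if result.needs_physical_check:
        result.physical_result = physical_check(image_id)
    return result


if __name__ == "__main__":
    # Hypothetical stand-in for the second operator's physical inspection.
    outcome = compare("img-007", automated_tagged=True, human_tagged=False,
                      physical_check=lambda _id: True)
    print(outcome)   # disagreement resolved in favour of the automated module
```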
A third aspect of the invention provides a cloud-based baggage inspection system.
The third aspect of the invention also benefits from the library (database 46) of images of items that were tagged for further inspection, true positives and false positives, and their respective metadata. One embodiment proposes a cloud-based baggage inspection system as shown in Figure 10, having a server 100 including a training and certification module 102 as well as an inspection module 104 for remotely connecting to a baggage inspection system. The inspection module could alternatively be in a different server/location than the training module.
The purpose of the training and certification module 102 is to provide an offline training, qualifying and certification process for personnel who can later be deployed as remotely working operators 106a, b, c, ... N forming a cloud-based baggage inspection system. Offline means here that the user of the system who is trained and/or certified need not be connected to a real baggage inspection system. The software rather simulates a baggage inspection system.
The software may be a Software-as-a-Service (SaaS) system. That means that the user logs into the software which runs on a server at a datacenter for instance. The user has to subscribe to the system in advance so that the software collates mandatory information and can create a user profile.
In addition to dialogues in which the user has to enter personal data and other information that the system requires, the software might have interfaces to external databases, such as governmental databases, to retrieve and/or check security-relevant information about the user. This might be necessary to accomplish an approval process of the user and, after a positive approval, to allow him to access the system. E.g. the software might have an interface that is able to request and retrieve certificates of conduct or police clearance certificates.
After ensuring that the user is permitted to use and participate in the system, a training and qualification process starts. With a library containing a large number of images of scanned baggage items, the user of the software has access to the training material. The library 46 described in the second aspect of the invention is particularly suited to that purpose, as the metadata of all the images is known. The software knows what images were displayed to the user and therefore what images were used for training and how the user performed on the training material. The system also knows what kinds of inspection levels the user has already worked successfully on. As described above, there are different levels of how difficult it is to detect an item and also different security levels that can be applied to an inspection process. Such variation can be purposely applied to the user's training so that the level of performance can be determined and recorded accordingly.
The training and the certification process of a user can be accomplished by considering the same conditions that a standard human operator of a "real" inspection system will face during his work. I.e. the time and frequency of the images that are presented to the user can be configured and changed accordingly. By starting without time limits when screening the images, a user can gain skills and experience. Direct feedback on the user's decisions, clarifying whether the user has made a correct or incorrect decision, supports the training process. Variation and enhancement of the level of difficulty can be applied so that the user passes through the qualification process.
A training strategy can be developed to allow for efficient education. The system is therefore able to configure and schedule respective qualification and certification processes. Knowledge about how the inspection was accomplished leading to false positives and true positives can be used, as well as the respective knowledge leading to false negatives and true negatives.
All this information can be stored in a user profile to record the status of the qualification of the user at any given time. Once a user has been approved as being qualified he may be allowed to subscribe to the second module, which is the cloud-based baggage inspection system module 104. Depending on the security and safety considerations, a second subscription might be necessary to reflect individual needs of the inspection system provider. The subscription may allow the user to choose certain time periods in which he intends to offer his work. In an embodiment the system can proactively call the user and require his service.
With the ability of the system to remotely connect a configurable number N of operators to the one inspection system, the inspection process of baggage on the belt becomes much more powerful, which is one of the major values of the invention:
Compared to the standard process in which there is only one operator who screens the image, but also compared to more sophisticated solutions that perhaps already allow for a certain but limited number of remote displays of the same inspection belt, the invention here can be based on a theoretically unlimited number of operators.
With its capability of being highly scalable, it allows for an easy application of the more-eyes-principle. With a determination of the requirements of the actual and current situation, the system can configure not only the number of operators which are connected and which have to inspect the images, it can also control the qualification of the ensemble of operators which are appointed to the individual inspection system.
I.e. in a situation in which a higher security level is needed and usually applied, a larger number of operators with a higher certificate of performance can be called in - experts on certain items as described above can be added. To the same extent, inspection systems which do not need the highest security level can be managed with a lower number. So a particular ensemble of operators with the needed performance levels can be assigned purposely to the inspection belt in relation to its security requirements - without delay and overhead.
The results 108a, b, N of the pool of operators can be collated and analyzed by an aggregation server 110, and the summarized decision used to determine if an article is to be tagged for physical inspection. It is acknowledged that theoretically the cloud inspection system with a large number of remotely working operators generally allows for statistical evaluation.
That means that generally one could configure the evaluation as to whether or not a baggage item is sent for further physical inspection on the basis of what percentage of the remote operators tagged an item. A corresponding threshold could be configured for this purpose.
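As an illustrative sketch, the aggregation server 110 might apply such a percentage threshold as follows; the threshold value and all names are assumptions.

```python
# Illustrative sketch only: an aggregation step as it might run on server 110,
# tagging an article for physical inspection when the share of remote operators
# who tagged it reaches a configurable threshold. Names and values are hypothetical.
def aggregate(results: dict[str, bool], tag_threshold: float = 0.3) -> dict:
    """results maps operator id -> True if that operator tagged the image."""
    n = len(results)
    tagged = sum(results.values())
    share = tagged / n if n else 0.0
    return {"operators": n,
            "tagged_share": share,
            "send_to_physical_inspection": share >= tag_threshold}


if __name__ == "__main__":
    results = {f"op{i}": (i % 4 == 0) for i in range(12)}   # 3 of 12 tag the bag
    print(aggregate(results))   # 0.25 share, below the 0.3 threshold
```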
Such systems are in a first instance suited to support the more-eyes-principle and therefore are in particular suited to enhance and improve the quality of the inspection process: The likelihood of missing items to be tagged according to a security level can dramatically be reduced, i.e. the rate of false negatives can be minimized if not almost excluded.
The problem of temporarily poor performance of individual operators can also be excluded, as the result of a large pool of operators is considered.
As indicated above, a standard system with one operator is in general not able to detect false negatives which is a real weakness of the system. The invention here though is able to detect false negatives, as many operators accomplish the inspection so that false negatives of individuals are more likely to be detected. The rate of false negatives of an individual operator can therefore be determined not only from training material but also during real inspection. With the direct comparison of the performance of multiple operators the system allows for further evaluation and in particular for an evaluation of the performance of each single operator - more or less in real-time: If the system detects that one individual operator currently induces false negatives (because the system knows from the other operator's analysis that the image includes an item representing a true positive), then the system can directly react and replace the operator.
Also the level of consistency throughout the ensemble of operators (also with respect to their certified level of performance) can be used to analyze how successful the training and the certification process is. If the evaluation shows an increased level of inconsistent results, the system can define and apply respective actions.
With the capability of defining the quality and configuring the number of operators, the operation of the inspection system can be managed. For instance, for a given security level a number of 10 operators may be mandatory to operate the system. With such a number of operators the belt may have a standard speed to allow for one baggage article every 10 seconds. By increasing the number of operators, the speed of the belt can be increased accordingly as the number of inspections scales, and a defined confidence value can be maintained correspondingly. With the ability to have a much larger number of inspection results and with the ability to sort out baggage items that might be candidates for further physical inspection, the process becomes very uniform and seamless.
Consequently the speed of the belt can be operated much more evenly and the frequency of a parallel process to scan and check respective passengers can also be aligned and thus becomes more even and determinable.
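The relationship between the number of connected operators and the belt speed described above can be illustrated by the following sketch; the proportional scaling rule is an assumption based on the example figures given (10 operators, one article every 10 seconds), and all names are hypothetical.

```python
# Illustrative sketch only: scales the belt throughput with the number of
# connected operators while keeping the per-article number of inspections (and
# hence the confidence value) constant. The scaling rule is an assumption.
def belt_interval_seconds(num_operators: int,
                          baseline_operators: int = 10,
                          baseline_interval_s: float = 10.0,
                          inspections_per_article: int = 10) -> float:
    if num_operators < inspections_per_article:
        raise ValueError("not enough operators for the required confidence level")
    # More operators -> proportionally more articles can be inspected in parallel,
    # so the interval between articles on the belt can shrink accordingly.
    return baseline_interval_s * baseline_operators / num_operators


if __name__ == "__main__":
    for n in (10, 20, 40):
        print(n, "operators ->", belt_interval_seconds(n), "s per article")
```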
Such a cloud-based inspection system may need particular safety requirements: + An operator should not know which inspection system he currently operates. + An ongoing monitoring process of how the operator performs may be applied. Only with evenly working and performing users of the system can the cloud-based inspection reveal its power; i.e. slow users create bottlenecks for fast users. Consequently an ongoing selection process of operators/users assigned to one system could be applied and considered for the final inspection result. If one individual user simply does not perform, that user would only lower the quality and confidence values of the inspection, rather than contributing to it.

Claims

1. A computer system for monitoring an operator when inspecting displayed images captured of sequentially presented physical articles, the system comprising: monitoring means configured to capture the visual engagement behaviour of a first operator engaged in inspecting the displayed image to generate an inspection outcome, wherein the inspection outcome causes selected ones of the physical articles to be tagged for a second inspection procedure based on identifying an item of interest in the image; and computer storage means configured to store inspection data defining the visual engagement behaviour in association with an image identifier which identifies the displayed image and tagging data which indicates whether the article was tagged for further inspection.
2. The computer system of claim 1 wherein the monitoring means is a camera.
3. The computer system of claim 1 or claim 2 wherein the inspection data indicates whether the operator looked at the image.
4. The computer system of claim 3 wherein the inspection data indicates how long the operator looked at the image.
5. The computer system of any one of claims 1 to 4 wherein the monitoring means is configured to perform eye tracking of the operator's gaze trajectory.
6. The computer system of claim 5 wherein the monitoring means is configured to use the operator's gaze trajectory to identify what item(s) in the image the operator looked at.
7. The computer system of claim 5 or claim 6 wherein the monitoring means is configured to use the operator's gaze trajectory to identify how long the operator looked at each item.
8. The computer system of any one of claims 1 to 7 wherein the monitoring means is configured to establish if all parts of the image are inspected.
9. The computer system of any one of claims 1 to 8 wherein feedback is provided to the operator based on the inspection data.
10. The computer system of claim 9 wherein the feedback identifies unobserved segments in the displayed image for the operator to inspect.
11. The computer system of claim 9 or claim 10 wherein the feedback recognises certain items and flags items in the displayed image for the operator to inspect.
12. The computer system of any one of claims 9 to 11 wherein the feedback identifies items in the displayed image requiring an enhanced inspection, and if the enhanced inspection was not conducted by the operator, the feedback flags the items in the displayed image for enhanced inspection.
13. The computer system of any one of claims 1 to 12 further comprising a data receiving means configured to receive outcome data indicative of the inspection outcome after the second inspection procedure of the tagged physical article by a second operator.
14. The computer system of any one of claims 1 to 13 further comprising goggles which provide a display for displaying the images and which incorporate the monitoring means.
15. A computer system for monitoring an operator when inspecting sequentially displayed images captured of sequentially presented physical articles, the computer system comprising: display means for presenting to a first operator the sequentially displayed images for the first operator to inspect the displayed images to generate an inspection outcome, wherein the inspection outcome causes selected ones of the physical article to be tagged for a second inspection procedure based on identifying an item of interest in the image; data receiving means configured to receive outcome data indicative of the inspection outcome after the second inspection procedure on the physical article by a second operator; and computer storage means configured to store the outcome data in association with an image identifier which identifies the displayed image and tagging data which indicates whether the article was tagged for further inspection.
16. The computer system of claim 15 wherein the outcome data indicates that the image was correctly tagged when the identified item of interest is within a set of specified items of interest.
17. The computer system of claim 15 wherein the outcome data indicates that the image was correctly or incorrectly tagged.
18. The computer system of claim 15 wherein the image was incorrectly tagged when the identified item of interest was not one of a set of specified items of interest.
19. The computer system of any one of claims 15 to 18 wherein the image was not completely correctly tagged because the operator identified items of interest from a set of specified items of interest but also missed items that should have been identified.
20. The computer system of any one of claims 15 to 19 wherein the outcome data indicates that the image was not tagged when the image contained an item of interest.
21. The computer system of any one of claims 15 to 20 wherein the computer storage means is configured to store metadata in association with the image identifier, wherein the metadata comprises one or more of: inspection data defining visual engagement behaviour of the first operator; information about the item(s) of interest in the image; histograms of inspection data; or images of tagged items.
22. The computer system of claim 16 wherein a set of specified items of interest is determined based on one or more criterion.
23. The computer system of claim 22 wherein the criteria include one or more of: volume, material, shape, whether an item is a cooperating component of another item in the set.
24. The computer system of any one of claims 15 to 23 further configured to provide feedback to the first operator based on the outcome data.
25. The computer system of any one of claims 22 to 24 wherein the one or more criterion varies based on a source of the image.
26. The computer system of claim 25 wherein the source of image is flight information relating to a flight with which the articles are associated.
27. The computer system according to any one of claims 15 to 26 wherein there is provided an additional automated inspection module learning from the images in the computer storage.
28. A computer system for enabling work sharing of image inspections comprising a server having a processor configured to execute a computer program which implements the method of: providing multiple images to be inspected for transmission to multiple user devices; receiving configuration data which defines operator data including a number N of operators to be engaged in inspecting an image; transmitting the image to each of the multiple user devices associated with the N operators based on the operator data; receiving inspection results from each of the N operators and correlating the inspection results to generate a single inspection outcome for the image.
29. The computer system of claim 28 wherein the configuration data is a number of operators.
30. The computer system of claim 28 or claim 29 wherein the configuration data is a required qualification level.
31. The computer system of any one of claims 28 to 30 wherein the configuration data is a percentage of "experts on certain items".
32. The computer system of any one of claims 28 to 31 wherein the configuration data is based on source data of images.
33. An automated inspection system comprising: image recognition means operable to implement a first inspection procedure to scan images to generate an automated inspection outcome, wherein the automated inspection outcome causes selected ones of the images to be tagged for a second inspection procedure when an item of interest is identified in the image; a display configured to display the images to a human operator to inspect the images to generate a human inspection outcome which causes related ones of the images to be tagged for a second inspection procedure when an item of interest is identified in the image; means for comparing the automated inspection outcome with the human inspection outcome and, when the outcomes differ, feeding back outcome data to at least one of the image recognition means and human operator to modify the behaviour for future images.
34. An automated inspection system comprising: image recognition means operable to scan images to generate an automated inspection outcome, wherein the automated inspection outcome causes selected ones of the images to be tagged for a second human inspection procedure when an item of interest is identified in the image; an image database holding a plurality of training images in association with respective inspection outcomes which occurred following a human inspection of the item of interest; an operator interface configured to receive an inspection outcome from the second human inspection procedure and to add the scanned image and the inspection outcome to the image database; wherein the image recognition means uses a machine learning algorithm which accesses the image database as it is updated with new images and inspection outcomes to improve the accuracy of the automated inspection outcomes as the inspection system continues to operate on subsequent images.
PCT/EP2017/070500 2016-08-11 2017-08-11 Improving the quality and power of baggage inspection systems WO2018029361A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662373504P 2016-08-11 2016-08-11
US62/373,504 2016-08-11

Publications (1)

Publication Number Publication Date
WO2018029361A1 true WO2018029361A1 (en) 2018-02-15

Family

ID=59745878

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/070500 WO2018029361A1 (en) 2016-08-11 2017-08-11 Improving the quality and power of baggage inspection systems

Country Status (1)

Country Link
WO (1) WO2018029361A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5642393A (en) * 1995-09-26 1997-06-24 Vivid Technologies, Inc. Detecting contraband by employing interactive multiprobe tomography
US8020993B1 (en) * 2006-01-30 2011-09-20 Fram Evan K Viewing verification systems
US20150123997A1 (en) * 2013-11-07 2015-05-07 Konica Minolta, Inc. Information Display System Including Transmission Type HMD, Non-Transitory Computer-Readable Storage Medium and Display Control Method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DESIGN INTERACTIVE: "ScreenADAPT X-Ray Training System", YOU TUBE, 4 February 2016 (2016-02-04), pages 1, XP054977773, Retrieved from the Internet <URL:https://www.youtube.com/watch?time_continue=25&v=UICtGsJQXds> [retrieved on 20171002] *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112424804A (en) * 2018-05-21 2021-02-26 史密斯探测-沃特福特有限公司 System and method for inspecting an article
CN109521500A (en) * 2018-07-05 2019-03-26 北京中广通业信息科技股份有限公司 A kind of passenger station safety check cooperative control method and system
CN109995612A (en) * 2019-03-21 2019-07-09 北京奇艺世纪科技有限公司 A kind of service method for inspecting, device and electronic equipment
CN109995612B (en) * 2019-03-21 2020-12-25 北京奇艺世纪科技有限公司 Service inspection method and device and electronic equipment
CN115791101A (en) * 2023-02-03 2023-03-14 海的电子科技(苏州)有限公司 Display screen blind detection method, point screen equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2018029361A1 (en) Improving the quality and power of baggage inspection systems
US10431108B2 (en) Computer-implemented techniques for interactively training users to perform food quality, food safety, and workplace safety tasks
See et al. The role of visual inspection in the 21st century
Neuschatz et al. A comprehensive evaluation of showups
Li et al. Massive open online proctor: Protecting the credibility of MOOCs certificates
US20110091847A1 (en) Method, system, and computer software code for the adaptation of training via performance diagnosis based on (neuro)physiological metrics
US20190164270A1 (en) System and method for combined automatic and manual inspection
Strickland et al. Prospective memory in the red zone: Cognitive control and capacity sharing in a complex, multi-stimulus task.
Graves et al. The role of the human operator in image-based airport security technologies
Steelman et al. Great expectations: Top-down attention modulates the costs of clutter and eccentricity.
JP7063393B2 (en) Teacher data expansion device, teacher data expansion method and program
CN110378273A (en) A kind of method and apparatus of monitoring results process
Clark et al. Context matters: The structure of task goals affects accuracy in multiple-target visual search
NL2022016B1 (en) A computer-controlled method of and apparatus and computer program product for supporting visual clearance of physical content.
Yang et al. Assessing situation awareness in multitasking supervisory control using success rate of self-terminating search
US11636359B2 (en) Enhanced collection of training data for machine learning to improve worksite safety and operations
Li et al. The impact of out-the-window size on air traffic controllers’ visual behaviours and response time on digital tower operations
Carroll et al. Development of an autodiagnostic adaptive precision trainer for decision making (ADAPT-DM)
CN113408405A (en) Security check method and device, computer equipment and storage medium
Rožanec et al. Predicting operators’ fatigue in a human in the artificial intelligence loop for defect detection in manufacturing
Zelnio et al. Human performance modelling for image analyst decision support design
US11361537B2 (en) Enhanced collection of training data for machine learning to improve worksite safety and operations
Frame et al. Impact of video content and resolution on the cognitive dynamics of surveillance decision‐making
US11887266B2 (en) Augmented reality security screening and guidance
Nakamura et al. Effects of anticipation in manufacturing processes: towards visual search modeling in human factors

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17761025

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17761025

Country of ref document: EP

Kind code of ref document: A1