NL2019927B1 - A computer controlled method of and apparatus and computer program product for supporting visual clearance of physical content. - Google Patents

Info

Publication number
NL2019927B1
NL2019927B1
Authority
NL
Netherlands
Prior art keywords
content
computer
data
observer
eye
Prior art date
Application number
NL2019927A
Other languages
Dutch (nl)
Inventor
Van Maanen Peter-Paul
Brouwer Anne-Marie
Alrik Zwaan Jonathan
Original Assignee
Joa Scanning Tech B V
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Joa Scanning Tech B V filed Critical Joa Scanning Tech B V
Priority to NL2019927A priority Critical patent/NL2019927B1/en
Priority to PCT/NL2018/050770 priority patent/WO2019098835A1/en
Priority to NL2022016A priority patent/NL2022016B1/en
Application granted granted Critical
Publication of NL2019927B1 publication Critical patent/NL2019927B1/en

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q10/00 Administration; Management
                    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
                        • G06Q10/063 Operations research, analysis or management
                            • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
                                • G06Q10/06395 Quality analysis or management
                                • G06Q10/06398 Performance of employee with respect to a job function
                    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
                        • G06Q10/083 Shipping
                            • G06Q10/0832 Special goods or special handling procedures, e.g. handling of hazardous or fragile goods
                • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
                    • G06Q50/10 Services
                        • G06Q50/26 Government or public services
                            • G06Q50/265 Personal security, identity or safety

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Educational Administration (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Tourism & Hospitality (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Eye Examination Apparatus (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A computer implemented and computer controlled method of, and an apparatus and computer program product for, supporting visual clearance of physical content to be inspected, such as container content. The computer collects eye tracking data of an observer performing the visual clearance by viewing the content. From real-time processing of the collected eye tracking data, a clearance profile is determined based on expert eye tracking data of visual content inspection, such as container content inspection. The content is cleared by the computer if the determined clearance profile meets a set clearance profile. The eye tracking data may be enhanced by observer performance data collected during the visual inspection.

Description

Title A computer controlled method of and apparatus and computer program product for supporting visual clearance of physical content.
Technical Field
The present invention relates to visual content inspection, and more particularly, to visual content inspection by an observer from an image of the content to be inspected, generated by an image generator.
Background
Inspection of content, such as container content for security checks, for example the inspection or screening of the content of suitcases, hand bags or hand luggage and parcels for transportation by airplanes, trains and buses, and inspection at entrance or access to events, social gatherings, meetings, buildings and/or other security checks, for example, is nowadays widely applied and in most cases mandatory by legal requirements.
This inspection is primarily performed by Transportation Safety Officers (TSOs) and trained special security personnel or agents. It is the task of these officers and agents to identify threat items or potential threat items in containers that often comprise a variety of clutteredly packed items. Besides special detection equipment, such as metal detectors, radiation detectors and explosive detectors, for example, the identification is performed by visual inspection of two-dimensional, 2D, or three-dimensional, 3D, images of the content of the containers generated by an X-ray screening or other scanning technology.
Container cargo transport by planes, by trucks over land and by ships over sea, for example, for border security and for custom officials to check whether the contents match the cargo description, and whether illegal cargo can be found, is also subjected to visual inspection by an official observer from an image of the content of a container to be inspected.
By viewing an image of the content, the observer must determine whether a container is believed free from threats or matches the cargo description, as a result of which the container is declared “cleared” by the observer and may pass. Whenever there is a potential threat or suspicion of illegal cargo, for example, the container is subjected to further search, which may involve a time and human resource consuming physical inspection involving the unloading of the container, for example. In case the visual inspection leads to a serious suspicion or threat, the container even has to be handled with extra care and/or inspected under strict safety requirements.
In practice, if there is an increased risk or if threats are highly prevalent, inspecting officers or agents will more likely make potential or serious threat decisions, such that “clear” responses are slower because the observer will apply an increased reliability standard in viewing the container content, to avoid making a mistake as much as possible. At airport and access security checks, this may lead to congestion in the security handling and can lead to frustration among the persons to be cleared and the clearing officials which, eventually, may have a negative influence on the overall quality of the inspection.
Hand baggage or luggage screening for safety checks, for example, is a repetitive visual search task that often has a very low probability of encountering a threat, but may have high consequences if a serious threat is missed. Since threats are of low prevalence, a “cleared” response becomes the more frequent decision. Observers may tend to abandon a search in less than the average time required to find a target under such circumstances.
Safety standards applicable to baggage or luggage inspection at airports, for example, where officers or agents have to find a target in the midst of distractors, impose strict operating times on observers, such as shifts of twenty minutes, for example. These operating times are determined, with a high safety margin, from the time during which an average trained observer can stay focussed. It will be appreciated that, based on these safety requirements, for operating baggage or luggage inspection during the opening hours of an airport, a large team of well trained security or screening personnel is required.
Other applications for clearing physical content by visual inspection involve forgery control, such as forged documents like passports, but also paintings and the like, for which an observer visually inspects the document or painting. They also include images of the human or animal body made by security body scan equipment and X-ray equipment, Computed (Axial) Tomography, CT or CAT, scans and Magnetic Resonance Imaging, MRI, in health care applications, for assessing an injury, such as fractures or dislocations, or a disease, for example bone degeneration, infections or tumors, or lung and chest problems. In practice, experts like radiologists visually review the pictures and create a report with their findings to aid in diagnosis. In the case of border security control, for example, a customs official checks the identity of a person by viewing the content of a passport, or other ID document, and the person themselves.
With these types of visual inspection, the same problems may arise as outlined above with respect to container content inspection.
Accordingly, there is need for improving visual inspection of physical content when viewing the content or monitoring a displayed image of the content by an observer, for at least one of decreasing the time in which an inspection can be completed by an observer, for increasing consistency and reliability in the decision making process by an observer, and for establishing observer operating and rest times for reliable inspection of the content.
Summary
It is an object of the present disclosure to provide a method for supporting visual clearance of physical content to be inspected. It is a further object of the present disclosure to implement such a method in an apparatus for supporting visual clearance of physical content to be inspected.
Accordingly, in a first aspect of the present disclosure, there is provided a computer implemented and computer controlled method of supporting visual clearance of physical content to be inspected, such as container content, wherein the computer connects to an eye tracking device arranged for providing eye tracking data of an observer performing the visual clearance by viewing the content, the computer performing the steps of: collecting eye tracking data of the eye tracking device while the observer views the content, real-time processing of the collected eye tracking data for determining a clearance profile based on expert eye tracking data of visual content inspection, such as container content inspection, and clearing the content if the determined clearance profile meets a set clearance profile.
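Purely as an illustration of the claimed collect-process-clear loop, the steps above could be sketched as follows; all function names, the chosen profile features and the clearance threshold are hypothetical assumptions and are not specified in the present disclosure:

```python
# Illustrative sketch only: the profile features (mean gaze position,
# sample count) and the threshold are assumptions, not the claimed profile.

def clearance_profile(gaze_samples, expert_profile):
    """Reduce raw gaze samples (x, y) to a simple profile and score it
    against an expert reference profile via mean absolute difference
    over the features present in the expert profile."""
    n = len(gaze_samples)
    profile = {
        "mean_x": sum(x for x, _ in gaze_samples) / n,
        "mean_y": sum(y for _, y in gaze_samples) / n,
        "samples": n,
    }
    score = sum(abs(profile[k] - expert_profile[k])
                for k in expert_profile) / len(expert_profile)
    return profile, score

def is_cleared(score, threshold=10.0):
    """Content is cleared when the deviation from the set profile stays
    below a set level (threshold is an arbitrary illustrative value)."""
    return score < threshold
```

A real implementation would of course use the richer metrics (fixations, saccades, AOIs) discussed further below rather than mean gaze position.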
Clearance of physical content from directly viewing the content itself or from viewing images of the content, requires inspecting officers or agents that are highly trained and experienced in object recognition.
An image of container content for visual inspection purposes, such as an X-ray image of the container content, for example, generally comprises so-called contoured objects. That is, an object in the container, and the container itself, are depicted by their outer or circumferential contour. Separate parts or items inside an object are also displayed by their contour and, optionally, enhanced by a respective colouring. Hence, the objects inside the container to be inspected, for the greater part, have to be recognized by the observer from their shape or contour and, if available, colour and colour differences. Face recognition, in the case of border security control, for example, requires observers trained in recognizing features of a human face.
In general, a decision whether to clear a particular physical content is, at present, made in the brain of the observer, from the information and stimuli gained at viewing the content or an image of the content, in correlation with the observer's object and feature knowledge and experience. In this brain process, if the observer does not perceive a dangerous or forbidden object, for example, or a deviation or an abnormality in a document, such as a special security element or feature, or in the resemblance of a human subject and a picture in an ID document, such that there is no suspicion concerning the viewed content, the observer clears same, i.e. lets the content pass.
Eye tracking technology offers the possibility of capturing visual behaviour in real-time when monitoring or viewing objects and images. In practice, eye tracking technology is proposed for controlling workflows involving a certain sequence of tasks, or for investigating whether all objects in a cluster of objects have been viewed, for example. Eye tracking technology is also proposed for training purposes, for detecting reading or viewing patterns that may lead to poor comprehension, for example. Other applications of eye tracking technology involve monitoring of alertness or distraction and situational awareness of humans while performing a task, such as airplane pilots, truck and car drivers, machine operators, etc.
Different from applying eye tracking technology in monitoring the performance of a predefined or well specified task, the method according to the present disclosure explicitly makes use of information that can be extracted from the viewing as such of the content to be inspected by a human observer using eye tracking technology for supporting a human decision making process, wherein the information as to the decision to be made is obtained from the visual inspection by the human observer.
According to the present disclosure, clearance of visually inspected content is determined from the visual behaviour of the observer during the inspection, expressed in a clearance profile, by real-time processing of collected eye tracking data of the observer and validating the processed data against expert eye tracking data of visual content inspection. When the determined clearance profile meets a set clearance profile, the content may be automatically cleared by the computer.
Instead of massively deploying object or shape recognition technology, which requires high capacity, high resolution and high speed data processing algorithms and equipment, optionally enhanced by chemical and material sensing technology, for supporting or even replacing a human observer, the present disclosure advantageously makes use of nowadays readily available and accurate eye tracking technology. This technology produces an amount of data at a rate that allows for real-time processing without requiring excessive computing equipment, such that content can be cleared by the computing equipment even while the human observer still processes the perceived content information in his brain.
With the present disclosure, the human observer is not replaced and remains crucial. Visual inspection tasks in the context of the present disclosure are best performed by human observers, which can effectively perceive anomalies within complex scenes, such as content comprised of clutteredly packed items, and that can be readily detected from eye tracking data of the human observer.
Hence, the disclosed method, so to say, is based on the best of two worlds, data available from the best performers of visual content inspection tasks, i.e. the expert data, and eye tracking data that enable real-time processing while the content inspection is carried out.
The method according to the present disclosure can be seamlessly integrated in visual content inspection tasks, either existing or future tasks and systems.
Hand baggage or luggage presented at airport safety checks, for example, in the majority of cases does not contain suspicious objects or a threat; hence the majority of these checks is quickly cleared by the computer, before the human observer, thereby effectively reducing the time in which an inspection is completed.
On the other hand, in case of a threat or suspicion, the particular content is also quickly discriminated as such, i.e. automatically declared not cleared, by the computer and can be subjected to inspection by another observer and/or a special treatment, like physical inspection of a not cleared container, or inspection using special equipment and the like.
The computer supported method according to the present disclosure further provides for an objective inspection, based on the expertise of the observer, not being prone to whether suspicious content is encountered in a minority or a majority of the inspection tasks, or to other external or subjective influences like an increased state of security, for example. The latter can be effectively dealt with in the present disclosure by applying an adaptable clearance profile, i.e. adapting a set clearance profile, or applying a plurality of clearance profiles, i.e. setting a respective clearance profile, that has to be met for clearance of the content.
In an embodiment of the method according to the present disclosure, the collected eye tracking data at least comprise gaze point data, wherein the clearance profile is determined, by the computer, from at least one of eye fixations, heat maps, areas of interest, time spent, fixation sequences, time-to-first-fixation, saccades, smooth pursuit movements, vergence movements, and vestibulo-ocular movements, identified by the computer in the collected eye tracking data.
Gaze points constitute the basic unit of measure in eye tracking technology. One gaze point equals one raw sample captured by an eye tracking device. The viewing time represented by a gaze point depends on the measurement rate of the eye tracking device. Accordingly, several eye tracking data parameters may be processed from the gaze points.
If a series of gaze points happens to be close in time and range, the resulting gaze cluster denotes a fixation, a period in which the eyes are locked toward a specific object. A fixation correlates with informative regions of the content under inspection.
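Fixations can, for example, be extracted from raw gaze samples with a dispersion-threshold scheme. The sketch below follows the common I-DT approach; the dispersion and duration thresholds are illustrative assumptions, not values taken from the present disclosure:

```python
def detect_fixations(samples, max_dispersion=25.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.
    samples: list of (t, x, y) gaze samples sorted by time.
    Returns fixations as (t_start, t_end, centroid_x, centroid_y)."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while all points stay within the dispersion limit,
        # measured as (x spread) + (y spread).
        while j + 1 < len(samples):
            window = samples[i:j + 2]
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        t0, t1 = samples[i][0], samples[j][0]
        if t1 - t0 >= min_duration:  # keep only sufficiently long clusters
            xs = [x for _, x, _ in samples[i:j + 1]]
            ys = [y for _, _, y in samples[i:j + 1]]
            fixations.append((t0, t1, sum(xs) / len(xs), sum(ys) / len(ys)))
        i = j + 1
    return fixations
```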
Heat maps are static or dynamic aggregations of gaze points and fixations revealing the distribution of visual attention. Heat maps may also serve as an excellent method to visualize which elements of the stimulus were able to draw attention, using a particular colouring scheme, for example.
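A static heat map of this kind can be approximated by binning gaze points into a coarse grid, as in the following sketch; the cell size is an arbitrary assumption:

```python
def heat_map(gaze_points, width, height, cell=50):
    """Aggregate (x, y) gaze points into a grid of cell counts; each
    cell counts how often visual attention landed there. The counts can
    then be mapped to a colouring scheme for display."""
    cols, rows = width // cell, height // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in gaze_points:
        cx = min(int(x // cell), cols - 1)  # clamp points on the border
        cy = min(int(y // cell), rows - 1)
        grid[cy][cx] += 1
    return grid
```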
Areas of Interest, AOIs, are user-defined sub-regions of a viewed content. Extracting metrics for separate AOIs provides for evaluating the performance of two or more specific areas in the same image, or document, for example to compare conditions, or different features within the same content.
Time spent quantifies the amount of time that observers spent looking at an AOI. As observers have to blend out other stimuli in the visual periphery that could be equally interesting, the amount of time spent often indicates motivation and conscious attention: prolonged visual attention at a certain region of the content clearly points to a high level of interest, while shorter times indicate that other areas of the content or in the environment might be more attractive.
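Time spent in an AOI can, for instance, be computed by summing the durations of the fixations that land inside the AOI rectangle; the tuple layouts below are assumptions for illustration:

```python
def time_spent(fixations, aoi):
    """Total fixation time inside an axis-aligned AOI.
    fixations: list of (t_start, t_end, x, y) tuples;
    aoi: (x0, y0, x1, y1) rectangle."""
    x0, y0, x1, y1 = aoi
    return sum(t1 - t0 for t0, t1, x, y in fixations
               if x0 <= x <= x1 and y0 <= y <= y1)
```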
Fixation sequences are generated based on fixation position and timing information. This is dependent on where observers look and how much time they spend, and provides insight into the order of attention, indicating where observers look first, second, third etc. Fixation sequences also reflect brightness, hue, saturation etc. in a displayed image of the content or environmental content, that are likely to catch attention.
Time-to-First-Fixation, TTFF, indicates the amount of time it takes an observer to look at a specific area of interest from stimulus onset. TTFF can indicate both bottom-up stimulus driven searches, a risk bearing object catching immediate attention, for example, as well as top-down attention driven searches, wherein observers actively decide to search for certain elements or areas in the content.
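TTFF can be derived from the same fixation list, as in this sketch; the coordinate and AOI conventions are assumed, not specified in the disclosure:

```python
def time_to_first_fixation(fixations, aoi, stimulus_onset=0.0):
    """Delay from stimulus onset until the first fixation inside the
    AOI, or None if the AOI is never fixated.
    fixations: list of (t_start, t_end, x, y); aoi: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    for t0, _, x, y in fixations:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return t0 - stimulus_onset
    return None
```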
Saccades are rapid eye movements between fixations. Analysis metrics include fixation or gaze durations, saccadic velocities, saccadic amplitudes, and various transition-based parameters between fixations and/or regions of interest. Hence, saccades represent jumps of the eyes while viewing the content. Saccades can be voluntary, but also occur reflexively, even when the eyes are fixated.
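The saccadic amplitude and velocity metrics mentioned above could be computed from consecutive fixations as in this sketch (the fixation tuple layout is the same illustrative assumption as before):

```python
import math

def saccade_metrics(fix_a, fix_b):
    """Amplitude and mean velocity of the saccade between two
    consecutive fixations (t_start, t_end, x, y): the jump spans from
    the end of the first fixation to the start of the second."""
    _, end_a, xa, ya = fix_a
    start_b, _, xb, yb = fix_b
    amplitude = math.hypot(xb - xa, yb - ya)
    duration = start_b - end_a
    return amplitude, amplitude / duration if duration > 0 else 0.0
```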
Smooth pursuit movements are much slower tracking movements of the eye, under voluntary control. Although smooth pursuit movements are generally made while tracking a moving object or stimulus, highly trained observers can make a smooth pursuit movement in the absence of a moving target. The present disclosure is further not limited to static content, but applies as well to moving physical content.
Vergence movements align the two eyes of the observer to targets at different distances from the observer. Vergence movements are disjunctive or disconjugate, different from conjugate eye movements in which the two eyes move in the same direction.
Vestibulo-ocular eye movements stabilize the eyes relative to the external world, thus compensating for head movements.
The clearance profile, in an embodiment, may comprise an observation pattern determined, by the computer, from one or more of the above eye tracking data parameters identified in the collected eye tracking data and patterns of such eye tracking parameters known from or identified in the expert eye tracking data.
In another embodiment, the clearance profile is determined, by the computer, from anomalies in the above-mentioned processed eye tracking data and parameters identified in the collected eye tracking data compared to the expert eye tracking data, and/or unknown patterns identified in the collected and processed eye tracking data.
It will be appreciated that a clearance profile may be content dependent, and that several clearance profiles for similar content may be determined and set for providing a clearance.
In particular when an observer performs the visual clearance by viewing a displayed image of the content, the clearance profile, in an embodiment of the present disclosure, is further determined, by the computer, from at least one of image quality analysis, image dimension analysis and feature extraction analysis of the image of the content.
Low resolution images, less coloured images, low brightness and low contrast are factors that may alter the viewing behaviour of an observer, for example an abnormal increase of fixation sequences, compared to images of excellent quality, i.e. high resolution, high brightness and contrast, for example. Accordingly, false alarms, i.e. false non-clearance, can be avoided by taking the image quality into account when determining a clearance profile.
Content with a plurality of or very specific Areas of Interest, AOIs, for example, such as special security features that have to be inspected in the content, may require determining the clearance profile by establishing or resolving same, by the computer, for respective parts of the content or respective image locations. The eye tracking data obtained and/or the clearance profile obtained may also be normalized against a reference image or reference content, so as to rule out human tendencies or preferences for particular parts of the content to be inspected that do not particularly contribute to the inspection task.
In a further advantageous embodiment of the present disclosure, wherein an observer views a displayed image of the content under inspection while the content is not yet cleared, the computer further performs the step of annotating the displayed image based on real-time processing of the collected eye tracking data, in particular wherein the annotating comprises highlighting of locations at the displayed image.
As the clearance profile is established in real-time, it has been found that deviations in the processed eye tracking data parameters, deviations in the clearance profile itself or deviations obtained while validating same against a set clearance profile, may be used to notify the observer of potential threats, suspicious objects, or otherwise areas of interest, even before the observer him/herself has concluded that a respective area or part of the physical content needs more attention in viewing.
Notification may be performed by highlighting the image, such as but not limited to hatching, arrowing, encircling, filtering, contrast/brightness changes, filtering out of surrounding image content, etc. and by providing written instructions, for example. By this type of quick feedback, the viewing behaviour of the observer is effectively supported and a reduced inspection time is achieved, compared to inspection without such support.
Notifications may also be provided in case of not cleared content, to facilitate physical inspection or another treatment of suspicious content, or a so-called second opinion visual inspection, for example. This enhances and speeds up as much as possible the screening and clearance of suspicious content.
In a yet further embodiment, clearance of the content is calculated from eye tracking data collected during a set time window starting from presentation of the image of the content to the observer, wherein the time window is set by the computer from at least one of image quality analysis, image dimension analysis and feature extraction analysis of the image of the content, and the expert eye tracking data.
By setting a time window, taking into account the quality of an image and diversity or variety thereof, anomalies in the viewing behaviour of the observer, for example caused by events or distractions in the environment of the observer, fatigue or other lack of alertness, which may have a negative impact on the quality of the inspection, can be efficiently detected.
It will be appreciated that absence of a clearance or a non-clearance by the computer does not necessarily imply that the content should not be cleared. The human observer eventually may decide whether to clear the content or to elect it for a further search or investigation, for example by physical inspection of the content of a container, or using additional equipment and tools when inspecting a document, for example.
In both cases, that is a clearance by the computer and a non-clearance resulting in a further inspection, the method according to the present disclosure significantly increases the consistency and reliability in the decision making process by an observer inspecting physical content to be cleared. In this context, and for statistical purposes, for example, in a further embodiment of the present disclosure, the computer calculates a figure of merit from the determined clearance profile and the set clearance profile, wherein the content is cleared if the figure of merit is within a set range.
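One way such a figure of merit with a set range could be realized, purely as an assumed example, is a normalized similarity between the determined and set clearance profiles; the feature names and the clearance range below are illustrative only:

```python
def figure_of_merit(profile, set_profile):
    """Illustrative figure of merit: a similarity score in [0, 1],
    where 1.0 means the determined profile equals the set profile.
    Computed as 1 minus the mean relative deviation per feature."""
    diffs = [abs(profile[k] - set_profile[k]) / max(abs(set_profile[k]), 1e-9)
             for k in set_profile]
    return max(0.0, 1.0 - sum(diffs) / len(diffs))

def cleared(fom, lo=0.8, hi=1.0):
    """Content is cleared when the figure of merit is within the set range."""
    return lo <= fom <= hi
```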
The eye tracking data obtained and the processed clearance profiles may serve for improving the method, i.e. an algorithm loaded in the computer for processing the eye tracking data and determining clearance profiles, thereby facilitating machine learning.
In addition to the collection of eye tracking data, the present disclosure also proposes collecting observer performance data, comprising at least one of physiological data, including electroencephalography data, and mental data, including at least one of facial expression data, oral expression data and body movement data of the observer while performing the visual clearance, and to determine, by the computer, the clearance profile from real-time processing of both the collected eye tracking data and the observer performance data.
By associating such observer performance data with the eye tracking data parameters processed from the eye tracking data, the quality of the inspection, in the sense of an increased number of computer clearances, is further improved. The observer performance data may also be used in calculating the figure of merit.
For establishing observer operating and rest times for reliable inspection of the content, in a further embodiment of the method according to the present disclosure, an inspection quality parameter is determined, by the computer, from cumulative real-time processing of collected data, the inspection quality parameter providing an indicator of the observer performance when performing the visual clearance by viewing the content or an image of the content, wherein if the calculated inspection quality parameter does not meet a set inspection quality level representative of a set inspection quality, the computer provides feedback information to the observer.
The inspection quality parameter is based, in an embodiment, on one or more of eye pupil size/dilation, ocular vergence, blinks and distance to the object or image, indicative of body movements.
An increase in pupil size is referred to as pupil dilation, and a decrease in size is called pupil constriction. Pupil size primarily responds to changes in light (ambient light) or stimulus material (image stimulus). Other properties causing changes in pupil size are emotional arousal (referring to the amount of emotional engagement) and cognitive workload (which refers to how mentally taxing a stimulus is). Pupillary responses may be used as a measure for emotional arousal.
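A simple way to quantify dilation against a per-observer baseline, sketched purely as an assumed example (the baseline window length is arbitrary), is:

```python
def pupil_dilation(pupil_samples, baseline_window=10):
    """Baseline-corrected pupil size: mean of the samples after the
    baseline window minus the mean of the first `baseline_window`
    samples. Positive values suggest dilation (arousal or workload),
    negative values constriction."""
    baseline = sum(pupil_samples[:baseline_window]) / baseline_window
    rest = pupil_samples[baseline_window:]
    return sum(rest) / len(rest) - baseline
```

Note that, in practice, ambient light changes would have to be controlled for or regressed out before interpreting such a value.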
As explained above, the extraction of vergence, i.e. whether left and right eyes move together or apart from each other, is a natural consequence of focusing of the eye near and far. Divergence often happens when the human mind drifts away, when losing focus or concentration. It can be picked up instantly by measuring inter-pupil distance.
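Detecting such divergence from inter-pupil distance could look like the following sketch; the baseline value and relative tolerance are illustrative assumptions:

```python
def vergence_drift(ipd_samples, baseline, tolerance=0.05):
    """Flag samples where the measured inter-pupil distance deviates
    from the observer's baseline by more than `tolerance` (relative),
    a simple proxy for divergence, i.e. loss of focus on the content."""
    return [abs(d - baseline) / baseline > tolerance for d in ipd_samples]
```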
Eye tracking data can also provide essential information on cognitive workload by monitoring blinks. Cognitively demanding tasks can be associated with delays in blinks, the so-called attentional blink. However, many other insights can be derived from blinks. A very low frequency of blinks, for example, is usually associated with higher levels of concentration. A rather high frequency is indicative of drowsiness and lower levels of focus and concentration.
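By way of illustration only, a blink-frequency indicator may be derived from blink onset timestamps as follows in Python; the classification thresholds are illustrative placeholders only (typical spontaneous blink rates are roughly 8 to 21 per minute) and are not taken from the disclosure.

```python
def blink_state(blink_times, window=60.0):
    """Classify attention from blink frequency in a sliding window.

    `blink_times` holds blink onset timestamps (seconds) observed
    within the last `window` seconds. Very low rates are associated
    with high concentration, very high rates with drowsiness.
    """
    rate = len(blink_times) * 60.0 / window  # blinks per minute
    if rate < 5:
        return "high concentration"
    if rate > 25:
        return "possible drowsiness"
    return "normal"
```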
Along with pupil size, eye tracking data may also comprise a measurement of the distance to the content or image and the relative position of the observer. Leaning forwards or backwards in front of a remote content is tracked directly and can reflect approach-avoidance behavior.
The feedback provided if the calculated inspection quality parameter does not meet a set inspection quality level representative of a set inspection quality, can be any of visual, audio or tactile feedback, for example.
The reliability of a clearance is further enhanced, in an embodiment of the present disclosure, if both the determined clearance profile meets the set clearance profile and the determined inspection quality parameter meets a set inspection quality parameter representative of a set inspection quality.
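By way of illustration only, this combined criterion may be sketched as follows in Python; the score names and the 0..1 thresholds are assumptions of this sketch, not values prescribed by the disclosure.

```python
def clear_content(profile_score, quality_score,
                  profile_threshold=0.8, quality_threshold=0.7):
    """Clear the content only when BOTH the determined clearance
    profile and the determined inspection quality parameter meet
    their respective set levels, as in the combined-criterion
    embodiment. All values here are illustrative 0..1 scores."""
    return profile_score >= profile_threshold and quality_score >= quality_threshold
```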
In addition to the above-mentioned embodiment, in a further embodiment the inspection quality parameter is based on the collecting of at least one of physiological data, including electroencephalography data, and mental data, including at least one of facial expression data, oral expression data and body movements data of the observer while performing the visual clearance.
In a second aspect, the invention provides an apparatus for supporting an observer for visual clearance of physical content to be inspected, such as container content, the apparatus comprising a computer and data input/output equipment, the computer being communicatively connected or connectable to a database, the data input equipment comprising an eye tracking device arranged for providing eye tracking data of the observer performing the visual clearance by viewing the content, the computer being arranged for performing the computer implemented steps of the method disclosed above.
In a further embodiment, the data input/output equipment of the apparatus further comprises at least one sensor for collecting at least one of physiological data, including electroencephalography data, and mental data, including at least one of facial expression data, oral expression data and body movements data of the observer while performing the visual clearance.
The present disclosure, in particular, provides an apparatus for container content scanning, comprising an image generator for generating at least one image of container content to be inspected, wherein the data output equipment comprises a display arranged for displaying an image provided by the image generator for viewing the displayed image of the container content by the observer performing the visual clearance.
In a third aspect there is provided a computer program product downloadable from a communication network and/or stored on a computer-readable and/or processor-executable medium, the computer program product comprising program code instructions to cause a computer to carry out the computer implemented steps of the method according to the present disclosure.
The above-mentioned and other features and advantages of the present disclosure are illustrated in the following description with reference to the enclosed drawings, which are provided by way of illustration only and which do not limit the present invention.
Brief Description of the Drawings
Fig. 1 shows, in a schematic and illustrative manner, an embodiment of an apparatus for supporting visual clearance of physical content.
Fig. 2 shows, in a schematic and illustrative manner, another embodiment of an apparatus for supporting visual clearance of physical content.
Figs. 3, 4 and 5 show typical examples of images for visual inspection of physical content, such as screening of a handbag at an airport security check, a truck for transporting cargo, and a scan of human lungs.
Figs. 6 and 7 show flow chart diagrams of embodiments of the method according to the present disclosure.
Fig. 8 shows, in a schematic and illustrative manner, a luggage or baggage inspection apparatus such as typically used at airport security checks, implementing the teachings of the present disclosure.
Detailed Description
It is noted that in the description of the figures, same reference numerals refer to the same or similar components performing a same or an essentially similar function.
In general, two types of devices for eye tracking are distinguished, the so-called screen-based eye trackers and eye tracking glasses.
Screen-based type eye trackers, or desktop or stationary eye trackers comprise a monitor or other electronic display device at which an image to be inspected is displayed, such as an image of content to be inspected. At or near the screen eye tracking capturing equipment is positioned, such as one or more digital cameras, for remote eye tracking of a user or an observer that views the image displayed at the screen.
Eye tracking glasses type eye trackers are devices that are positioned close to the eyes of the observer, usually head-mounted or mounted onto an eyeglass frame. Eye tracking glasses may be used for inspecting content not necessarily displayed on a screen, such as for inspecting hard copy documents, for example.
Although eye tracking glasses allow observers to move freely when performing a task, compared to screen-based eye trackers providing limited freedom for the observer to move, eye tracking glasses may shift in front of the eyes during data capturing, in particular when they are not mounted correctly.
Eye-tracking glasses technology may be combined with so-called smart glasses or virtual reality glasses or video headsets, by which an image of content to be inspected is displayed in front of the eyes of an observer. Like screen-based eye trackers, smart glasses eye trackers allow for providing visual feedback or notifications to the user or observer during the inspection, such as by highlighting the image in the form of hatching, arrowing, encircling, filtering, contrast/brightness changes, filtering out of surrounding image content, etc. to support the inspection task and providing written instructions, for example. In case the inspection task is not sufficiently performed, the display of the image may be blocked, for example.
Figure 1 schematically illustrates an embodiment of an apparatus or assembly 10 for supporting visual clearance of physical content to be inspected, using screen-based eye tracking technology. The apparatus 10 essentially comprises a computer or processor 12 to which an output device connects, such as a screen or monitor 11, and an input device, such as a digital camera or eye tracking capturing module 13. The screen or monitor 11 optionally may comprise an audio output device 14, such as a loudspeaker, and an audio input device 15, such as a microphone.
The computer or processor 12 may access at least one database, such as an internal or local data base or local and working memory 16 and/or an external or remote database 18 communicatively connected or connectable to the computer or processor 12 via a data communication network 17, such as the internet. In use, a user or observer 20 views 21 an image to be inspected that is displayed at the screen or monitor or display 11 while the eyes of the observer 20 are tracked 19 by the digital camera or eye tracking capturing module 13.
Reference numeral 22 schematically denotes data input/output equipment of the apparatus 10 comprised of at least one sensor for collecting observer performance data comprising at least one of physiological data, including electroencephalography data of the observer, while performing the visual clearance. The sensor(s) 22 connect to the computer 12 by a wired link 34.
Eye tracking data and sensor data collected may be stored in the database 16, 18 for use by the computer 12.
Figure 2 schematically illustrates an apparatus or assembly 23 for supporting visual clearance of physical content to be inspected, comprising eye tracking glasses technology. In the embodiment shown, the user or observer 24 wears an eye glasses device 25 comprised of a frame 28 at which a digital camera or other eye tracking capturing module 27 is fixed.
In use, the user or observer 24 views 21 a document 30 while eye tracking data 29 captured by the module 27 are transmitted by a wired or wireless transmission link 31 to a computer or processor 12, as explained above with reference to Figure 1. The frame 28, optionally, may comprise a loudspeaker 32 and microphone 33 audio headset, communicatively connected to the computer or processor 12 via the transmission link 31, for example.
In the embodiment shown, the sensor(s) 22 connect by a wireless link 35 to the computer 12. Those skilled in the art will appreciate that in both the embodiments of Figure 1 and Figure 2 the sensor(s) 22 may connect to the computer 12 by either one of a wired 34 or wireless data communication link 35.
In the case of virtual reality glasses or video headsets, the frame 28 comprises a screen or display (not shown) at which an image of the physical content to be inspected is displayed.
Eye tracking devices as such are readily known and described in the prior art, such that a further description thereof may be omitted here.
Figure 3 shows a typical example of an X-ray image, available from Wikimedia Commons, obtained with a device for visual inspection of container content such as suitcases, hand bags or hand luggage and parcels for transportation by airplanes, trains and buses, and inspection at entrance or access to events, social gatherings, meetings, buildings and/or other security or screening checks, for example.
As can be seen from the image, objects 41 - 44 in the container or bag 40, and the container or bag 40 itself, are depicted by their outer or circumferential contour. Separate parts or items inside an object, such as the electronic equipment 43 are also displayed by their contour. In practice, these objects may be enhanced by a respective different colouring. The objects 41 - 44 inside the container 40 to be inspected, have to be recognized by Transportation Safety Officers, TSOs, and special security personnel or agents by their shape or contour and, if available, colour and colour differences, to identify threat items or potential threat items.
Figure 4 shows a typical example of an X-ray image, available from Evergreen Shipping Agency (Netherlands) B.V., obtained of a truck 45 and a trailer 46 containing cargo 47, 48 that is normally not visible from the outside, because the cargo 47, 48 is surrounded by a cover. From the image, well trained border security and custom officials may check whether the contents match the cargo description, and whether illegal cargo can be found, for example in hollow spaces in the chassis 49 of the trailer and the truck or between the cargo items 47, 48.
Figure 5 shows a typical X-ray scan, available from Wikimedia Commons, of the human chest and lungs, from which a trained physician or radiologist has to detect, by visual inspection of the image 50, physical content like a tumor or abscess 51, for example.
In the context of the present disclosure, the term physical content is used to identify real physical objects in the most common meaning thereof, different from virtual or artificial objects.
In all these inspection examples, by viewing an image of the content, the observer must determine whether the content is believed free from strange, unusual objects or deviations from a normal or healthy state, or matches a particular description, or the like, as a result of which the physical content is declared "cleared" by the observer and may pass. The term 'clearance' in the present disclosure is to be interpreted as any state of the content that matches a predetermined condition.
Although the examples in Figures 3 - 5 show two-dimensional, 2D, images, in practice three-dimensional, 3D, images may be provided for inspection.
In accordance with the present disclosure, eye tracking data of an observer that is performing a visual content inspection are captured by a computer controlled apparatus 10, 23 as illustrated in Figures 1 and 2, respectively, comprising an eye tracking device such as a screen-based eye tracking device or video headset, in the case of viewing images of the content to be inspected, or an eye glasses based eye tracking device when either one of an image or a real physical object is to be inspected, the latter for example in the case of inspecting original paintings, documents and passport control.
An example of the method according to the present disclosure is illustrated by the flow chart diagram 60 shown in Figure 6. In the flow chart, the steps proceed from the top to the bottom of the drawing. Other flow directions are indicated by a respective arrow. The illustrated method is implemented and controlled by the computer 12, running a data processing algorithm, suitably programmed by program code instructions that may be stored in the database 16 or loaded therein as a computer program product from a computer readable medium, such as, but not limited to, a memory stick, hard disk, solid-state drive or other non-transitory medium, or as a data signal downloaded from a data communication network, for example.
In a first step, content to be inspected is made available for inspection by a human observer, i.e. block 61 "Present content". While the observer views the content, eye tracking data are obtained, i.e. block 62 "Capture eye tracking data", which are real-time processed by the computer 12 for determining a clearance profile, i.e. block 63 "Determining clearance profile".
It will be appreciated that the clearance profile is determined by the computer 12 from a particular amount of captured eye tracking data in correlation with previously established expert eye tracking data.
The amount of expert eye tracking data and the processing algorithm used in determining the clearance profile are content particular, such as a processing algorithm and expert data pertaining to container content inspection for safety purposes, as illustrated in Figure 3, or a processing algorithm and expert data pertaining to medical content inspection, as illustrated by Figure 5, for forgery content inspection, or a processing algorithm and expert data relating to cargo inspection, as illustrated by Figure 4, for example.
In decision block 64, i.e. "Meets set profile?", the processing algorithm determines whether the clearance profile established in block 63 meets a set or pre-set clearance profile. In the negative, i.e. decision "No" of block 64, if the clearance profile determined and/or the processed eye tracking data conclude to a suspicious content, such as container content forming a threat, i.e. decision block 65 "Suspicious?" answer "Yes", the content may be directly and automatically declared not-cleared by the computer 12, i.e. block 69 "Content not-cleared". If, dependent on the particularities of the determined clearance profile, the computer does not yet directly declare the content not-cleared, i.e. decision "No" of block 65, the computer may decide whether it is appropriate to provide support to the observer, i.e. decision block 66 "Notification?".
In the affirmative, i.e. decision "Yes" of block 66, the viewing of the observer is supported by one or several notifications or annotations, i.e. block 67, "Notification support". In the negative, i.e. decision "No" of block 66, the capturing of eye tracking data may continue.
In the embodiment illustrated by the flow chart 60, a time window is introduced, such that clearance of the content is calculated from eye tracking data collected during a set time window starting from presentation of the content in block 61. In decision block 68, i.e. "Time window lapsed?", it is determined whether the time window set by the computer has lapsed. In the negative, i.e. decision "No" of block 68, the processed eye tracking data, and hence the determination of the clearance profile, continue to accumulate. If the time window has lapsed, i.e. decision "Yes" of block 68, and the content is not already cleared, the content is declared not-cleared, i.e. block 69.
In the case of content that is not-cleared, further measures may be required, such as a physical inspection of container content, or inspection by a further observer, for example. In all other cases, the content is automatically cleared by the computer 12, merely from processing the captured eye tracking data in relation to the relevant expert eye tracking data, i.e. block 70 "Content cleared".
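By way of illustration only, the control flow of blocks 61 - 70 may be paraphrased as follows in Python; `capture` and `analyse` are hypothetical stand-ins for the eye tracking interface and the expert-data correlation algorithm, respectively, and are not part of the disclosure.

```python
import time

def inspect(capture, analyse, time_window):
    """Control loop paraphrasing blocks 61 - 70 of Figure 6.

    `capture()` returns a batch of eye tracking samples; `analyse()`
    returns a dict with the accumulated verdict derived from the
    correlation with expert eye tracking data.
    """
    start = time.monotonic()
    samples = []
    while time.monotonic() - start < time_window:        # block 68
        samples.extend(capture())                        # block 62
        result = analyse(samples)                        # blocks 63/64
        if result["suspicious"]:                         # block 65
            return "not-cleared"                         # block 69
        if result["meets_profile"]:
            return "cleared"                             # block 70
        if result.get("notify"):                         # block 66
            print("notification:", result["notify"])     # block 67
    return "not-cleared"                                 # window lapsed, block 69
```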
The automatic computer controlled clearance of physical content from the captured eye tracking data processing in correlation with expert data, allows for an improved, i.e. shortened, inspection completion time up to clearance of the content. This is, in particular, advantageous in the case of content inspection for security or screening purposes, such as at airport safety and security inspection desks and entrance desks at events, for example, where large quantities of content have to be inspected with a risk of waiting queues building up.
It will be appreciated that also in the case of an automatic not-cleared decision by the computer, i.e. block 69 in Figure 6, the inspection process may be advanced, as the computer, from the captured eye tracking data and the expert data, may already quickly after start of the viewing decide to reject the content, such that further measures, such as a physical inspection, can be initiated without delay.
Even in the case of notification support, i.e. block 67 in Figure 6, the inspection completion time until computer clearance of the content can be reduced compared to clearance of the content by the observer without computer support. This is because the notification or annotation can be provided quickly after start of viewing the content. By this fast feedback, the viewing of the content by the observer may be supported such that, in the case of non-suspicious content, a clearance or a non-clearance may be provided, based on the expert data available.
Notification or annotation support 67 may comprise one or more of highlighting the image at the display 11, for example, such as but not limited to hatching, arrowing, encircling, filtering, contrast/brightness changes, filtering out of surrounding image content, etc. and by providing written instructions, for example. By way of example, in Figure 3 object 55 is annotated by encircling thereof by a dashed circle 56. This annotation may be shown for a short time, for example, just long enough to control the viewing of the observer towards this object 55, without impeding the view thereof or the view at surrounding or neighbouring objects.
In the case of viewing the object itself, such as illustrated in Figure 2, audible notification support may be provided, through the loudspeaker 32, for example, and/or by a light or laser pointer 36, controlled by the computer 12, for example.
To facilitate further measures in case of a non-clearance, support may also be provided in the form of the notifications and/or annotations as disclosed above.
Those skilled in the art will appreciate that certain steps and decisions as shown in the flow chart diagram may be interchanged or differently positioned in the processing flow. For example, decision block 65 may also be positioned between the blocks 63 and 64.
The time window mentioned above with reference to block 68 may be a fixed time window or dynamically set or adapted by the computer 12. This time window provides an effective means for taking the viewing behaviour of the observer into account. The level of experience of the observer, the size of the image to be inspected and the image quality, determined from a feature extraction analysis of the content, for example, may influence the way the image is viewed. Poor image quality, for example, may prolong the viewing time before a decision can be made, or may involve eye movements that are valuated differently compared to a bright, fully contrasted image, even though the content as such may be cleared.
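By way of illustration only, a dynamically set time window may be computed as follows in Python; the normalised input scores and the scaling factors are assumptions of this sketch, not values prescribed by the disclosure.

```python
def set_time_window(base_seconds, image_quality, image_area_ratio, experience_level):
    """Adapt the inspection time window used in block 68.

    All inputs are hypothetical normalised scores: `image_quality` in
    0..1 (1 = bright, fully contrasted), `image_area_ratio` relative
    to a reference image size, `experience_level` in 0..1. Poor image
    quality, larger images and less experienced observers all
    lengthen the window.
    """
    window = base_seconds
    window *= 2.0 - image_quality         # poor quality up to doubles the window
    window *= max(image_area_ratio, 1.0)  # larger images get proportionally more time
    window *= 1.5 - 0.5 * experience_level  # novices get up to 50% extra time
    return window
```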
As mentioned before, an observer may get distracted from viewing by events in his or her environment, by becoming fatigued, or for other reasons that lead to a lack of alertness which, eventually, may have a negative impact on the quality of the inspection.
Reference is made to the flow chart diagram 80 shown in Figure 7, illustrating a further embodiment of the present disclosure, implemented and controlled by the computer 12, running a data processing algorithm taking performance data of the observer into account. In block 81 "Present content", the content to be inspected is presented to the observer 20, 24 (see Figures 1 and 2 and block 61). In addition to eye tracking data of the observer, in block 82 "Capture eye tracking data and performance data", observer performance data are collected, provided by input/output equipment connected to the computer 12, such as a sensor or sensors 22 arranged for collecting at least one of physiological data and electroencephalography data, but also facial expression data of the observer 20, 24 captured by the module 13 or 29, or a separate camera module, such as a video camera module (not shown) and, for example, oral expression data of the observer 20, 24 registered by the microphone 15, 33 and body movements of the observer 20, 24 while performing the visual clearance.
From the cumulative real-time processing of the captured eye tracking data and performance data, both the clearance profile and an inspection quality parameter are calculated, i.e. block 83 "Determine clearance profile and inspection quality parameter". The inspection quality parameter, among others, is representative of the mental and/or physical state of the observer 20, 24 and provides a reliable indicator of the performance of the observer 20, 24 when viewing content 30 or an image of the content at a display or screen or monitor 11.
The algorithm determines in decision block 84 "Meets set profiles/parameters?" whether the clearance profile and/or the inspection quality parameter determined in block 83 meet a set or pre-set clearance profile and quality inspection parameter. In the negative, i.e. decision "No" of block 84, if the clearance profile determined and/or the processed eye tracking data and/or the inspection quality parameter conclude to a suspicious result, either suspicious content and/or a questionable performance by the observer, i.e. decision block 85 "Suspicious?" answer "Yes", the content may be directly determined, by the computer 12, as being not-cleared and a notification may be provided indicating a possible reason or pointing to an object that requires further treatment/inspection, for example, i.e. block 89 "Content not-cleared/notification". If, dependent on the particularities of the determined clearance profile/inspection quality parameter, the computer does not yet directly declare the content not-cleared, i.e. decision "No" of block 85, the computer may decide whether it is appropriate to provide support to the observer, i.e. decision block 86 "Notification?".
In the affirmative, i.e. decision "Yes" of block 86, the viewing of the observer is supported by one or several notifications or annotations and/or performance feedback, i.e. block 87, "Notification support/performance feedback". In the negative, i.e. decision "No" of block 86, the capturing of eye tracking data may continue, dependent on whether a time window has lapsed, i.e. decision block 88 "Time window lapsed?", in the same manner as explained above with reference to Figure 6.
If the determined inspection quality parameter does not meet a set inspection quality level representative of a required minimum inspection quality, for example, the computer 12 may provide audible or visible feedback or information to the observer 20, 24 about his/her performance by any of the output means disclosed and available with the apparatus 10, 23. Eventually, if the performance gives rise thereto, viewing of the content by the observer 20, 24 may be prohibited, for example.
In a further embodiment, for benchmarking and for use of the captured data and clearance results for improving the algorithms used, i.e. a self-learning facility of the present disclosure, the computer calculates a figure of merit from the determined clearance profile/inspection quality parameters and said set profile/set inspection quality parameters, as shown in Figure 7 by the dashed block 97 "Figure of merit".
The figure of merit may also be used for determining whether to clear the content. In that case an additional decision step may be implemented in the flow chart of Figure 7 between the blocks 90 and 91, or the figure of merit may be calculated between the blocks 83 and 84 and may be used in the decision by block 84, for example.
Figure 8 shows a typical application of the present screening method in a luggage or baggage inspection or screening apparatus 100. The apparatus 100 comprises at one side an entrance 101 for receiving luggage or parcels, the content of which has to be inspected and at another opposite side an exit 102 at which luggage that is inspected leaves the apparatus 100. The luggage is typically transported through the apparatus 100 by a transport belt 103. The belt 103 is automatically stopped when a bag or other container has entered the apparatus 100 for inspection.
At this stage an image, typically an X-ray image, is generated from the content of the container inside the apparatus 100 by an image generator 107, positioned inside the apparatus 100. At an operator side of the apparatus 100, a screen or monitor or display 104 and a control console 105 are positioned, for use by an operator or observer performing the inspection. For capturing eye tracking data of the observer, an eye tracking module 108 is provided in the vicinity of the display 104.
The apparatus 100, including the belt 103, image generator 107 and display 104, is operated under control of a computer or processor 106 positioned inside the apparatus 100. The setup of the computer 106 and display 104 is equivalent to the equipment shown in Figure 1, i.e. computer 12, databases 16, 18, data communication network 17, eye tracking module 13, and the input/output devices 14, 15 and 22, although the latter are not explicitly shown in Figure 8 as same may be omitted in the simplest embodiment of the present disclosure.
The apparatus 100, in practice, may form part of a security or screening system, involving further screening devices.
For real-time processing of the captured eye tracking data of the observer and/or the observer performance data, several processing algorithms known in practice may be used, determining one or a plurality of eye fixations, heat maps, areas of interest, time spent, fixation sequences, time-to-first-fixation, saccades, smooth pursuit movements, vergence movements, and vestibulo-ocular movements, as elucidated in the Summary part above.
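By way of illustration only, eye fixations may be identified with a dispersion-threshold approach, one of the classical algorithms usable for this processing step; the coordinate format and the default thresholds below are assumptions of this sketch, not taken from the disclosure.

```python
def fixations_idt(gaze, max_dispersion=0.02, min_samples=5):
    """Dispersion-threshold (I-DT style) fixation detection.

    `gaze` is a list of (x, y) gaze points in normalised screen
    coordinates. Returns (start, end_exclusive, centroid) tuples,
    one per detected fixation.
    """
    def dispersion(a, b):
        xs = [p[0] for p in gaze[a:b]]
        ys = [p[1] for p in gaze[a:b]]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i, n = [], 0, len(gaze)
    while i + min_samples <= n:
        j = i + min_samples
        if dispersion(i, j) <= max_dispersion:
            # grow the window while the points stay close together
            while j < n and dispersion(i, j + 1) <= max_dispersion:
                j += 1
            xs = [p[0] for p in gaze[i:j]]
            ys = [p[1] for p in gaze[i:j]]
            fixations.append((i, j, (sum(xs) / (j - i), sum(ys) / (j - i))))
            i = j
        else:
            i += 1
    return fixations
```

Saccades then follow as the gaps between consecutive fixations, from which fixation sequences and time-to-first-fixation can be derived.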
The clearance profile, in an embodiment, comprises an observation pattern determined, by the computer, from the collected and processed eye tracking data and known patterns identified from the expert eye tracking data, whether or not enhanced by collected observer performance data. In another embodiment of the present disclosure, the clearance profile is determined, by the computer, from anomalies identified in the collected eye tracking data compared to the expert eye tracking data and/or unknown patterns identified in the collected and processed eye tracking data, whether or not enhanced by collected observer performance data.
In an embodiment, the clearance profile is determined, by the computer, from a function or model comprising a plurality of expert function or model parameters derived from a collection of observer data gathered from well trained, experienced observers, i.e. experts, while performing a specific visual content inspection task. The data gathered at least comprises eye tracking data and, optionally, observer performance data as elucidated above. Such a function may be established by data analysis experts and/or processor or processing machine controlled using neural network technology, for example.
From the captured eye tracking data and, optionally, collected observer performance data while performing the visual inspection task, function or model parameters in accordance with a task specific function or model are determined by the computer and correlated with the expert function or model parameters. Thereby determining a clearance profile for the purpose of computer controlled clearance of the inspected content in accordance with the present disclosure.
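By way of illustration only, the correlation of observer-derived model parameters with the expert-derived parameters may be realised as a cosine similarity over the parameter vectors; this particular similarity measure is an assumption of this sketch and is not prescribed by the disclosure.

```python
import math

def clearance_profile_score(observer_params, expert_params):
    """Cosine similarity between observer-derived and expert-derived
    task specific function/model parameters, as one simple way to
    realise the described correlation step.

    Both arguments are equal-length numeric lists; the result lies in
    -1..1, with 1 indicating viewing behaviour matching the experts.
    """
    dot = sum(a * b for a, b in zip(observer_params, expert_params))
    na = math.sqrt(sum(a * a for a in observer_params))
    nb = math.sqrt(sum(b * b for b in expert_params))
    return dot / (na * nb)
```

The resulting score could then be compared against a set clearance profile threshold, in the manner of decision block 64.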
Algorithms for calculating an inspection quality parameter from collected performance data of the observer may be developed and applied in a like manner as explained above with respect to the clearance profile, for example.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope thereof.

Claims (16)

1. Computergeïmplementeerde en computergestuurde werkwijze voor het ondersteunen van visuele vrijgave van te inspecteren fysieke inhoud, zoals containerinhoud, waarin de computer is verbonden met een eye-trackinginrichting ingericht voor het verschaffen van eye-trackingdata van een waarnemer welke de visuele vrijgave uitvoert door het bekijken van de inhoud, welke computer de stappen uitvoert van het: verzamelen van eye-trackingdata van de eye-trackinginrichting terwijl de waarnemer de inhoud bekijkt, real-time verwerken van de verzamelde eye-trackingdata voor het bepalen van een vrijgaveprofiel gebaseerd op expert eye-trackingdata van visuele inhoudsinspectie, zoals container-inhoudsinspectie, en vrijgeven van de inhoud indien het bepaalde vrijgaveprofiel voldoet aan een ingesteld vrijgaveprofiel.A computer-implemented and computer-controlled method for supporting visual release of physical content to be inspected, such as container content, wherein the computer is connected to an eye-tracking device adapted to provide eye-tracking data from an observer performing the visual release by viewing of the content, which computer performs the steps of: collecting eye-tracking data from the eye-tracking device while the observer is viewing the content, real-time processing of the collected eye-tracking data to determine a release profile based on expert eye-tracking visual data inspection tracking data, such as container content inspection, and release of the content if the particular release profile meets a set release profile. 2. 
Werkwijze volgens conclusie 1, waarin de verzamelde eye-trackingdata ten minste gaze-pointdata omvatten, waarin het vrijgaveprofiel door de computer wordt bepaald uit ten minste één van oogfixaties, heat maps, van belang zijnde gebieden, bestede tijd, fixatiesequenties, tijd-tot-eerste-fixatie, saccades, smooth pursuit movements, vergence movements, en vestibule-ocular movements, geïdentificeerd door de computer in de verzamelde eye-trackingdata.The method of claim 1, wherein the collected eye tracking data comprises at least gaze point data, wherein the release profile is determined by the computer from at least one of eye fixations, heat maps, areas of interest, time spent, fixation sequences, time to first fixation, saccades, smooth pursuit movements, vergence movements, and vestibule-ocular movements, identified by the computer in the collected eye-tracking data. 3. Werkwijze volgens een van de voorgaande conclusies, waarin de te inspecteren inhoud wordt weergegeven op een beeldweergeefinrichting en het vrijgaveprofiel verder door de computer wordt bepaald uit ten minste één van beeldkwaliteitanalyse, beelddimensieanalyse en kenmerkextractieanalyse van de afbeelding van de inhoud.The method according to any of the preceding claims, wherein the content to be inspected is displayed on an image display device and the release profile is further determined by the computer from at least one of image quality analysis, image dimension analysis and feature extraction analysis of the content display. 4. 
4. The method according to any one of the preceding claims, wherein the clearance profile is further determined by the computer from collecting and real-time processing of at least one of physiological data, comprising electroencephalography data, and mental data, comprising at least one of facial expression data, oral expression data and body movement data of the observer while performing the visual clearance.

5. The method according to any one of the preceding claims, wherein determining the clearance profile further comprises the computer establishing the clearance profile for a part of the content to be inspected.
6. The method according to any one of the preceding claims, wherein the content to be inspected is displayed on an image display device, and wherein, while the content has not yet been cleared, the computer further performs the step of annotating the displayed image based on the real-time processing of the collected eye-tracking data, in particular wherein the annotating comprises marking locations on the displayed image.

7. The method according to any one of the preceding claims, wherein clearance of the content is based on data collected by the computer during a set time window starting from the presentation of the image of the content to the observer, wherein the time window is set by the computer from at least one of image quality analysis, image dimension analysis and feature extraction analysis of the image of the content.
8. The method according to any one of the preceding claims, wherein the clearance profile comprises an observation pattern determined by the computer from one or more eye-tracking parameters identified in the collected eye-tracking data and from patterns of these eye-tracking parameters known or identified from the expert eye-tracking data.

9. The method according to any one of the preceding claims, wherein the clearance profile is determined by the computer from at least deviations identified in the collected eye-tracking data compared to the expert eye-tracking data.

10. The method according to any one of the preceding claims, wherein the computer calculates a quality factor from the determined clearance profile and the set profile, the content being cleared if the quality factor lies within a set range.
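Claim 10's quality factor compares the determined profile against the set profile and clears only within a range. One plausible instantiation, shown purely as a hedged sketch (the patent does not define the metric; dwell-time distributions and the L1-based similarity are assumptions of this example), is:

```python
def quality_factor(observed_dwell, expert_dwell):
    """Hypothetical quality factor: similarity (1.0 = identical) of the
    observer's vs the expert's dwell-time distribution over areas of
    interest, via half the L1 distance of the normalized distributions."""
    keys = set(observed_dwell) | set(expert_dwell)

    def normalize(d):
        total = sum(d.values()) or 1.0
        return {k: d.get(k, 0.0) / total for k in keys}

    o, e = normalize(observed_dwell), normalize(expert_dwell)
    l1 = sum(abs(o[k] - e[k]) for k in keys)
    return 1.0 - l1 / 2.0

def clears(observed_dwell, expert_dwell, lo=0.7, hi=1.0):
    """Clear the content if the quality factor lies within a set range."""
    return lo <= quality_factor(observed_dwell, expert_dwell) <= hi
```

The bounds `lo` and `hi` correspond to the claimed "set range"; their values here are arbitrary.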
11. The method according to any one of the preceding claims, wherein an inspection quality parameter is determined by the computer from cumulative real-time processing of collected data, which inspection quality parameter provides an indicator of the performance of the observer in carrying out the visual clearance, and wherein, if the calculated inspection quality parameter does not meet a set inspection quality level representative of a set inspection quality, the computer provides at least feedback information to the observer.

12. The method according to claim 11, wherein the content is cleared if both the determined clearance profile meets the set clearance profile and the determined inspection quality parameter meets a set inspection quality parameter representative of a set inspection quality.
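Claim 11's cumulative inspection quality parameter with observer feedback can be sketched as a running (exponentially weighted) average that triggers feedback when it drops below the set level. All parameter values and the feedback text are hypothetical:

```python
class InspectionQualityMonitor:
    """Sketch of a cumulative inspection quality parameter.

    Each inspected item contributes a per-item quality score in [0, 1];
    the cumulative parameter is an exponential moving average. When it
    falls below the set level, feedback is returned for the observer.
    """

    def __init__(self, level=0.75, alpha=0.2):
        self.level = level    # set inspection quality level (assumed value)
        self.alpha = alpha    # smoothing weight of the newest item
        self.quality = None   # cumulative inspection quality parameter

    def update(self, item_quality):
        """Fold one per-item score in; return feedback text or None."""
        if self.quality is None:
            self.quality = item_quality
        else:
            self.quality = (1 - self.alpha) * self.quality \
                           + self.alpha * item_quality
        if self.quality < self.level:
            return "feedback: inspection quality below the set level"
        return None
```

The moving average makes the parameter cumulative yet responsive to recent performance, matching the claim's real-time character.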
13. An apparatus for supporting an observer in the visual clearance of physical content to be inspected, such as container content, which apparatus comprises a computer and data input/output equipment, which computer is communicatively connected or connectable to a database, which data input equipment comprises an eye-tracking device arranged to provide eye-tracking data of the observer while performing the visual clearance by viewing the content, which computer is arranged to perform the computer-implemented steps according to any one of claims 1-12.

14. The apparatus according to claim 13, wherein the data input/output equipment further comprises at least one sensor for collecting at least one of physiological data, comprising electroencephalography data, and mental data, comprising at least one of facial expression data, oral expression data and body movement data of the observer while performing the visual clearance.
15. The apparatus according to claim 13 or 14, further comprising an image generator for generating at least one image of container content to be inspected, wherein the data output equipment comprises a display device arranged to display an image provided by the image generator for viewing of the displayed image of the container content by the observer performing the visual clearance.

16. A computer program product loadable from a communication network and/or stored on a computer-readable and/or computer-processable medium, which computer program product comprises program code instructions for causing a computer to perform the computer-implemented steps according to any one of claims 1-12.
NL2019927A 2017-11-16 2017-11-16 A computer controlled method of and apparatus and computer program product for supporting visual clearance of physical content. NL2019927B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
NL2019927A NL2019927B1 (en) 2017-11-16 2017-11-16 A computer controlled method of and apparatus and computer program product for supporting visual clearance of physical content.
PCT/NL2018/050770 WO2019098835A1 (en) 2017-11-16 2018-11-16 A computer-controlled method of and apparatus and computer program product for supporting visual clearance of physical content
NL2022016A NL2022016B1 (en) 2017-11-16 2018-11-16 A computer-controlled method of and apparatus and computer program product for supporting visual clearance of physical content.

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NL2019927A NL2019927B1 (en) 2017-11-16 2017-11-16 A computer controlled method of and apparatus and computer program product for supporting visual clearance of physical content.

Publications (1)

Publication Number Publication Date
NL2019927B1 true NL2019927B1 (en) 2019-05-22

Family

ID=60766110

Family Applications (2)

Application Number Title Priority Date Filing Date
NL2019927A NL2019927B1 (en) 2017-11-16 2017-11-16 A computer controlled method of and apparatus and computer program product for supporting visual clearance of physical content.
NL2022016A NL2022016B1 (en) 2017-11-16 2018-11-16 A computer-controlled method of and apparatus and computer program product for supporting visual clearance of physical content.

Family Applications After (1)

Application Number Title Priority Date Filing Date
NL2022016A NL2022016B1 (en) 2017-11-16 2018-11-16 A computer-controlled method of and apparatus and computer program product for supporting visual clearance of physical content.

Country Status (2)

Country Link
NL (2) NL2019927B1 (en)
WO (1) WO2019098835A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7347409B2 (en) * 2020-12-28 2023-09-20 横河電機株式会社 Apparatus, method and program

Citations (1)

Publication number Priority date Publication date Assignee Title
US20050105768A1 (en) * 2001-09-19 2005-05-19 Guang-Zhong Yang Manipulation of image data

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8020993B1 (en) * 2006-01-30 2011-09-20 Fram Evan K Viewing verification systems
ITFI20120072A1 (en) * 2012-04-05 2013-10-06 Sr Labs S R L METHOD AND SYSTEM FOR IMPROVING THE VISUAL EXPLORATION OF AN IMAGE DURING THE SEARCH FOR A TARGET.

Also Published As

Publication number Publication date
NL2022016A (en) 2019-05-20
NL2022016B1 (en) 2021-05-31
WO2019098835A1 (en) 2019-05-23

Similar Documents

Publication Publication Date Title
Amso et al. Bottom-up attention orienting in young children with autism
Tatler et al. Visual correlates of fixation selection: Effects of scale and time
CN101203747B (en) Method and system for intelligent qualitative and quantitative analysis of digital radiography softcopy reading
Itti New eye-tracking techniques may revolutionize mental health screening
US20220308664A1 (en) System and methods for evaluating images and other subjects
CN111343927A (en) Cognitive dysfunction diagnosis device and cognitive dysfunction diagnosis program
CN109983505A (en) Personage's trend recording device, personage's trend recording method and program
Wiebel et al. The speed and accuracy of material recognition in natural images
Graves et al. The role of the human operator in image-based airport security technologies
Long et al. A familiar-size Stroop effect in the absence of basic-level recognition
Golan et al. Association and dissociation between detection and discrimination of objects of expertise: Evidence from visual search
CN109620266A (en) The detection method and system of individual anxiety level
Mayer et al. Do people “pop out”?
Adhikari et al. Video‐based eye tracking for neuropsychiatric assessment
Foulsham et al. If visual saliency predicts search, then why? Evidence from normal and gaze-contingent search tasks in natural scenes
Marsman et al. Linking cortical visual processing to viewing behavior using fMRI
CN109983502A (en) The device and method of quality evaluation for medical images data sets
NL2019927B1 (en) A computer controlled method of and apparatus and computer program product for supporting visual clearance of physical content.
Stainer et al. Examination strategies of experienced and novice clinicians viewing the retina
CN115439920A (en) Consciousness state detection system and equipment based on emotional audio-visual stimulation and facial expression
Balas et al. Face and body emotion recognition depend on different orientation sub-bands
Hosseinkhani et al. Significance of Bottom-up Attributes in Video Saliency Detection Without Cognitive Bias
Momtaz et al. Differences of eye movement pattern in natural and man-made scenes and image categorization with the help of these patterns
Podladchikova et al. Temporal dynamics of fixation duration, saccade amplitude, and viewing trajectory
Singh et al. Cognitive science based inclusive border management system

Legal Events

Date Code Title Description
MM Lapsed because of non-payment of the annual fee

Effective date: 20211201