NL2024816B1 - Detection method for detecting an occlusion of an eye region in an image - Google Patents


Info

Publication number
NL2024816B1
Authority
NL
Netherlands
Prior art keywords
eye
region
parameter
face image
determined
Prior art date
Application number
NL2024816A
Other languages
Dutch (nl)
Inventor
Ali Tauseef
Khan Asif
Original Assignee
20Face B V
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 20Face B V filed Critical 20Face B V
Priority to NL2024816A priority Critical patent/NL2024816B1/en
Application granted granted Critical
Publication of NL2024816B1 publication Critical patent/NL2024816B1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

Computer-implemented detection method (100) for detecting an occlusion of an eye region in an input face image, the detection method comprising the following steps: determining at least one eye landmark parameter (110) being representative for the position of an eye in the input face image; determining a nose landmark parameter (120) being representative for the position of a nose in the input face image; determining at least one under-eye region (130) based on the at least one eye landmark parameter and the nose landmark parameter; determining at least one pixel parameter (140) based on pixel values of pixels within the determined under-eye region; determining whether an occlusion of an eye region is present in the input face image (150) based on the determined pixel parameter.

Description

Detection method for detecting an occlusion of an eye region in an image

Field of Invention

The field of the invention relates to a computer-implemented detection method for detecting an occlusion of an eye region in an input image. Particular embodiments relate to a detection method for detecting an occlusion of an eye region in an input face image comprising a human face, wherein a facial recognition method is to be executed on the input face image.
Background

At present, facial recognition is widely applied in or via a variety of devices and for many different purposes. For example, facial recognition can be used by security systems to prevent crime, identify suspects, and find missing persons, by smart devices as a personalised unlock mechanism, or by social media to identify persons in photographs, etc. Current facial recognition methods require as input an image of a human face with facial features such as a nose, eye(s), eyebrows, etc. However, when (some of) these facial features are (partly) occluded, erroneous results may be generated by the facial recognition methods. This may be the case if the human face in the input image is wearing sunglasses, a mask, or an eye patch, or if (part of) the human face is located behind any other object that occludes some facial features.
Summary

The object of embodiments of the present invention is to provide an efficient and reliable method of detecting an occlusion of an eye region in an input face image. More in particular, the object of embodiments of the present invention is to provide an occlusion detection method to determine whether or not an occlusion is present in the input face image for a facial recognition method.
According to a first aspect of the present invention there is provided a computer-implemented detection method for detecting an occlusion of an eye region in an input face image comprising a human face. The detection method comprises determining at least one eye landmark parameter being representative for the position of an eye in the input face image, and determining a nose landmark parameter being representative for the position of a nose in the input face image. The detection method further comprises determining at least one under-eye region based on the at least one eye landmark parameter and the nose landmark parameter, determining at least one pixel parameter based on pixel values of pixels within the determined under-eye region, and determining whether an occlusion of an eye region is present in the input face image based on the determined pixel parameter. Note that an occlusion of an eye region may comprise any occlusion which covers at least a part of an eye in the input face image. However, such occlusion may also comprise severe occlusions which cover both eyes and optionally also at least part of the eyelashes and/or eyebrows, e.g. sunglasses.
Embodiments of the present invention are based inter alia on the insight that pixel values in the under-eye region(s) in such input face image comprise valuable information which allows to detect the presence of an occlusion of the human face in an efficient and reliable manner.
In an embodiment, the at least one pixel parameter comprises at least one of: a pixel uniformity of the pixels within the determined under-eye region; an average pixel intensity of the pixels within the determined under-eye region; and an average pixel color of the pixels within the determined under-eye region.
It is noted that the determination of pixel parameters such as pixel intensity, pixel uniformity and pixel color does not require a lot of computational resources. Therefore, also given the limited size of the under-eye region(s) as compared to the entire input face image, the corresponding pixel values or averages thereof can be calculated in an efficient manner.
Preferably, determining whether an occlusion of an eye region is present in the input face image based on the determined pixel parameter, comprises comparing the determined pixel parameter with a predefined parameter threshold and determining whether an occlusion of the eye region is present based on a result of said comparing.
In a further embodiment, determining whether an occlusion of an eye region is present in the input face image based on the determined pixel parameter, comprises: determining at least one additional facial region based on the at least one eye landmark parameter and/or the nose landmark parameter. Correspondingly it further comprises determining at least one additional pixel parameter based on pixel values of pixels within the determined additional facial region, comparing the determined pixel parameter with the additional pixel parameter, and determining whether an occlusion of the eye region is present based on a result of said comparing.
Note that by comparing pixel parameters corresponding to the under-eye region(s) with pixel parameters corresponding to another facial region, relative pixel parameter information can be obtained in addition to absolute pixel parameter information. In this manner, additional information is available to determine whether an occlusion of an eye region is present or not. The other facial region may comprise any one of a nose region, a cheek region, a forehead region, a chin region and a nasion region. It is further noted that the at least one additional facial region may be determined before, after, and/or during the determination of the under eye regions.
Preferably the at least one additional facial region is a nasion region. The nasion is the midline bony depression between the eyes where the frontal and two nasal bones meet. The nasion is often also referred to as the bridge of the nose. It is noted that the nasion region, as mentioned above, comprises the nasion and optionally a surrounding area of the human face.
In an embodiment, determining at least one eye landmark parameter comprises determining a first eye landmark parameter being representative for the position of a first eye in the input face image. In addition, or alternatively, determining at least one eye landmark parameter comprises determining a second eye landmark parameter being representative for the position of a second eye in the input face image.
In a further embodiment determining at least one under-eye region comprises determining a first under-eye region based on the first eye landmark parameter and the nose landmark parameter. In addition, or alternatively, determining at least one under-eye region comprises determining a second under-eye region based on the second eye landmark parameter and the nose landmark parameter.
In a further embodiment determining at least one pixel parameter comprises determining at least a first pixel parameter based on pixel values of pixels within the determined first under-eye region. In addition, or alternatively, determining at least one pixel parameter comprises determining at least a second pixel parameter based on pixel values of pixels within the determined second under-eye region.
In a further embodiment, determining whether an occlusion of an eye region is present comprises determining whether an occlusion of an eye region is present in the input face image based on the determined first and/or second pixel parameter.
In an embodiment detecting an occlusion of an eye region comprises detecting whether the human face in the input face image is wearing sunglasses.
In an embodiment the detection method further comprises a step of aligning the input face image based on the at least one eye landmark parameter and the nose landmark parameter. Preferably, the step of determining at least one under-eye region based on the at least one eye landmark parameter and the nose landmark parameter is done in the aligned input face image.
In a further embodiment the step of aligning the input face image comprises rotating the input face image in order to align the positions of the eyes, e.g. horizontally, and/or resizing the input face image such that the positions of the eyes correspond with predetermined positions.
In an embodiment, the at least one under-eye region is determined to be positioned substantially below the position of the respective eye, and to have a width of between 20% and 45% of an inter-pupillary distance, and a height of between 35% and 65% of a nose length distance.
In an embodiment, the nasion region is determined to be positioned substantially between the positions of the eyes, and to have a width of between 20% and 45% of an inter-pupillary distance, and a height of between 35% and 65% of a nose length distance.
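The region geometry described in the two embodiments above can be sketched as follows. This is a minimal illustration, not the patent's definitive construction: the landmark format (pixel coordinates), the chosen fractions (midpoints of the claimed 20-45% and 35-65% ranges) and the vertical offset placing the boxes "substantially below" the eyes are all assumptions.

```python
# Sketch of the under-eye and nasion region geometry; the fractions and
# the vertical offset below the eyes are illustrative assumptions.
import math

def face_regions(left_eye, right_eye, nose_tip,
                 width_frac=0.325, height_frac=0.5):
    """Return under-eye and nasion boxes as (x, y, w, h) tuples."""
    ipd = math.dist(left_eye, right_eye)          # inter-pupillary distance
    eye_mid = ((left_eye[0] + right_eye[0]) / 2,
               (left_eye[1] + right_eye[1]) / 2)
    nose_len = math.dist(eye_mid, nose_tip)       # nose length distance

    w = width_frac * ipd                           # 20%-45% of IPD
    h = height_frac * nose_len                     # 35%-65% of nose length

    def box_below(cx, cy):
        # Box positioned substantially below the given eye center
        # (the 0.2 * nose_len offset is an assumption).
        return (cx - w / 2, cy + 0.2 * nose_len, w, h)

    return {
        "under_eye_left": box_below(*left_eye),
        "under_eye_right": box_below(*right_eye),
        # The nasion region sits between the eyes rather than below one.
        "nasion": (eye_mid[0] - w / 2, eye_mid[1] - h / 2, w, h),
    }
```

For an aligned face with eyes at (100, 100) and (200, 100) and nose tip at (150, 180), the IPD is 100 and the nose length 80, so each box is 32.5 pixels wide and 40 pixels high.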
In a further embodiment, the detection method further comprises a step of informing a face detection module whether an occlusion of an eye region is present in the input face image.
According to yet another aspect of the present invention, there is provided a computer program product comprising a computer-executable program of instructions for performing, when executed on a computer, the steps of the method of any one of the method embodiments described above.
It will be understood by the skilled person that the features and advantages disclosed hereinabove with respect to embodiments of the detection method may also apply, mutatis mutandis, to embodiments of the computer program product.
According to yet another aspect of the present invention, there is provided a digital storage medium encoding a computer-executable program of instructions to perform, when executed on a computer, the steps of the method of any one of the method embodiments described above.
It will be understood by the skilled person that the features and advantages disclosed hereinabove with respect to embodiments of the detection method may also apply, mutatis mutandis, to embodiments of the digital storage medium.
Further aspects of the present invention are described by the dependent claims. The features from the dependent claims, features of any of the independent claims and any features of other dependent claims may be combined as considered appropriate to the person of ordinary skill in the art, and not only in the particular combinations as defined by the claims.
Brief description of the figures

The accompanying drawings are used to illustrate presently preferred non-limiting exemplary embodiments of detection methods of the present invention. The above and other advantages of the features and objects of the present invention will become more apparent and the present invention will be better understood from the following detailed description when read in conjunction with the accompanying drawings, in which:
Figure 1 schematically illustrates a flowchart of a detection method according to an embodiment;
Figure 2 schematically illustrates a flowchart of a detection method according to a further embodiment;
Figure 3 schematically illustrates the process of aligning the input face image;
Figure 4 schematically illustrates a detection method according to an embodiment wherein a facial recognition module is informed about whether or not an occlusion of an eye region, in this case sunglasses, is present in the input face image;
Figure 5 schematically illustrates a flowchart of a detection method according to an exemplary embodiment;
Figure 6 schematically illustrates a flowchart of a detection method according to an alternative exemplary embodiment;
Figure 7 schematically illustrates the determination of under-eye regions and a nasion region in the input face image according to an embodiment;
Figure 8 further illustrates determined under-eye regions and nasion region in an input face image according to an embodiment; and
Figure 9 illustrates a step of determining a pixel uniformity within a nasion region according to an embodiment.
Description of embodiments

Figure 1 illustrates a computer-implemented detection method 100 for detecting an occlusion of an eye region in an input face image comprising a human face. The detection method comprises step 110 of determining at least one eye landmark parameter being representative for the position of an eye in the input face image, and step 120 of determining a nose landmark parameter being representative for the position of a nose in the input face image. Preferably two eye landmark parameters are determined such that the positions of both eyes are determined in addition to the position of the nose, e.g. nose tip, in the input face image. The eye landmark parameters, nose landmark parameters and corresponding eye and nose positions can be determined in various known manners. One possible manner to do so is described in the following paper by K. Zhang et al.: "Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks", IEEE
Signal Processing Letters 23(99), April 2016. However, it is clear to the skilled person that alternative manners to obtain these landmark parameters and positions are available. Which particular manner is used to determine the positions of the eyes and nose in the input face image is not relevant for the scope of the present disclosure.
The detection method further comprises step 130 of determining at least one under-eye region based on the at least one eye landmark parameter and the nose landmark parameter. As described above the eye landmark parameter(s) and nose landmark parameter are representative for the position(s) of the eye(s) and the position of the nose in the input face image, respectively. When these positions are determined, one or two under-eye regions can be determined in the input face image. The determination of an under-eye region will be elaborated further in the text, more in particular in view of figure 7.
When one or two under-eye regions are determined, the detection method further comprises step 140 of determining at least one pixel parameter based on pixel values of pixels within the determined one or two under-eye regions. In other words, the pixel values of pixels within the one or two under-eye regions are analysed based on one or more pixel parameters. Such pixel parameter can be any one of a pixel uniformity, an average pixel intensity, and an average pixel color, or a combination thereof.
Based on the determined pixel parameter it is determined in step 150 whether an occlusion of an eye region is present in the input face image. Preferably, this is done by comparing the determined pixel parameter with a predefined parameter threshold and consequently by determining whether an occlusion of the eye region is present based on the outcome of this comparison. For example, if a pixel uniformity within the under-eye region(s) is below a predefined uniformity parameter, corresponding with an expected lower threshold for pixel uniformity when no occlusions are present, it is determined that an occlusion is present. On the other hand, if a pixel uniformity within the under-eye region(s) is equal to or above said predefined uniformity parameter, it is determined that no occlusion is present. It is clear to the skilled person that similar approaches apply to the other pixel parameters, mutatis mutandis. Alternatively, or in addition, in embodiments where two under-eye regions are determined, the pixel parameters of both under-eye regions can be compared with each other, so as to determine whether an occlusion is present or not.
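The threshold comparison of step 150 can be illustrated as follows. The standard deviation of grayscale intensities as a uniformity measure, the monotone mapping to a uniformity score, and the threshold value are all assumptions for illustration; the patent does not prescribe a particular uniformity metric.

```python
import numpy as np

def occlusion_by_uniformity(region_pixels, min_uniformity=0.04):
    """Decide occlusion from pixel uniformity in an under-eye region.

    region_pixels: 2-D array of grayscale values within the region.
    A lower intensity spread means a more uniform (bare-skin) region;
    a large spread, e.g. from sunglass rims and reflections, lowers
    uniformity. The metric and threshold are illustrative assumptions.
    """
    spread = float(np.std(region_pixels))
    uniformity = 1.0 / (1.0 + spread)   # 1.0 for a perfectly flat region
    # Below the predefined uniformity threshold: occlusion is present.
    return uniformity < min_uniformity
```

A perfectly flat skin patch yields uniformity 1.0 (no occlusion), while a high-contrast patch, e.g. alternating dark rim and bright reflection pixels, falls below the assumed threshold.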
Figure 2 illustrates a detection method 200 wherein, in addition to the one or two under-eye regions and the corresponding pixel parameter(s), at least one additional facial region and corresponding additional pixel parameter are determined. In this manner, additional information is extracted from the input face image in order to determine whether an occlusion is present or not. More in particular, steps 210, 220, 230, 240, and 250 in figure 2 correspond with steps 110, 120,
130, 140 and 150 in figure 1, respectively. In step 230' at least one additional facial region is determined based on the determined eye(s) and/or nose landmark parameters. Consecutively, in step 240' at least one additional pixel parameter is determined based on pixel values of pixels within the determined additional facial region. The additional facial region may comprise any one of a nose region, a cheek region, a forehead region, a chin region and a nasion region, or a combination thereof. In step 245' the pixel parameter as determined in step 240 and the additional pixel parameter as determined in step 240' are compared with each other, and in step 250 it is determined whether or not an occlusion is present, based on the outcome of this comparison. For example, when the pixel parameter and additional pixel parameter correspond with an average pixel color, the average pixel color within the one or two under-eye regions is compared with the average pixel color within the additional facial region. If the average pixel color in both regions is substantially the same, it is determined that no occlusion is present. On the other hand, if the average pixel color in both regions is substantially different, it is determined that an occlusion is present. It is clear to the skilled person that similar approaches apply to the other pixel parameters, mutatis mutandis.
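The comparison of step 245' might be sketched as below for the average-pixel-color case. The color-distance measure (Euclidean distance between per-channel means) and the tolerance are assumptions chosen for illustration, not values from the patent.

```python
import numpy as np

def occlusion_by_region_comparison(under_eye, additional, max_diff=30.0):
    """Compare the average pixel color of an under-eye region with that
    of an additional facial region, e.g. the nasion region.

    under_eye, additional: H x W x 3 arrays of pixel values. If bare skin
    is visible in both regions the averages should be close; sunglasses
    covering one of them make the averages diverge. The distance measure
    and the max_diff tolerance are illustrative assumptions.
    """
    mean_ue = np.mean(under_eye.reshape(-1, 3), axis=0)
    mean_ad = np.mean(additional.reshape(-1, 3), axis=0)
    diff = float(np.linalg.norm(mean_ue - mean_ad))
    return diff > max_diff   # True: occlusion suspected
```

Two patches of similar skin tone compare as "no occlusion", whereas a bright skin patch against a dark sunglass-covered patch exceeds the tolerance.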
Preferably the additional facial region corresponds with a nasion region. The nasion is often also referred to as the bridge of the nose. It is noted that the nasion region, as mentioned above, comprises the nasion and optionally a surrounding area of the human face in the input face image. The determination of the nasion region will be elaborated further in the text, more in particular in view of figure 7. The inventors have found that pixel information obtained from the nasion region, in combination with the pixel information in the under-eye region(s), can be used to quickly and accurately determine the presence of severe face occlusions, such as sunglasses, in the input face image. This will be elaborated further in view of figures 6 to 9.
Preferably, the detection method comprises a step of aligning the input face image based on the determined eye(s) position(s) and nose position. In this manner the under-eye region(s) and/or additional facial region(s) can be determined more efficiently. This will be further elaborated in view of figure 7. An exemplary aligning step is illustrated in figure 3. In the illustrated embodiment, the aligning step comprises rotating the input face image in order to align, in this case horizontally, the positions of the eyes. Optionally, the aligning step may comprise resizing the input face image such that the positions of the eyes correspond with predetermined fixed positions in order to further facilitate computations regarding the to-be-determined under-eye region(s) and additional facial region(s). It is noted that the aligning step is optional, and that the particular application of this step depends on the original orientation of the human face in the input face image. Figure 3 illustrates at the left hand side determined eye and nose landmark parameters, also known as feature points, which correspond with coordinates of the centers of each individual eye and with coordinates of the center of the nose tip, respectively, in the input face image. The eye landmark parameters are represented as (x1,y1) and (x2,y2). The nose landmark parameter is represented as (x3,y3). It is noted that the specifics of determining these landmark parameters are not within the scope of this text. For this purpose, techniques which are well known by the skilled person are employed. These techniques have shown to be able to determine the centers of the eyes even when the eyes are severely occluded, for example by sunglasses. The determined landmark parameters are then processed and transformed in order to align the input face image. The transformed landmark parameters in the aligned input face image are represented as (x1a,y1a), (x2a,y2a) and (x3a,y3a).
The alignment step may involve two substeps: a first substep of rotation, i.e. roll, of the input face image for aligning the centers of the eyes horizontally, and/or a second substep of resizing the input face image to shift the centers of the eyes to fixed locations. The aligning step can for example be executed by performing an affine transformation on the determined landmark parameters. From the aligned input face image the under-eye region(s) and/or additional facial region(s) can be determined more easily.
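The first substep, roll correction, can be sketched on the landmark parameters alone. This is a minimal illustration assuming landmarks as (x, y) pixel coordinates; the second substep (resizing to fixed eye locations) would be a further similarity transform and is omitted here, and applying the same transform to the image pixels is left to a standard warping routine.

```python
import numpy as np

def align_landmarks(left_eye, right_eye, nose_tip):
    """Rotate landmark coordinates so the eye centers lie on a
    horizontal line (roll correction), as in the first alignment substep.
    Returns [(x1a, y1a), (x2a, y2a), (x3a, y3a)].
    """
    p1, p2, p3 = (np.asarray(p, dtype=float)
                  for p in (left_eye, right_eye, nose_tip))
    dx, dy = p2 - p1
    angle = np.arctan2(dy, dx)               # current roll angle
    c, s = np.cos(-angle), np.sin(-angle)    # rotate by the opposite angle
    rot = np.array([[c, -s], [s, c]])        # 2-D rotation matrix
    center = (p1 + p2) / 2                   # rotate about the eye midpoint
    return [tuple(rot @ (p - center) + center) for p in (p1, p2, p3)]
```

For eyes at (100, 100) and (200, 200), a 45-degree roll, both transformed eye centers end up with the same y-coordinate, while the nose landmark is rotated consistently about the eye midpoint.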
Preferably the detection method further comprises a step of informing a facial recognition module whether an occlusion of an eye region is present in the input face image. In other words, the detection method can be used as a pre-processing operation on an input face image to check the quality of the input face image before facial recognition is performed on the input face image. This is illustrated in figure 4 which shows, on the one hand, an input face image 405 where no occlusions are present, and, on the other hand, an input face image 405' where a severe occlusion of the eyes is present, in this case the severe occlusion corresponds with sunglasses. Experience and research have indicated that in practice, sunglasses are amongst the major sources of severe occlusions which lead to erroneous facial recognition results. Therefore, some embodiments and examples are directed to the detection of sunglasses in input face images. However, it is clear to the skilled person that the detection method as currently presented also applies to other forms of severe occlusions. Via line 410 the input face image 405 is presented to the occlusion detection method, which performs the steps as described in view of figures 1 and/or 2. Via line 420 the input face image 405 is presented to a facial recognition module. The occlusion detection method will determine whether or not an occlusion is present in the input face image, and will output the determined result via line 415 to the facial recognition module. In this case no occlusion is detected in the input face image 405 and the facial recognition module is informed accordingly. Via line 410' the input face image 405' is presented to the occlusion detection method, which performs the steps as described in view of figures 1 and/or 2. Via line 420' the input face image 405' is presented to a facial recognition module.
The occlusion detection method will determine whether or not an occlusion is present in the input face image, and will output the determined result via line 415' to the facial recognition module. In this case the presence of an occlusion, i.e. sunglasses, is detected in the input face image 405' and the facial recognition module is informed accordingly.
By informing 415, 415' the facial recognition module about the absence or the presence of sunglasses in the input face image, the facial recognition module can take appropriate actions and make more informed decisions about validating or accepting the input face image. This will result in an increased accuracy of the facial recognition module. Many examples of appropriate actions can be given. As a first example, the facial recognition module can still make an attempt at recognizing the face in the input face image when sunglasses are detected. However, the facial recognition module may assign a relatively low confidence score to the outcome of such an attempt. In this manner, the low level of confidence indicates that facial recognition has been performed with potentially corrupted and/or misleading information in the input face image. As a second example, the facial recognition module can exclude the eye regions from both the input face image and the enrolled face image(s) and attempt to perform facial recognition on the resulting partial face images, wherein the eye regions, for example eyes and eyebrows, have been removed.
The detection of sunglasses on a human face is important as sunglasses are one of the most frequently occurring sources of occlusions on a human face in the context of facial recognition. Sunglasses, which partially occlude the human face and often fully occlude the eyes, make the task of automatic facial recognition very difficult. Existing facial recognition systems or methods applied on either a stand-alone computer or a mobile phone require an image of a human face with facial features, such as a nose, eyes, eyebrows, etc. The presence of sunglasses may occlude these features and may cause a facial recognition result to be erroneous or false. However, by informing the facial recognition module that the eyes and/or eyebrows are occluded by sunglasses, as described above, a facial recognition module may process the input face image differently to make an informed decision and to increase its accuracy. Research has shown that regular vision eyeglasses do not create severe problems for known facial recognition techniques. This is mainly because important features such as eyes, eyelashes, and eyebrows can be seen through the transparent parts, i.e. glasses, of the regular vision eyeglasses. In a similar manner, some sunglasses, i.e. "light color" sunglasses, are substantially transparent and as a consequence do not present a severe occlusion of the eyes. Therefore, regular vision eyeglasses and light color sunglasses do not seem to significantly reduce the accuracy of automatic facial recognition techniques. However, experiments show that "dark color" sunglasses, which significantly occlude the eyes, do reduce the accuracy of automatic facial recognition techniques. The reason is that, because of the severe occlusion of most of the eyes, eyelashes and/or eyebrows, the face recognition module needs to work with only partially available information in the input face image to perform facial recognition.
Figure 5 is a flowchart of a sunglasses detection method 500 according to an exemplary embodiment. This embodiment is based on the idea that the nasion region and under-eye regions of the face have similar skin texture and/or color. Moreover, research has shown that these regions are usually uniform in pixel intensity values. If sunglasses are present on the face, the nasion region and under-eye regions are no longer uniform, as further illustrated in figure 8. The inventors have learned that, in case of the presence of sunglasses, there usually is also a substantial difference between the average intensity or color of the nasion region 830' on the one hand, and the average intensity or color of the under-eye regions 830, on the other hand. Therefore, it has been found that the presence or absence of sunglasses can be detected by determining the nasion region and under-eye regions, and comparing the pixel parameters of the nasion region with the pixel parameters of the under-eye regions.
In figure 5 an input face image with a clearly visible human face is received at the start of the method 500. The eye and nose landmarks, which in this case are coordinates of the centers of the eyes and the center of the nose tip, respectively, are computed in step 510. Based on the computed landmarks or coordinates, the input face image is aligned in step 525. The aligned input face image is used to determine the nasion region and under-eye regions in step 530, 530'. Consecutively the nasion region and under-eye regions are processed and corresponding pixel parameters are extracted during step 540. It is noted that, although determining 530, 530' and/or processing 540 of the nasion region and the under-eye regions are illustrated to be done simultaneously, the nasion region can be determined and/or processed prior to determining and/or processing the under-eye regions. Alternatively, the under-eye regions can be determined and/or processed prior to determining and/or processing the nasion region. A decision on the presence or absence of sunglasses in the input face image is made in step 550, and the facial recognition module is informed about this decision in step 560. A more detailed flowchart of an exemplary embodiment is illustrated in figure 6.
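The decision logic of steps 540 and 550 can be condensed into one sketch operating on already-cropped grayscale regions. It combines the two signals described above: per-region uniformity and the under-eye versus nasion intensity difference. Both thresholds, the use of the standard deviation as the non-uniformity measure, and the OR-combination of the signals are assumptions for illustration.

```python
import numpy as np

def detect_sunglasses(under_eye_left, under_eye_right, nasion,
                      max_spread=25.0, max_mean_diff=30.0):
    """Condensed sketch of the decision in method 500, given cropped
    grayscale regions; thresholds are illustrative assumptions.

    Without sunglasses, following the description:
    1. each region should be roughly uniform in intensity, and
    2. the under-eye and nasion average intensities should be similar.
    """
    regions = [under_eye_left, under_eye_right, nasion]
    # Signal 1: a strongly non-uniform region suggests sunglass rims.
    non_uniform = any(float(np.std(r)) > max_spread for r in regions)
    # Signal 2: under-eye vs nasion average intensity difference.
    under_eye_mean = np.mean([np.mean(under_eye_left),
                              np.mean(under_eye_right)])
    mean_diff = abs(float(under_eye_mean - np.mean(nasion)))
    return non_uniform or mean_diff > max_mean_diff
```

Uniform, similar-intensity skin patches in all three regions produce a "no sunglasses" decision; a dark nasion patch against bright under-eye patches, or strong intensity variation inside any region, triggers a detection.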
In the detection method 600 of figure 6 an input face image, preferably illustrating a single human face, is received in step 601. In step 602 a quality check is executed to determine whether the quality of the input face image is acceptable for further processing. For example, it can be determined that the input face image is acceptable for further processing if the size of the input face image, the size of the face in the input face image, and/or any other image-related quality parameters, if any, are according to predetermined requirements. Obviously, an input face image will be determined as not acceptable if a human face cannot be detected in the input face image.
If the image quality is determined not to be acceptable, the image cannot be processed and the method 600 returns via step 603 to step 601. If the image quality is determined to be acceptable, the input face image is further processed and arrives at step 610, where the input face image is aligned in order to facilitate further processing.
Note that the eye landmark parameters and the nose landmark parameter are determined before the input face image is aligned; this step is not explicitly illustrated in figure 6. For further elaboration on this step, reference is made to figure 3 and the corresponding description.
After the aligning step 610, the nasion region is determined in step 630’ and corresponding pixel parameters are extracted.
In this particular embodiment, the nasion region is determined and processed before the under-eye regions are determined.
However, alternative embodiments exist wherein the nasion region is determined and processed after the under-eye regions are determined, or wherein the nasion region and the under-eye regions are determined and processed at the same time.
In step 631' the uniformity of the nasion region is determined by means of the homogeneity of its intensity values.
Research has shown that the skin color of a human face in this region is almost uniform without sharp changes.
The non-uniformity of intensity values in this region most probably indicates the presence of an external object or strong shadows.
An external object on this part of the face is most likely to be glasses.
A variety of methods can be used to determine the uniformity of the nasion region.
As described earlier, the method 600 further takes into account the color of the glasses.
Compared to regular vision eyeglasses or light-colored sunglasses, dark-colored sunglasses often substantially completely occlude the eyes and the skin surrounding the eyes.
This results in a significant change in the color of the nasion region and the under-eye regions of the face.
The non-uniformity of the nasion region and significant difference between the colors of the nasion region and under-eye regions indicate the presence of sunglasses on the face.
The illustrated embodiment outputs "no detection" of sunglasses in step 650' if the nasion region is determined to be uniform in step 631'. If the nasion region is determined not to be uniform in step 631', the under-eye regions are determined and processed in step 630. Subsequently, the color of the determined under-eye regions is determined and compared to the color of the nasion region in step 631. If there is no significant difference in color between the nasion region and the under-eye regions, the illustrated embodiment also outputs "no detection" of sunglasses in step 650'. However, if there is a significant difference in color between the nasion region and the under-eye regions, sunglasses are detected in step 650. As a last step in figure 6, a face recognition module is informed about the outcome of the sunglasses detection method in step 660. Figure 7 illustrates how the nasion region and/or under-eye regions are determined according to a preferred embodiment.
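The sequential decision flow above can be sketched in code; the following is a minimal illustration, in which `is_uniform` and `color_difference` are assumed placeholder hooks for the uniformity test of step 631' and the color comparison of step 631, not functions named in the patent:

```python
# Minimal sketch of the decision flow in figure 6. `is_uniform` and
# `color_difference` are assumed placeholder hooks; they are not
# functions named in the patent.

def detect_sunglasses(nasion_region, under_eye_regions,
                      is_uniform, color_difference, color_threshold):
    """Return True if sunglasses are detected, False for "no detection"."""
    if is_uniform(nasion_region):          # step 631': uniform -> step 650'
        return False
    # step 631: compare the nasion color against each under-eye region
    diff = max(color_difference(nasion_region, region)
               for region in under_eye_regions)
    return diff > color_threshold          # step 650 (detected) vs 650'

# Hypothetical intensity patches: a nasion region crossed by a dark
# sunglasses frame, and two bright skin-toned under-eye regions.
nasion = [20, 22, 180, 21, 23]
under_eyes = [[180, 182, 181], [179, 183, 180]]
uniform = lambda r: (max(r) - min(r)) < 10
mean_diff = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b))
detected = detect_sunglasses(nasion, under_eyes, uniform, mean_diff, 50)
```

With the hypothetical patches above, the dark, non-uniform nasion region combined with the large color difference yields a detection, while a uniform skin-toned nasion region would exit early with "no detection".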
Although very specific dimensions and calculations are described in view of figure 7, it is clear to the skilled person that (minor) changes to the described dimensions and calculations are within the scope of the claims.
Figure 7 illustrates an exemplary selection of the width, height and location of bounding boxes for the nasion region and the under-eye regions. Although different methods and parameters can be used to determine the nasion region, the determined eye and nose landmarks, i.e. the coordinates of the centers of the eyes and the center of the nose tip, on the preferably aligned image are used to determine these regions. The inter-pupillary distance (ipd), which is the horizontal distance between the centers of the eyes, i.e. (x1a,y1a) and (x2a,y2a), is represented by the following expression: ipd = |x2a - x1a|. According to the described embodiment, one third of the ipd (in pixels) is taken as the width of the nasion region, and the illustrated nose length is taken as the height of the nasion region. The nose length is defined as the vertical distance between the horizontal line joining the centers of both eyes and the center of the nose tip in the aligned image. Mathematically, the nose length is represented as: nose length = |y3a - y1a| or |y3a - y2a|.
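The two landmark-based distances can be computed directly from the aligned coordinates; the following is an illustrative sketch using the notation above, where the function names are assumptions for this sketch rather than identifiers from the patent:

```python
# Illustrative helpers for the two distances, using (x1a, y1a) and
# (x2a, y2a) for the eye centers and (x3a, y3a) for the nose tip in the
# aligned image. Function names are assumptions for this sketch.

def inter_pupillary_distance(x1a, x2a):
    """ipd = |x2a - x1a|: horizontal distance between the eye centers."""
    return abs(x2a - x1a)

def nose_length(y1a, y2a, y3a):
    """Vertical distance from the line joining the eye centers to the nose
    tip; after alignment y1a == y2a, so |y3a - y1a| equals |y3a - y2a|."""
    return abs(y3a - (y1a + y2a) / 2.0)

# Hypothetical aligned landmarks: eyes at (100, 120) and (160, 120),
# nose tip at (130, 180).
ipd = inter_pupillary_distance(100, 160)   # 60 pixels
length = nose_length(120, 120, 180)        # 60 pixels
```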
The nasion bounding box is drawn and centered horizontally, preferably substantially in the middle of the two eyes. The upper border of the nasion region is set to be one third of the ipd above the horizontal line joining the centers of both eyes. The position and dimensions of the nasion region correspond with the rectangle 730' in figure 7, and are selected in this manner to increase the chances of capturing the central part of any kind of glasses. Although the ipd is used in this embodiment to determine the position and dimensions of the nasion region, it is clear to the skilled person that variations are possible wherein a predetermined or estimated distance is used instead of the actual ipd.
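As a sketch of this geometry, the nasion bounding box can be derived from the aligned landmarks as follows; the function name and the (left, top, width, height) tuple convention are illustrative assumptions:

```python
# Sketch of the nasion bounding box geometry described above, returning
# (left, top, width, height) in pixel coordinates with y increasing
# downward. The function name and tuple convention are assumptions.

def nasion_box(x1a, y1a, x2a, y2a, x3a, y3a):
    ipd = abs(x2a - x1a)
    nose_len = abs(y3a - (y1a + y2a) / 2.0)
    width = ipd / 3.0                       # one third of the ipd
    height = nose_len                       # nose length as height
    center_x = (x1a + x2a) / 2.0            # centered between the eyes
    eye_line_y = (y1a + y2a) / 2.0
    top = eye_line_y - ipd / 3.0            # upper border ipd/3 above eye line
    return (center_x - width / 2.0, top, width, height)

# Hypothetical landmarks: eyes at (100, 120) and (160, 120), nose tip at
# (130, 180) -> box (120.0, 100.0, 20.0, 60.0).
box = nasion_box(100, 120, 160, 120, 130, 180)
```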
Under-eye regions are defined as the regions on both sides of the nasion region including some part of the skin below the eyes. Preferably, the (upper) eyelid is not processed as it is mostly occluded by eyebrows and shadows, even without any kind of glasses or other external occlusions being present. Under-eye regions as illustrated include a region from the border of the nasion region to approximately the center of the eye. Thus the width of the under-eye region can be determined as one third of the ipd. This is especially preferred if the face in the image is completely in frontal pose. The upper border of the under-eye region is aligned with the centers of the eyes and the lower border is aligned with the lower border of the nasion region. In the illustrated example, the estimated inter-pupillary distance and nose length are used to determine the nasion and under-eye regions. The described method of determining the nasion region and under-eye regions is robust to different poses of the face in the input face image. As long as the centers of the eyes and the center of the nose tip can be determined, the nasion region and under-eye regions can be determined. It is clear to the skilled person that changing the pitch and yaw angles of the face will change the visibility of the nasion and under-eye regions, and the described method will still automatically determine the visible parts of these regions. In further embodiments the method can also apply some restrictions on the processing of these regions. For example, the method can stop processing an image if the height or width of either the nasion region or an under-eye region is less than a predefined threshold. The predefined thresholds can be selected to correspond with a width and height (in pixels) of the nasion region and/or under-eye regions. A face in completely frontal pose (looking into the camera) will have the maximum possible width and height of both the nasion region and under-eye regions.
An increase or decrease in pitch angle (looking upward or downward) of the face will reduce the height, and an increase or decrease in yaw angle (looking away from the camera) of the face will reduce the width of both the nasion region and/or under-eye regions. Smaller values of width or height may indicate insufficient information for processing the detection of sunglasses. Although the ipd is used in this embodiment to determine the position and dimensions of the under-eye regions, it is clear to the skilled person that variations are possible wherein a predetermined or estimated distance is used instead of the actual ipd. Furthermore, although the nasion region is used in this embodiment to determine the position and dimensions of the under-eye regions, it is clear to the skilled person that variations are possible wherein other predetermined or estimated regions or distances are used instead of the actual nasion region.
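For a frontal, aligned face, the under-eye bounding boxes described above can be sketched as follows; the names and the (left, top, width, height) convention are again illustrative assumptions, and the clipping to the visible face parts for rotated poses is omitted:

```python
# Sketch of the under-eye bounding boxes for a frontal, aligned face:
# each box is ipd/3 wide, runs from the eye center to the adjacent border
# of the nasion region, starts at the eye line and ends level with the
# lower border of the nasion region. Names and box convention are
# assumptions; visible-part clipping for rotated poses is omitted.

def under_eye_boxes(x1a, y1a, x2a, y2a, x3a, y3a):
    ipd = abs(x2a - x1a)
    nose_len = abs(y3a - (y1a + y2a) / 2.0)
    width = ipd / 3.0
    top = (y1a + y2a) / 2.0                  # aligned with the eye centers
    height = nose_len - ipd / 3.0            # down to the nasion lower border
    lx, rx = min(x1a, x2a), max(x1a, x2a)
    left_box = (float(lx), top, width, height)           # left eye -> nasion
    right_box = (float(rx) - width, top, width, height)  # nasion -> right eye
    return left_box, right_box

# Hypothetical landmarks as before give 20x40-pixel boxes starting at
# (100, 120) and (140, 120).
left_box, right_box = under_eye_boxes(100, 120, 160, 120, 130, 180)
```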
In addition to the embodiment of figure 7, it is noted that the at least one under-eye region is determined to be positioned substantially below the position of the respective eye or the center thereof, and to have a width of between 20% and 45%, preferably between 25% and 40%, more preferably between 30% and 35% of an inter-pupillary distance, and a height of between 35% and 65%, preferably between 45% and 55% of a nose length distance. Furthermore it is noted that the nasion region is determined to be positioned substantially between the positions of the eyes, and to have a width of between 20% and 45%, preferably between 25% and 40%, more preferably between 30% and 35% of an inter-pupillary distance, and a height of between 75% and 125%, preferably between 85% and 115%, more preferably between 95% and 105% of a nose length distance.
Some of the described embodiments are based on the idea that the nasion region and under-eye regions share multiple similarities in the absence of sunglasses. They usually share the same pixel intensity and skin color. These regions are also uniform in pixel intensities. The nasion region of the aligned input face image is processed to determine if it is uniform in pixel intensities. The uniformity can be detected by different methods: for example, the detection of edges can show non-uniformity, higher values of the standard deviation of pixel intensities can show non-uniformity, etc. In an exemplary embodiment successive blurs of the nasion region are used to determine the uniformity.
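As one example of the standard-deviation indicator mentioned above, a minimal sketch follows; the threshold value is an illustrative assumption, not a value from the text:

```python
# One simple uniformity indicator: the standard deviation of pixel
# intensities in the region. The threshold is an illustrative assumption.
from statistics import pstdev

def region_is_uniform(intensities, std_threshold=12.0):
    """True if the intensities vary little, as expected for bare skin."""
    return pstdev(intensities) <= std_threshold

# A bare-skin nasion patch has nearly constant intensities ...
skin = [180, 182, 181, 179, 183]
# ... while a patch crossed by a sunglasses frame does not.
occluded = [180, 182, 20, 22, 181, 25]
```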
Experiments show that the use of successive blurs is a robust and accurate method for indicating uniformity. In this method, the nasion region is successively blurred using a Gaussian kernel of size 5x5 multiple times (e.g., ten times). Figure 9 illustrates an input face image with sunglasses, the indicated nasion region 930', the isolated nasion region and a blurred version of the nasion region.
The difference of the original nasion region and its blurred version can then be calculated by different methods.
The sum of squared differences (SSD) is one such method and has been used here.
Higher values of the SSD between the original nasion region and its blurred version indicate that the nasion region is not uniform in intensity.
The reason is that successive blurs significantly change non-uniform regions while leaving less impact on uniform regions.
An empirically selected threshold on SSD can be set to decide if the nasion region is uniform or not.
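The successive-blur test can be sketched in pure Python as follows; the separable binomial approximation of the 5x5 Gaussian kernel and the SSD threshold value are illustrative assumptions (in practice the threshold is selected empirically and would typically be scaled to the number of pixels in the region):

```python
# Pure-Python sketch of the successive-blur uniformity test: blur the
# nasion region repeatedly with a 5x5 Gaussian and compare against the
# original via the sum of squared differences (SSD). The separable
# binomial kernel [1, 4, 6, 4, 1] approximates the 5x5 Gaussian; the
# SSD threshold is an illustrative assumption.

def gaussian_blur_5x5(img):
    """One pass of a separable 5x5 binomial blur with edge clamping."""
    kernel = [1, 4, 6, 4, 1]                  # sums to 16
    norm = float(sum(kernel))
    h, w = len(img), len(img[0])
    clamp = lambda v, hi: max(0, min(hi - 1, v))
    # horizontal pass
    tmp = [[sum(kernel[k] * img[y][clamp(x + k - 2, w)] for k in range(5)) / norm
            for x in range(w)] for y in range(h)]
    # vertical pass
    return [[sum(kernel[k] * tmp[clamp(y + k - 2, h)][x] for k in range(5)) / norm
             for x in range(w)] for y in range(h)]

def nasion_is_uniform(region, n_blurs=10, ssd_threshold=50.0):
    """Blur n_blurs times; a large SSD against the original indicates
    non-uniform intensities (blurring changes non-uniform regions most)."""
    blurred = region
    for _ in range(n_blurs):
        blurred = gaussian_blur_5x5(blurred)
    ssd = sum((a - b) ** 2
              for row, brow in zip(region, blurred)
              for a, b in zip(row, brow))
    return ssd <= ssd_threshold

# A flat skin-toned patch is left unchanged by blurring (SSD = 0) ...
flat = [[180.0] * 8 for _ in range(8)]
# ... while a patch crossed by a dark frame changes substantially.
framed = [[20.0 if y in (3, 4) else 180.0 for _ in range(8)] for y in range(8)]
```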
A person of skill in the art would readily recognize that steps of various above-described methods can be performed by programmed computers.
Herein, some embodiments are also intended to cover program storage devices, e.g., digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, wherein said instructions perform some or all of the steps of said above-described methods.
The program storage devices may be, e.g., digital memories, magnetic storage media such as a magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
The program storage devices may be resident program storage devices or may be removable program storage devices, such as smart cards.
The embodiments are also intended to cover computers programmed to perform said steps of the above-described methods.
The description and drawings merely illustrate the principles of the present invention.
It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the present invention and are included within its scope.
Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the present invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the present invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
The functions of the various elements shown in the figures, including any functional blocks labelled as “processors”, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software.
When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the present invention. Similarly, it will be appreciated that any flowcharts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer.
It should be noted that the above-mentioned embodiments illustrate rather than limit the present invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The usage of the words "first", "second", "third", etc. does not indicate any ordering or priority. These words are to be interpreted as names used for convenience.
In the present invention, expressions such as “comprise”, “include”, “have”, “may comprise”, “may include”, or “may have” indicate existence of corresponding features but do not exclude existence of additional features.
Whilst the principles of the present invention have been set out above in connection with specific embodiments, it is to be understood that this description is merely made by way of example and not as a limitation of the scope of protection which is determined by the appended claims.

Claims (14)

1. A computer-implemented detection method (100) for detecting an occlusion of an eye region in an input face image, the detection method comprising the steps of:
- determining at least one eye positioning parameter (110) representative of the position of an eye in the input face image;
- determining a nose positioning parameter (120) representative of the position of a nose in the input face image;
- determining at least one under-eye region (130) based on the at least one eye positioning parameter and the nose positioning parameter;
- determining at least one pixel parameter (140) based on pixel values of pixels in the determined under-eye region;
- determining whether an occlusion of an eye region is present in the input face image (150) based on the determined pixel parameter;
wherein determining whether an occlusion of an eye region is present in the input face image (150) based on the determined pixel parameter comprises:
- determining at least one additional face region based on the at least one eye positioning parameter and/or the nose positioning parameter;
- determining at least one additional pixel parameter based on pixel values of pixels in the determined additional face region;
- comparing the determined pixel parameter with the additional pixel parameter; and
- determining whether an occlusion of the eye region is present based on the comparison;
and wherein the at least one additional face region is a nasion region.

2. A detection method according to claim 1, wherein the at least one pixel parameter comprises at least one of:
- a pixel uniformity of the pixels in the determined under-eye region;
- an average pixel intensity of the pixels in the determined under-eye region; and
- an average pixel color of the pixels in the determined under-eye region.

3. A detection method according to claim 1 or 2, wherein determining whether an occlusion of an eye region is present in the input face image (150) based on the determined pixel parameter comprises:
- comparing the determined pixel parameter with a predetermined parameter threshold; and
- determining whether an occlusion of the eye region is present based on a result of the comparison.

4. A detection method according to any one of the preceding claims, wherein determining the at least one eye positioning parameter comprises:
- determining a first eye positioning parameter representative of the position of a first eye in the input face image; and/or
- determining a second eye positioning parameter representative of the position of a second eye in the input face image.

5. A detection method according to claim 4, wherein determining at least one under-eye region comprises:
- determining a first under-eye region based on the first eye positioning parameter and the nose positioning parameter; and/or
- determining a second under-eye region based on the second eye positioning parameter and the nose positioning parameter.

6. A detection method according to claim 5, wherein determining at least one pixel parameter comprises:
- determining at least a first pixel parameter based on pixel values of pixels in the determined first under-eye region; and/or
- determining at least a second pixel parameter based on pixel values of pixels in the determined second under-eye region.

7. A detection method according to claim 6, wherein determining whether an occlusion of the eye region is present comprises:
- determining whether an occlusion of the eye region is present in the input face image based on the determined first and/or second pixel parameter.

8. A detection method according to any one of the preceding claims, wherein detecting an occlusion of an eye region comprises detecting whether the human face in the input face image is wearing sunglasses.

9. A detection method according to any one of the preceding claims, further comprising a step of aligning the input face image based on the at least one eye positioning parameter and the nose positioning parameter; and wherein the at least one under-eye region is determined in the aligned input face image.

10. A detection method according to claim 9, wherein aligning the input face image based on the at least one eye positioning parameter and the nose positioning parameter comprises:
- rotating the input face image to align the positions of the eyes, preferably horizontally; and/or
- resizing the input face image such that the positions of the eyes correspond to predetermined fixed positions.

11. A detection method according to any one of the preceding claims, wherein the at least one under-eye region is determined to be positioned substantially below the position of the respective eye, and to have a width of between 20% and 45%, preferably between 25% and 40%, more preferably between 30% and 35% of an inter-pupillary distance, and a height of between 35% and 65%, preferably between 45% and 55% of a nose length distance.

12. A detection method according to any one of the preceding claims and claim 5, wherein the nasion region is determined to be positioned substantially between the positions of the eyes, and to have a width of between 20% and 45%, preferably between 25% and 40%, more preferably between 30% and 35% of an inter-pupillary distance, and a height of between 75% and 125%, preferably between 85% and 115%, more preferably between 95% and 105% of a nose length distance.

13. A detection method according to any one of the preceding claims, further comprising a step of informing a face detection module whether an occlusion of an eye region is present in the input face image.

14. A computer program product comprising a computer-executable program of instructions for performing the steps of the detection method of any one of the preceding claims when executed on a computer.
NL2024816A 2020-02-03 2020-02-03 Detection method for detecting an occlusion of an eye region in an image NL2024816B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
NL2024816A NL2024816B1 (en) 2020-02-03 2020-02-03 Detection method for detecting an occlusion of an eye region in an image


Publications (1)

Publication Number Publication Date
NL2024816B1 true NL2024816B1 (en) 2021-10-05

Family

ID=70614514


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9953211B2 (en) * 2014-10-15 2018-04-24 Nec Corporation Image recognition apparatus, image recognition method and computer-readable medium
US10318797B2 (en) * 2015-11-16 2019-06-11 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20200034657A1 (en) * 2017-07-27 2020-01-30 Tencent Technology (Shenzhen) Company Limited Method and apparatus for occlusion detection on target object, electronic device, and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FERNÁNDEZ ALBERTO ET AL: "Glasses detection on real images based on robust alignment", MACHINE VISION AND APPLICATIONS, SPRINGER VERLAG, DE, vol. 26, no. 4, 31 March 2015 (2015-03-31), pages 519 - 531, XP035501562, ISSN: 0932-8092, [retrieved on 20150331], DOI: 10.1007/S00138-015-0674-1 *
K. ZHANG ET AL.: "Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks", IEEE SIGNAL PROCESSING LETTERS, vol. 23, no. 99, April 2016 (2016-04-01)
