US20150294168A1 - Method and apparatus for an adaptive threshold based object detection - Google Patents
Method and apparatus for an adaptive threshold based object detection
- Publication number
- US20150294168A1 (application Ser. No. 14/249,981)
- Authority
- US
- United States
- Prior art keywords
- image
- score
- mixture
- processor
- parts
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06K9/00838—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/593—Recognising seat occupancy
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2433—Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
- G06K9/46—
- G06K9/4604—
- G06K9/52—
- G06K9/6256—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/30—Transforming light or analogous information into electric information
- H04N5/33—Transforming infrared radiation
- G06K2009/4666—
Definitions
- the present disclosure relates generally to automatic detection of objects, and, more particularly, to a method and an apparatus for an adaptive threshold based object detection.
- HOV high occupancy vehicle
- HOT high occupancy tolling
- Some facial detection methods have attempted to automate detection of people in vehicles for the HOV or HOT lanes. However, due to varying conditions or varying image quality, the currently deployed methods may not be consistent or accurate in detecting people in a vehicle.
- One disclosed feature of the embodiments is a method that receives the image, calculates a score for each one of a plurality of locations in the image, performs a box plot of the score of the each one of the plurality of locations of the image, identifies an outlier score that falls outside of the box plot, determines that a distance ratio of the outlier score is less than a predefined distance ratio and detects the object in a location of the plurality of locations of the image corresponding to the outlier score.
- Another disclosed feature of the embodiments is a non-transitory computer-readable medium having stored thereon a plurality of instructions, the plurality of instructions including instructions which, when executed by a processor, cause the processor to perform an operation that receives the image, calculates a score for each one of a plurality of locations in the image, performs a box plot of the score of the each one of the plurality of locations of the image, identifies an outlier score that falls outside of the box plot, determines that a distance ratio of the outlier score is less than a predefined distance ratio and detects the object in a location of the plurality of locations of the image corresponding to the outlier score.
- Another disclosed feature of the embodiments is an apparatus comprising a processor and a computer readable medium storing a plurality of instructions which, when executed by the processor, cause the processor to perform an operation that receives the image, calculates a score for each one of a plurality of locations in the image, performs a box plot of the score of the each one of the plurality of locations of the image, identifies an outlier score that falls outside of the box plot, determines that a distance ratio of the outlier score is less than a predefined distance ratio and detects the object in a location of the plurality of locations of the image corresponding to the outlier score.
- FIG. 1 illustrates an example system of the present disclosure
- FIG. 2 illustrates an example image of the present disclosure
- FIG. 3 illustrates an example box plot of the present disclosure
- FIG. 4 illustrates an example flowchart of a method for detecting an object in an image
- FIG. 5 illustrates a high-level block diagram of a general-purpose computer suitable for use in performing the functions described herein.
- the present disclosure broadly discloses a method and non-transitory computer-readable medium for detecting an object in an image.
- law enforcement officers must be dispatched to a side of the HOV or HOT lanes to visually examine incoming or passing vehicles.
- Using law enforcement officers for counting people in cars of an HOV/HOT lane may be a poor utilization of the law enforcement officers.
- deploying law enforcement officers to regulate the HOV/HOT lanes is expensive and inefficient.
- One embodiment of the present disclosure provides a method for automatically detecting objects in an image. This information may then be used to automatically determine a number of people in a vehicle to assess whether the vehicle is in violation of the HOV/HOT lane rules or regulations. As a result, law enforcement officers may be more efficiently deployed.
- Previous automated methods relied on a static threshold value for determining whether an object is detected in the image. However, due to varying conditions, image quality and low contrast of images, previous automated methods would have a low accuracy.
- One embodiment of the present disclosure improves the accuracy of automated object detection in images using a distance ratio that is not a fixed scalar threshold value.
- training images may be analyzed to calculate a predefined distance ratio.
- a distance ratio of the outlier scores in subsequently analyzed images may be compared to a predefined distance ratio to determine if the outlier score is an object.
- an outlier score may have a wide range of values from image to image, but may still be considered to be an object based upon a distance relative to a distance of a second highest score in the image, as will be discussed in further detail below.
- FIG. 1 illustrates an example system 100 of the present disclosure.
- the system 100 may include an Internet protocol (IP) network 120 .
- the IP network 120 may include an application server (AS) 122 and a database (DB) 124 .
- the IP network 120 may include other network elements, such as for example, border elements, firewalls, routers, switches, and the like that are not shown for simplicity.
- the IP network 120 may be operated by a law enforcement agency or an agency in charge of monitoring and/or enforcing HOV/HOT rules and regulations.
- the AS 122 may perform various functions disclosed herein and be deployed as a server or a general purpose computer described below in FIG. 5 .
- the DB 124 may store various types of information.
- the DB 124 may store the algorithms used by the AS 122 , the equations discussed below, the images captured, the results of the analysis performed by the AS 122 , and the like.
- the AS 122 may be in communication with one or more image capture devices 106 and 108 .
- the communication may be over a wired or wireless communication path. Although only two image capture devices 106 and 108 are illustrated in FIG. 1 , it should be noted that any number of image capture devices 106 and 108 may be deployed.
- the image capture devices 106 and 108 may be a near infrared (NIR) band image capture device, a camera, a video camera, and the like. In one embodiment, the image capture device 108 may be positioned to capture an image of a front side of a vehicle 110 . In one embodiment, the image capture device 106 may be positioned to capture a side view of the vehicle 110 or a B-Frame image. In one embodiment, the image capture devices 106 and 108 may be coupled to a motion sensor or other triggering device to automatically capture an image or images whenever the vehicle 110 triggers the motion sensor or the triggering device. In another embodiment, the image capture devices 106 and 108 may automatically capture an image or images of the vehicle at an entrance of a HOV/HOT lane 104 .
- NIR near infrared
- a highway 102 may include the HOV/HOT lane 104 to help reduce the number of vehicles 110 , 112 and 114 on a roadway, reduce congestion and reduce pollution by encouraging car pooling.
- the system 100 may be deployed to automatically monitor and regulate the vehicles 110 , 112 and 114 in the HOV/HOT lane 104 to ensure that the vehicles 110 , 112 and 114 have a sufficient number of passengers to qualify for use of the HOV/HOT lane 104 .
- FIG. 2 illustrates an example image 200 captured by the image capture device 108 of a front of the vehicle 110 .
- the image 200 may be analyzed by the AS 122 to determine if an object 212 is detected.
- the object 212 may be a person in the vehicle 110 .
- a plurality of locations 201 - 210 in the image 200 may be analyzed to calculate a score.
- the plurality of locations 201 - 210 may be randomly selected.
- the locations 201 - 210 may be selected such that every portion of the image 200 is analyzed.
- the locations 201 - 210 may be determined based upon one or more landmark points associated with a mixture of the image. The one or more landmark points and the mixture of the image are discussed in further detail below.
- the score for each one of the locations 201 - 210 may be calculated using Equation (1) below:
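- The rendered equation did not survive extraction. Based on the definitions that follow and the cited Zhu and Ramanan formulation, Equation (1) plausibly has the form:

```latex
S(I, L, m) = \mathrm{App}_m(I, L) + \mathrm{Shape}_m(L) + \alpha^m,
\qquad
\mathrm{App}_m(I, L) = \sum_{i \in V_m} w_i^m \cdot \phi(I, l_i),
\qquad
\mathrm{Shape}_m(L) = \sum_{ij \in E_m} a_{ij}\,dx^2 + b_{ij}\,dx + c_{ij}\,dy^2 + d_{ij}\,dy
```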
- S is a score as a function of the image I being analyzed and the one or more landmark points L tuned for a mixture m
- φ(I, l_i) are the HOG (Histogram of Oriented Gradients) features extracted at location l_i
- App_m is a sum of appearance evidence for placing a template w_i^m for a part tuned for the mixture m at a location l_i of the image.
- Shape_m is a score of a mixture-specific spatial arrangement of parts L (dx and dy are the x-axis and y-axis displacements of part i with respect to part j, and the parameters a, b, c and d specify the spatial cost constraints between pairs of parts i and j) or a geometric relationship between the one or more landmark points (e.g., a number of pixels between a corner of an eye and an eyelid), and α^m is a constant for the mixture m.
- V_m represents a pool of parts belonging to the mixture m.
- E_m represents a set of edges between the pool of parts in V_m.
- Equation 1 encodes the elastic deformation and 3D structure of an object for face detection and pose estimation.
- Equation 1 uses a mixture of poses with a shared pool of parts defined at each landmark position.
- Equation 1 then uses global mixtures to model topological changes due to different viewpoints.
- the global mixtures can also be used to capture gross deformation changes for a single viewpoint.
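- As a minimal sketch (not the patent's implementation), the appearance and shape terms described above can be combined as follows; the function and variable names are illustrative assumptions:

```python
import numpy as np

def mixture_score(phi, w, locs, edges, springs, alpha_m):
    """Sketch of the Equation (1) score S = App_m + Shape_m + alpha_m.

    phi[i]  : HOG features extracted at location locs[i], i.e. phi(I, l_i)
    w[i]    : template for part i tuned for mixture m (w_i^m)
    edges   : part pairs (i, j) in E_m
    springs : {(i, j): (a, b, c, d)} spatial cost parameters
    """
    # App_m: appearance evidence for placing each part template.
    app = sum(float(np.dot(w[i], phi[i])) for i in range(len(phi)))
    # Shape_m: quadratic cost on x/y displacements between part pairs.
    shape = 0.0
    for i, j in edges:
        a, b, c, d = springs[(i, j)]
        dx = locs[i][0] - locs[j][0]
        dy = locs[i][1] - locs[j][1]
        shape += a * dx**2 + b * dx + c * dy**2 + d * dy
    return app + shape + alpha_m
```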
- each particular configuration of parts or landmark points L may be defined by Equation (2) below:
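- The rendered Equation (2) is likewise missing from this page; following the cited Zhu and Ramanan formulation and the coordinate definitions below, it plausibly defines a configuration of parts as:

```latex
L = \{\, l_i = (x_i, y_i) : i \in V_m \,\}
```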
- the one or more landmark points may be points in the image that identify a specific portion of the object.
- a landmark point may be a corner of an eye, a curve of an ear, a circular point of a nostril or of a lip, and the like.
- a human face may have between 38 and 68 landmark points.
- Each one of the landmark points may have the collection of parts V located at various coordinates (x_i, y_i) of the ith part of the image.
- the one or more landmarks may vary depending on the mixture of parts that are analyzed. For example, depending on an angle of the face (e.g., a 90 degree face, a 70 degree face, a 50 degree face, and so forth) only a single ear may be seen or a single eye, and so forth.
- thirteen different mixtures may be used for the human face. Each one of the thirteen different mixtures may have its own set of one or more landmark points.
- the one or more landmark points may be obtained for each one of the mixtures based upon a plurality of training images that are analyzed.
- the plurality of training images may be used as part of a supervised learning classifier that includes training images that are marked images (e.g., marked with the known object, the land mark points for a given mixture, and the like).
- further details for Equations 1 and 2 may be found in a study by Zhu and Ramanan entitled “Face Detection, Pose Estimation, and Landmark Localization in the Wild”, 2012, which is incorporated by reference in its entirety.
- the present disclosure may use a distance ratio as a threshold to determine if an object 212 is detected in the image 200 .
- the value of the threshold may be dynamic or different for each image 200 that is analyzed.
- a plurality of different images may each have one or more objects having a score value that fluctuates or varies over a wide range from image to image.
- the distance ratio for the object for each image may be within a small range that is less than a predefined distance ratio.
- the scores of a plurality of different locations on the training images may be used to perform a box plot of the scores for each image and calculate the predefined distance ratio threshold.
- FIG. 3 illustrates one embodiment of a box plot 300 that is performed for the training images and the images that are analyzed to detect an object.
- the box plot 300 may be a plot of all of the scores of an image.
- the box plot 300 may include a median 302 , a 25% quartile score 306 , a 75% quartile score 304 , a lowest score 308 and a second highest score 310 .
- the box plot 300 may also include an outlier score 312 .
- the outlier score 312 may be identified as the outlier score if the score is outside of the box plot. In other words, the outlier score 312 does not fall within the range from the second highest score 310 to the lowest score 308 .
- the outlier score 312 may not be identified as an object if the outlier score 312 is too close to the second highest score 310 .
- multiple outlier scores 312 may be identified.
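- The box plot construction above can be sketched as follows; this is a generic Tukey-style box plot with a 1.5×IQR whisker rule, which is an assumption, since the text does not fix the exact fence used:

```python
import statistics

def box_plot_outliers(scores):
    # Quartiles of the per-location scores; anything beyond the
    # whiskers (1.5 * IQR past the quartiles -- an assumed rule)
    # is treated as an outlier score.
    s = sorted(scores)
    q1, median, q3 = statistics.quantiles(s, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [x for x in s if x < low or x > high]
    return median, outliers
```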
- a predefined distance ratio threshold may be calculated and used, which in essence provides a dynamic threshold value instead of a static threshold value.
- the distance ratio may be calculated based upon a distance 314 of the second highest score 310 to a distance 316 of the outlier score 312 .
- the distance 314 may be measured from the median 302 of the box plot 300 to the second highest score 310 .
- the distance 316 may be measured from the median 302 of the box plot 300 to the outlier score 312 .
- the distance ratio for each one of a plurality of training images may be calculated. Based upon the calculated distance ratio, a predefined distance ratio threshold may be used for subsequently analyzed images to identify an object in an image. In one embodiment, the predefined distance ratio threshold may be approximately 0.6 for NIR images of 750 nanometers (nm) to 1000 nm. If multiple outlier scores 312 are identified and each outlier score 312 is below the distance ratio threshold, then multiple objects may be detected in the image.
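- Applying the predefined distance ratio threshold to each identified outlier can be sketched as follows (the names are illustrative; 0.6 is the NIR value reported above):

```python
def detect_outlier_objects(outlier_scores, second_highest, median, threshold=0.6):
    # An outlier is detected as an object when the ratio of the
    # second-highest score's distance to the outlier's distance
    # (both measured from the median of the box plot) is below
    # the predefined threshold.
    detected = []
    for score in outlier_scores:
        ratio = (second_highest - median) / (score - median)
        if ratio < threshold:
            detected.append(score)
    return detected
```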
- the outlier score 312 may correspond to the score of the location 205 and have a value of -0.20227.
- the second highest score 310 may correspond to the score of the location 210 and have a value of -1.003, and the median score is -1.412.
- the predefined distance ratio threshold may have been calculated to be 0.6.
- the outlier score 312 would be detected as being an object since the distance ratio is less than the predefined ratio of 0.6. In this example, the distance ratio is (-1.003 - (-1.412)) / (-0.20227 - (-1.412)) = 0.409/1.210 ≈ 0.338, and 0.338 < 0.6.
- the object detection may be used to automatically regulate or manage the HOV/HOT lane 104 .
- the total number of people in the vehicle 110 may be calculated by summing the total number of objects detected in the vehicle 110 from the images that are analyzed. If the total number of people is less than a total number of passengers requirement for the HOV/HOT lane 104 , a ticket may be automatically generated and mailed to the registered owner of the vehicle 110 .
- the registered owner of the vehicle may be identified based on a license plate number captured in the image.
- FIG. 4 illustrates a flowchart of a method 400 for detecting an object in an image.
- one or more steps or operations of the method 400 may be performed by the AS 122 or a general-purpose computer as illustrated in FIG. 5 and discussed below.
- the method 400 begins.
- the method 400 analyzes a plurality of training images having a known object to calculate a predefined distance ratio. For example, the distance ratio for each one of a plurality of training images may be calculated. Based upon the calculated distance ratio, a predefined distance ratio threshold may be used for subsequently analyzed images to identify an object in an image. In one embodiment, the predefined distance ratio threshold may be approximately 0.6 for NIR images of 750 nanometers (nm) to 1000 nm.
- the method 400 receives an image.
- an image capture device such as, for example, a near infrared (NIR) band image capture device, a camera, a video camera, and the like, may capture an NIR image, photograph, video and the like.
- the method 400 calculates a score for each one of a plurality of locations in the image.
- Equation 1 described above may be used to calculate the score of each one of the plurality of locations in the image.
- the method 400 performs a box plot of the score of the each one of the plurality of locations of the image. For example, all of the scores that were calculated in step 408 may be tallied or graphed into a box plot, such as the box plot 300 , illustrated in FIG. 3 .
- the method 400 determines if an outlier score is identified.
- an outlier score may be any score that does not fall within the bounds of the box plot. In one embodiment, multiple outlier scores may be identified. If an outlier score is not identified, the method 400 may return to step 406 to receive and analyze another image. However, if an outlier score is identified, the method 400 may proceed to step 414 .
- the method 400 determines if a distance ratio of the outlier score is less than the predetermined distance ratio. For example, a first distance measured from the median of the box plot to the second highest score is compared to a second distance measured from the median of the box plot to the outlier score of the box plot to obtain the distance ratio. If the distance ratio of the outlier score is greater than the predetermined distance ratio the method 400 may return to step 406 to receive and analyze another image. However, if the distance ratio of the outlier score is less than the predetermined distance ratio, the method 400 may proceed to step 416 .
- the method 400 detects the object in a location of the plurality of locations of the image corresponding to the outlier score. For example, if the method 400 is being used to detect a person for automated management of vehicles in HOV/HOT lanes, the object that is detected may be a person. The number of objects detected in the image may be tallied and provided to determine if a vehicle has enough passengers to qualify for use of the HOV/HOT lanes.
- the object detection may be used to automatically regulate or manage the HOV/HOT lanes.
- the total number of people in the vehicle may be calculated by summing the total number of objects detected in the vehicle from the images that are analyzed. If the total number of people is less than a total number of passengers requirement for the HOV/HOT lane, a ticket may be automatically generated and mailed to the registered owner of the vehicle. The registered owner of the vehicle may be identified based on a license plate number captured in the image.
- the method 400 determines if there are any additional images remaining that need to be analyzed. For example, a plurality of images may be analyzed. In one embodiment, the images may be continually received as multiple cars enter the HOV/HOT lanes. If there are additional images, the method 400 returns to step 406 to receive the next image. If there are no additional images, the method 400 proceeds to step 420 . At step 420 , the method 400 ends.
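- Steps 410 through 416 of the method 400 can be combined into a single per-image routine. This sketch abstracts the Equation (1) scoring into a precomputed scores list and assumes a 1.5×IQR fence for the box plot, which the text does not specify:

```python
import statistics

def detect_objects_in_image(scores, threshold=0.6):
    # Step 410: box plot of the per-location scores.
    s = sorted(scores)
    q1, median, q3 = statistics.quantiles(s, n=4)
    fence = q3 + 1.5 * (q3 - q1)  # assumed whisker rule
    outliers = [x for x in s if x > fence]
    if not outliers:
        return []  # step 412: no outlier score, analyze the next image
    # Step 414: compare each outlier's distance ratio to the threshold.
    second_highest = max(x for x in s if x <= fence)
    return [o for o in outliers
            if (second_highest - median) / (o - median) < threshold]
```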
- the embodiments of the present disclosure provide an automated method for detecting objects (e.g., people) that can be used to automatically manage and regulate HOV/HOT lanes.
- law enforcement officers may be used more efficiently rather than being required to manually count passengers in each vehicle that enters the HOV/HOT lanes.
- one or more steps, functions, or operations of the method 400 described above may include a storing, displaying and/or outputting step as required for a particular application.
- any data, records, fields, and/or intermediate results discussed in the methods can be stored, displayed, and/or outputted to another device as required for a particular application.
- steps, functions, or operations in FIG. 4 that recite a determining operation, or involve a decision do not necessarily require that both branches of the determining operation be practiced. In other words, one of the branches of the determining operation can be deemed as an optional step.
- FIG. 5 depicts a high-level block diagram of a general-purpose computer suitable for use in performing the functions described herein.
- the system 500 comprises a processor element 502 (e.g., a SIMD, a CPU, and the like), a memory 504 , e.g., random access memory (RAM) and/or read only memory (ROM), a module 505 for detecting an object in an image, and various input/output devices 506 (e.g., storage devices, including but not limited to, a tape drive, a floppy drive, a hard disk drive or a compact disk drive, a receiver, a transmitter, a speaker, a display, a speech synthesizer, an output device (such as a graphic display, printer, and the like), an output port, and a user input device (such as a keyboard, a keypad, a mouse, and the like)).
- the present disclosure can be implemented in software and/or in a combination of software and hardware, e.g., using application specific integrated circuits (ASIC), a general purpose computer or any other hardware equivalents, e.g., computer readable instructions pertaining to the method(s) discussed above can be used to configure a hardware processor to perform the steps of the above disclosed methods.
- the present module or process 505 for detecting an object in an image can be loaded into memory 504 and executed by processor 502 to implement the functions as discussed above.
- the present method 505 for detecting an object in an image (including associated data structures) of the present disclosure can be stored on a non-transitory (e.g., physical and tangible) computer readable storage medium, e.g., RAM memory, magnetic or optical drive or diskette and the like.
- the hardware processor 502 can be programmed or configured with instructions (e.g., computer readable instructions) to perform the steps, functions, or operations of method 400 .
Abstract
Description
- The present disclosure relates generally to automatic detection of objects, and, more particularly, to a method and an apparatus for an adaptive threshold based object detection.
- Cities are attempting to reduce the number of cars on the road and improve commuting times by creating high occupancy vehicle (HOV) lanes or high occupancy tolling (HOT) lanes. For example, certain highways may have lanes dedicated for cars carrying two or more persons or three or more persons. However, some cars having only a single person may attempt to drive in these lanes creating extra congestion, which defeats the purpose of the HOV/HOT lanes.
- Currently, to enforce traffic rules associated with the HOV/HOT lanes law enforcement officers must be dispatched to a side of the HOV or HOT lanes to visually examine incoming or passing vehicles. Using law enforcement officers for counting people in cars of an HOV/HOT lane may be a poor utilization of the law enforcement officers. In other words, deploying law enforcement officers to regulate the HOV/HOT lanes is expensive and inefficient.
- Some facial detection methods have attempted to automate detection of people in vehicles for the HOV or HOT lanes. However, due to varying conditions or varying image quality, the currently deployed methods may not be consistent or accurate in detecting people in a vehicle.
- According to aspects illustrated herein, there are provided a method, a non-transitory computer readable medium, and an apparatus for detecting an object in an image. One disclosed feature of the embodiments is a method that receives the image, calculates a score for each one of a plurality of locations in the image, performs a box plot of the score of the each one of the plurality of locations of the image, identifies an outlier score that falls outside of the box plot, determines that a distance ratio of the outlier score is less than a predefined distance ratio and detects the object in a location of the plurality of locations of the image corresponding to the outlier score.
- Another disclosed feature of the embodiments is a non-transitory computer-readable medium having stored thereon a plurality of instructions, the plurality of instructions including instructions which, when executed by a processor, cause the processor to perform an operation that receives the image, calculates a score for each one of a plurality of locations in the image, performs a box plot of the score of the each one of the plurality of locations of the image, identifies an outlier score that falls outside of the box plot, determines that a distance ratio of the outlier score is less than a predefined distance ratio and detects the object in a location of the plurality of locations of the image corresponding to the outlier score.
- Another disclosed feature of the embodiments is an apparatus comprising a processor and a computer readable medium storing a plurality of instructions which, when executed by the processor, cause the processor to perform an operation that receives the image, calculates a score for each one of a plurality of locations in the image, performs a box plot of the score of the each one of the plurality of locations of the image, identifies an outlier score that falls outside of the box plot, determines that a distance ratio of the outlier score is less than a predefined distance ratio and detects the object in a location of the plurality of locations of the image corresponding to the outlier score.
- The teaching of the present disclosure can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
- FIG. 1 illustrates an example system of the present disclosure;
- FIG. 2 illustrates an example image of the present disclosure;
- FIG. 3 illustrates an example box plot of the present disclosure;
- FIG. 4 illustrates an example flowchart of a method for detecting an object in an image; and
- FIG. 5 illustrates a high-level block diagram of a general-purpose computer suitable for use in performing the functions described herein.
- To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
- The present disclosure broadly discloses a method and non-transitory computer-readable medium for detecting an object in an image. As discussed above, to enforce traffic rules associated with the HOV/HOT lanes law enforcement officers must be dispatched to a side of the HOV or HOT lanes to visually examine incoming or passing vehicles. Using law enforcement officers for counting people in cars of an HOV/HOT lane may be a poor utilization of the law enforcement officers. In other words, deploying law enforcement officers to regulate the HOV/HOT lanes is expensive and inefficient.
- One embodiment of the present disclosure provides a method for automatically detecting objects in an image. This information may then be used to automatically determine a number of people in a vehicle to assess whether the vehicle is in violation of the HOV/HOT lane rules or regulations. As a result, law enforcement officers may be more efficiently deployed.
- Previous automated methods relied on a static threshold value for determining whether an object is detected in the image. However, due to varying conditions, image quality and low contrast of images, previous automated methods would have a low accuracy.
- One embodiment of the present disclosure improves the accuracy of automated object detection in images using a distance ratio that is not a fixed scalar threshold value. In one embodiment, training images may be analyzed to calculate a predefined distance ratio. Then, a distance ratio of the outlier scores in subsequently analyzed images may be compared to a predefined distance ratio to determine if the outlier score is an object. Thus, an outlier score may have a wide range of values from image to image, but may still be considered to be an object based upon a distance relative to a distance of a second highest score in the image, as will be discussed in further detail below.
-
FIG. 1 illustrates anexample system 100 of the present disclosure. In one embodiment, thesystem 100 may include an Internet protocol (IP)network 120. TheIP network 120 may include an application server (AS) 122 and a database (DB) 124. TheIP network 120 may include other network elements, such as for example, border elements, firewalls, routers, switches, and the like that are not shown for simplicity. In one embodiment, theIP network 120 may be operated by a law enforcement agency or an agency in charge of monitoring and/or enforcing HOV/HOT rules and regulations. - In one embodiment, the AS 122 may perform various functions disclosed herein and be deployed as a server or a general purpose computer described below in
FIG. 5. In one embodiment, the DB 124 may store various types of information. For example, the DB 124 may store the algorithms used by the AS 122, the equations discussed below, the images captured, the results of the analysis performed by the AS 122, and the like. - In one embodiment, the AS 122 may be in communication with one or more
image capture devices 106 and 108. Although two image capture devices 106 and 108 are illustrated in FIG. 1, it should be noted that any number of image capture devices may be deployed. - In one embodiment, the image capture device 108 may be positioned to capture an image of a front side of a vehicle 110. In one embodiment, the image capture device 106 may be positioned to capture a side view of the vehicle 110 or a B-Frame image. In one embodiment, the image capture devices 106 and 108 may be triggered by a motion sensor or a triggering device when the vehicle 110 triggers the motion sensor or the triggering device. In another embodiment, the image capture devices 106 and 108 may continuously capture images of vehicles in the HOV/HOT lane 104. - As discussed above, a
highway 102 may include the HOV/HOT lane 104 to help reduce the number of vehicles on the highway 102. In one embodiment, the system 100 may be deployed to automatically monitor and regulate the vehicles that use the HOV/HOT lane 104 to ensure that the vehicles are qualified to be in the HOV/HOT lane 104. -
FIG. 2 illustrates an example image 200 captured by the image capture device 108 of a front of the vehicle 110. In one embodiment, the image 200 may be analyzed by the AS 122 to determine if an object 212 is detected. In one embodiment, the object 212 may be a person in the vehicle 110. - In one embodiment, a plurality of locations 201-210 in the
image 200 may be analyzed to calculate a score. In one embodiment, the plurality of locations 201-210 may be randomly selected. In one embodiment, the locations 201-210 may be selected such that every portion of the image 200 is analyzed. In another embodiment, the locations 201-210 may be determined based upon one or more landmark points associated with a mixture of the image. The one or more landmark points and the mixture of the image are discussed in further detail below. - In one embodiment, the score for each one of the locations 201-210 may be calculated using Equation (1) below:
-

S(I,L) = App_m(I,L) + Shape_m(L) + α_m, Eq. (1)

where App_m(I,L) = Σ_{i ∈ V_m} w_i^m·φ(I,l_i) and Shape_m(L) = Σ_{(i,j) ∈ E_m} (a·dx² + b·dx + c·dy² + d·dy),

- wherein S is a score as a function of the image being analyzed I and the one or more landmark points L tuned for a mixture m, φ(I,l_i) is the Histogram of Oriented Gradients (HOG) features extracted at location l_i, App_m is a sum of appearance evidence for placing a template w_i^m for a part tuned for the mixture m at a location l_i of the image, Shape_m is a score of a mixture specific spatial arrangement of parts L (dx and dy are the x-axis and y-axis displacements of part i with respect to part j, and the parameters a, b, c and d specify the spatial cost constraints between pairs of parts i and j) or a geometric relationship between the one or more landmark points (e.g., a number of pixels between a corner of an eye to an eyelid), and α_m is a constant for the mixture m. V_m represents a pool of parts belonging to the mixture m. E_m represents a set of edges between the pool of parts in V_m.
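As a concrete illustration, the score of Equation (1) can be sketched as below. The function name and the dictionary-based data layout are illustrative assumptions, not part of the disclosure; in practice the templates w_i^m and spring parameters would be learned as in Zhu and Ramanan.

```python
import numpy as np

def mixture_score(hog_features, templates, springs, alpha_m):
    """Sketch of Equation (1): S = App_m + Shape_m + alpha_m.

    hog_features: part index i -> HOG feature vector phi(I, l_i)
    templates:    part index i -> template weight vector w_i^m
    springs:      edge (i, j) -> (a, b, c, d, dx, dy), the spatial cost
                  parameters and x/y displacement of part i w.r.t. part j
    alpha_m:      scalar bias for mixture m
    """
    # Appearance evidence: template response at each part location l_i.
    app_m = sum(float(np.dot(templates[i], hog_features[i])) for i in templates)
    # Shape score: quadratic spring cost over the edges in E_m.
    shape_m = sum(a * dx**2 + b * dx + c * dy**2 + d * dy
                  for (a, b, c, d, dx, dy) in springs.values())
    return app_m + shape_m + alpha_m
```

The appearance and shape terms are simply summed with the mixture bias, mirroring the term-by-term description above.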
- Equation (1) is an approach to encode the elastic deformation and 3D structure of an object for face detection and pose estimation. Equation (1) uses a mixture of poses with a shared pool of parts defined at each landmark position. Equation (1) then uses global mixtures to model topological changes due to different viewpoints. The global mixture can also be used to capture gross deformation changes for a single viewpoint. In one embodiment, each particular configuration of parts or landmark points L may be defined by Equation (2) below:
-
L = {l_i = (x_i, y_i) : i ∈ V}, Eq. (2) - wherein l_i is the ith part location and V is a shared pool of parts. In one embodiment, the one or more landmark points may be points in the image that identify a specific portion of the object. For example, for a human face, a landmark point may be a corner of an eye, a curve of an ear, a circular point of a nostril or of a lip, and the like. In one embodiment, a human face may have between 38 and 68 landmark points. Each one of the landmark points may have the collection of parts V located at various coordinates (x_i, y_i) of the ith part of the image.
- In one embodiment, the one or more landmarks may vary depending on the mixture of parts that are analyzed. For example, depending on an angle of the face (e.g., a 90 degree face, a 70 degree face, a 50 degree face, and so forth), only a single ear may be seen or a single eye, and so forth. In one embodiment, thirteen different mixtures may be used for the human face. Each one of the thirteen different mixtures may have its own set of one or more landmark points. In one embodiment, the one or more landmark points may be obtained for each one of the mixtures based upon a plurality of training images that are analyzed. In one embodiment, the plurality of training images (e.g., a few hundred images) may be used as part of a supervised learning classifier that includes training images that are marked images (e.g., marked with the known object, the landmark points for a given mixture, and the like). In one embodiment, further details for Equations 1 and 2 may be found in a study by Zhu and Ramanan entitled “Face Detection, Pose Estimation, and Landmark Localization in the Wild”, 2012, which is incorporated by reference in its entirety.
- Previous methods that used the scoring Equation (1) above, for example in Zhu and Ramanan, used a static scalar threshold value to determine if an object was detected. However, as discussed above, due to varying environmental conditions, image quality or low contrast in images, using a static scalar threshold leads to a low accuracy problem.
- In one embodiment, the present disclosure may use a distance ratio as a threshold to determine if an
object 212 is detected in the image 200. In one embodiment, by using the distance ratio rather than a static scalar threshold value, the value of the threshold may be dynamic or different for each image 200 that is analyzed. In other words, a plurality of different images may each have one or more objects having a score value that fluctuates or varies over a wide range from image to image. However, the distance ratio for the object for each image may be within a small range that is less than a predefined distance ratio. In one embodiment, the scores of a plurality of different locations on the training images may be used to perform a box plot of the scores for each image and calculate the predefined distance ratio threshold. -
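The training computation just described can be sketched as follows. Note that the text does not specify the whisker rule used for the box plot or how the per-image training ratios are aggregated into the predefined threshold; the 1.5×IQR Tukey fences and the max-ratio aggregation below are assumptions chosen for illustration.

```python
import numpy as np

def box_plot_stats(scores):
    """Box-plot statistics for one image's location scores, using the
    common 1.5*IQR Tukey fences (an assumed convention)."""
    q1, median, q3 = np.percentile(scores, [25, 50, 75])
    upper_fence = q3 + 1.5 * (q3 - q1)
    inliers = [s for s in scores if s <= upper_fence]
    outliers = [s for s in scores if s > upper_fence]
    # "Second highest score": the top of the box plot, i.e. the largest inlier.
    return median, max(inliers), outliers

def calibrate_threshold(training_images_scores):
    """Collect the distance ratio of every outlier over the training set;
    the maximum observed ratio is taken as the predefined threshold
    (one possible aggregation choice)."""
    ratios = []
    for scores in training_images_scores:
        median, second_highest, outliers = box_plot_stats(scores)
        for outlier in outliers:
            ratios.append((median - second_highest) / (median - outlier))
    return max(ratios)
```

Because the known objects in the training images produce outlier scores, their ratios bound the region in which an object should be declared.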
FIG. 3 illustrates one embodiment of a box plot 300 that is performed for the training images and the images that are analyzed to detect an object. In one embodiment, the box plot 300 may be a plot of all of the scores of an image. The box plot 300 may include a median 302, a 25% quartile score 306, a 75% quartile score 304, a lowest score 308 and a second highest score 310. The box plot 300 may also include an outlier score 312. In one embodiment, the outlier score 312 may be identified as an outlier if the score is outside of the box plot. In other words, the outlier score 312 does not fall within the range from the second highest score 310 to the lowest score 308. - Using the previous methods that have a static threshold, the
outlier score 312 may not be identified as an object if the outlier score 312 is too close to the second highest score 310. In one embodiment, multiple outlier scores 312 may be identified. However, to account for varying image qualities, environmental conditions and varying contrast in images, a predefined distance ratio threshold may be calculated and used that, in essence, provides a dynamic threshold value instead of a static threshold value. - In one embodiment, the distance ratio may be calculated based upon a
distance 314 of the second highest score 310 to a distance 316 of the outlier score 312. In one embodiment, the distance 314 may be measured from the median 302 of the box plot 300 to the second highest score 310. In one embodiment, the distance 316 may be measured from the median 302 of the box plot 300 to the outlier score 312. - In one embodiment, the distance ratio for each one of a plurality of training images may be calculated. Based upon the calculated distance ratio, a predefined distance ratio threshold may be used for subsequently analyzed images to identify an object in an image. In one embodiment, the predefined distance ratio threshold may be approximately 0.6 for NIR images of 750 nanometers (nm) to 1000 nm. If
multiple outlier scores 312 are identified and the distance ratio of each outlier score 312 is below the predefined distance ratio threshold, then multiple objects may be detected in the image. - Thus, in one example, if the
image 200 is captured and analyzed, the outlier score 312 may correspond to the score of the location 205 and have a value of −0.20227. The second highest score 310 may correspond to the score of the location 210 and have a value of −1.003, and the median score is −1.412. The predefined distance ratio threshold may have been calculated to be 0.6. Thus, the distance of the second highest score 310 of the image 200 in the above example would be −0.409 (e.g., median−second highest score=−1.412−(−1.003)=−0.409) and the distance of the outlier score 312 of the image 200 would be −1.20973 (e.g., median−outlier score=−1.412−(−0.20227)=−1.20973). The distance ratio would be 0.338 (e.g., distance of second highest score/distance of outlier score=−0.409/−1.20973=0.338). Thus, the outlier score 312 would be detected as being an object since the distance ratio is less than the predefined ratio 0.6 (e.g., 0.338<0.6). - When the objects being detected are people, the object detection may be used to automatically regulate or manage the HOV/
HOT lane 104. For example, the total number of people in the vehicle 110 may be calculated by summing the total number of objects detected in the vehicle 110 from the images that are analyzed. If the total number of people is less than a total number of passengers requirement for the HOV/HOT lane 104, a ticket may be automatically generated and mailed to the registered owner of the vehicle 110. The registered owner of the vehicle may be identified based on a license plate number captured in the image. -
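The arithmetic of the worked example above can be reproduced directly:

```python
def distance_ratio(median, second_highest, outlier):
    """Distance ratio from the example: both distances are measured from
    the box-plot median 302 (distance 314 and distance 316 in FIG. 3)."""
    d_second = median - second_highest   # distance of the second highest score
    d_outlier = median - outlier         # distance of the outlier score
    return d_second / d_outlier

# Values from the example image 200
ratio = distance_ratio(median=-1.412, second_highest=-1.003, outlier=-0.20227)
is_object = ratio < 0.6  # predefined distance ratio threshold
```

Running this yields a ratio of about 0.338, below the predefined threshold of 0.6, so the outlier score 312 is detected as an object.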
FIG. 4 illustrates a flowchart of a method 400 for detecting an object in an image. In one embodiment, one or more steps or operations of the method 400 may be performed by the AS 122 or a general-purpose computer as illustrated in FIG. 5 and discussed below. - At
step 402, the method 400 begins. At step 404, the method 400 analyzes a plurality of training images having a known object to calculate a predefined distance ratio. For example, the distance ratio for each one of a plurality of training images may be calculated. Based upon the calculated distance ratio, a predefined distance ratio threshold may be used for subsequently analyzed images to identify an object in an image. In one embodiment, the predefined distance ratio threshold may be approximately 0.6 for NIR images of 750 nanometers (nm) to 1000 nm. - At
step 406, the method 400 receives an image. For example, an image capture device such as, for example, a near infrared (NIR) band image capture device, a camera, a video camera, and the like, may capture an NIR image, photograph, video and the like. - At
step 408, the method 400 calculates a score for each one of a plurality of locations in the image. In one embodiment, Equation 1 described above may be used to calculate the score of each one of the plurality of locations in the image. - At
step 410, the method 400 performs a box plot of the score of each one of the plurality of locations of the image. For example, all of the scores that were calculated in step 408 may be tallied or graphed into a box plot, such as the box plot 300 illustrated in FIG. 3. - At
step 412, the method 400 determines if an outlier score is identified. In one embodiment, an outlier score may be any score that does not fall within the bounds of the box plot. In one embodiment, multiple outlier scores may be identified. If an outlier score is not identified, the method 400 may return to step 406 to receive and analyze another image. However, if an outlier score is identified, the method 400 may proceed to step 414. - At step 414, the
method 400 determines if a distance ratio of the outlier score is less than the predefined distance ratio. For example, a first distance measured from the median of the box plot to the second highest score is compared to a second distance measured from the median of the box plot to the outlier score of the box plot to obtain the distance ratio. If the distance ratio of the outlier score is greater than the predefined distance ratio, the method 400 may return to step 406 to receive and analyze another image. However, if the distance ratio of the outlier score is less than the predefined distance ratio, the method 400 may proceed to step 416. - At
step 416, the method 400 detects the object in a location of the plurality of locations of the image corresponding to the outlier score. For example, if the method 400 is being used to detect a person for automated management of vehicles in HOV/HOT lanes, the object that is detected may be a person. The number of objects detected in the image may be tallied and provided to determine if a vehicle has enough passengers to qualify for use of the HOV/HOT lanes. - In one embodiment, when the objects being detected are people, the object detection may be used to automatically regulate or manage the HOV/HOT lanes. For example, the total number of people in the vehicle may be calculated by summing the total number of objects detected in the vehicle from the images that are analyzed. If the total number of people is less than a total number of passengers requirement for the HOV/HOT lane, a ticket may be automatically generated and mailed to the registered owner of the vehicle. The registered owner of the vehicle may be identified based on a license plate number captured in the image.
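The occupancy check described above can be sketched as follows. The function name and the passenger requirement of 2 are illustrative assumptions; the actual requirement depends on the rules of the particular HOV/HOT lane.

```python
def check_hov_compliance(detected_objects_per_image, required_passengers=2):
    """Tally the objects (people) detected across the images analyzed for
    one vehicle and compare against the HOV/HOT passenger requirement.
    Each element of detected_objects_per_image is the list of detected
    location indices for one image, as in the disclosure's step 416."""
    total_people = sum(len(objects) for objects in detected_objects_per_image)
    return total_people >= required_passengers
```

If the check fails, a downstream step could then generate the ticket for the registered owner identified from the license plate.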
- At
step 418, the method 400 determines if there are any additional images remaining that need to be analyzed. For example, a plurality of images may be analyzed. In one embodiment, the images may be continually received as multiple cars enter the HOV/HOT lanes. If there are additional images, the method 400 returns to step 406 to receive the next image. If there are no additional images, the method 400 proceeds to step 420. At step 420, the method 400 ends. - As a result, the embodiments of the present disclosure provide an automated method for detecting objects (e.g., people) that can be used to automatically manage and regulate HOV/HOT lanes. Thus, law enforcement officers may be deployed more efficiently rather than being required to manually count passengers in each vehicle that enters the HOV/HOT lanes.
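The per-image flow of steps 408 through 416 can be sketched end to end. As before, the 1.5×IQR Tukey fence is an assumed box-plot convention; the text itself only requires that an outlier fall outside the box plot.

```python
import numpy as np

def detect_objects(scores, threshold=0.6):
    """Steps 408-416 for one image: box-plot the location scores, find
    outlier scores, and keep those whose distance ratio is below the
    predefined threshold. Returns the indices of detected locations."""
    q1, median, q3 = np.percentile(scores, [25, 50, 75])
    upper_fence = q3 + 1.5 * (q3 - q1)  # assumed Tukey convention
    # Second highest score: the largest score still inside the box plot.
    second_highest = max(s for s in scores if s <= upper_fence)
    detected = []
    for idx, s in enumerate(scores):
        if s > upper_fence:  # outlier score identified (step 412)
            ratio = (median - second_highest) / (median - s)
            if ratio < threshold:  # distance ratio test (step 414)
                detected.append(idx)
    return detected
```

Each image received at step 406 would be scored with Equation (1) and passed through this routine; a non-empty result corresponds to detecting the object at step 416.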
- It should be noted that although not explicitly specified, one or more steps, functions, or operations of the
method 400 described above may include a storing, displaying and/or outputting step as required for a particular application. In other words, any data, records, fields, and/or intermediate results discussed in the methods can be stored, displayed, and/or outputted to another device as required for a particular application. Furthermore, steps, functions, or operations in FIG. 4 that recite a determining operation, or involve a decision, do not necessarily require that both branches of the determining operation be practiced. In other words, one of the branches of the determining operation can be deemed as an optional step. -
FIG. 5 depicts a high-level block diagram of a general-purpose computer suitable for use in performing the functions described herein. As depicted in FIG. 5, the system 500 comprises a processor element 502 (e.g., a SIMD, a CPU, and the like), a memory 504, e.g., random access memory (RAM) and/or read only memory (ROM), a module 505 for detecting an object in an image, and various input/output devices 506 (e.g., storage devices, including but not limited to, a tape drive, a floppy drive, a hard disk drive or a compact disk drive, a receiver, a transmitter, a speaker, a display, a speech synthesizer, an output device (such as a graphic display, printer, and the like), an output port, and a user input device (such as a keyboard, a keypad, a mouse, and the like)). - It should be noted that the present disclosure can be implemented in software and/or in a combination of software and hardware, e.g., using application specific integrated circuits (ASIC), a general purpose computer or any other hardware equivalents, e.g., computer readable instructions pertaining to the method(s) discussed above can be used to configure a hardware processor to perform the steps of the above disclosed methods. In one embodiment, the present module or
process 505 for detecting an object in an image can be loaded into memory 504 and executed by processor 502 to implement the functions as discussed above. As such, the present method 505 for detecting an object in an image (including associated data structures) of the present disclosure can be stored on a non-transitory (e.g., physical and tangible) computer readable storage medium, e.g., RAM memory, magnetic or optical drive or diskette and the like. For example, the hardware processor 502 can be programmed or configured with instructions (e.g., computer readable instructions) to perform the steps, functions, or operations of method 400. - It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/249,981 US9177214B1 (en) | 2014-04-10 | 2014-04-10 | Method and apparatus for an adaptive threshold based object detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/249,981 US9177214B1 (en) | 2014-04-10 | 2014-04-10 | Method and apparatus for an adaptive threshold based object detection |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150294168A1 true US20150294168A1 (en) | 2015-10-15 |
US9177214B1 US9177214B1 (en) | 2015-11-03 |
Family
ID=54265317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/249,981 Active 2034-05-27 US9177214B1 (en) | 2014-04-10 | 2014-04-10 | Method and apparatus for an adaptive threshold based object detection |
Country Status (1)
Country | Link |
---|---|
US (1) | US9177214B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107657211A (en) * | 2017-08-11 | 2018-02-02 | 广州烽火众智数字技术有限公司 | The Vehicular occupant number detection method and device in a kind of HOV tracks |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6783167B2 (en) * | 1999-03-24 | 2004-08-31 | Donnelly Corporation | Safety system for a closed compartment of a vehicle |
CA2559118A1 (en) * | 2004-03-10 | 2005-09-15 | John H. Pearce | Transport system |
US8611608B2 (en) * | 2011-08-23 | 2013-12-17 | Xerox Corporation | Front seat vehicle occupancy detection via seat pattern recognition |
US8811664B2 (en) * | 2011-12-06 | 2014-08-19 | Xerox Corporation | Vehicle occupancy detection via single band infrared imaging |
US9202118B2 (en) * | 2011-12-13 | 2015-12-01 | Xerox Corporation | Determining a pixel classification threshold for vehicle occupancy detection |
-
2014
- 2014-04-10 US US14/249,981 patent/US9177214B1/en active Active
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105282507A (en) * | 2015-10-19 | 2016-01-27 | 北京电信易通信息技术股份有限公司 | Land law-enforcing service system |
CN106203279A (en) * | 2016-06-28 | 2016-12-07 | 广东欧珀移动通信有限公司 | The recognition methods of destination object, device and mobile terminal in a kind of augmented reality |
US10546195B2 (en) * | 2016-12-02 | 2020-01-28 | Geostat Aerospace & Technology Inc. | Methods and systems for automatic object detection from aerial imagery |
US10699119B2 (en) * | 2016-12-02 | 2020-06-30 | GEOSAT Aerospace & Technology | Methods and systems for automatic object detection from aerial imagery |
US10202048B2 (en) * | 2017-06-28 | 2019-02-12 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for adjusting operation of a vehicle according to HOV lane detection in traffic |
CN109624850A (en) * | 2017-10-05 | 2019-04-16 | 斯特拉德视觉公司 | Monitor the method for the blind spot of vehicle and the blind spot monitoring device using this method |
CN112020461A (en) * | 2018-04-27 | 2020-12-01 | 图森有限公司 | System and method for determining a distance from a vehicle to a lane |
US11727811B2 (en) | 2018-04-27 | 2023-08-15 | Tusimple, Inc. | System and method for determining car to lane distance |
US10965925B2 (en) | 2018-05-31 | 2021-03-30 | Canon Kabushiki Kaisha | Image capturing apparatus, client apparatus, control method, and storage medium |
CN111930075A (en) * | 2020-07-31 | 2020-11-13 | 深圳吉兰丁智能科技有限公司 | Self-adaptive machining control method and non-volatile readable storage medium |
CN113626497A (en) * | 2021-08-03 | 2021-11-09 | 上海哥瑞利软件股份有限公司 | Box diagram-based semiconductor problem machine positioning system and positioning method |
Also Published As
Publication number | Publication date |
---|---|
US9177214B1 (en) | 2015-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9177214B1 (en) | Method and apparatus for an adaptive threshold based object detection | |
Fang et al. | Falls from heights: A computer vision-based approach for safety harness detection | |
CN108416250B (en) | People counting method and device | |
EP3343443B1 (en) | Object detection for video camera self-calibration | |
US8737690B2 (en) | Video-based method for parking angle violation detection | |
US8744132B2 (en) | Video-based method for detecting parking boundary violations | |
US9633267B2 (en) | Robust windshield detection via landmark localization | |
US9070023B2 (en) | System and method of alerting a driver that visual perception of pedestrian may be difficult | |
US20160379049A1 (en) | Video monitoring method, video monitoring system and computer program product | |
Ding et al. | An adaptive road ROI determination algorithm for lane detection | |
US20150286885A1 (en) | Method for detecting driver cell phone usage from side-view images | |
WO2021227586A1 (en) | Traffic accident analysis method, apparatus, and device | |
US20170243067A1 (en) | Side window detection through use of spatial probability maps | |
CN109800682A (en) | Driver attributes' recognition methods and Related product | |
CN104239847B (en) | Driving warning method and electronic device for vehicle | |
CN114248819B (en) | Railway intrusion foreign matter unmanned aerial vehicle detection method, device and system based on deep learning | |
CN113674523A (en) | Traffic accident analysis method, device and equipment | |
Lin et al. | Improved traffic sign recognition for in-car cameras | |
CN111383248A (en) | Method and device for judging red light running of pedestrian and electronic equipment | |
CN113076851A (en) | Method and device for acquiring vehicle violation data and computer equipment | |
CN113593099A (en) | Gate control method, device and system, electronic equipment and storage medium | |
CN112488042A (en) | Pedestrian traffic bottleneck discrimination method and system based on video analysis | |
Suttiponpisarn et al. | Detection of wrong direction vehicles on two-way traffic | |
Kahlon et al. | An intelligent framework to detect and generate alert while cattle lying on road in dangerous states using surveillance videos | |
KR102604009B1 (en) | System and method for monitoring and responding to forgery of license plates |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: XEROX CORPORATION, CONNECTICUT Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ARTAN, YUSUF O.;PAUL, PETER;SIGNING DATES FROM 20140210 TO 20140401;REEL/FRAME:032797/0820 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: CONDUENT BUSINESS SERVICES, LLC, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:041542/0022 Effective date: 20170112 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., NORTH CAROLINA Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057970/0001 Effective date: 20211015 Owner name: U.S. BANK, NATIONAL ASSOCIATION, CONNECTICUT Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057969/0445 Effective date: 20211015 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |