GB2585247A - Occupant classification method and apparatus - Google Patents


Info

Publication number
GB2585247A
Authority
GB
United Kingdom
Prior art keywords
occupant
landmark
vehicle
classification
cabin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1909699.9A
Other versions
GB2585247B (en)
GB201909699D0 (en)
Inventor
Hasedzic Elvir
Valentin Gheorghe Ionut
Current Assignee
Jaguar Land Rover Ltd
Original Assignee
Jaguar Land Rover Ltd
Priority date
Filing date
Publication date
Application filed by Jaguar Land Rover Ltd filed Critical Jaguar Land Rover Ltd
Priority to GB1909699.9A (granted as GB2585247B)
Publication of GB201909699D0
Priority claimed by DE102020117555.8A (published as DE102020117555A1)
Publication of GB2585247A
Application granted
Publication of GB2585247B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60N: SEATS SPECIALLY ADAPTED FOR VEHICLES; VEHICLE PASSENGER ACCOMMODATION NOT OTHERWISE PROVIDED FOR
    • B60N2/00: Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles
    • B60N2/002: Seats provided with an occupancy detection means mounted therein or thereon
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/593: Recognising seat occupancy

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A method for classifying an occupant of a vehicle, which may have the safety benefit of preventing children from being left unattended in a vehicle, comprises the following steps. An image is received from a camera disposed in a cabin of a vehicle and analysed to identify a plurality of body landmarks, which may include shoulders, elbows, chests or hips of an occupant in the vehicle cabin. The plurality of identified body landmarks include at least one pair of the body landmarks. A distance is determined between the body landmarks forming each pair. The occupant is classified as a first occupant type or a second occupant type in dependence on a comparison of the determined distance with a predefined classification threshold T1. The identified landmarks may make up a plurality of pairs of landmarks, in which case the distances for the pairs may be added together and the total compared to the threshold T1. Operation of one or more vehicle systems may thus be controlled in dependence on the classification of the occupant. A controller executing the above method and a vehicle system comprising such a controller are also claimed.

Description

OCCUPANT CLASSIFICATION METHOD AND APPARATUS
TECHNICAL FIELD
The present disclosure relates to an occupant classification method and apparatus. Aspects of the invention relate to a vehicle occupant classification method; a non-transitory computer-readable medium; a vehicle occupant classification system; a vehicle; and a controller.
BACKGROUND
There have been instances of children being left unattended in a vehicle, such as an automobile. It would be desirable to provide a system which can identify when a child is in the vehicle. Using a camera to detect children presents some problems. For example, it may be difficult to locate the camera in the vehicle so as to capture a full-body image; usually only the torso is visible, with the child's legs tending to be out of view. The processing of the image data may also be difficult as the occupant is constantly moving within the vehicle.
It is an aim of the present invention to address one or more of the disadvantages associated with the prior art.
SUMMARY OF THE INVENTION
Aspects and embodiments of the invention provide a vehicle occupant classification method; a non-transitory computer-readable medium; a vehicle occupant classification system; a vehicle; and a controller according to the appended claims. According to an aspect of the present invention there is provided a vehicle occupant classification method comprising: receiving an image from a camera disposed in a cabin of a vehicle; analysing the image to identify a plurality of body landmarks of an occupant in the vehicle cabin, the plurality of identified body landmarks comprising at least one pair of the body landmarks; determining a distance between the body landmarks forming each pair; and classifying the occupant as a first occupant type or a second occupant type in dependence on a comparison of the determined distance with a predefined classification threshold. The image is processed to identify the body landmarks which may form a skeletal model. The skeletal model may then be analysed to arrive at a reference value for a vehicle occupant. The reference value may, for example, comprise a skeletal size value.
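By way of illustration only, the core comparison step of the method may be sketched as follows. The landmark coordinates, pixel units, function names and the threshold value are hypothetical and do not appear in the patent:

```python
import math

def landmark_distance(p1, p2):
    """Euclidean distance between two (x, y) body landmarks in image pixels."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def classify_occupant(distance, t1):
    """Classify as the first occupant type ('adult') or the second
    ('child') by comparing the determined distance with the predefined
    classification threshold t1."""
    return "adult" if distance >= t1 else "child"

# Hypothetical shoulder and elbow landmark positions for one pair.
shoulder = (320, 180)
elbow = (340, 260)
d = landmark_distance(shoulder, elbow)
print(classify_occupant(d, t1=60.0))
```

In practice the threshold and distance units would depend on camera geometry and calibration; this sketch only shows the shape of the comparison.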
The first occupant type may be classified as being independent; and the second occupant type may be classified as being dependent (or vulnerable). The first occupant type may be classified as being a first age group; and the second occupant type may be classified as being a second age group. The first occupant type may be an adult (a major); and the second occupant type may be a child (a minor). The method may comprise classifying each occupant of the vehicle as being either an adult or a child. The reference value may be used to classify the occupant as a child or an adult. This process may be performed for each vehicle occupant.
The method may comprise classifying the occupant as one of a plurality of different occupant types. The occupant types may comprise one or more of the following: a neonate (having an age less than one (1) month old); an infant (having an age between one (1) month and two (2) years old); a child (having an age greater than two (2) years and less than or equal to twelve (12) years old); an adolescent (having an age between twelve (12) and sixteen (16) years old); and an adult (having an age greater than or equal to sixteen (16) years old). For example, a classification may be made in respect of an occupant classified as being a child less than six (6) years old (or any other predefined age).
The method may be performed in respect of a single image. Alternatively, the method may be performed in respect of a video image which may comprise one or more image frames per second. An average distance between the body landmarks may be determined in dependence on a plurality of image frames. The average value may be used to classify the occupant.
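The averaging over image frames described above might be sketched as follows; the per-frame distances are made-up values for illustration:

```python
from statistics import mean

def average_distance(frame_distances):
    """Average a per-frame landmark distance over a window of video
    frames before comparison with the classification threshold."""
    return mean(frame_distances)

# Hypothetical per-frame distances (pixels) for one landmark pair.
frames = [81.2, 79.8, 83.1, 80.5]
print(round(average_distance(frames), 2))
```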
The plurality of identified body landmarks may comprise a plurality of pairs of the body landmarks. The method may comprise determining the distance between the body landmarks in each of the plurality of pairs. The method may comprise adding the determined distances of each of the plurality of pairs to determine a total distance. The total distance may represent a skeletal size. At least in certain embodiments, the skeletal size may be used to classify the occupant. The method may comprise comparing the total distance to the classification threshold.
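A minimal sketch of the total-distance (skeletal size) computation follows. The pair list mirrors the pairs named later in this description, but the coordinates and the comparison value are hypothetical:

```python
import math

# Hypothetical landmark positions (x, y) in image pixels.
LANDMARKS = {
    "chest": (300, 200), "r_shoulder": (260, 170), "l_shoulder": (340, 170),
    "r_elbow": (250, 240), "l_elbow": (350, 240),
    "r_hip": (280, 320), "l_hip": (320, 320),
}

# Pairs of landmarks whose link lengths are summed.
PAIRS = [
    ("r_shoulder", "r_elbow"), ("l_shoulder", "l_elbow"),
    ("chest", "r_shoulder"), ("chest", "l_shoulder"),
    ("l_shoulder", "r_shoulder"),
    ("chest", "r_hip"), ("chest", "l_hip"),
]

def skeletal_size(landmarks, pairs):
    """Sum of the link lengths of each pair; represents a skeletal size."""
    total = 0.0
    for a, b in pairs:
        (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
        total += math.hypot(x1 - x2, y1 - y2)
    return total

print(round(skeletal_size(LANDMARKS, PAIRS), 1))
```

The resulting total would then be compared to the classification threshold exactly as for a single pair.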
The classification threshold may be defined with reference to population data, for example comprising weights and/or dimensions of a population. The classification threshold may correspond to the weight and/or dimensions of a predefined percentile of the population for a given age or classification. The classification threshold may, for example, correspond to a 50th percentile of children of a particular age, or a 50th percentile of adult males or adult females.
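To illustrate deriving a threshold from population data, a 50th percentile is simply the median of the sampled values. The sample below is invented; a real system would use anthropometric survey data:

```python
from statistics import median

# Hypothetical skeletal-size samples (pixels) for children of a given age.
sample_sizes = [410, 430, 445, 450, 460, 475, 490]

# The 50th percentile of the population becomes the classification threshold.
t1 = median(sample_sizes)
print(t1)
```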
The classification threshold may be set in dependence on a location of the occupant in the vehicle. For example, first and second classification thresholds which are different from each other may be defined for first and second rows of seats respectively.
The method may comprise modelling a skeletal frame of the occupant in dependence on the identified body landmarks. The skeletal frame may comprise at least one link extending between the body landmarks in each pair. The distance may correspond to a length of the link.
The at least one pair of the body landmarks may comprise one or more of the following: a right shoulder landmark and a right elbow landmark; a left shoulder landmark and a left elbow landmark; a chest landmark and a right shoulder landmark; a chest landmark and a left shoulder landmark; a left shoulder landmark and a right shoulder landmark; a chest landmark and a right hip landmark; and a chest landmark and a left hip landmark.
The at least one pair of the body landmarks may comprise one or more of the following: a nose landmark to a chest landmark; a right hip landmark to a right knee landmark; and a left hip landmark to a left knee landmark.
The method may comprise determining if the occupant is seated in a first row of seats in the vehicle cabin or is seated in a second row of seats in the vehicle cabin. The first row may be a front row of seats. The second row may be a back row of seats. The method may comprise determining if the occupant is seated in a third row of seats in the vehicle cabin. The classification threshold may comprise a first classification threshold if the occupant is in the first row and the classification threshold may comprise a second classification threshold if the occupant is in the second row.
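The row-dependent threshold selection might be expressed as a simple lookup; the numeric values here are illustrative only:

```python
# Hypothetical first and second classification thresholds for the front
# and back rows, reflecting the different distances to the camera.
ROW_THRESHOLDS = {1: 520.0, 2: 430.0}

def threshold_for_row(row):
    """Select the classification threshold according to the seat row
    in which the occupant is determined to be seated."""
    return ROW_THRESHOLDS[row]

print(threshold_for_row(1), threshold_for_row(2))
```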
The processing of the image may comprise defining one or more areas of interest. Each area of interest may be defined in respect of a discrete sub-set of the image. The areas of interest may have a non-overlapping arrangement. A plurality of areas of interest may be defined, each corresponding to a seat in the cabin of the vehicle. The areas of interest may be defined in respect of different rows of seats in the cabin. The areas of interest could have the same dimensions. Alternatively, the areas of interest could have different dimensions. For example, a first area of interest defined in respect of a seat at the rear of the cabin may be smaller than a second area of interest defined in respect of a seat at the front of the cabin.
The body landmarks of an occupant may be identified in respect of the one or more areas of interest. For example, the body landmarks identified within or proximal to an area of interest may be categorised as relating to an occupant in a seat associated with that area of interest. By predefining the areas of interest within the scene, the potential for false identification of an occupant may be reduced. For example, the likelihood of a person visible through a side window or a rear windshield being identified as an occupant may be reduced. In some experiments, the camera falsely identified people outside a vehicle as being a child.
The plurality of identified body landmarks may comprise at least one reference body landmark. The method may comprise determining if the reference body landmark is inside a predefined area of interest. The reference body landmark may, for example, comprise a chest landmark. The area of interest may be associated with a seat in the vehicle cabin. A separate area of interest may be associated with each seat in the vehicle cabin. For example, a first area of interest may be associated with a first seat; and a second area of interest may be associated with a second seat. The method may comprise determining that the seat is occupied if the reference body landmark is inside the area of interest. The method may comprise determining that the seat is unoccupied if the reference body landmark is outside the area of interest. Alternatively, or in addition, the method may comprise determining that the seat is unoccupied if no body landmarks are identified.
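The occupancy test described above reduces to a point-in-rectangle check if each area of interest is an axis-aligned rectangle, as in the embodiment described later. The coordinates below are hypothetical:

```python
# Hypothetical areas of interest as (x0, y0, x1, y1) rectangles in image
# coordinates, keyed by the associated seat reference sign.
AREAS = {"SF-1": (0, 100, 200, 400), "SF-2": (440, 100, 640, 400)}

def seat_occupied(chest_landmark, area):
    """A seat is deemed occupied if the reference (chest) landmark lies
    inside the area of interest associated with that seat."""
    x, y = chest_landmark
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1

print(seat_occupied((120, 250), AREAS["SF-1"]))  # inside the first area
print(seat_occupied((120, 250), AREAS["SF-2"]))  # outside the second area
```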
The method may comprise analysing the image to identify a child safety seat in the cabin. The seat may comprise the identified child safety seat.
The method may comprise inhibiting or enabling an operation of one or more vehicle systems in dependence on the classification of the occupant. Alternatively, or in addition, the method may comprise controlling operation of one or more vehicle systems in dependence on the classification of the occupant. The method may comprise inhibiting or enabling an operation of one or more vehicle systems if the occupant is classified as being a child. The vehicle system may comprise a safety system for generating a notification or an alert. A first notification may be output for an occupant classified as a first occupant type; and a second notification may be output for an occupant classified as a second occupant type. For example, the first notification may be output for an occupant classified as a child having an age which is less than six (6) years old; and a second notification may be output for an occupant classified as a child having an age which is greater than or equal to 6 years old. The method may comprise enabling an operation of one or more vehicle systems only if the occupant is classified as being an adult. A vehicle drive system may be disabled if an occupant identified in the driver's seat is classified as being a child (or if no occupant is identified in the driver's seat). The vehicle drive system may be enabled only if the occupant in the driver's seat is classified as being an adult. The vehicle drive system may comprise one or more of the following: an ignition system, a transmission controller, a throttle pedal, and a brake system (such as a parking brake). Inappropriate or erroneous engagement of a drive mode may thereby be reduced.
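The control logic described above can be summarised in a few lines; the function names and notification labels are illustrative, not part of the claims:

```python
def drive_enabled(driver_seat_classification):
    """Enable the vehicle drive system only for an adult driver; a child
    in the driver's seat, or an empty driver's seat (None), keeps the
    drive system disabled."""
    return driver_seat_classification == "adult"

def select_notification(child_age_years):
    """Output the first notification for a child under six years old and
    the second notification otherwise."""
    return "first" if child_age_years < 6 else "second"

print(drive_enabled("child"), drive_enabled("adult"), drive_enabled(None))
print(select_notification(4), select_notification(9))
```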
The method may be repeated to classify each occupant in the vehicle cabin.
The classification threshold may be set in dependence on a geographical operating region of the vehicle. The classification threshold may be customised to reflect variations in height and/or size demographics for the population in the geographical operating region where the vehicle will be used. The geographical operating region may be specified in a vehicle configuration file or may be input by a user. Alternatively, the geographical operating region may be determined with reference to a satellite positioning system.
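A sketch of the region-dependent threshold lookup follows; the region codes, values and the fallback are assumptions for illustration:

```python
# Hypothetical per-region thresholds reflecting demographic variation.
# The active region could come from a vehicle configuration file, user
# input, or a satellite positioning system.
REGION_THRESHOLDS = {"EU": 450.0, "US": 465.0}
DEFAULT_THRESHOLD = 455.0

def threshold_for_region(region):
    """Select the classification threshold for the vehicle's
    geographical operating region, with a default fallback."""
    return REGION_THRESHOLDS.get(region, DEFAULT_THRESHOLD)

print(threshold_for_region("EU"), threshold_for_region("JP"))
```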
According to a further aspect of the present invention there is provided a non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method as described herein.
According to a further aspect of the present invention there is provided a vehicle occupant classification system comprising a controller having a processor and a system memory, the controller being configured to: receive an image from a camera disposed in a cabin of a vehicle; analyse the image to identify a plurality of body landmarks of an occupant in the vehicle cabin, the plurality of identified body landmarks comprising at least one pair of the body landmarks; determine a distance between the body landmarks forming each pair; and classify the occupant as a first occupant type or a second occupant type in dependence on a comparison of the determined distance with a predefined classification threshold.
The first occupant type may be classified as being independent; and the second occupant type may be classified as being dependent (or vulnerable). The first occupant type may be classified as being a first age group; and the second occupant type may be classified as being a second age group. The first occupant type may be an adult and the second occupant type may be a child. The vehicle occupant classification system may thereby be operative to classify each occupant of the vehicle as being either an adult or a child.
The plurality of identified body landmarks may comprise a plurality of pairs of the body landmarks. The controller may be configured to determine the distance between the body landmarks in each of the plurality of pairs. The controller may be configured to add the determined distances of each of the plurality of pairs to determine a total distance. The total distance may represent a skeletal size. At least in certain embodiments, the skeletal size may be used to classify the occupant. The controller may be configured to compare the total distance to the classification threshold.
The at least one pair of the body landmarks may comprise one or more of the following: a right shoulder landmark and a right elbow landmark; a left shoulder landmark and a left elbow landmark; a chest landmark and a right shoulder landmark; a chest landmark and a left shoulder landmark; a left shoulder landmark and a right shoulder landmark; a chest landmark and a right hip landmark; and a chest landmark and a left hip landmark.
The at least one pair of the body landmarks may comprise one or more of the following: a nose landmark to a chest landmark; a right hip landmark to a right knee landmark; and a left hip landmark to a left knee landmark.
The controller may be configured to determine if the occupant is seated in a first row of seats in the vehicle cabin or is seated in a second row of seats in the vehicle cabin. The classification threshold may comprise a first classification threshold if the occupant is in the first row and the classification threshold may comprise a second classification threshold if the occupant is in the second row.
The controller may be configured to define one or more areas of interest. Each area of interest may be defined in respect of a discrete sub-set of the image. The areas of interest may have a non-overlapping arrangement. A plurality of areas of interest may be defined, each corresponding to a seat in the cabin of the vehicle. The areas of interest may be defined in respect of different rows of seats in the cabin. The areas of interest could have the same dimensions. Alternatively, the areas of interest could have different dimensions. For example, a first area of interest defined in respect of a seat at the rear of the cabin may be smaller than a second area of interest defined in respect of a seat at the front of the cabin.
The body landmarks of an occupant may be identified in respect of the one or more areas of interest. For example, the body landmarks identified within or proximal to an area of interest may be categorised as relating to an occupant in a seat associated with that area of interest. By predefining the areas of interest within the scene, the potential for false identification of an occupant may be reduced. For example, the likelihood of a person visible through a side window or a rear windshield being identified as an occupant may be reduced. In some experiments, the camera falsely identified people outside a vehicle as being a child.
The plurality of identified body landmarks may comprise at least one reference body landmark, the controller being configured to determine if the reference body landmark is inside a predefined area of interest. The reference body landmark may, for example, comprise a chest landmark.
The area of interest may be associated with a seat in the vehicle cabin. The controller may be configured to determine that the seat is occupied if the reference body landmark is inside the area of interest. The controller may be configured to determine that the seat is unoccupied if the reference body landmark is outside the area of interest. The controller may be configured to determine that the seat is unoccupied if no body landmarks are identified within the image data.
The controller may be configured to analyse the image to identify a child safety seat in the cabin. The seat may comprise the identified child safety seat.
The controller may be configured to inhibit or enable operation of one or more vehicle systems in dependence on the classification of the occupant. Alternatively, or in addition, the controller may be configured to control operation of one or more vehicle systems in dependence on the classification of the occupant. The controller may be configured to inhibit or to enable an operation of one or more vehicle systems if the occupant is classified as being a child. The vehicle system may comprise a safety system for generating a notification or an alert.
The classification threshold may be set in dependence on a geographical operating region of the vehicle. The classification threshold may be customised to reflect variations in height and/or size demographics for the population in the geographical operating region where the vehicle will be used. The geographical operating region may be specified in a vehicle configuration file or may be input by a user. Alternatively, the geographical operating region may be determined with reference to a satellite positioning system.
According to a further aspect of the present invention there is provided a vehicle comprising a vehicle occupant classification system as described herein.
According to a further aspect of the present invention there is provided a controller for classifying a vehicle occupant, the controller being configured to: receive an image from a camera disposed in a cabin of a vehicle; analyse the image to identify a plurality of body landmarks of an occupant in the vehicle cabin, the plurality of identified body landmarks comprising at least one pair of the body landmarks; determine a distance between the body landmarks forming each pair; and classify the occupant as a first occupant type or a second occupant type in dependence on a comparison of the determined distance with a predefined classification threshold. The controller may be provided in a vehicle.
The first occupant type may be classified as being independent; and the second occupant type may be classified as being dependent (or vulnerable). The first occupant type may be classified as being a first age group; and the second occupant type may be classified as being a second age group. The first occupant type may be an adult and the second occupant type may be a child. The vehicle occupant classification system may thereby be operative to classify each occupant of the vehicle as being either an adult or a child.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a schematic representation of a vehicle incorporating an occupant classification system in accordance with an embodiment of the present invention;
Figure 2 shows a schematic representation of the body landmarks of a skeletal frame of an occupant;
Figure 3 shows an image captured by a sensor unit in the vehicle with a plurality of areas of interest overlaid;
Figure 4A shows a first skeletal model for use by the occupant classification system for classification of the occupant of the vehicle;
Figure 4B shows a second skeletal model for use by the occupant classification system;
Figure 4C shows a third skeletal model for use by the occupant classification system;
Figure 5 shows a frequency distribution for skeletal size of adults and children in a test sample;
Figure 6 shows a flow chart representing operation of the occupant classification system in accordance with the present embodiment;
Figure 7A shows a first image captured by the optical camera representing a first scenario;
Figure 7B shows a first distribution plot for the skeletal size of the occupants in the rear seat of the cabin shown in Figure 7A;
Figure 8A shows a second image captured by the optical camera representing a second scenario;
Figure 8B shows a second distribution plot for the skeletal size of the occupants in the rear seat of the cabin shown in Figure 8A; and
Figure 9 shows a third distribution plot comprising the first and second distribution plots shown in Figures 7B and 8B.
DETAILED DESCRIPTION
An occupant classification system 1 for a vehicle V in accordance with an embodiment of the present invention will now be described with reference to the accompanying Figures. The occupant classification system 1 is configured to classify each occupant 0-n of the vehicle V as being either a first occupant type or a second occupant type.
In the present embodiment, the first occupant type is an adult and the second occupant type is a child. The occupant classification system 1 may thereby classify each occupant 0-n of the vehicle V as being either an adult (a major) or a child (a minor). In the present embodiment the term "child" is used to refer to a person under the age of twelve (12) years old; and the term "adult" is used herein to refer to a person who is twelve (12) years old or older. It will be understood that the age threshold to differentiate between a child and an adult may be higher or lower, for example sixteen (16) years, seventeen (17) years, or eighteen (18) years. The occupant classification system 1 is configured to control one or more vehicle systems VS-n in dependence on the determined occupant classification. The occupant classification system 1 may be configured to control one or more vehicle safety systems VS-n. The vehicle safety systems VS-n may, for example, generate a notification if one or more occupants 0-n classified as being a child are identified in the vehicle V without identifying an occupant 0-n that is classified as being an adult. A timer function may be implemented such that the notification is generated after expiry of a predetermined time limit. The notification may comprise an audible or visible alert generated by the vehicle V. Other types of notification are also contemplated.
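The notification rule with its timer function might be sketched as follows; the time limit and function names are illustrative assumptions:

```python
def should_alert(classifications, elapsed_s, time_limit_s=120):
    """Generate the notification only after the predetermined time limit
    has expired with a child present and no adult identified.

    classifications: list of per-occupant labels ('adult' or 'child').
    elapsed_s: seconds since the condition was first detected.
    """
    child_present = "child" in classifications
    adult_present = "adult" in classifications
    return child_present and not adult_present and elapsed_s >= time_limit_s

print(should_alert(["child"], 180))           # child alone, limit expired
print(should_alert(["child", "adult"], 180))  # adult present
print(should_alert(["child"], 60))            # limit not yet expired
```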
The vehicle V comprises a cabin C for one or more occupants 0-n. A plan view of the cabin C of the vehicle V is shown in Figure 1. The cabin C in the present embodiment comprises a front row R-1 comprising first and second front seats SF-1, SF-2; and a back-row R-2 comprising first, second and third back seats SB-1, SB-2, SB-3. The first front seat SF-1 is a driver seat for seating a driver of the vehicle; and the second front seat SF-2 is a passenger seat for seating a passenger. The first, second and third back seats SB-1, SB-2, SB-3 are suitable for additional passengers. The driver seat is illustrated on the right-hand side of the cabin C, but it will be understood that the invention can be applied in left- and right-hand drive iterations of the vehicle V. In a modified arrangement, the back-row R-2 may consist of first and second back seats SB-1, SB-2. The occupant classification system 1 may be used in a vehicle V having a single row of seats, for example consisting of first and second front seats SF-1, SF-2. The occupant classification system 1 may be used in a vehicle V having more than two rows of seats, for example a third row which may comprise one or more occasional or temporary seats.
The occupant classification system 1 comprises a cabin sensor unit 10 and a processing module 11. The cabin sensor unit 10 in the present embodiment comprises an optical camera 12 having a field of view FV1. The optical camera 12 is operable to generate image data IMG1 representing an image scene within the cabin C. The optical camera 12 is a video camera operable to generate image data IMG1 which is updated a plurality of times per second (corresponding to image "frames"). The optical camera 12 is mounted at the front of the cabin C and has a rearward-facing orientation. In use, the optical camera 12 is oriented such that the field of view FV1 encompasses at least a portion of each occupant 0-n seated in one or more of the first and second front seats SF-1, SF-2 and/or one or more of the first, second and third back seats SB-1, SB-2, SB-3. The optical camera 12 in the present embodiment is mounted centrally in an upper region of the cabin C to provide an improved line of sight of an occupant 0-n sitting in one or more of the first, second and third back seats SB-1, SB-2, SB-3. The optical camera 12 could, for example, be mounted to a rear-view mirror, a roof of the cabin C, or a dashboard (not shown). The cabin sensor unit 10 could comprise more than one optical camera 12. A separate optical camera 12 could be associated with each row of seats in the cabin C or with each seat in the cabin C. By way of example, first and second optical cameras 12 could be associated with the front row R-1 and the back-row R-2 respectively.
The optical camera 12 in the present embodiment operates in a visible region of the light spectrum. Alternatively, or in addition, the optical camera 12 could operate in an infra-red spectrum.
The processing module 11 comprises an electronic processor 13 and a system memory 14. A set of computational instructions is stored on the system memory 14 and, when executed, said computational instructions cause the electronic processor 13 to perform the method(s) described herein. The processing module 11 is configured to receive the image data IMG1 generated by the optical camera 12. The processing module 11 implements a body landmark recognition algorithm as a pre-processing step. The body landmark recognition algorithm processes the image data IMG1 to identify body landmarks of a skeletal frame 15 associated with an occupant 0-n. The body landmarks are identified for each occupant 0-n present in the cabin C. As shown in Figure 2, the body landmarks of an occupant 0-n may comprise one or more of the following: a nose landmark LM-0, a chest landmark LM-1, a right shoulder landmark LM-2, a right elbow landmark LM-3, a right wrist landmark LM-4, a left shoulder landmark LM-5, a left elbow landmark LM-6, a left wrist landmark LM-7, a right hip landmark LM-8, a right knee landmark LM-9, a right ankle landmark LM-10, a left hip landmark LM-11, a left knee landmark LM-12, a left ankle landmark LM-13, a right eye landmark LM-14, a left eye landmark LM-15, a right ear landmark LM-16, and a left ear landmark LM-17. The field of view FV1 may be partially occluded by features in the cabin C and, as a result, the image data IMG1 may comprise an incomplete representation of one or more occupants 0-n. For example, the first and second front seats SF-1, SF-2 may partially occlude an occupant 0-n seated in one of the first, second and third back seats SB-1, SB-2, SB-3. In respect of each occupant 0-n, the body landmark recognition algorithm is configured to generate a skeletal model 20 (shown in Figure 4A) in dependence on the identified body landmarks.
In the present embodiment, the skeletal model 20 consists of the following body landmarks: the chest landmark LM-1; the right and left shoulder landmarks LM-2, LM-5; the right and left elbow landmarks LM-3, LM-6; and the right and left hip landmarks LM-8, LM-11. The skeletal model 20 could be modified to incorporate additional body landmarks, such as the right and left knee landmarks LM-9, LM-12 and/or the nose landmark LM-0. A variety of visual body landmark detection algorithms are available for commercial applications. A suitable body landmark recognition algorithm is the OpenPose algorithm.
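The landmark numbering and skeletal frame described above can be sketched as a small data model. The names and structure below are illustrative assumptions, not the patented implementation; note that an occluded occupant simply yields a frame with some landmarks absent:

```python
from dataclasses import dataclass
from enum import IntEnum

class Landmark(IntEnum):
    """The eighteen body landmarks LM-0 to LM-17, ids as in the description."""
    NOSE = 0; CHEST = 1; R_SHOULDER = 2; R_ELBOW = 3; R_WRIST = 4
    L_SHOULDER = 5; L_ELBOW = 6; L_WRIST = 7; R_HIP = 8; R_KNEE = 9
    R_ANKLE = 10; L_HIP = 11; L_KNEE = 12; L_ANKLE = 13
    R_EYE = 14; L_EYE = 15; R_EAR = 16; L_EAR = 17

# The core skeletal model of the present embodiment uses this seven-landmark subset.
CORE_MODEL = {Landmark.CHEST, Landmark.R_SHOULDER, Landmark.L_SHOULDER,
              Landmark.R_ELBOW, Landmark.L_ELBOW, Landmark.R_HIP, Landmark.L_HIP}

@dataclass
class DetectedLandmark:
    """One detector output; occluded landmarks are simply missing from a frame."""
    landmark: Landmark
    x: float          # pixel column in IMG1
    y: float          # pixel row in IMG1
    confidence: float # detector confidence score
```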
As illustrated in Figure 3, a plurality of areas of interest A-n are defined within the image data IMG1. The areas of interest A-n each comprise a discrete region which does not overlap with any of the other areas of interest A-n. In the present embodiment, the areas of interest A-n are rectangular. The areas of interest A-n may have other polygonal shapes. Each area of interest A-n is associated with one of the seats in the cabin C. In the present embodiment, the first and second areas of interest A-1, A-2 are associated with the first and second front seats SF-1, SF-2 respectively; and the third, fourth and fifth areas of interest A-3, A-4, A-5 are associated with the first, second and third back seats SB-1, SB-2, SB-3 respectively. The areas of interest A-n associated with the first, second and third back seats SB-1, SB-2, SB-3 are smaller than those associated with the first and second front seats SF-1, SF-2, reflecting their greater distance from the optical camera 12. The size and/or location of the areas of interest A-n could be modified dynamically. An area of interest A-n may be increased in size in dependence on a determination that an adjacent seat is unoccupied; and/or may be reduced in size in dependence on a determination that an adjacent seat is occupied. By predefining the areas of interest A-n within the scene, the potential for false identification of an occupant 0-n may be reduced. For example, the likelihood of a person visible through a side window or a rear windshield being identified as an occupant 0-n may be reduced.
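One way such an area of interest might be represented, including the dynamic resizing described above, is sketched below; the class name, fields and coordinates are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AreaOfInterest:
    """Rectangular, non-overlapping region of IMG1 associated with one seat."""
    seat: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

    def scaled(self, factor):
        """Resize about the centre, e.g. grown when an adjacent seat is empty."""
        cx, cy = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        hw = (self.x1 - self.x0) / 2 * factor
        hh = (self.y1 - self.y0) / 2 * factor
        return AreaOfInterest(self.seat, cx - hw, cy - hh, cx + hw, cy + hh)

# Invented example layout: a back-seat area is smaller than a front-seat area.
A1 = AreaOfInterest("SF-1", 0, 200, 300, 700)
A3 = AreaOfInterest("SB-1", 40, 100, 220, 400)
```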
The body landmark recognition algorithm uses the areas of interest A-n to identify one or more body landmarks LM-n relating to an occupant 0-n seated in one of the seats within the cabin C. The processing module 11 can thereby process the image data IMG1 to determine if each seat in the cabin C is occupied or vacant. The processing module 11 uses at least one of the body landmarks LM-n as a reference body landmark for this determination. In the present embodiment, the chest landmark LM-1 is used as the reference body landmark. The processing module 11 analyses the image data IMG1 to identify one or more chest landmarks LM-1. The processing module 11 compares the location of the or each identified chest landmark LM-1 to the areas of interest A-n. If the processing module 11 identifies a chest landmark LM-1 located within a predefined area of interest A-n, the seat associated with that area of interest A-n is flagged as being occupied. If the processing module 11 is unable to identify a chest landmark LM-1 located within a predefined area of interest A-n, the seat associated with that area of interest A-n is flagged as being unoccupied. The processing module 11 may thereby determine whether each seat is occupied or unoccupied.
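The occupancy determination reduces to a point-in-rectangle test per seat. A minimal sketch, with a stand-in `Rect` class and invented coordinates:

```python
class Rect:
    """Axis-aligned stand-in for an area of interest A-n in IMG1."""
    def __init__(self, x0, y0, x1, y1):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def seat_occupancy(areas, chest_points):
    """Flag each seat occupied iff a chest landmark LM-1 lies inside its area."""
    return {seat: any(r.contains(x, y) for (x, y) in chest_points)
            for seat, r in areas.items()}

# Example: one chest landmark detected at (50, 40), inside SF-1's area only.
areas = {"SF-1": Rect(0, 0, 100, 100), "SF-2": Rect(120, 0, 220, 100)}
flags = seat_occupancy(areas, [(50, 40)])
```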
The body landmark recognition algorithm links the body landmarks LM-n associated with the identified chest landmark(s) LM-1 to form the skeletal model 20 for each occupant 0-n. The body landmark recognition algorithm is configured to identify pairs of the body landmarks LM-n making up the skeletal model 20. In the present embodiment, the skeletal model 20 is composed of five (5) pairs A-E, as illustrated in Figure 4A. The right shoulder landmark LM-2 and the right elbow landmark LM-3 form a first pair A corresponding to an upper (right) arm of the occupant 0-n. The left shoulder landmark LM-5 and the left elbow landmark LM-6 form a second pair B corresponding to an upper (left) arm of the occupant 0-n. The right shoulder landmark LM-2 and the left shoulder landmark LM-5 form a third pair C. The chest landmark LM-1 and the right hip landmark LM-8 form a fourth pair D; and the chest landmark LM-1 and the left hip landmark LM-11 form a fifth pair E. The image data IMG1 is analysed to determine a length of each pair A-E (i.e. to determine a distance between the body landmarks LM-n in each pair A-E). A body signature is determined for each skeletal model 20 to enable classification of the occupant 0-n as being either a child or an adult. The body signature in the present embodiment is in the form of a skeletal size (d) determined by summing the lengths of the pairs A-E. In the present embodiment, the skeletal size (d) is equal to the sum of the lengths of the first, second, third, fourth and fifth pairs A-E (d = A + B + C + D + E). The skeletal size (d) may be determined for each valid frame identified in the image data IMG1, or the results may be averaged across multiple frames (for example covering a time period of 2 or 3 seconds). To reduce the computational load, the processing module 11 may be configured only to analyse the image data IMG1 in respect of those seats which are identified as being occupied.
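The skeletal size computation amounts to summing Euclidean distances over the five landmark pairs. A sketch, with coordinates invented so each pair length is easy to verify by hand:

```python
import math

# Pairs A-E of the present embodiment, keyed by LM-n landmark ids:
# 1 chest, 2/3 right shoulder/elbow, 5/6 left shoulder/elbow, 8/11 right/left hip.
PAIRS = [(2, 3),   # A: right shoulder - right elbow
         (5, 6),   # B: left shoulder - left elbow
         (2, 5),   # C: right shoulder - left shoulder
         (1, 8),   # D: chest - right hip
         (1, 11)]  # E: chest - left hip

def skeletal_size(points, pairs=PAIRS):
    """Body signature d = A + B + C + D + E: sum of pixel distances between
    the landmarks of each pair. `points` maps landmark id -> (x, y)."""
    return sum(math.dist(points[a], points[b]) for a, b in pairs)

# Invented coordinates: pair lengths are 5, 5, 10, 12 and 12 pixels.
pts = {1: (5, 0), 2: (0, 0), 3: (3, 4), 5: (10, 0), 6: (10, 5),
       8: (5, 12), 11: (5, -12)}
d = skeletal_size(pts)
```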
A false rejection rate (FRR) may be determined as a ratio of the number of false rejections divided by the number of identification attempts. By way of example, the FRR of a biometric security system provides a measure of the likelihood that the system will incorrectly reject an access attempt by an authorized user. In the present embodiment, an FRR equal to zero (i.e. FRR=0) indicates that no adults are missed but some children may be classified as adults. A false acceptance rate (FAR) may be determined as a ratio of the number of false acceptances divided by the number of identification attempts. By way of example, the FAR of a biometric security system provides a measure of the likelihood that the system will incorrectly accept an access attempt by an unauthorized user. In the present embodiment, an FAR equal to zero (FAR=0) indicates that no children are missed but some adults may be classified as children.
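Mapped onto this task, a false rejection is an adult whose skeletal size falls below the threshold, and a false acceptance is a child whose size falls above it. A minimal sketch of the two rates over a labelled sample (the mapping of biometric terms onto adult/child classification is an interpretation of the text):

```python
def rates(threshold, adult_sizes, child_sizes):
    """FRR and FAR for an adult/child decision at `threshold` (adult iff d >= threshold).

    A false rejection is an adult classified as a child; a false acceptance is a
    child classified as an adult; both are divided by the total number of
    identification attempts, per the definitions above."""
    attempts = len(adult_sizes) + len(child_sizes)
    frr = sum(d < threshold for d in adult_sizes) / attempts
    far = sum(d >= threshold for d in child_sizes) / attempts
    return frr, far
```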
The processing module 11 is configured to classify each occupant 0-n in the cabin C in dependence on the determined skeletal size (d) for the corresponding skeletal model 20. A frequency distribution plot 100 for an empirical analysis of a plurality of test subjects in a sample is illustrated in Figure 5. The test subjects in the sample were classed as being either an adult or a child based on their age when the test was undertaken. The frequency distribution plot 100 shows the number of test subjects (Y-axis) having a given skeletal size (d) (X-axis). A first frequency distribution curve 105 shows the distribution for test subjects classed as children. A second frequency distribution curve 110 shows the distribution for test subjects classed as adults. An approximation of a threshold corresponding to FRR=0 is shown in the frequency distribution plot 100, defining a lower limit for the skeletal size (d) of test subjects classed as adults below which no adults are missed but above which some children may erroneously be classed as adults. An approximation of a threshold corresponding to FAR=0 is also shown in the frequency distribution plot 100, defining an upper limit for the skeletal size (d) of test subjects classed as children above which no children are missed but below which some adults may erroneously be classed as children. As shown in Figure 5, the first and second frequency distribution curves 105, 110 overlap with each other for skeletal sizes (d) which are greater than the limit defined by FRR=0 and less than the limit defined by FAR=0. A skeletal size (d) falling within this overlapping region may be that of either a child or an adult.
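From a labelled sample, the two limits sketched in Figure 5 can be approximated as the extremes of the two classes. This is a hedged illustration under the assumption of a clean sample; a real system would estimate these limits from large population distributions rather than raw minima and maxima:

```python
def zero_error_limits(adult_sizes, child_sizes):
    """Approximate the Figure-5 limits from a labelled sample:
    FRR=0 limit: smallest adult skeletal size (no adult is missed above it);
    FAR=0 limit: largest child skeletal size (no child is missed below it).
    Sizes between the two lie in the overlap region, where either class occurs."""
    return min(adult_sizes), max(child_sizes)

# Invented sample: the classes overlap between d=800 and d=850.
frr0, far0 = zero_error_limits([800, 900, 950], [600, 700, 850])
```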
A classification threshold T1 is defined in the frequency distribution plot 100. The classification threshold T1 may, for example, correspond to a 50th percentile 12-year-old child. The processing module 11 compares each skeletal size (d) to the classification threshold T1. If the skeletal size (d) is greater than or equal to the classification threshold T1, the processing module 11 classifies that occupant 0-n as an adult. If the skeletal size (d) is less than the classification threshold T1, the processing module 11 classifies that occupant 0-n as a child. This comparison is performed in respect of each occupant 0-n identified in the cabin C of the vehicle V. In the present embodiment, there is only one optical camera 12, disposed at the front of the cabin C. It will be appreciated that the skeletal size (d) is dependent on a distance between the occupant 0-n and the optical camera 12. For example, an occupant 0-n seated in one of the back seats SB-1, SB-2, SB-3 will appear smaller than if they were seated in one of the front seats SF-1, SF-2. In order to allow for this variation, different classification thresholds may be defined in dependence on whether the occupant 0-n is seated in one of the back seats SB-1, SB-2, SB-3 or in one of the front seats SF-1, SF-2. For example, a first classification threshold T1 may be utilised to classify an occupant seated in one of the front seats SF-1, SF-2 and a second classification threshold T2 may be utilised to classify an occupant seated in one of the back seats SB-1, SB-2, SB-3. Alternatively, or in addition, a scaling factor may be applied to the skeletal size (d) at least partially to compensate for differences caused by the offset between the occupant and the optical camera 12.
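A per-row threshold comparison might be sketched as follows. The numeric thresholds are invented placeholders, not calibrated values from the patent; the back-row threshold is lower because back-seat occupants appear smaller in IMG1:

```python
def classify(d, row, t_front=800.0, t_back=700.0):
    """Adult/child decision with a row-dependent threshold (T1 front, T2 back).
    Threshold values here are illustrative placeholders only."""
    threshold = t_front if row == "front" else t_back
    return "adult" if d >= threshold else "child"
```

With these placeholder thresholds, a skeletal size of 750 would be classed as a child in the front row but as an adult in the back row, which is exactly the row dependence the text describes.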
The operation of the occupant classification system 1 will now be described with reference to a first block diagram 200 shown in Figure 6. The occupant classification system 1 is activated (BLOCK 205). The optical camera 12 generates image data IMG1 representing an interior of the cabin C (BLOCK 210). The body landmark recognition algorithm analyses the images to identify one or more chest landmarks LM-1 in the scene (BLOCK 215). The processing module 11 determines which, if any, of the identified chest landmark(s) LM-1 is located within one of the areas of interest A-n (BLOCK 220). If the processing module 11 determines that an area of interest A-n does not have one of the identified chest landmark(s) LM-1 located therein, the seat associated with that area of interest A-n is identified as being unoccupied (BLOCK 225). If the processing module 11 determines that an area of interest A-n does have one of the chest landmark(s) LM-1 located therein, the seat associated with that area of interest A-n is identified as being occupied (BLOCK 230). The body landmark recognition algorithm analyses the image data IMG1 to identify the body landmarks LM-n and forms the skeletal model 20 (BLOCK 235). The pairs of body landmarks LM-n are identified within the skeletal model 20 and the skeletal size (d) determined (BLOCK 240). The skeletal size (d) for each occupant 0-n identified in the cabin C is compared to the classification threshold T1 (BLOCK 245). If the skeletal size (d) for a given occupant 0-n is greater than or equal to the classification threshold T1, the processing module 11 classifies that occupant 0-n as being an adult (BLOCK 250). If the skeletal size (d) for a given occupant 0-n is less than the classification threshold, the processing module 11 classifies that occupant 0-n as being a child (BLOCK 255). The process is repeated for each occupant 0-n identified in the cabin C.
The process is complete when all occupants 0-n have been classified as either an adult or a child (BLOCK 260). The occupant classification system 1 may periodically repeat the check to determine if the occupants 0-n have changed.
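The flow of BLOCKS 215 to 260 can be condensed into a short sketch. The seat-to-landmark input mapping is hypothetical; in a real system it would come from the body landmark recognition algorithm and the areas of interest:

```python
import math

# Landmark ids as before: 1 chest, 2/3 right shoulder/elbow,
# 5/6 left shoulder/elbow, 8/11 right/left hip.
PAIRS = [(2, 3), (5, 6), (2, 5), (1, 8), (1, 11)]

def classify_cabin(seat_detections, threshold):
    """Condensed sketch of BLOCKS 215-260: each seat maps either to None (no
    chest landmark in its area of interest -> unoccupied) or to a dict of
    landmark id -> (x, y) image points for the occupant's skeletal model."""
    result = {}
    for seat, points in seat_detections.items():
        if points is None:
            result[seat] = "unoccupied"
        else:
            d = sum(math.dist(points[a], points[b]) for a, b in PAIRS)
            result[seat] = "adult" if d >= threshold else "child"
    return result

# Invented example: SF-1 occupied (skeletal size 44 at this toy scale), SF-2 empty.
adult_pts = {1: (5, 0), 2: (0, 0), 3: (3, 4), 5: (10, 0), 6: (10, 5),
             8: (5, 12), 11: (5, -12)}
result = classify_cabin({"SF-1": adult_pts, "SF-2": None}, threshold=40.0)
```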
A first image 30 captured by the optical camera 12 in a first scenario is illustrated in Figure 7A.
First and second occupants 0-1, 0-2 are seated in the first and second front seats SF-1, SF-2 respectively; and third and fourth occupants 0-3, 0-4 are seated in the first and third back seats SB-1, SB-3 respectively. In the present example, each of the four occupants 0-1, 0-2, 0-3, 0-4 is an adult. The skeletal models 20 for the third and fourth occupants 0-3, 0-4 (seated in the first and third back seats SB-1, SB-3) are overlaid onto the first image 30 to illustrate operation of the body landmark recognition algorithm. In the illustrated example, the skeletal size (d) of the third occupant 0-3 in the first back seat SB-1 is calculated as 929.36; and the skeletal size (d) of the fourth occupant 0-4 in the third back seat SB-3 is calculated as 970.54. The image data IMG1 captured by the optical camera 12 is analysed over a period of time to generate a plurality of data counts. Due to movement of the occupants 0-n in the cabin C, it will be understood that the skeletal size (d) fluctuates. A first distribution plot 35 for the skeletal size (d) of the third and fourth occupants 0-3, 0-4 is shown in Figure 7B.
A second image 40 captured by the optical camera 12 in a second scenario is illustrated in Figure 8A. A first occupant 0-1 is seated in the first front seat SF-1; and second and third occupants 0-2, 0-3 are seated in the first and third back seats SB-1, SB-3 respectively. In the present example, the first occupant 0-1 is an adult; and the second and third occupants 0-2, 0-3 are children. The skeletal models 20 for the children in the first and third back seats SB-1, SB-3 are overlaid onto the second image 40 to illustrate operation of the body landmark recognition algorithm. In the illustrated example, the skeletal size (d) of the child in the first back seat SB-1 is calculated as 642.79; and the skeletal size (d) of the child in the third back seat SB-3 is calculated as 699.79. It will be appreciated that the body landmark recognition algorithm is capable of operating even when a child is seated in a child-safety seat provided on top of a standard seat in the vehicle cabin C, as shown in the second image 40. The image data IMG1 captured by the optical camera 12 is analysed over a period of time to generate a plurality of data counts. Due to movement of the occupants 0-n in the cabin C, it will be understood that the skeletal size (d) fluctuates. A second distribution plot 45 for the skeletal size (d) of the second and third occupants 0-2, 0-3 is shown in Figure 8B. The processing module 11 may be configured to analyse the image data IMG1 to identify the child-safety seat, for example by implementing an image matching algorithm. The processing module 11 may determine if there is an occupant 0-n seated in the child-safety seat or if the child-safety seat is unoccupied.
The first and second distribution plots 35, 45 are combined into a third distribution plot 50 shown in Figure 9. A first frequency distribution curve 55 shows the distribution for children; and a second frequency distribution curve 60 shows the distribution for adults. The differences in the skeletal size (d) for the occupants 0-n in the first and second scenarios are evidenced by the separation of the first and second distribution plots 35, 45 along the X-axis.
The classification threshold T1 may be configured in dependence on a geographical operating region of the vehicle V. The classification threshold T1 may thereby reflect the normal distribution of size and/or height of a local population.
The occupant classification system 1 in the above embodiment utilises a skeletal model 20 consisting of five (5) pairs of body landmarks LM-n. It will be understood that the skeletal model may consist of less than five (5) pairs of body landmarks LM-n; or more than five (5) pairs of body landmarks LM-n. By way of example, a skeletal model 20 consisting of six (6) pairs (A to F) of body landmarks LM-n is shown in Figure 4B; and a skeletal model 20 consisting of eight (8) pairs (A to H) of body landmarks LM-n is shown in Figure 4C.
The occupant classification system 1 may output a control signal in dependence on the determined classification of the occupant 0-n. The control signal may control operation of one or more vehicle systems, for example to selectively enable and disable one or more vehicle systems. A first control signal may be output if the occupant 0-n is classified as being a first occupant type. A second control signal may be output if the occupant 0-n is classified as being a second occupant type. The first and second control signals may selectively enable and disable different vehicle systems; or may selectively enable and disable the same vehicle system.
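The first/second control signal logic could be as simple as a lookup from occupant type to signal. The signal identifiers below are invented for illustration; the patent does not name specific signals:

```python
def control_signal(occupant_class):
    """Emit the first or second control signal according to the occupant type.
    Signal names are hypothetical placeholders."""
    signals = {"adult": "FIRST_CONTROL_SIGNAL",
               "child": "SECOND_CONTROL_SIGNAL"}
    return signals[occupant_class]
```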
It will be appreciated that various changes and modifications can be made to the present invention without departing from the scope of the present application.
The occupant classification system 1 may be configured to make additional classifications, for example to classify each occupant of the vehicle V as being one of a plurality of different occupant types. The plurality of occupant types may, for example, comprise one or more of the following: a neonate (having an age less than one (1) month old); an infant (having an age between one (1) month and two (2) years old); a child (having an age greater than two (2) years and less than or equal to twelve (12) years old); an adolescent (having an age between twelve (12) and sixteen (16) years old); and an adult (having an age greater than or equal to sixteen (16) years old). Still further classifications may be made. For example, a further classification may be made in respect of an occupant classified as being a child less than six (6) years old.
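The age bands listed above can be expressed as a simple mapping. Boundary handling at exactly two, twelve and sixteen years follows the inclusive/exclusive wording of the text; where the text is ambiguous (the adolescent band), inclusion at the lower boundary is an assumption:

```python
def occupant_type(age_years):
    """Map an age in years onto the occupant types listed above."""
    if age_years < 1 / 12:      # under one month
        return "neonate"
    if age_years <= 2:          # one month to two years
        return "infant"
    if age_years <= 12:         # over two, up to and including twelve
        return "child"
    if age_years < 16:          # twelve to sixteen (lower bound assumed exclusive)
        return "adolescent"
    return "adult"              # sixteen and over
```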
The operation of one or more vehicle systems may be enabled/disabled or modified in dependence on the determined classification of the occupant. The one or more vehicle systems associated with each seat in the vehicle may be configured in dependence on the classification of the occupant seated in that seat. The one or more vehicle systems may be configured on a per-seat basis according to the classification of the seat occupant. By way of example, airbag deployment may be selectively enabled and disabled in dependence on the occupant classification. Alternatively, or in addition, airbag deployment may be controlled in dependence on the occupant classification. The airbag may be configurable in different deployment modes. A reduced deployment mode may, for example, selectively disable certain deployment features whilst enabling others. The deployment mode of the airbag may be selected in dependence on the occupant classification. The reduced deployment mode may be engaged if the occupant is classified as being a child who is less than six (6) years old.
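One plausible per-seat airbag policy, following the paragraph above, is sketched here. The mode labels are invented, and the fallback of full deployment for anyone not classified as a young child is an assumption, not a rule stated in the patent:

```python
def airbag_deployment_mode(occupant_class, age_estimate=None):
    """Select a deployment mode per seat (labels illustrative): the reduced
    mode is engaged for occupants classified as children under six."""
    if occupant_class == "child" and age_estimate is not None and age_estimate < 6:
        return "reduced"
    if occupant_class == "child":
        return "disabled"   # assumed policy for other children
    return "full"
```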
A plurality of classification thresholds may be defined to classify the occupant 0-n in dependence on the determined skeletal size (d). The classification thresholds may, for example, be defined in dependence on a size distribution chart of a given population. The classification thresholds may be defined for male or female occupants. By way of example, the occupant classification system 1 may be configured to differentiate between a 5th percentile female having a weight of approximately 48 kg and a six (6) year old child. The occupant classification system 1 may be configured to generate different notifications in dependence on any such classification.

Claims (25)

1. A vehicle occupant classification method comprising: receiving an image from a camera disposed in a cabin of a vehicle; analysing the image to identify a plurality of body landmarks of an occupant in the vehicle cabin, the plurality of identified body landmarks comprising at least one pair of the body landmarks; determining a distance between the body landmarks forming each pair; classifying the occupant as a first occupant type or a second occupant type in dependence on a comparison of the determined distance with a predefined classification threshold; and controlling operation of one or more vehicle systems in dependence on the classification of the occupant.
2. A vehicle occupant classification method according to claim 1, wherein the plurality of identified body landmarks comprises a plurality of pairs of the body landmarks, the method comprising determining the distance between the body landmarks in each of the plurality of pairs.
3. A vehicle occupant classification method according to claim 2, wherein the method comprises adding the determined distances of each of the plurality of pairs to determine a total distance.
4. A vehicle occupant classification method according to claim 3, wherein the method comprises comparing the total distance to the classification threshold.
5. A vehicle occupant classification method according to any one of the preceding claims, wherein the at least one pair of the body landmarks comprises one or more of the following: a right shoulder landmark and a right elbow landmark; a left shoulder landmark and a left elbow landmark; a left shoulder landmark and a right shoulder landmark; a chest landmark and a right shoulder landmark; a chest landmark and a left shoulder landmark; a chest landmark and a right hip landmark; and a chest landmark and a left hip landmark.
6. A vehicle occupant classification method according to any one of the preceding claims, wherein the method comprises determining if the occupant is seated in a first row of seats in the vehicle cabin or is seated in a second row of seats in the vehicle cabin, the classification threshold comprising a first classification threshold if the occupant is in the first row and the classification threshold comprising a second classification threshold if the occupant is in the second row.
7. A vehicle occupant classification method according to any one of the preceding claims, wherein the plurality of identified body landmarks comprises a reference body landmark, the method comprising determining if the reference body landmark is inside a predefined area of interest.
8. A vehicle occupant classification method according to claim 7, wherein the area of interest is associated with a seat in the vehicle cabin; and the method comprises: determining that the seat is occupied if the reference body landmark is inside the area of interest; and determining that the seat is unoccupied if the reference body landmark is outside the area of interest.
9. A vehicle occupant classification method according to claim 8, wherein the method comprises analysing the image to identify a child safety seat in the cabin; and the seat comprises the identified child safety seat.
10. A vehicle occupant classification method according to any one of the preceding claims, wherein the method comprises inhibiting or enabling an operation of one or more vehicle systems in dependence on the classification of the occupant.
11. A vehicle occupant classification method as claimed in any one of the preceding claims, wherein the first classification threshold is set in dependence on a geographical operating region of the vehicle.
12. A non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method according to any one of the preceding claims.
13. A vehicle occupant classification system comprising a controller having a processor and a system memory, the controller being configured to: receive an image from a camera disposed in a cabin of a vehicle; analyse the image to identify a plurality of body landmarks of an occupant in the vehicle cabin, the plurality of identified body landmarks comprising at least one pair of the body landmarks; determine a distance between the body landmarks forming each pair; classify the occupant as a first occupant type or a second occupant type in dependence on a comparison of the determined distance with a predefined classification threshold; and control operation of one or more vehicle systems in dependence on the classification of the occupant.
14. A vehicle occupant classification system according to claim 13, wherein the plurality of identified body landmarks comprises a plurality of pairs of the body landmarks, the controller being configured to determine the distance between the body landmarks in each of the plurality of pairs.
15. A vehicle occupant classification system according to claim 14, wherein the controller is configured to add the determined distances of each of the plurality of pairs to determine a total distance.
16. A vehicle occupant classification system according to claim 15, wherein the controller is configured to compare the total distance to the classification threshold.
17. A vehicle occupant classification system according to any one of claims 14 to 16, wherein the at least one pair of the body landmarks comprises one or more of the following: a right shoulder landmark and a right elbow landmark; a left shoulder landmark and a left elbow landmark; a chest landmark and a right shoulder landmark; a chest landmark and a left shoulder landmark; a left shoulder landmark and a right shoulder landmark; a chest landmark and a right hip landmark; and a chest landmark and a left hip landmark.
18. A vehicle occupant classification system according to any one of claims 14 to 17, wherein the controller is configured to determine if the occupant is seated in a first row of seats in the vehicle cabin or is seated in a second row of seats in the vehicle cabin, the classification threshold comprising a first classification threshold if the occupant is in the first row and the classification threshold comprising a second classification threshold if the occupant is in the second row.
19. A vehicle occupant classification system according to any one of claims 14 to 18, wherein the plurality of identified body landmarks comprises at least one reference body landmark, the controller being configured to determine if the reference body landmark is inside a predefined area of interest.
20. A vehicle occupant classification system according to claim 19, wherein the area of interest is associated with a seat in the vehicle cabin; and the controller is configured to: determine that the seat is occupied if the reference body landmark is inside the area of interest; and determine that the seat is unoccupied if the reference body landmark is outside the area of interest.
21. A vehicle occupant classification system according to claim 20, wherein the controller is configured to analyse the image to identify a child safety seat in the cabin; and the seat comprises the identified child safety seat.
22. A vehicle occupant classification system according to any one of claims 14 to 21, wherein the controller is configured to inhibit or to enable an operation of one or more vehicle systems in dependence on the classification of the occupant.
23. A vehicle occupant classification system as claimed in any one of claims 14 to 22, wherein the classification threshold is set in dependence on a geographical operating region of the vehicle.
24. A vehicle comprising a vehicle occupant classification system as claimed in any one of claims 14 to 23.
25. A controller for classifying a vehicle occupant, the controller being configured to: receive an image from a camera disposed in a cabin of a vehicle; analyse the image to identify a plurality of body landmarks of an occupant in the vehicle cabin, the plurality of identified body landmarks comprising at least one pair of the body landmarks; determine a distance between the body landmarks forming each pair; and classify the occupant as a first occupant type or a second occupant type in dependence on a comparison of the determined distance with a predefined classification threshold.
GB1909699.9A 2019-07-05 2019-07-05 Occupant classification method and apparatus Active GB2585247B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1909699.9A GB2585247B (en) 2019-07-05 2019-07-05 Occupant classification method and apparatus
DE102020117555.8A DE102020117555A1 (en) 2019-07-05 2020-07-03 METHOD AND DEVICE FOR OCCUPANT CLASSIFICATION


Publications (3)

Publication Number Publication Date
GB201909699D0 GB201909699D0 (en) 2019-08-21
GB2585247A true GB2585247A (en) 2021-01-06
GB2585247B GB2585247B (en) 2022-07-27

Family

ID=67623210


Country Status (2)

Country Link
DE (1) DE102020117555A1 (en)
GB (1) GB2585247B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112937479A (en) * 2021-03-31 2021-06-11 北京市商汤科技开发有限公司 Vehicle control method and device, electronic device and storage medium
GB2625515A (en) * 2022-12-14 2024-06-26 Continental Automotive Tech Gmbh A method of identifying vehicle occupant and system thereof

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
US20210188205A1 (en) * 2019-12-19 2021-06-24 Zf Friedrichshafen Ag Vehicle vision system

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2005008581A2 (en) * 2003-07-23 2005-01-27 Eaton Corporation System or method for classifying images
US20190026548A1 (en) * 2017-11-22 2019-01-24 Intel Corporation Age classification of humans based on image depth and human pose

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP3532772B2 (en) * 1998-09-25 2004-05-31 本田技研工業株式会社 Occupant state detection device
DE102004042959A1 (en) * 2004-09-02 2006-03-09 Robert Bosch Gmbh Passenger protection device in a vehicle
DE102013021930B4 (en) * 2013-12-20 2018-10-31 Audi Ag Motor vehicle with a safety system and method for operating a safety system of a motor vehicle
WO2016067082A1 (en) * 2014-10-22 2016-05-06 Visteon Global Technologies, Inc. Method and device for gesture control in a vehicle
DE102018210028A1 (en) * 2018-06-20 2019-12-24 Robert Bosch Gmbh Method and device for estimating a posture of an occupant of a motor vehicle
EP3923809A4 (en) * 2019-02-17 2022-05-04 Gentex Technologies (Israel) Ltd. System, device, and methods for detecting and obtaining information on objects in a vehicle
DE102019004864A1 (en) * 2019-07-11 2020-01-16 Daimler Ag Method for estimating a weight and a size of a driver of a motor vehicle, computer program product and motor vehicle



