WO2019188060A1 - Detection device and detection method - Google Patents

Detection device and detection method

Info

Publication number
WO2019188060A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
frame
occupant
determination unit
seat
Prior art date
Application number
PCT/JP2019/008767
Other languages
English (en)
Japanese (ja)
Inventor
滋 中崎
Original Assignee
株式会社フジクラ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2018070092A external-priority patent/JP2019177852A/ja
Priority claimed from JP2018070093A external-priority patent/JP2019177853A/ja
Priority claimed from JP2018070091A external-priority patent/JP2019177851A/ja
Application filed by 株式会社フジクラ
Publication of WO2019188060A1 publication Critical patent/WO2019188060A1/fr

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00: Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04: Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01: Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/015: Electrical circuits for triggering passive safety arrangements including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R22/00: Safety belts or body harnesses in vehicles
    • B60R22/48: Control systems, alarms, or interlock systems, for the correct application of the belt or harness

Definitions

  • the present invention relates to a detection device and a detection method.
  • This application claims priority based on Japanese Patent Application No. 2018-070091, Japanese Patent Application No. 2018-070092, and Japanese Patent Application No. 2018-070093 filed in Japan on March 30, 2018, the contents of which are incorporated herein by reference.
  • The seat belt reminder detects whether or not an occupant is seated in a seat by, for example, a seating sensor that detects the load applied to the seat. The seat belt reminder then further detects whether or not the seat belt is worn, and issues a notification when the seat belt is not worn.
  • The technique of Patent Document 1 requires a sensor for detection.
  • In the technique of Patent Document 2, it is necessary to attach or print a marking on the seat belt, and in some cases it is impossible to accurately detect from the photographed image whether or not the occupant is wearing the seat belt.
  • The present invention has been made in view of the above problems, and an object thereof is to provide a detection device and a detection method capable of detecting, based on a captured image, whether or not a person is wearing a fixing device such as a seat belt.
  • A detection device according to one aspect of the present invention includes: an imaging unit that captures an image; an extraction unit that extracts, using a first frame, an image of a region including a person from the captured image, and extracts, using a second frame, an image of a region including a fixing device that fixes the person; and a determination unit that, when the first frame and the second frame are extracted, determines whether the person is wearing the fixing device based on the overlapping state of the image of the first frame and the image of the second frame.
  • In the detection device, the determination unit may determine that the person is wearing the fixing device when the area over which the image of the first frame and the image of the second frame overlap is larger than a predetermined area.
  • The determination unit may determine that the person is wearing the fixing device when the image of the second frame overlaps the image of the first frame and the ratio of the area of the overlapping portion to the area of the first frame is larger than a predetermined ratio, and may determine that the person is not wearing the fixing device when the image of the second frame overlaps the image of the first frame but that ratio is equal to or less than the predetermined ratio.
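  • Although this aspect defines the ratio test only in prose, it can be sketched as follows; the (x, y, w, h) frame representation, the function names, and the threshold value are illustrative assumptions, not values given in the specification:

```python
def overlap_ratio(first, second):
    """Ratio of the overlapping area to the area of the first frame.

    Frames are hypothetical (x, y, w, h) tuples in image coordinates,
    with (x, y) the top-left corner.
    """
    fx, fy, fw, fh = first
    sx, sy, sw, sh = second
    # Width and height of the intersection rectangle (0 if disjoint).
    ow = max(0, min(fx + fw, sx + sw) - max(fx, sx))
    oh = max(0, min(fy + fh, sy + sh) - max(fy, sy))
    return (ow * oh) / (fw * fh)

PREDETERMINED_RATIO = 0.05  # assumed calibration value

def wearing_by_ratio(first, second):
    """True when the overlap ratio exceeds the predetermined ratio."""
    return overlap_ratio(first, second) > PREDETERMINED_RATIO
```

For example, a 30 x 100 pixel belt frame lying inside a 100 x 200 pixel occupant frame gives a ratio of 0.15, which exceeds the assumed threshold.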
  • The determination unit may determine that the person is wearing the fixing device when the image of the second frame overlaps the image of the first frame and straddles a boundary line set at a predetermined position, and may determine that the person is not wearing the fixing device when the image of the second frame overlaps the image of the first frame but does not straddle the boundary line.
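  • The boundary-straddling test of this aspect can be sketched as follows; the specification leaves the boundary's placement open, so a vertical line at an assumed image coordinate is used here, and the (x, y, w, h) frame representation is likewise an illustrative assumption:

```python
def straddles_boundary(second, boundary_x):
    """True if the second (seat belt) frame straddles a boundary line.

    boundary_x is a hypothetical predetermined position of a vertical
    boundary line; the frame is a hypothetical (x, y, w, h) tuple with
    (x, y) the top-left corner.
    """
    sx, _, sw, _ = second
    # The frame straddles the line when the line falls strictly inside
    # the frame's horizontal extent.
    return sx < boundary_x < sx + sw
```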
  • The determination unit may determine that the person is wearing the fixing device when the image of the second frame overlaps the image of the first frame and the width of the image of the second frame in at least one of the horizontal and vertical directions is equal to or greater than a predetermined length, and may determine that the person is not wearing the fixing device when the image of the second frame overlaps the image of the first frame but that width is less than the predetermined length.
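  • The width test of this aspect can be sketched as follows; the intuition is that a belt worn across the torso yields a frame that is long in at least one direction, while a retracted belt yields a small frame. The (x, y, w, h) representation and min_length value are illustrative assumptions:

```python
def wearing_by_width(second, min_length):
    """True if the second frame is long enough in at least one direction.

    second is a hypothetical (x, y, w, h) frame tuple; min_length is an
    assumed calibration value, not one given in the specification.
    """
    _, _, sw, sh = second
    # Wearing is indicated when either the horizontal or the vertical
    # extent reaches the predetermined length.
    return sw >= min_length or sh >= min_length
```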
  • The detection device further includes a storage unit that stores determination means for extracting the image of the second frame, learned from images in which the person is wearing the fixing device.
  • the extraction unit may extract the image of the second frame using the determination unit stored in the storage unit.
  • The determination unit may determine that the person is wearing the fixing device when the overlapping state between the image of the first frame and the image of the second frame is a state in which the image of the second frame straddles a boundary line set at a predetermined position.
  • A detection device according to one aspect of the present invention includes a detection unit that detects information regarding the position of at least the head of an occupant. When the image of the first frame and the image of the second frame overlap and the position of the occupant's head is above a reference position, the determination unit may determine that the occupant is able to wear the fixing device.
  • The reference position may be a position set on a vehicle seat, or a position determined in advance with respect to the position set on the vehicle seat.
  • In the detection device, the extraction unit may extract an image including at least the head of an occupant from the image captured by the imaging unit, and the determination unit may determine that the occupant is able to wear the fixing device when, in the captured image, the position of the head in the extracted image is above a reference position.
  • In the detection device, the extraction unit may extract an image including the occupant's head from the image captured by the imaging unit using the first frame, and may detect the reference position from the image captured by the imaging unit. The determination unit may determine that the occupant is seated on a seat when the extraction unit is able to extract the first frame, determine that the occupant is able to wear the fixing device when the image extracted using the first frame overlaps the reference position, and determine that the occupant is unable to wear the fixing device when the image extracted using the first frame does not overlap the reference position.
  • The determination unit may determine that the subject is not an occupant when the image extracted using the first frame is not detected below the second reference position.
  • the extraction unit is configured to divide a space in which a plurality of people are accommodated into a region corresponding to the number of people in the space, in the image captured by the imaging unit. An image of a detection area set for each of the divided areas is extracted, and the determination unit is configured to extract a human image based on the image of the first frame extracted from the image of the detection area extracted by the extraction unit. The presence or absence may be determined.
  • the detection area is an area divided in a width direction of the space where the plurality of persons are accommodated with reference to an end portion of the space where the plurality of persons are accommodated. You may make it be.
  • In the detection device, when the image captured by the imaging unit includes an image of a person outside the detection area and the area of that image is less than a threshold value, an image corresponding to the person outside the detection area need not be extracted.
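  • The division of the cabin image into per-seat detection areas described above can be sketched as follows; the equal split along the image width and the (x, y, w, h) area representation are simplifying assumptions, and a real system would calibrate the areas to the actual seat positions:

```python
def detection_areas(img_w, img_h, n_seats):
    """Divide a cabin image into one detection area per seat.

    Splits the image width (the direction in which the seats are
    arranged) into n_seats equal regions measured from the image's
    left edge, returning hypothetical (x, y, w, h) area tuples.
    """
    w = img_w // n_seats
    return [(i * w, 0, w, img_h) for i in range(n_seats)]
```

For example, detection_areas(640, 480, 2) yields [(0, 0, 320, 480), (320, 0, 320, 480)], one area per front seat.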
  • A detection method according to one aspect of the present invention is a detection method in a detection device having an imaging unit, an extraction unit, a determination unit, and a storage unit that stores determination means for extracting, using a first frame, an image of a region including a person and, using a second frame, an image of a region including a fixing device that fixes the person. The method includes: a procedure in which the imaging unit captures an image; a procedure in which the extraction unit, using the determination means stored in the storage unit, extracts from the captured image an image of a region including a person using the first frame and an image of a region including the fixing device that fixes the person using the second frame; and a procedure in which the determination unit, when the first frame and the second frame are extracted, determines whether the person is wearing the fixing device based on the overlapping state of the image of the first frame and the image of the second frame.
  • The detection method may further include: a procedure in which a detection unit detects information regarding the position of at least the head of the occupant; and a procedure in which the determination unit determines that the occupant is able to wear the fixing device when the image of the first frame and the image of the second frame overlap and, based on the result of the detection procedure, the position of the occupant's head is above a reference position set for the seat.
  • the extraction unit may divide a space in which a plurality of people are accommodated into regions corresponding to the number of people that fit in the space in the captured image.
  • the present invention it is possible to appropriately detect whether or not a person is wearing a fixing device based on a photographed image.
  • FIG. 1 is a diagram showing a configuration example of the detection device according to the third embodiment. It is a flowchart showing an example of the processing procedure performed by the detection device according to the third embodiment. It is a diagram showing an example in which the pull-out opening of the seat belt is provided at the upper part of the seat. It is a diagram showing a configuration example of the occupant
  • FIG. 24 is an example in which an image of a region (first frame) including the head of the occupant P1 is extracted when the occupant P1 is not wearing a seat belt.
  • FIG. 26 is an example in which an image of a region (first frame) including the head of the occupant P1 is extracted when the occupant P1 wears the seat belt after FIG. It is also an example in which an image of the region
  • FIG. 37 is an image example of the first detection area Ar1 when the occupant is not wearing a seat belt in FIG.
  • The detection device (fixing device detection device) described below can be applied to, for example, a vehicle, a roller coaster, a parachute, a paraglider, and the like.
  • the fixing device is a seat belt, and an object fixed by the fixing device is an occupant.
  • the fixing device is a safety bar, and the object fixed by the fixing device is a passenger.
  • the fixing device When the detection device is applied to a parachute, the fixing device is a body harness, and the object fixed by the fixing device is a person who uses the parachute.
  • The fixing device of the present embodiment may extend and retract like a seat belt, or may be rigid like a safety bar. Further, the fixing device includes not only a device that fixes the person to something, but also a device in which something is fixed to the person such that their relative position does not change.
  • FIG. 1 is a diagram illustrating an arrangement example of the detection device of the vehicle 2 according to the present embodiment.
  • The detection device 1A (fixing device detection device) is installed or attached to the vehicle 2, for example, near the dashboard.
  • The detection device 1A may also be installed or attached to the rearview mirror, the upper part of the windshield, or the like.
  • the detection device 1A detects an occupant seated in the front seat and detects whether or not a seat belt is worn.
  • the detection device 1B (fixed appliance detection device) is installed or attached to the vehicle 2, for example, on a console, a ceiling, a rearview mirror, an upper part of a windshield, or the like.
  • The detection device 1B detects an occupant seated in the rear seat and detects whether a seat belt is worn. Moreover, in FIG. 1, the reference numeral
  • the detection device 1 determines whether a seated occupant is wearing a seat belt based on the captured image.
  • The detection device 1 (1A or 1B) notifies the occupant to urge wearing of the seat belt when the occupant is not wearing the seat belt.
  • The overlapping state of the images refers to either a state in which the image of the first frame and the image of the second frame overlap (FIG. 6) or a state in which the image of the first frame and the image of the second frame do not overlap (FIG. 8).
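  • The two overlapping states distinguished above reduce to a rectangle intersection test, which can be sketched as follows; the (x, y, w, h) frame representation is an illustrative assumption, as the specification does not prescribe one:

```python
def frames_overlap(a, b):
    """True when two extracted frames overlap, False otherwise.

    Each frame is a hypothetical (x, y, w, h) tuple in image
    coordinates with (x, y) the top-left corner.
    """
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # The rectangles overlap when their extents intersect on both axes.
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

A second (seat belt) frame lying inside a first (occupant) frame corresponds to FIG. 6; disjoint frames correspond to FIG. 8.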
  • FIG. 2 is a diagram illustrating a configuration example of the detection device 1 according to the present embodiment.
  • the detection device 1 includes an imaging unit 11 (detection unit), a determination unit 12, a storage unit 13, a communication unit 14, and a notification unit 15. Note that the detection device 1 may perform transmission / reception of information with the external device 3 via a network.
  • the photographing unit 11 photographs an image including a front seat or a rear seat. Note that the photographing unit 11 performs photographing at a predetermined time interval.
  • the imaging unit 11 includes, for example, a CMOS (Complementary Metal-Oxide-Semiconductor) imaging device or a CCD (Charge Coupled Device) imaging device.
  • the determination unit 12 includes an extraction unit 121, an occupant determination unit 122, and a belt determination unit 123.
  • the determination unit 12 acquires an image captured by the imaging unit 11. Based on the determination result of the belt determination unit 123, the determination unit 12 generates an instruction to turn on or turn off warning information for prompting the user to wear the seat belt.
  • the extraction unit 121 extracts an image of an area including the occupant from the acquired image using the determination unit stored in the storage unit 13.
  • The determination means is, for example, an algorithm constructed to make a determination using a threshold value, or artificial intelligence (AI) constructed by machine learning or the like.
  • the extracted image of the area including the occupant is referred to as a first frame image.
  • The size of the first frame differs for each occupant depending on the occupant's physique and the size of the face. Even if a load (for example, luggage) is placed on the seat, the extraction unit 121 does not extract a first frame image by the determination means. Further, the extraction unit 121 extracts an image of a region including the seat belt from the acquired image using the determination means stored in the storage unit 13.
  • the image of the area including the extracted seat belt is referred to as a second frame image.
  • When the seat belt is not worn, the extraction unit 121 does not extract a second frame image by the determination means.
  • When the detection device 1 is applied to a roller coaster, a parachute, a paraglider, or the like, only a part of a person (for example, the chest) and the fixing device may be included in the captured image. In such a case, the entire captured image may be handled as the first frame image.
  • the occupant determination unit 122 determines that the occupant is seated when the image of the first frame is extracted by the extraction unit 121.
  • the occupant determination unit 122 determines that the occupant is not seated when the image of the first frame is not extracted by the extraction unit 121.
  • the occupant determination unit 122 generates an instruction to turn off warning information that prompts the user to wear the seat belt.
  • the belt determination unit 123 generates an instruction to turn on or off warning information for prompting the user to wear the seat belt based on the extracted state of the first frame image and the second frame image.
  • The belt determination unit 123 first determines whether an image of the second frame has been extracted, and generates an instruction to turn on warning information prompting wearing of the seat belt when the image of the second frame is not extracted. The processing performed by the belt determination unit 123 will be described later.
  • the storage unit 13 stores a determination unit that extracts an image of an area including an occupant from a captured image.
  • the storage unit 13 stores determination means for detecting an image of an area including the seat belt from the captured image.
  • The determination means stored in the storage unit 13 are learned and generated using images taken when the occupant is seated on the seat and wearing the seat belt. That is, in the present embodiment, images in which the occupant is not wearing the seat belt are not used when learning the determination means for image extraction.
  • Therefore, the detection device 1 extracts a seat belt image only when the occupant is wearing the seat belt, and does not extract a seat belt image when the occupant is not wearing the seat belt or when no occupant is seated.
  • the storage unit 13 stores a learning model for extracting an image of a region including the occupant from the captured image and a learning model for detecting an image of the region including the seat belt from the captured image. .
  • the communication unit 14 transmits the information output from the determination unit 12 to the external device 3 via the network.
  • the communication unit 14 receives the information transmitted by the external device 3 via the network, and outputs the received information to the determination unit 12.
  • the information transmitted by the external device 3 includes, for example, a determination unit used by the extraction unit 121.
  • the information output by the determination unit 12 is, for example, an image captured by the photographing unit 11, an image within a frame extracted by the extraction unit 121, a determination result, or the like.
  • the notification unit 15 turns warning information on or off according to an instruction from the belt determination unit 123.
  • the notification unit 15 is, for example, a speaker, an LED (light emitting diode) display device, a liquid crystal display device, or the like.
  • the warning information ON state is, for example, a state where the LED display device is lit or blinking.
  • the warning information off state is, for example, a state where the LED display device is turned off.
  • FIG. 3 is a diagram illustrating an example of a state in which an occupant is seated on a seat.
  • the longitudinal direction of the seat is the x-axis direction
  • the height direction is the y-axis direction.
  • passengers P1 and P2 are seated on the seat in the x-axis direction.
  • Reference numeral 4 denotes a seat.
  • Reference numeral 41 denotes a headrest.
  • Reference numeral 42 denotes a seat back.
  • Reference numeral 43 denotes a seat cushion.
  • Reference numerals 44-1 to 44-3 are seat belts.
  • the seated passenger P1 is an adult.
  • the passenger P2 is assumed to be a child, for example.
  • FIG. 4 is a diagram illustrating an example of an image photographed by the photographing unit 11 according to the present embodiment.
  • FIG. 4 is an example of photographing the front seat.
  • Reference numeral g102 is an example of an image photographed by the photographing unit 11.
  • the determination unit 12 acquires the image of the seatbelt imaged when the occupant is seated on the seat and is wearing the seatbelt, and stores the acquired image in the storage unit 13, for example.
  • the determination unit 12 learns a determination unit that extracts the first frame image and the second frame image from the plurality of images acquired in this way. Note that the initial value of the determination means may be stored during learning.
  • FIG. 5 is a diagram illustrating an example in which an occupant is not seated on a seat.
  • the extraction unit 121 attempts to extract an image of the first frame Fr from the image captured by the imaging unit 11. In the example of FIG. 5, the image of the first frame Fr is not extracted. Subsequently, the extraction unit 121 attempts to extract an image of the region (second frame Frb) including the seat belt from the image captured by the imaging unit 11. In the example of FIG. 5, the extraction unit 121 cannot extract the image of the first frame Fr, and thus does not extract the image of the second frame Frb.
  • the occupant determination unit 122 determines that no occupant is seated on the seat. As a result, the determination unit 12 determines that no occupant is seated on the seat, and turns off warning information that prompts the user to wear the seat belt. As a result, the notification unit 15 does not notify the warning information that prompts the user to wear the seat belt.
  • FIG. 6 is a diagram illustrating an example in which the occupant P1 is seated on the seat and is wearing a seat belt.
  • the image in the first frame Fr (P1) includes an image of the occupant P1, and specifically includes an image of the head, an image of the chest, and an image of the arm of the occupant P1.
  • The image in the second frame Frb (P1) includes an image of the seat belt.
  • the extraction unit 121 extracts an image of the first frame Fr (P1) from the image captured by the imaging unit 11.
  • the extraction unit 121 extracts an image of the second frame Frb (P1) from the image captured by the imaging unit 11.
  • the occupant determination unit 122 determines that the occupant P1 is seated on the seat because the first frame Fr (P1) is extracted. Since the second frame Frb (P1) is extracted, the belt determination unit 123 determines whether the image of the second frame Frb (P1) overlaps the image of the first frame Fr (P1). In the example shown in FIG. 6, since the image of the second frame Frb (P1) overlaps the image of the first frame Fr (P1), the belt determination unit 123 determines that the occupant is wearing the seat belt. .
  • the determination unit 12 determines that the occupant is seated on the seat and the occupant is wearing the seat belt, and turns off warning information that prompts the user to attach the seat belt.
  • the notification unit 15 does not notify the warning information that prompts the user to wear the seat belt.
  • FIG. 7 is a diagram illustrating an example in which the occupant P1 is seated on the seat and is not wearing a seat belt.
  • the extraction unit 121 attempts to extract an image of the first frame Fr from the image captured by the imaging unit 11.
  • an image of the first frame Fr (P1) corresponding to the occupant P1 is extracted.
  • the extraction unit 121 attempts to extract an image of the second frame Frb from the image captured by the imaging unit 11.
  • the image of the second frame Frb is not extracted.
  • This is because the determination means stored in the storage unit 13 are means for extracting an image of the region including the seat belt (second frame), learned from images in which a seated occupant is wearing the seat belt. For this reason, the extraction unit 121 does not extract a second frame image from an image in which the occupant is not wearing the seat belt.
  • the occupant determination unit 122 determines that the occupant is seated in the seat because the first frame Fr (P1) has been extracted. Since the second frame Frb is not extracted, the belt determination unit 123 determines that the occupant is not wearing the seat belt.
  • the determination unit 12 determines that the occupant is seated on the seat, but the occupant is not wearing the seat belt, and turns on warning information that prompts the user to attach the seat belt.
  • the notification unit 15 notifies warning information that prompts the user to wear the seat belt.
  • FIG. 8 is a diagram showing an example in which the occupant P1 is seated in the seat without wearing a seat belt, the occupant P2 is seated in the adjacent seat wearing its seat belt, and an image of a region including the seat belt of the adjacent seat is extracted.
  • The extraction unit 121 attempts to extract images of the first frame Fr from the image captured by the imaging unit 11. In the example of FIG. 8, an image of the first frame Fr (P1) corresponding to the occupant P1 and an image of the first frame Fr (P2) corresponding to the occupant P2 are extracted.
  • the extraction unit 121 attempts to extract an image of the second frame Frb from the image captured by the imaging unit 11.
  • the occupant determination unit 122 determines that the occupant P1 and the occupant P2 are seated on the seat because the first frame Fr (P1) and the first frame Fr (P2) are extracted.
  • the belt determination unit 123 determines that the occupant P1 does not wear the seat belt because the image of the first frame Fr (P1) does not overlap the image of the second frame Frb (P2).
  • the belt determination unit 123 determines that the occupant P2 is wearing the seat belt because the image of the first frame Fr (P2) overlaps the image of the second frame Frb (P2).
  • The determination unit 12 determines that the occupant P1 is seated in the seat but is not wearing a seat belt, and turns on the warning information that prompts the occupant P1 to wear the seat belt.
  • the notification unit 15 notifies warning information that prompts the occupant P1 to wear the seat belt.
  • The determination unit 12 determines that the occupant P2 is seated in the seat and is wearing the seat belt, and turns off the warning information that prompts the occupant P2 to wear the seat belt. As a result, the notification unit 15 does not notify the occupant P2 of warning information prompting wearing of the seat belt. That is, in this embodiment, whether or not to notify warning information prompting wearing of a seat belt is determined for each occupant, and the warning information is notified or not notified for each occupant seated in a seat.
  • FIG. 9 is a flowchart illustrating an example of a processing procedure performed by the detection device 1 according to the present embodiment. Note that the detection device 1 performs the following processing at a predetermined time interval after, for example, the ignition key is turned on.
  • Step S1 The photographing unit 11 photographs an image including a seat.
  • Step S2 The extraction unit 121 determines whether or not an image of the region including the occupant (first frame) has been extracted from the image captured by the imaging unit 11. When it determines that the image of the region including the occupant (first frame) has been extracted (step S2; YES), the extraction unit 121 proceeds to step S3; when it determines that the image could not be extracted (step S2; NO), it proceeds to step S7.
  • Step S3 The extraction unit 121 determines whether or not the image of the region (second frame) including the seat belt has been extracted from the image captured by the imaging unit 11. When it is determined that the image of the region (second frame) including the seat belt has been extracted (step S3; YES), the extraction unit 121 proceeds to the process of step S4 and extracts the region (second frame) including the seat belt. If it is determined that it could not be performed (step S3; NO), the process proceeds to step S8.
  • Step S4 The belt determination unit 123 determines whether or not the first frame image and the second frame image overlap each other. If the belt determination unit 123 determines that the first frame image and the second frame image overlap each other (step S4; YES), the process proceeds to step S6. If the belt determination unit 123 determines that the first frame image and the second frame image do not overlap (step S4; NO), the belt determination unit 123 proceeds to step S8.
  • Step S6 The belt determination unit 123 determines that the occupant is wearing the seat belt, and therefore generates an instruction to turn off warning information that prompts wearing of the seat belt. Step S7 In response to the instruction from the belt determination unit 123, the notification unit 15 turns off the warning information that prompts wearing of the seat belt, and ends the process.
  • Step S8 The belt determination unit 123 determines that the occupant is not wearing the seat belt.
  • Step S9 The belt determination unit 123 generates an instruction to turn on warning information that prompts wearing of the seat belt. Subsequently, the notification unit 15 turns on the warning information in accordance with the instruction from the belt determination unit 123, and the determination unit 12 ends the process.
  • Steps S2 and S3 may be performed simultaneously, or step S2 may be performed after step S3 is performed first.
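  • The flow of steps S1 to S9 for a single seat can be condensed into the following sketch; the callables standing in for the learned determination means, and the return convention (True when the warning should be on), are illustrative assumptions:

```python
def seat_belt_warning(extract_occupant, extract_belt, overlaps, image):
    """One pass of the procedure of FIG. 9 for a single seat.

    extract_occupant and extract_belt stand in for the learned
    determination means and return an extracted frame or None;
    overlaps is the overlap test of step S4. Returns True when the
    warning information should be turned on.
    """
    first = extract_occupant(image)   # step S2: extract first frame
    if first is None:
        return False                  # no occupant seated: warning off (S7)
    second = extract_belt(image)      # step S3: extract second frame
    if second is None:
        return True                   # no belt image: warning on (S8, S9)
    if overlaps(first, second):       # step S4: overlap determination
        return False                  # belt worn: warning off (S6, S7)
    return True                       # frames disjoint: warning on (S8, S9)
```

With stub extractors returning fixed frames or None, the function reproduces the four outcomes of the flowchart.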
  • As described above, in the present embodiment, the storage unit 13 stores determination means, learned from images in which the occupant is seated on the seat and wearing the seat belt, for extracting the image of the region including the occupant as the first frame image and for extracting the image of the region including the worn seat belt as the second frame image.
  • Then, the image of the region including the occupant is extracted as the first frame image, and the image of the region including the seat belt is extracted as the second frame image.
  • Whether or not the occupant is wearing the seat belt is determined from the overlap between the extracted first frame image and second frame image. That is, in the present embodiment, the occupant is determined to be wearing the seat belt when the first frame image and the second frame image overlap.
  • Because wearing is judged from the overlap of the first frame image and the second frame image, even when the seat belt of an adjacent seat is captured, its second frame image does not overlap the first frame image corresponding to the target occupant, so erroneous detection of the seat belt wearing state can be prevented.
  • the determination unit 12 may perform the above-described processing for each seat.
  • the storage unit 13 may store information indicating the seat position in advance.
  • The storage unit 13 may store determination means, learned from images in which an occupant is seated on the seat and wearing the seat belt, for extracting the image of the region including the occupant as the first frame image and the image of the region including the seat belt as the second frame image.
  • This determination means may be used to extract the image of the region including the occupant as the first frame image and the image of the region including the seat belt as the second frame image.
  • In the following, an example will be described in which the image of the region including the occupant (first frame) and the image of the region including the seat belt (second frame) are extracted from the image captured by the imaging unit 11, and whether or not the occupant is wearing the seat belt is determined from the overlapping state of the first frame image and the second frame image. Here, the overlapping state of the images includes, for example, a state in which the first frame image and the second frame image do not overlap, and a state in which the second frame image does not straddle the boundary line.
  • FIG. 10 is a diagram illustrating a configuration example of the detection apparatus 1AA according to the present embodiment.
  • The same reference symbols are used for functional parts having the same functions as in the detection device 1, and their description is omitted.
  • the detection device 1AA (fixed appliance detection device) includes an imaging unit 11 (detection unit), a determination unit 12A, a storage unit 13A, a communication unit 14, and a notification unit 15.
  • the determination unit 12A includes an extraction unit 121A, an occupant determination unit 122, and a belt determination unit 123A.
  • the extraction unit 121A extracts an image of an area including the occupant from the acquired image using the determination unit stored in the storage unit 13A.
  • The determination means is, for example, an algorithm constructed to make a discrimination using a threshold value, or artificial intelligence (AI) constructed by machine learning or the like.
  • the extraction unit 121A extracts an image of an area including the seat belt from the acquired image using the determination unit stored in the storage unit 13A.
  • the extraction unit 121A obtains the area of the first frame image and obtains the area of the second frame image.
  • the extraction unit 121A obtains the coordinates of the four corners of the extracted second frame image (for example, relative coordinates and absolute coordinates in the vehicle).
  • When the detection apparatus 1AA is applied to a roller coaster, a parachute, a paraglider, or the like, a part of a person (for example, the chest) and a fixing device may be included in the captured image. In such a case, the entire captured image may be handled as the first frame image.
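The quantities the extraction unit 121A obtains, the four corner coordinates and the area of each extracted frame, can be derived as follows. This is an illustrative sketch; the `(x, y, w, h)` detector-output format is an assumption, not stated in the patent.

```python
def corners_from_box(x, y, w, h):
    """Four corners (top-left, top-right, bottom-right, bottom-left)
    of a detection box given as position plus size."""
    return [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]

def box_area(x, y, w, h):
    """Area of the detection box in pixels."""
    return w * h
```

In a vehicle, these corner coordinates could be kept either relative to the image or converted to absolute in-vehicle coordinates, as the text notes.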
  • the occupant determination unit 122 determines that the occupant is seated when the image of the first frame is extracted by the extraction unit 121A as in the first embodiment.
  • the occupant determination unit 122 determines that the occupant is not seated when the image of the first frame is not extracted by the extraction unit 121A.
  • In this case, the belt determination unit 123A generates an instruction to turn off the warning information that prompts the user to wear the seat belt.
  • Note that the belt determination unit 123A may determine that the occupant is not seated when the area of the first frame is equal to or less than a predetermined area.
  • The belt determination unit 123A generates an instruction to turn the warning information that prompts the user to wear the seat belt on or off based on the result of comparing the extraction states of the first frame image and the second frame image, in particular the area where the first frame image and the second frame image overlap, with a predetermined area. Note that, as in the first embodiment, the belt determination unit 123A first determines whether a second frame image has been extracted, and if no second frame image is extracted, it generates an instruction to turn on the warning information that prompts the user to wear the seat belt. The processing performed by the belt determination unit 123A will be described later.
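The area comparison performed by the belt determination unit 123A can be sketched as below. Rectangles are assumed to be axis-aligned `(x1, y1, x2, y2)` tuples; the representation and the threshold value are illustrative assumptions, not taken from the patent.

```python
def overlap_area(a, b):
    """Area of the intersection of two axis-aligned rectangles (0 if disjoint)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def belt_worn(first_frame, second_frame, predetermined_area):
    """The belt is judged worn when the first/second frame overlap
    exceeds the predetermined area."""
    return overlap_area(first_frame, second_frame) > predetermined_area
```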
  • the storage unit 13A stores determination means for extracting an image of the area including the occupant from the captured image.
  • the storage unit 13A stores determination means for detecting an image of an area including the seat belt from the captured image.
  • the storage unit 13A stores a predetermined area for each type of seat belt.
  • the storage unit 13A stores a predetermined area for each seat, for example.
  • the storage unit 13A stores coordinates of a boundary line C described later. The boundary line C will be described later.
  • the storage unit 13A stores a learning model for extracting an image of the region including the occupant from the captured image and a learning model for detecting an image of the region including the seat belt from the captured image.
  • FIG. 11 is a diagram illustrating an example in which an occupant is not seated on a seat.
  • FIG. 12 is a diagram illustrating an example in which an occupant is seated on a seat and is wearing a seat belt.
  • As shown in FIGS. 11 and 12, the area of the second frame Frb (P1) is smaller when the occupant is not seated on the seat (FIG. 11) than when the occupant is seated on the seat and wearing the seat belt (FIG. 12).
  • the predetermined area is the area of the image of the seat belt when the occupant is not wearing the seat belt.
  • the determination unit 12A captures the state before the seat belt is attached, extracts the second frame image, and stores the extracted image area in the storage unit 13A as a predetermined area.
  • the predetermined area may be, for example, half or more of the area of the image of the first frame. Alternatively, the predetermined area may be double the area of the second frame when the occupant is not wearing the seat belt.
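The three candidate definitions of the "predetermined area" described above can be written out as one helper. Which rule a real device uses would be a design choice; the function and parameter names here are illustrative assumptions.

```python
def predetermined_area(rule, unworn_belt_area=None, first_frame_area=None):
    """Candidate thresholds for the overlap-area comparison."""
    if rule == "unworn":          # area of the belt image before wearing
        return unworn_belt_area
    if rule == "half_first":      # half of the first-frame area
        return first_frame_area / 2
    if rule == "double_unworn":   # double the unworn second-frame area
        return 2 * unworn_belt_area
    raise ValueError(rule)
```

The "unworn" rule corresponds to capturing the seat before the belt is attached and storing the extracted second-frame area, as the text describes.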
  • In FIGS. 11 and 12, symbols W1 and W2 represent the width of the second frame Frb (P1) image in the x-axis direction.
  • the chain line C is a boundary line set at a predetermined position with respect to the left-right direction (x-axis direction, seating direction) of the seat, for example.
  • the boundary line C is set, for example, approximately at the center of the seat or headrest in the x-axis direction for each seat. Further, as shown in FIG. 11, when the occupant does not wear the seat belt, the image of the second frame Frb (P1) does not straddle the boundary line C.
  • FIG. 13 is a diagram illustrating an example in a case where an occupant is seated on a seat and a seat belt is not worn.
  • the seat belt is hidden behind the arm of the occupant P1.
  • Images of the first frame Fr (P1) and the second frame Frb (P1) are extracted by the extraction unit 121A.
  • Since the first frame Fr (P1) is extracted, the occupant determination unit 122 determines that the occupant P1 is seated on the seat. Since the second frame Frb (P1) is extracted, the belt determination unit 123A determines whether the area where the first frame image and the second frame image overlap is greater than or equal to the predetermined area. In the example shown in FIG. 13, the area where the first frame image and the second frame image overlap is less than the predetermined area.
  • The determination unit 12A determines that the occupant is seated on the seat but is not wearing the seat belt, and turns on the warning information that prompts the user to wear the seat belt.
  • the notification unit 15 notifies warning information that prompts the user to wear the seat belt.
  • FIG. 14 is a diagram illustrating an example in which an image of an area including a seat belt for an adjacent seat is extracted when the occupant P1 wearing the seat belt is seated on the seat.
  • the extraction unit 121A attempts to extract the image of the first frame Fr from the image captured by the imaging unit 11. In the example of FIG. 14, only the image of the first frame Fr (P1) corresponding to the passenger P1 is extracted. Since the first frame Fr (P1) is extracted, the occupant determination unit 122 determines that only the occupant P1 is seated on the seat.
  • The extraction unit 121A attempts to extract second frame Frb images from the image captured by the imaging unit 11. In the example of FIG. 14, the image of the second frame Frb (P1) corresponding to the seat belt 44-1, the image of the second frame Frb (P2) corresponding to the seat belt 44-2, and the image of the second frame Frb (P3) corresponding to the seat belt 44-3 are extracted. Since the second frame Frb (P1) has been extracted, the belt determination unit 123A determines the extent to which the image of the second frame Frb (P1) overlaps the image of the first frame Fr (P1). In the example shown in FIG. 14, the area where the image of the first frame Fr (P1) and the image of the second frame Frb (P1) overlap is larger than the predetermined area.
  • For this reason, the belt determination unit 123A determines that the occupant P1 is wearing the seat belt.
  • the determination unit 12A determines that the occupant P1 is seated on the seat and the occupant P1 is wearing the seat belt, and turns off warning information that prompts the user to wear the seat belt.
  • Since the image of the second frame Frb (P2) does not overlap the image of the first frame Fr (P1), the belt determination unit 123A determines that the image of the second frame Frb (P2) is not the seat belt for the occupant P1. Similarly, since the image of the second frame Frb (P3) does not overlap the image of the first frame Fr (P1), it determines that the image of the second frame Frb (P3) is not the seat belt for the occupant P1.
  • Because the determination unit 12A determines that the image of the second frame Frb (P2) does not overlap the first frame image Fr, and the occupant determination unit 122 determines that only the occupant P1 is seated, the warning information prompting the occupant of that seat to wear the seat belt is turned off. Likewise, because the image of the second frame Frb (P3) does not overlap the first frame image Fr, the warning information for that seat is also turned off.
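The FIG. 14 situation, where several belt frames are extracted but only the one overlapping the occupant's first frame by more than the predetermined area counts as that occupant's belt, can be sketched as a filter. All data values and names below are invented for illustration.

```python
def overlap_area(a, b):
    """Intersection area of two axis-aligned rectangles (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def belts_for_occupant(first_frame, second_frames, predetermined_area):
    """Keep only the belt frames attributable to this occupant."""
    return [f for f in second_frames
            if overlap_area(first_frame, f) > predetermined_area]
```

With an occupant frame at `(0, 0, 10, 10)` and belt frames for three adjacent seats, only the overlapping one survives, so the neighbors' belts cannot trigger a false "worn" result.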
  • FIG. 15 is a diagram showing an example before the seat belt of the type that fixes the waist is attached.
  • the determination unit 12A captures the state before mounting, extracts an image of the second frame Frb (P1), and stores the area of the extracted image in the storage unit 13A as a predetermined area.
  • reference sign W11 is the width in the x-axis direction of the image of the second frame Frb (P1). Further, as shown in FIG. 15, when the occupant does not wear the seat belt, the image of the second frame Frb (P1) does not straddle the boundary line C.
  • FIG. 16 is a diagram showing an example after the seat belt of the type that fixes the waist is attached.
  • an occupant is seated on the seat and a seat belt is worn.
  • an image of the first frame Fr (P1) and an image of the second frame Frb (P1) are extracted.
  • the image of the second frame Frb (P1) straddles the boundary line C.
  • The belt determination unit 123A determines that the occupant P1 is wearing the seat belt because the area where the image of the first frame Fr (P1) and the image of the second frame Frb (P1) overlap is larger than the predetermined area. As a result, the determination unit 12A determines that the occupant P1 is seated in the seat and wearing the seat belt, and turns off the warning information that prompts the user to wear the seat belt. The notification unit 15 therefore does not notify the warning information.
  • a predetermined area corresponding to the shape and type of the seat belt is stored in the storage unit 13A, and this is used to determine whether or not the seat belt is attached.
  • the predetermined area is the area when the occupant does not wear the seat belt, but is not limited thereto.
  • the predetermined area may be, for example, half of the area when the occupant is wearing the seat belt with reference to the area when the occupant is wearing the seat belt.
  • FIG. 17 is a flowchart illustrating an example of a processing procedure performed by the detection apparatus 1AA according to the present embodiment. Note that the detection apparatus 1AA performs the following processing at a predetermined time interval after, for example, the ignition key is turned on.
  • Step S101 The imaging unit 11 captures an image including a seat.
  • Step S102 The extraction unit 121A determines whether or not an image of the region including the occupant (first frame) has been extracted from the image captured by the imaging unit 11. If it determines that the image has been extracted (step S102; YES), the extraction unit 121A proceeds to step S103; if it determines that the image could not be extracted (step S102; NO), the process proceeds to step S107.
  • Step S103 The extraction unit 121A determines whether or not an image of the region including the seat belt (second frame) has been extracted from the image captured by the imaging unit 11. If it determines that the image has been extracted (step S103; YES), the extraction unit 121A proceeds to step S104; if it determines that the image could not be extracted (step S103; NO), the process proceeds to step S108.
  • Step S104 The belt determination unit 123A determines whether the area where the image of the first frame and the image of the second frame overlap is larger than a predetermined area. If the belt determination unit 123A determines that the area where the image of the first frame and the image of the second frame overlap is larger than the predetermined area (step S104; YES), the process proceeds to step S106. If the belt determination unit 123A determines that the area where the image of the first frame and the image of the second frame overlap is equal to or smaller than the predetermined area (step S104; NO), the process proceeds to step S108.
  • Step S106 The belt determination unit 123A determines that the occupant is wearing the seat belt. For this reason, the belt determination unit 123A generates an instruction to turn off warning information that prompts the user to wear the seat belt.
  • Step S107 The notification unit 15 turns off warning information for prompting the user to wear the seat belt in accordance with an instruction from the belt determination unit 123A. Note that the off state of the warning information that prompts the user to wear the seat belt is, for example, a state in which the lamp is turned off.
  • the notification unit 15 ends the process.
  • Step S108 The belt determination unit 123A determines that the occupant is not wearing the seat belt.
  • Step S109 The belt determination unit 123A generates an instruction to turn on warning information that prompts the user to wear the seat belt.
  • the notification unit 15 turns on warning information that prompts the user to wear the seat belt in accordance with an instruction from the belt determination unit 123A.
  • the on state of the warning information that prompts the user to wear the seat belt is, for example, a state in which the lamp is lit or a state in which the lamp is blinked.
  • the determination unit 12A ends the process.
  • step S102 and step S103 may be performed simultaneously, or step S102 may be performed after step S103 is performed first.
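The whole FIG. 17 flow (steps S101 through S109) can be condensed into one function. This is a hedged sketch assuming the extraction results are passed in as rectangles or `None`; the return values `"on"`/`"off"` and all names are illustrative.

```python
def overlap_area(a, b):
    """Intersection area of two axis-aligned rectangles (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def warning_state(first_frame, second_frame, predetermined_area):
    """Return "on" or "off" for the seat-belt warning information."""
    if first_frame is None:                 # no occupant (S102; NO) -> S107
        return "off"
    if second_frame is None:                # no belt image (S103; NO) -> S108
        return "on"
    if overlap_area(first_frame, second_frame) > predetermined_area:  # S104
        return "off"                        # worn (S106 -> S107)
    return "on"                             # not worn (S108 -> S109)
```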
  • Instead of, or in addition to, the comparison of the overlapping area of the first frame image and the second frame image, the determination unit 12A may determine whether or not the seat belt is worn based on whether or not the second frame image straddles the boundary line C. In this case, the determination unit 12A makes the determination using the coordinates of the four corners of the second frame image obtained by the extraction unit 121A and the coordinates of the boundary line C.
  • the image of the second frame when the occupant is not seated on the seat and the seat belt is not worn does not straddle the boundary line C in the x-axis direction.
  • the image of the second frame when the occupant is seated on the seat and the seat belt is worn straddles the boundary line C in the x-axis direction.
  • the image of the second frame when the seat belt is not worn does not cross the boundary line C in the x-axis direction.
  • the image of the second frame when the seat belt is worn straddles the boundary line C in the x-axis direction.
  • Instead of, or in addition to, the comparison of the overlapping area of the first frame image and the second frame image, the determination unit 12A may determine whether or not the occupant is wearing the seat belt based on whether or not the width (horizontal width) of the second frame image in the x-axis direction is equal to or greater than a predetermined width. In this case, the determination unit 12A makes the determination using the coordinates of the four corners of the second frame image obtained by the extraction unit 121A and the coordinates of the boundary line C. In the examples shown in FIGS. 11 to 16, the horizontal width (x-axis direction) is used as an example, but the width in the vertical direction (y-axis direction) may be used instead.
  • Similarly, instead of, or in addition to, the comparison of the overlapping area of the first frame image and the second frame image, the determination unit 12A may determine whether or not the occupant is wearing the seat belt based on whether or not the width of the second frame image in the y-axis direction (vertical width) is equal to or greater than a predetermined width.
  • As shown in FIGS. 11 and 12, the lateral width W2 in the x-axis direction of the second frame image when the seat belt is worn is wider than the width W1 of the second frame image when the seat belt is not worn.
  • Similarly, as shown in FIGS. 15 and 16, the lateral width W12 in the x-axis direction of the second frame image when the seat belt is worn is wider than the width W11 of the second frame image when the seat belt is not worn.
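The width-based variant compares the second frame's horizontal extent with a predetermined width (somewhere between the unworn width W1 and the worn width W2). A minimal sketch, assuming rectangles `(x1, y1, x2, y2)` and an illustrative threshold:

```python
def frame_width(frame):
    """Horizontal (x-axis) width of a rectangle (x1, y1, x2, y2)."""
    return frame[2] - frame[0]

def belt_worn_by_width(second_frame, predetermined_width):
    """Worn when the belt image is at least the predetermined width wide."""
    return frame_width(second_frame) >= predetermined_width
```

The same comparison would apply to the vertical (y-axis) width mentioned in the text, using `frame[3] - frame[1]` instead.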
  • As described above, in the present embodiment, the image of the region including the occupant is extracted as the first frame image, and the image of the region including the seat belt is extracted as the second frame image.
  • After the first frame and the second frame are extracted, whether or not the occupant is wearing the seat belt is determined according to the area where the extracted first frame image and second frame image overlap.
  • When the image of the second frame is outside the image of the first frame, it is determined that the occupant is not wearing the seat belt.
  • Thereby, the wearing state of the seat belt can be determined from the captured image. Further, according to the present embodiment, even when the seat belt of an adjacent seat is captured and its second frame image overlaps the first frame image corresponding to the target occupant, the overlapping area does not exceed the predetermined area or the threshold area ratio. For this reason, according to the present embodiment, erroneous detection of the seat belt wearing state can be prevented.
  • the determination unit 12A obtains the ratio of the second frame image to the first frame image in the image when the occupant is seated and the seat belt is worn. That is, when the second frame overlaps the first frame, the determination unit 12A obtains the ratio of the area of the overlapping portion to the area of the first frame.
  • the determination unit 12A obtains the ratio of the first frame image and the second frame image in the image when the occupant is seated and the seat belt is not worn. Thereby, the determination unit 12A calculates the threshold value of the area ratio, and stores the calculated threshold value in the storage unit 13A.
  • the storage unit 13A stores the area ratio threshold. The threshold may also be stored in the storage unit 13A in advance.
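In this modification the decision quantity is the ratio of the first/second-frame overlap to the first-frame area, compared with the stored threshold (step S201). A sketch follows; the 0.2 default threshold is purely illustrative, not from the patent.

```python
def overlap_area(a, b):
    """Intersection area of two axis-aligned rectangles (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def area_ratio(first_frame, second_frame):
    """Overlap area as a fraction of the first-frame area."""
    fx1, fy1, fx2, fy2 = first_frame
    first_area = (fx2 - fx1) * (fy2 - fy1)
    return overlap_area(first_frame, second_frame) / first_area

def belt_worn_by_ratio(first_frame, second_frame, threshold=0.2):
    """Step S201: worn when the ratio exceeds the area ratio threshold."""
    return area_ratio(first_frame, second_frame) > threshold
```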
  • FIG. 18 is a flowchart illustrating an example of a processing procedure performed by the detection apparatus 1AA according to the modification of the present embodiment.
  • The same step numbers are used for the same processes as in FIG. 17, and their description is omitted.
  • Steps S101 to S103 The detection apparatus 1AA performs the processes of Steps S101 to S103. After the process, the determination unit 12A proceeds to the process of step S201.
  • Step S201 The belt determination unit 123A determines whether the ratio of the area where the first frame image and the second frame image overlap to the area of the first frame image is greater than the area ratio threshold. If it determines that the ratio is greater than the area ratio threshold (step S201; YES), the process proceeds to step S106. If it determines that the ratio is equal to or less than the area ratio threshold (step S201; NO), the process proceeds to step S108.
  • Instead of, or in addition to, the comparison of the ratio of the areas of the first frame image and the second frame image, the determination unit 12A may determine whether or not the seat belt is worn based on whether or not the second frame image straddles the boundary line C. Further, instead of, or in addition to, this ratio comparison, the determination unit 12A may determine whether or not the occupant is wearing the seat belt based on whether or not the width (horizontal width) of the second frame image in the x-axis direction is equal to or greater than a predetermined width.
  • As described above, also in the modification of the present embodiment, the seat belt wearing state can be determined from the captured image. Further, even when the seat belt of an adjacent seat is captured and its second frame image overlaps the first frame image corresponding to the target occupant, the overlapping area does not exceed the predetermined area or the threshold area ratio. For this reason, according to the modification of the present embodiment, erroneous detection of the seat belt wearing state can be prevented.
  • the determination unit 12A may perform the processing of FIG. 17 or FIG. 18 for each seat.
  • the storage unit 13A may store information indicating the position of the seat in advance.
  • Next, an example will be described in which an image of the region including the head of the occupant (first frame) and an image of the region including the seat belt (second frame) are extracted from the image captured by the imaging unit 11, and whether or not the occupant is wearing the seat belt is determined based on the state of the second frame image.
  • Here, the overlapping state of the images includes: a state in which the first frame image and the second frame image overlap; a state in which they do not overlap; a state in which they overlap and the second frame image straddles the boundary line; and a state in which they overlap and the second frame image does not straddle the boundary line.
  • FIG. 19 is a diagram illustrating a configuration example of the detection device 1BB according to the present embodiment.
  • The same reference symbols are used for functional parts having the same functions as in the detection device 1, and their description is omitted.
  • the detection device 1BB (fixed appliance detection device) includes an imaging unit 11 (detection unit), a determination unit 12B, a storage unit 13B, a communication unit 14, and a notification unit 15.
  • the determination unit 12B includes an extraction unit 121B, an occupant determination unit 122, and a belt determination unit 123B.
  • the extraction unit 121B extracts an image of an area including the occupant from the acquired image using the determination unit stored in the storage unit 13B.
  • The determination means is, for example, an algorithm constructed to make a discrimination using a threshold value, or artificial intelligence (AI) constructed by machine learning or the like.
  • the extraction unit 121B extracts an image of an area including the seat belt from the acquired image using the determination unit stored in the storage unit 13B.
  • the extraction unit 121B obtains coordinates of the four corners of the extracted second frame image (for example, relative coordinates and absolute coordinates in the vehicle).
  • the detection device 1BB when the detection device 1BB is applied to a roller coaster (roller coaster), a parachute, a paraglider, or the like, a part of a person (for example, the chest) and a fixing device may be included in the photographed image. In such a case, the entire captured image may be handled as the first frame image.
  • the occupant determination unit 122 determines that the occupant is seated when the image of the first frame is extracted by the extraction unit 121B as in the first embodiment.
  • the occupant determination unit 122 determines that the occupant is not seated when the image of the first frame is not extracted by the extraction unit 121B.
  • In this case, the belt determination unit 123B generates an instruction to turn off the warning information that prompts the user to wear the seat belt.
  • The belt determination unit 123B determines whether or not the first frame image and the second frame image overlap. When they overlap, the belt determination unit 123B determines whether the position of the second frame image straddles the boundary line C in the x-axis direction. If it determines that the position of the second frame image straddles the boundary line C in the x-axis direction, the belt determination unit 123B determines that the occupant is wearing the seat belt. In this case, the belt determination unit 123B generates an instruction to turn off the warning information that prompts the user to wear the seat belt.
  • If the belt determination unit 123B determines that the position of the second frame image does not straddle the boundary line C in the x-axis direction, it determines that the occupant is not wearing the seat belt. In this case, the belt determination unit 123B generates an instruction to turn on the warning information that prompts the user to wear the seat belt.
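The boundary-line test used by the belt determination unit 123B can be sketched as follows. A vertical boundary at `x = boundary_x` is assumed for simplicity (the text notes the line may also be oblique or curved); all names are illustrative.

```python
def straddles_boundary(second_frame, boundary_x):
    """True if the rectangle (x1, y1, x2, y2) crosses the vertical line C."""
    x1, _, x2, _ = second_frame
    return x1 < boundary_x < x2

def belt_worn_1bb(frames_overlap, second_frame, boundary_x):
    """Worn only if the frames overlap AND the belt crosses line C."""
    return frames_overlap and straddles_boundary(second_frame, boundary_x)
```

A belt hidden behind the occupant's arm (FIG. 13) yields a narrow second frame that stays on one side of C, so the check correctly reports "not worn" even though the frames overlap.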
  • the storage unit 13B stores a determination unit that extracts an image of an area including an occupant from the captured image.
  • the storage unit 13B stores determination means for detecting an image of the seat belt from the captured image.
  • the storage unit 13B stores the coordinates of the boundary line C.
  • the storage unit 13B stores a learning model for extracting an image of a region including the occupant from the captured image.
  • An example of a type in which the seat belt is placed over the shoulder will be described with reference to FIGS. 11, 12, and 13.
  • In the example of FIG. 11, the image of the second frame Frb (P1), which is an image of the region including the seat belt, is extracted. The position of the image of the second frame Frb (P1) does not straddle the boundary line C in the x-axis direction (seating direction).
  • In the example of FIG. 12, the image of the first frame Fr (P1), which is an image of the region including the occupant, and the image of the second frame Frb (P1), which is an image of the region including the seat belt, are extracted. The position of the image of the second frame Frb (P1) is within the image of the first frame and straddles the boundary line C in the x-axis direction (seating direction).
  • In the example of FIG. 13, the image of the first frame Fr (P1), which is an image of the region including the occupant, and the image of the second frame Frb (P1), which is an image of the region including the seat belt, are extracted. The position of the image of the second frame Frb (P1) is within the image of the first frame, but does not straddle the boundary line C in the x-axis direction (seating direction).
  • the position of the boundary line C is not limited to the position of the center of the seat or the headrest as described above, and may be an arbitrary position. Further, the boundary line C may be provided in the vertical direction as shown in FIGS. 11 to 17, or may be provided in an oblique direction.
  • The boundary line C may be a position based on the seat belt mounting position and the positional relationship between the seat belt and the seat. The boundary line C may also be provided according to the position of the seat belt regardless of the first frame (for example, provided in advance in the middle between the belt and buckle positions when the seat belt is not in use).
  • Alternatively, the middle of the first frame may be defined as the boundary line, with the boundary line C provided according to the extracted first frame. Further, the boundary line is not limited to a straight line and may be a curved line. In other words, the boundary line C may be any line that the second frame image straddles in an image in which an occupant is seated on the seat and wearing the seat belt.
  • FIG. 18 is a flowchart illustrating an example of a processing procedure performed by the detection apparatus 1BB according to the present embodiment.
  • the detection device 1BB performs the following processing at predetermined time intervals after, for example, the ignition key is turned on.
  • Step S301 The imaging unit 11 captures an image including a seat.
  • Step S302 The extraction unit 121B determines whether or not an image of the region including the occupant (first frame) has been extracted from the image captured by the imaging unit 11. If it determines that the image has been extracted (step S302; YES), the extraction unit 121B proceeds to step S303; if it determines that the image could not be extracted (step S302; NO), the process proceeds to step S307.
  • Step S303 The extraction unit 121B determines whether or not an image of the region including the seat belt (second frame) has been extracted from the image captured by the imaging unit 11. If it determines that the image has been extracted (step S303; YES), the extraction unit 121B proceeds to step S304; if it determines that the image could not be extracted (step S303; NO), the process proceeds to step S308.
  • Step S304 The belt determination unit 123B determines whether or not the image of the first frame and the image of the second frame overlap. When they overlap, the belt determination unit 123B determines whether or not the position of the image of the second frame straddles the boundary line C. If it determines that the position of the image of the second frame straddles the boundary line C (step S304; YES), the belt determination unit 123B proceeds to step S306; if it determines that the position does not straddle the boundary line C (step S304; NO), it proceeds to step S308.
  • Step S306 The belt determination unit 123B determines that the occupant is wearing the seat belt. For this reason, the belt determination unit 123B generates an instruction to turn off warning information that prompts the user to wear the seat belt.
  • Step S307 The notification unit 15 turns off warning information that prompts the user to wear the seat belt in accordance with an instruction from the belt determination unit 123B. Note that the off state of the warning information that prompts the user to wear the seat belt is, for example, a state in which the lamp is turned off.
  • the notification unit 15 ends the process.
  • Step S308 The belt determination unit 123B determines that the passenger is not wearing the seat belt.
  • Step S309 The belt determination unit 123B generates an instruction to turn on warning information that prompts the user to wear the seat belt.
  • the notification unit 15 turns on warning information that prompts the user to wear the seat belt in accordance with an instruction from the belt determination unit 123B.
  • the on state of the warning information that prompts the user to wear the seat belt is, for example, a state in which the lamp is lit or a state in which the lamp is blinked.
  • After the above processing, the determination unit 12B ends the process.
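The S301 to S309 flow above can be sketched as a single decision function. This is an illustrative outline only: the extractor callables stand in for the extraction unit 121B and are assumed to return a bounding box `(x_min, y_min, x_max, y_max)` or `None` when nothing is found.

```python
# Hypothetical sketch of the S301-S309 flow described above.

def seat_belt_warning(image, boundary_c, extract_first_frame, extract_second_frame):
    """Return True when the warning prompting belt use should be ON."""
    if extract_first_frame(image) is None:      # S302: no occupant found
        return False                            # S307: warning off
    second = extract_second_frame(image)        # S303: seat-belt region
    if second is None:                          # no belt visible
        return True                             # S308/S309: warning on
    x_min, _, x_max, _ = second
    if x_min < boundary_c < x_max:              # S304: belt straddles C
        return False                            # S306/S307: belt worn, warning off
    return True                                 # S308/S309: warning on

# Occupant present and belt frame straddling C=160 -> no warning.
print(seat_belt_warning(None, 160,
                        lambda img: (80, 20, 260, 400),
                        lambda img: (100, 50, 220, 300)))  # False
```

Note that, as in step S307, an empty seat also turns the warning off.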
  • As described above, the determination unit 12B determines whether or not the occupant is wearing the seat belt when the image of the first frame and the image of the second frame overlap. Accordingly, the determination unit 12B may omit the wearing determination when the image of the first frame and the image of the second frame do not overlap.
  • The boundary line C described above is set for each seat. Therefore, in FIG. 8, for example, when detecting the seat belt wearing state of the occupant P1, even if a part of the second frame Frb (P2) of the seat belt 44-2 of the adjacent seat overlaps the first frame Fr (P1) of the occupant P1, the position of the image of the second frame Frb (P2) does not straddle the boundary line C corresponding to the occupant P1. For this reason, according to this embodiment, the determination unit 12B can be prevented from erroneously detecting the seat belt of an adjacent seat.
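The per-seat behaviour can be illustrated as follows. All seat names and coordinate values here are assumptions for the sake of the example, not values from the patent.

```python
# Hypothetical sketch: a separate boundary line per seat keeps the belt
# of an adjacent seat from being counted for this occupant, even when
# its frame partly overlaps the occupant's first frame.

SEAT_BOUNDARY_C = {"P1": 160, "P2": 420}  # assumed x position of line C per seat

def belt_worn_for_seat(seat, second_frames):
    c = SEAT_BOUNDARY_C[seat]
    return any(x0 < c < x1 for (x0, _, x1, _) in second_frames)

# Frb(P2) of the neighbouring belt reaches into P1's first frame but
# lies entirely to the right of P1's boundary line, so it is not
# mistaken for P1's worn belt.
print(belt_worn_for_seat("P1", [(200, 50, 260, 300)]))  # False
print(belt_worn_for_seat("P2", [(380, 50, 460, 300)]))  # True
```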
  • In step S304, the belt determination unit 123B need not determine whether or not the image of the first frame and the image of the second frame overlap.
  • Instead, the belt determination unit 123B may determine only whether or not the position of the image of the second frame straddles the boundary line C. The reason is as follows: in general, a state in which the position of the image of the second frame, which is the region including the seat belt, straddles the boundary line C is a state in which the occupant is wearing the seat belt. For this reason, the determination unit 12B may determine whether or not the occupant is wearing the seat belt based only on the state of the image of the second frame.
  • Step S302 may be omitted when the detection device 1BB is mounted on the vehicle in combination with, for example, a seating sensor: the occupant is detected by the seating sensor, and only the seat belt is detected from the image. Further, for a seat where the seat belt is essential (a driver's seat, a one-person vehicle, or the like), it is not necessary to detect the occupant, so the occupant determination in step S302 may be omitted.
  • It is preferable for the determination unit 12B to make the determination on as small an image as possible, in view of the processing load and power consumption of the determination unit 12B, the capacity stored in the storage unit 13B, the power consumed when transmitting to the external device 3, and the like. For this reason, if the fixing device is indispensable, an image may be captured only of a specific small range from which the person cannot be identified, and whether the fixing device is in use may be determined by whether its image straddles the boundary line.
  • The detection device 1BB is not limited to the driver's seat of a vehicle or a one-person vehicle: for example, the imaging unit 11 may be set to image the chest of a person wearing a parachute or the like, and whether or not the belt is worn may be determined through the above processing.
  • If step S302 is simply omitted, the warning information is turned on even when there is no person and the seat belt is therefore not worn; however, in a situation where it can be assumed that a person who should wear the fixing device is present, step S302 can be omitted.
  • As described above, according to the present embodiment, the wearing state of the seat belt can be determined from the captured image. Furthermore, according to the present embodiment, even when an image of the seat belt of the adjacent seat is captured, the image of the second frame corresponding to that seat belt does not straddle the boundary line C, so an occupant who is not wearing the seat belt can be prevented from being erroneously detected as wearing it. In addition, according to the present embodiment, even when a part of an unworn seat belt is detected as an image of the second frame from the captured image, the image of the second frame corresponding to that seat belt does not straddle the boundary line C, so it can be detected and notified that the occupant is not wearing the seat belt.
  • According to the present embodiment, in addition to detecting a passenger's seat belt in a vehicle, when a fixing device must be worn, for example in a one-person vehicle or with a parachute, whether or not the fixing device is worn by a person can be appropriately detected from the captured image, and a notification can be appropriately issued based on the detection result when the fixing device is not worn.
  • the determination unit 12B may perform the process of FIG. 20 for each seat.
  • the storage unit 13B may store information indicating the seat position in advance.
  • FIG. 21 is a diagram illustrating an example in which the seat belt outlet is provided in the upper part of the seat.
  • The difference between FIG. 6 and FIG. 21 is the position of the seat belt outlet in the y-axis direction.
  • In this case, the image of the second frame exists partly outside the image of the first frame as well as inside it. Even in such a case, the wearing state of the seat belt can be determined in each of the above embodiments according to whether or not the position of the image of the second frame straddles the boundary line C.
  • In this case, the overlapping area is also affected. However, the determination of whether or not the position of the image of the second frame straddles the boundary line C is not affected by the ratio of the areas of the first frame and the second frame, nor by the width of the second frame, so the seat belt wearing state can be determined appropriately.
  • A program for realizing all or part of the functions of the determination unit 12 (or 12A, 12B) in the present invention may be recorded on a computer-readable recording medium, and all or part of the processing performed by the determination unit 12 (or 12A, 12B) may be performed by causing a computer system to read and execute the program recorded on the recording medium.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a WWW system having a homepage providing environment (or display environment).
  • The “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk incorporated in a computer system. Further, the “computer-readable recording medium” also includes media that hold a program for a certain period of time, such as a volatile memory (RAM) in a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
  • The program may be one for realizing a part of the functions described above. Furthermore, the program may be one that realizes the above-described functions in combination with a program already recorded in the computer system, that is, a so-called difference file (difference program).
  • The detection device described below can be applied to, for example, a vehicle, a roller coaster, a go-kart, or the like in which there are a plurality of spaces in which people fit and people are seated side by side.
  • the fixing device is a seat belt, and an object fixed by the fixing device is an occupant.
  • the fixing device is a safety bar, and the object fixed by the fixing device is a passenger.
  • The fixing device of the present embodiment may extend and contract like a seat belt, or may be fixed in shape like a safety bar. The fixing device is not limited to devices that fix a person to something; it also includes devices that fix something to a person so that their relative position does not change.
  • the first detection device is installed or attached to the vehicle 2, for example, near the dashboard.
  • The first detection device may also be installed or attached to the rearview mirror, the upper part of the windshield, or the like.
  • the first detection device detects an occupant seated in the front seat and detects whether or not a seat belt is worn.
  • The second detection device is installed or attached to the vehicle 2, for example, on the console, the ceiling, the rearview mirror, the upper part of the windshield, or the like.
  • the second detection device detects an occupant seated in the back seat and detects whether or not the seat belt is worn.
  • the detection device captures an image including a seat on which the occupant is seated, and determines that the seated occupant is the occupant to which the seat belt is to be mounted based on the captured image.
  • the detection device detects whether or not the seat belt is worn based on the captured image.
  • The image overlapping state is, for example, a state in which the image of the first frame and the image of the second frame overlap and the image of the occupant's head is above the reference position (FIG. 26), a state in which the image of the first frame and the image of the second frame overlap and the image of the first frame overlaps the reference position (FIG. 27), a state in which the image of the first frame and the image of the second frame overlap by a predetermined area or more, or a state in which the width (lateral direction) of the image of the second frame is equal to or greater than a predetermined width (FIG. 25).
  • Conversely, there is a state in which the image of the first frame and the image of the second frame overlap by less than the predetermined area, or in which the width (lateral direction) of the image of the second frame is smaller than the predetermined width (FIG.
  • The overlapping state may also be a state in which the image of the first frame and the image of the second frame overlap and the image of the second frame is at a predetermined position, for example, the center of the seat in the x-axis direction.
  • FIG. 22 is a diagram illustrating a configuration example of the detection apparatus 1CC according to the present embodiment.
  • the detection device 1CC occupant detection device
  • the detection device 1CC includes an imaging unit 11C (detection unit), a determination unit 12C, a storage unit 13C, a communication unit 14C, and a notification unit 15C.
  • The detection device 1CC may transmit and receive information to and from the external device 3C via a network.
  • the photographing unit 11C photographs an image including the front seat or the back seat.
  • the photographing unit 11C performs photographing at a predetermined time interval.
  • the photographing unit 11C includes, for example, a CMOS photographing device or a CCD photographing device.
  • the imaging unit 11C detects information related to at least the head of the occupant.
  • There are also cases where the imaging unit 11C cannot capture, that is, cannot detect, information on at least the position of the occupant's head.
  • the determination unit 12C includes an extraction unit 121C, a determination position detection unit 122C, an occupant determination unit 123C, and a belt determination unit 124C.
  • The determination unit 12C acquires an image captured by the imaging unit 11C. Based on the determination result of the occupant determination unit 123C, the determination unit 12C generates warning information and outputs it to the notification unit 15C when an occupant who should not be wearing the seat belt is wearing it. Also based on the result determined by the occupant determination unit 123C, the determination unit 12C generates warning information and outputs it to the notification unit 15C when an occupant who should be wearing the seat belt is not wearing it.
  • the determination unit 12C captures an area including a seat when no occupant is seated, and extracts an image of the seat from the captured image. Then, the determination unit 12C stores the seat position in the storage unit 13C based on the extracted seat image. Further, the determination unit 12C extracts a reference position from the photographed image, and stores the extracted reference position in the storage unit 13C.
  • The reference position may be set to a predetermined position on the seat or a predetermined position on the captured image. Note that when the detection device 1CC is applied to a roller coaster, a parachute, a paraglider, or the like, only a part of a person (for example, the chest) and the fixing device may be included in a photographed image. In such a case, the entire captured image may be handled as the first-frame image.
  • The extraction unit 121C extracts a region including at least the head of the occupant from the acquired image using the determination means stored in the storage unit 13C.
  • The determination means is, for example, an algorithm constructed so as to discriminate using a threshold value, or artificial intelligence (AI) constructed by machine learning or the like.
  • the extracted area is referred to as a first frame.
  • The size of the first frame differs for each occupant depending on the occupant's physique and the size of the face. Also, if luggage is placed on the seat, the extraction unit 121C determines that no occupant is seated on the seat and does not extract a first-frame image for that seat.
  • the determination position detection unit 122C detects the reference position for the image in the first frame extracted by the extraction unit 121C based on the information stored in the storage unit 13C. In this embodiment, an example in which the lower position of the headrest is used as the reference position will be described.
  • the occupant determination unit 123C extracts an image of the occupant's head from the first frame image extracted by the extraction unit 121C by a known image detection method such as feature amount extraction or pattern matching.
  • The position used as the occupant's head may be, for example, the upper end of the head (a position close to the upper side of the first frame), the midpoint between the upper end of the head and the lower end of the chin, or the lower end of the chin.
  • Basically, the upper end of the head is preferable. Then, the occupant determination unit 123C determines whether or not the image of the occupant's head is above the reference position.
  • When the occupant determination unit 123C determines that the image of the occupant's head is above the reference position, it determines that the seated occupant is an occupant to whom the seat belt should be attached. When it determines that the image of the occupant's head is below the reference position, it determines that the seated occupant is an occupant to whom the seat belt should not be attached. The occupant determination unit 123C determines that no occupant is seated when the extraction unit 121C does not extract the first frame.
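The head-versus-reference-position test can be sketched as below. This is only an illustration under the assumption that frames are `(x_min, y_min, x_max, y_max)` boxes in image coordinates, where y grows downward; the function name and values are hypothetical.

```python
# Hypothetical sketch: the head is "above" the reference position Rf
# when its y coordinate is smaller (image y axis grows downward).

def should_wear_belt(first_frame, reference_y):
    if first_frame is None:
        return None                 # no occupant seated
    _, y_top, _, _ = first_frame    # upper edge of the frame ~ top of the head
    return y_top < reference_y      # head above Rf -> belt should be worn

print(should_wear_belt((100, 40, 220, 300), 90))   # True (head above Rf)
print(should_wear_belt((100, 120, 220, 300), 90))  # False (head below Rf)
print(should_wear_belt(None, 90))                  # None (seat empty)
```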
  • The belt determination unit 124C extracts an image of a region including the seat belt from the acquired image using the determination means stored in the storage unit 13C.
  • The determination means here is, again, an algorithm constructed so as to discriminate using a threshold value, or artificial intelligence (AI) constructed by machine learning or the like.
  • Note that the belt determination unit 124C may extract the image of the region including the seat belt from the first-frame image extracted by the extraction unit 121C, using the determination means stored in the storage unit 13C.
  • the extracted area is referred to as a second frame.
  • the belt determination unit 124C determines whether or not the occupant is wearing the seat belt based on the extracted second frame image and first frame image.
  • the belt determination unit 124C determines that the occupant is wearing a seat belt when the image of the first frame and the image of the second frame overlap, and the image of the first frame and the image of the second frame If the two do not overlap, it is determined that the occupant is not wearing a seat belt.
  • The belt determination unit 124C may determine that the occupant is wearing the seat belt when the area where the image of the first frame and the image of the second frame overlap is equal to or greater than a predetermined area, and that the occupant is not wearing the seat belt when the overlapping area is less than the predetermined area.
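The area-threshold variant can be sketched as follows. The intersection-area computation is standard; the threshold value and function names are assumptions for illustration only.

```python
# Hypothetical sketch: intersection area of the first and second frames
# compared with a predetermined area. Boxes are (x_min, y_min, x_max, y_max).

def overlap_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0

def belt_worn(first, second, min_area=1000):
    return second is not None and overlap_area(first, second) >= min_area

first = (80, 20, 260, 400)
print(overlap_area(first, (100, 50, 220, 300)))  # 30000
print(belt_worn(first, (100, 50, 220, 300)))     # True
print(belt_worn(first, (250, 380, 320, 420)))    # small corner overlap -> False
```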
  • the first frame may use only the head (including the face) of the occupant as the frame.
  • the belt determination unit 124C may determine that the occupant is wearing the seat belt when the image of the first frame and the image of the second frame overlap in the lateral direction.
  • Alternatively, the belt determination unit 124C may determine that the occupant is wearing the seat belt when the image of the second frame can be extracted, and that the occupant is not wearing the seat belt when the second frame cannot be extracted. Even when the second frame can be extracted, the belt determination unit 124C may determine that the occupant is not wearing the seat belt when, for example, the width of the second frame is equal to or less than a predetermined value.
  • the storage unit 13C stores the position of the seat.
  • the storage unit 13C stores a reference position with respect to the seat.
  • the storage unit 13C stores determination means for detecting an area including the head of the occupant from the image.
  • the storage unit 13C stores determination means for detecting from the image that the occupant is wearing the seat belt.
  • the storage unit 13C stores a learning model for extracting an image of an area including at least the head of the occupant from the image.
  • the storage unit 13C stores a learning model for detecting a seat belt image from an image.
  • the communication unit 14C transmits the information output by the determination unit 12C to the external device 3C via the network.
  • the communication unit 14C receives the information transmitted by the external device 3C via the network, and outputs the received information to the determination unit 12C.
  • the information transmitted by the external device 3C includes, for example, determination means used by the extraction unit 121C, information indicating the determination position, and the like.
  • the information output by the determination unit 12C is, for example, an image captured by the imaging unit 11C or an image within a frame extracted by the extraction unit 121C.
  • the notification unit 15C notifies the warning information output by the determination unit 12C.
  • the notification unit 15C is, for example, a speaker, an LED (light emitting diode) display device, a liquid crystal display device, or the like.
  • FIG. 23 is a diagram illustrating an example of a state in which an occupant is seated on a seat.
  • the longitudinal direction of the seat is the x-axis direction
  • the height direction is the y-axis direction.
  • passengers P1 to P3 are seated on the seat in the x-axis direction.
  • Reference numeral 204 denotes a seat.
  • Reference numeral 341 denotes a headrest.
  • Reference numeral 242 denotes a seat back.
  • Reference numeral 243 denotes a seat cushion.
  • Reference numeral 244 denotes a seat belt.
  • the seated passenger P1 is an adult.
  • the passenger P2 is assumed to be a six-year-old child, for example.
  • the passenger P3 is assumed to be a three-year-old child, for example.
  • Reference sign Rf indicates a reference position.
  • the determination unit 12C determines that the seat belt 244 can be worn by the passengers P1 and P2, and determines that the seat belt 244 cannot be worn by the passenger P3.
  • FIG. 24 is a diagram illustrating an example of an image captured by the imaging unit 11C according to the present embodiment.
  • FIG. 24 is an example of photographing the front seat.
  • Reference numeral g201 is an example of an image photographed by the photographing unit 11C.
  • the occupant P1 sits on the seat and wears the seat belt
  • the occupant P2 sits on the seat and wears the seat belt.
  • the photographing unit 11C starts photographing before the ignition key is turned on, for example, and continues photographing while the ignition key is on.
  • FIG. 25 is an example in which an image of a region (first frame) including at least the head of the occupant P1 is extracted when the occupant P1 does not wear the seat belt in FIG.
  • the image in the first frame Fr (P1) c includes an image of the head of the occupant P1, an image of the chest, and an image of the arm.
  • the extraction unit 121C extracts an image of the first frame Fr (P1) c from the image captured by the imaging unit 11C. Note that the extraction unit 121C attempts to extract an image using the first frame for each seat.
  • the occupant determination unit 123C determines that the occupant is seated in the seat because the image of the first frame Fr (P1) c is extracted.
  • the occupant determination unit 123C detects the reference position Rf from the image captured by the imaging unit 11C using an image recognition technique.
  • the occupant determination unit 123C may detect the reference position Rf from the image in the first frame Fr (P1) c using an image recognition technique.
  • The occupant determination unit 123C determines whether or not the image of the occupant's head is above the reference position Rf. For example, the occupant determination unit 123C determines whether or not the upper portion of the image of the first frame Fr (P1) c is above the reference position Rf.
  • In the example shown in the figure, the occupant determination unit 123C determines that the occupant may wear the seat belt. That is, when the image of the first frame and the reference position overlap, the occupant determination unit 123C determines that the seat belt may be attached to the occupant, and when the image of the first frame and the reference position do not overlap, it determines that the seat belt should not be attached to the occupant.
  • the belt determination unit 124C detects an image of the region (second frame) including the seat belt from the image captured by the imaging unit 11C using an image recognition technique.
  • the belt determination unit 124C may detect an image of the region (second frame) including the seat belt from the image in the first frame Fr (P1) c by using an image recognition method.
  • the belt determination unit 124C determines that the occupant P1 does not wear the seat belt.
  • FIG. 26 is an example in which an image of an area (first frame) including at least the head of the occupant P1 is extracted when the occupant P1 wears the seat belt after FIG.
  • the image in the first frame Fr (P1) c includes an image of the head of the occupant P1, an image of the chest, an image of the arm, and an image of the seat belt.
  • The belt determination unit 124C extracts an image of the region including the seat belt (second frame Frb(P1)c) from the image captured by the imaging unit 11C using an image recognition technique. Note that the belt determination unit 124C may extract the image of the region including the seat belt (second frame Frb(P1)c) from the image in the first frame Fr(P1)c using an image recognition technique. In the example shown in FIG. 26, the second frame Frb(P1)c is extracted inside the image of the first frame Fr(P1)c.
  • In this case, the belt determination unit 124C may determine whether or not the width of the second frame Frb(P1)c in the x-axis direction is a predetermined length, and may determine that the seat belt is worn when it has the predetermined length.
  • The belt determination unit 124C determines that the occupant is wearing the seat belt when the image of the first frame and the image of the second frame overlap, and determines that the occupant is not wearing the seat belt when the image of the first frame and the image of the second frame do not overlap.
  • FIG. 27 is an example in which an image of a region (first frame) including the head of the occupant P2 is extracted from FIG.
  • The image in the first frame Fr (P2) c includes an image of the head of the occupant P2, an image of the chest, and an image of the seat belt.
  • the extraction unit 121C extracts an image of the first frame Fr (P2) c from the image captured by the imaging unit 11C.
  • the occupant determination unit 123C determines that the occupant is seated in the seat because the image of the first frame Fr (P2) c is extracted.
  • the occupant determination unit 123C detects the reference position Rf from the image captured by the imaging unit 11C using an image recognition technique.
  • the occupant determination unit 123C determines whether the occupant's head is above or below the reference position Rf.
  • the occupant determination unit 123C determines that the occupant cannot wear the seat belt because the occupant's head is below the reference position Rf.
  • the occupant determination unit 123C extracts the image of the head region from the image of the first frame by a known image processing (binarization, feature extraction, pattern matching, contour extraction, etc.) method, It is determined whether or not the head is above the reference position. In the example illustrated in FIG. 27, the occupant determination unit 123C determines that the seat belt cannot be mounted because the image of the first frame and the reference position do not overlap.
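As a toy illustration of the binarization step named above (not the patent's actual implementation; image data, threshold, and function name are all assumed), the topmost row containing foreground pixels can serve as the head top:

```python
# Hypothetical toy example: binarize a tiny grayscale image and take the
# topmost row containing foreground pixels as the top of the head. Real
# implementations would use contour extraction or pattern matching.

def head_top_row(gray, threshold=128):
    for y, row in enumerate(gray):
        if any(pixel >= threshold for pixel in row):
            return y
    return None  # no foreground found

image = [
    [0,   0,   0,  0],
    [0, 200, 210,  0],  # head pixels start on this row
    [0, 205, 220, 60],
]
print(head_top_row(image))  # 1

# The head is above the reference row when its index is smaller
# (image y axis grows downward).
reference_row = 2
print(head_top_row(image) < reference_row)  # True
```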
  • The belt determination unit 124C extracts an image of the region including the seat belt (second frame Frb(P2)c) from the image captured by the imaging unit 11C using an image recognition technique. Because the second frame Frb(P2)c can be extracted, the belt determination unit 124C determines that the occupant is wearing the seat belt. In the example shown in FIG. 27, the belt determination unit 124C determines that the occupant is wearing the seat belt because the image of the first frame and the image of the second frame overlap. As a result, since an occupant who should not be wearing the seat belt is wearing it, the occupant determination unit 123C notifies the warning information via the notification unit 15C.
  • The occupant determination unit 123C may detect the reference position Rf from the image in the first frame Fr (P2) c using an image recognition method. When the reference position Rf cannot be detected, the occupant determination unit 123C may determine that the head of the occupant is below the reference position Rf, and may determine that the occupant should not wear the seat belt.
  • The belt determination unit 124C may extract the image of the region including the seat belt (second frame Frb(P2)c) from the image in the first frame Fr(P2)c using an image recognition method. In this case, when the image of the second frame straddles a predetermined position of the first frame in the x-axis direction (for example, the center of the seat in the x-axis direction), the belt determination unit 124C may determine that the seat belt is worn.
  • Note that the warning information for the case where the seat belt is not worn may be different from the warning information for the case where an infant is seated on the seat wearing an adult seat belt.
  • In such a case, the determination unit 12C may notify the warning information regardless of whether the seat belt is worn or not.
  • The warning information notified in such a case may be, for example, a display or a sound prompting the use of a child seat.
  • the first frame and the second frame have a square shape, but the present invention is not limited to this.
  • the shape may be a polygon, a circle, an ellipse, or a shape that matches the contour of the occupant's head (including the face).
  • FIG. 28 is a flowchart illustrating an example of a processing procedure performed by the detection device 1CC according to this embodiment.
  • Step S401 The imaging unit 11C captures an image including a seat.
  • Step S402 The extraction unit 121C extracts an image (first frame) of an area including at least the head for each seat from the image captured by the imaging unit 11C.
  • the determination unit 12C performs the processing of steps S403 to S415 for each seat.
  • Step S403 The extracting unit 121C determines whether or not an image (first frame) of an area including at least the head has been extracted. If it is determined that the image of the region including at least the head has not been extracted (step S403; NO), the extraction unit 121C proceeds to the process of step S404. If it is determined that the image of the region including at least the head has been extracted (step S403; YES), the extraction unit 121C proceeds to the process of step S405.
  • Step S404 The occupant determination unit 123C determines that the occupant is not seated in the seat. After the process, the determination unit 12C ends the occupant determination process.
  • Step S405 The occupant determination unit 123C determines that the occupant is seated on the seat. After the processing, the occupant determination unit 123C proceeds to the process of step S406.
  • Step S406 The occupant determination unit 123C determines whether or not the head is above the reference position. If the occupant determination unit 123C determines that the head is above the reference position (step S406; YES), the process proceeds to step S407. If the occupant determination unit 123C determines that the head is below the reference position (step S406; NO), the process proceeds to step S408.
  • Step S407 The occupant determination unit 123C determines that the occupant seated in the seat can wear the seat belt. After the process, the determination unit 12C proceeds to the process of step S409.
  • Step S408 The occupant determination unit 123C determines that the occupant seated in the seat cannot wear the seat belt. After the processing, the occupant determination unit 123C proceeds to the process of step S409.
  • Step S409 The belt determination unit 124C extracts an image (second frame) of an area including the seat belt from the image captured by the imaging unit 11C.
  • Step S410 The belt determination unit 124C determines whether an image of an area including the seat belt has been extracted. If it is determined that the image of the region including the seat belt has been extracted (step S410; YES), the belt determination unit 124C advances the process to step S411. If it is determined that the image of the area including the seat belt has not been extracted (step S410; NO), the belt determination unit 124C advances the process to step S414.
  • Step S411 The belt determination unit 124C determines that the occupant seated in the seat is wearing the seat belt.
  • the belt determination unit 124C proceeds to the process of step S412.
  • Step S412 Based on the processing of step S407 or S408, the belt determination unit 124C determines whether or not the occupant seated in the seat can wear the seat belt. If the belt determination unit 124C determines that the seat belt can be worn (step S412; YES), the occupant determination process ends. If the belt determination unit 124C determines that the seat belt cannot be worn (step S412; NO), the process proceeds to step S413.
  • Step S414 The belt determination unit 124C determines that the occupant seated in the seat is not wearing the seat belt. After the processing, the belt determination unit 124C proceeds to the process of step S415.
  • Step S415 Based on the processing of step S407 or S408, the belt determination unit 124C determines whether or not the occupant seated in the seat is unable to wear the seat belt. If the belt determination unit 124C determines that the seat belt cannot be worn (step S415; YES), the process proceeds to step S413. If the belt determination unit 124C determines that the seat belt can be worn (step S415; NO), the occupant determination process ends.
  • Step S413 The belt determination unit 124C generates warning information. Subsequently, the notification unit 15C notifies the warning information generated by the belt determination unit 124C. After the process, the determination unit 12C ends the occupant determination process.
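The per-seat flow of steps S403–S415 above can be sketched as follows. This is a minimal illustration, not the claimed implementation: `extract_head_frame` and `extract_belt_frame` are hypothetical stand-ins for the extraction performed by the extraction unit 121C and the belt determination unit 124C, and image y-coordinates are assumed to grow downward.

```python
from typing import Callable, Optional

def determine_seat(image, seat, reference_y: float,
                   extract_head_frame: Callable, extract_belt_frame: Callable) -> dict:
    """Per-seat occupant determination following steps S403-S415 (FIG. 28)."""
    head = extract_head_frame(image, seat)          # first frame (S402)
    if head is None:                                # S403: NO
        return {"seated": False, "warn": False}     # S404: seat is empty
    can_wear = head["top_y"] < reference_y          # S406-S408: head above Rf?
    belt = extract_belt_frame(image, seat)          # second frame (S409)
    wearing = belt is not None                      # S410/S411/S414
    # In the flowchart, both branches (S412 and S415) raise a warning exactly
    # when the occupant is judged unable to wear the adult seat belt.
    warn = not can_wear                             # S413: generate warning info
    return {"seated": True, "can_wear": can_wear,
            "wearing": wearing, "warn": warn}
```

For example, a head frame whose upper side is at y=50 with the reference at y=100 is judged able to wear the belt.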
  • the determination position for performing the determination may be a position at a predetermined interval from the reference position (for example, in the middle of the headrest).
  • FIG. 29 is a diagram illustrating another example of the determination position.
  • In FIG. 29, the lower portion of the headrest is the reference position Rf, and the determination position Rp is a position Δh above the reference position Rf in the positive direction of the y-axis.
  • the occupant determination unit 123C determines whether or not the head is above the determination position Rp.
  • When it is determined that the head is above the determination position Rp, the occupant determination unit 123C determines that the occupant seated on the seat can wear the seat belt; when it is determined that the head is below the determination position Rp, the occupant determination unit 123C may determine that the occupant cannot wear the seat belt.
  • the occupant determination unit 123C may determine that the seat belt can be worn when the occupant's chin is above the reference position (or determination position).
  • In the fourth embodiment, the example has been described in which it is determined that the seat belt can be worn when the occupant's head is above the reference position in the image of the region including the occupant's head (first frame).
  • an example will be described in which determination is made by comparing the upper side of a region (first frame) including at least the head and the reference position.
  • The configuration of the detection device 1 is the same as that of the fifth embodiment (FIG. 22).
  • The image overlapping state is one of the following: a state in which the image of the first frame and the image of the second frame overlap and the position of the upper side of the first frame is above the reference position (FIG. 30), and a state in which the image of the first frame and the image of the second frame overlap and the position of the upper side of the first frame is below the reference position (FIG. 30).
  • FIG. 30 is a diagram illustrating a comparative example of the upper side of the first frame and the reference position according to the present embodiment.
  • the symbol Tp indicates the position of the upper side of the first frame Fr (P1) c.
  • The occupant determination unit 123C compares the position of the upper side Tp of the first frame with the reference position Rf. When the position of the upper side Tp of the first frame is above the reference position Rf, the occupant determination unit 123C determines that the seated occupant is an occupant to whom the seat belt should be attached. When the upper side Tp of the frame extracted by the extraction unit 121C is below the reference position Rf, the occupant determination unit 123C determines that the seated occupant is not an occupant to whom the seat belt should be attached.
  • Since the upper side Tp of the first frame is compared with the reference position Rf, the occupant determination unit 123C can determine whether or not the occupant should wear the seat belt without extracting the image of the occupant's head from the first frame.
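Under the assumption that image y-coordinates grow downward, the comparison of the upper side Tp with the reference position Rf reduces to a single inequality. The following is a sketch; the function name is illustrative, not from the source.

```python
def should_wear_belt(first_frame_top_y: float, reference_y: float) -> bool:
    # The upper side Tp of the first frame is "above" the reference position Rf
    # when its y-coordinate is smaller (image y grows downward). No head image
    # needs to be extracted from inside the first frame for this check.
    return first_frame_top_y < reference_y
```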
  • the reference position Rf may be an upper position of the headrest or the like.
  • Also in this case, the determination unit 12C may perform the comparison using a determination position set at a predetermined distance from the reference position Rf.
  • The example has been described in which the determination unit 12C extracts an image of an area including at least the head of the occupant (first frame) and an image of an area including a seat belt (second frame), but the present invention is not limited to this.
  • the determination unit 12C may transmit the captured image to the external device 3C via the communication unit 14C.
  • The external device 3C may extract the first frame, the second frame, and the reference position from the received image, and may perform at least one of the determination as to whether or not an occupant is seated on the seat, the determination as to whether or not the occupant is wearing a seat belt, and the determination as to whether or not the occupant is one who may wear an adult seat belt.
  • The external device 3C may transmit the determination result to the detection device 1.
  • the processing performed by the determination unit 12C may be performed on the cloud.
  • the external device 3C may be a smartphone or the like in which a program of a determination unit used by the determination unit 12C is installed.
  • A detection device 1CC may also be provided; the detection device 1CC may be installed on the dashboard and may capture an image including the rear seat.
  • The determination unit 12C may use the detection result detected by the buckle sensor attached to the buckle, and may notify a warning when the image of the head of the occupant seated on the seat is below the reference position.
  • The external device 3C may update, by learning using the received images, the determination means for extracting the first frame, the determination means for extracting the second frame, the determination means for extracting an image including the occupant's head, and the determination means for detecting the reference position.
  • the external device 3C may transmit the learned determination means to the detection device 1CC.
  • the detection device 1CC stores the determination unit received from the external device 3C in the storage unit 13D.
  • The reference position is not limited to a position set with respect to the seat; any position that can be identified in a captured image, such as a position set with respect to the window frame of the vehicle 2 (FIG. 1), may be used.
  • FIG. 31 is a diagram illustrating a configuration example of the detection device 1DD according to the present embodiment.
  • The detection device 1DD is an occupant detection device.
  • the determination unit 12D includes an occupant determination unit 123D and a belt determination unit 124D.
  • Functional units having the same functions as those of the detection device 1CC are denoted by the same reference numerals, and their description is omitted.
  • the sensor unit 16 includes, for example, a light emitting unit and a light receiving unit.
  • the light emitting unit irradiates light at a position above the reference position, and the light receiving unit receives the light.
  • the sensor unit 16 outputs the received light reception result to the determination unit 12D.
  • In other words, the sensor unit 16 detects information regarding the occupant.
  • the light receiving unit is attached to, for example, a rear glass.
  • The sensor unit 16 emits light toward a predetermined position that the seat belt occupies when worn. The sensor unit 16 then receives the light that is reflected back, and outputs the light reception result to the determination unit 12D.
  • The sensor unit 16 may be a sensor using sound waves or radio waves. The sensor unit 16 may emit light for each seat, or may scan light to detect a plurality of seats.
  • the occupant determination unit 123D determines whether or not the occupant can wear the seat belt based on the light reception result output from the sensor unit 16. The occupant determination unit 123D determines that the occupant's head does not exist above the reference position, for example, when the light receiving unit can receive the light emitted from the light emitting unit of the sensor unit 16. The occupant determination unit 123D determines that the occupant's head is above the reference position when the light receiving unit cannot receive the light emitted by the light emitting unit of the sensor unit 16, for example.
  • the belt determination unit 124D determines whether or not the seat belt is attached based on the light reception result output from the sensor unit 16. For example, when the light receiving unit can receive the light emitted from the light emitting unit of the sensor unit 16, the belt determining unit 124D determines that the occupant is wearing the seat belt. The belt determination unit 124D determines that the occupant does not wear the seat belt when the light receiving unit cannot receive the light emitted from the light emitting unit of the sensor unit 16, for example. In the case where the sensor unit 16 includes a buckle sensor, the belt determination unit 124D may determine whether or not the seat belt is worn based on a detection value of the buckle sensor.
  • The determination unit 12D outputs warning information to the notification unit 15C when the belt determination unit 124D determines that the seat belt is not worn.
  • When the belt determination unit 124D determines that the seat belt is not worn and the occupant determination unit 123D determines that the occupant's head is not present above the reference position, the determination unit 12D determines that the occupant is one who cannot wear the seat belt, and outputs warning information to the notification unit 15C.
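The light-based logic of the occupant determination unit 123D, the belt determination unit 124D, and the determination unit 12D described above can be sketched as follows. This is an illustrative reduction, assuming the two boolean inputs summarize the light reception results of the sensor unit 16.

```python
def sensor_determination(beam_received: bool, belt_reflection: bool) -> dict:
    # 123D: if the beam at the reference height reaches the light receiver,
    # no head blocks it, so no head is present above the reference position.
    head_above = not beam_received
    # 124D: reflected light from the reflective material on the belt means
    # the seat belt is fastened.
    wearing = belt_reflection
    # 12D: warn when the belt is not worn; additionally flag an occupant who
    # cannot wear the belt (belt not worn and no head above the reference).
    warn = not wearing
    cannot_wear = (not wearing) and (not head_above)
    return {"head_above": head_above, "wearing": wearing,
            "warn": warn, "cannot_wear": cannot_wear}
```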
  • Storage unit 13D stores information used by determination unit 12D for determination.
  • the information to be stored is, for example, a threshold value indicating whether light is received.
  • FIG. 32 is a diagram illustrating an example of a position where the sensor unit 16 according to this embodiment irradiates light.
  • Symbol g301 indicates the position of a light beam emitted to detect the occupant's head.
  • there are a plurality of light irradiation positions such as the upper part of the head and the left and right sides.
  • Symbol g302 indicates a reflective material attached to the seat belt.
  • The sensor unit 16 is set in advance so that light is emitted to the position of the reflective material when the occupant wears the seat belt. For example, the sensor unit 16 adjusts the irradiation angle so that the reflected light is maximized.
  • the determination unit 12D determines whether or not the occupant can wear the seat belt using the light emitted from the sensor unit 16.
  • the determination unit 12D determines whether or not the occupant can wear the seat belt without using the photographing unit 11C and without performing image analysis.
  • Note that the detection device 1DD may also be provided with the photographing unit 11C, and captured images may be used together.
  • FIG. 33 is a diagram illustrating an example of a captured image in the modification.
  • the example shown in FIG. 33 is an example in which a passerby through the rear glass is photographed.
  • Reference numeral g401 is a rear glass image.
  • Symbol Fr (P11) c is an image of an area including the head of the passerby P11.
  • Reference numeral Rf2 represents a second reference position.
  • In this case, the passerby P11 may also be detected; since the head of the passerby P11 is above the reference position, the passerby P11 may be determined to be an occupant who can wear the seat belt. Further, since an image of an area including a seat belt cannot be extracted for the passerby P11, there is a possibility of erroneously detecting that an occupant (the passerby P11) who can wear the seat belt is not wearing the seat belt.
  • Therefore, the occupant determination unit 123C determines that the person is not an occupant when the frame Fr (P11) c of the image including the head of the passerby P11 does not straddle the second reference position Rf2.
  • The second reference position Rf2 is, for example, a position below the lower side of the rear glass g401 in the y-axis direction, a position at a predetermined height from the upper part of the headrest, the upper part of the seat back, a predetermined position of the rear glass, or a predetermined position of the side glass.
  • the rear glass is shown as an example of the window frame, but the window frame may be a side glass frame.
  • That is, when the lower side of the first frame Fr (P11) c of the passerby P11 does not straddle the second reference position Rf2, the occupant determination unit 123C determines that the person is outside the vehicle 2 and is not an occupant.
  • the second reference position Rf2 may be the same position as the first reference position Rf (FIG. 25, etc.), may be lower than the first reference position Rf, and may be a position below the window frame.
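The straddle test against the second reference position Rf2 can be written as one comparison on the frame's vertical extent. A sketch under the assumption that image y-coordinates grow downward, so the frame's upper side has the smaller y-value:

```python
def straddles_rf2(frame_top_y: float, frame_bottom_y: float, rf2_y: float) -> bool:
    # A frame straddles Rf2 when Rf2 lies between its upper and lower sides.
    # A passerby seen through the rear glass yields a frame entirely above
    # Rf2, which this test rejects as "not an occupant".
    return frame_top_y < rf2_y < frame_bottom_y
```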
  • a program for realizing all or part of the functions of the determination unit 12C (or 12D) in the present invention is recorded on a computer-readable recording medium, and the program recorded on the recording medium is read into a computer system.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a WWW system having a homepage providing environment (or display environment).
  • The “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk incorporated in a computer system.
  • The “computer-readable recording medium” also includes a medium that holds a program for a certain period of time, such as a volatile memory (RAM) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
  • The program may realize only a part of the functions described above. Furthermore, the program may be a so-called difference file (difference program) that realizes the above-described functions in combination with a program already recorded in the computer system.
  • The detection device described below can be applied to, for example, a vehicle, a roller coaster, a go-kart, or the like, in which there are a plurality of spaces in which people can fit and people are seated side by side.
  • In the case of a vehicle, the fixing device is a seat belt, and the object fixed by the fixing device is an occupant.
  • In the case of a roller coaster, the fixing device is a safety bar, and the object fixed by the fixing device is a passenger.
  • The fixing device of the present embodiment may extend and retract like a seat belt, or may be of fixed shape like a safety bar. The fixing device is not limited to one that fixes a person to something; it also includes one that fixes something to a person so that their relative position does not change.
  • the first detection device (human detection device) is installed or attached to the vehicle 2, for example, near the dashboard.
  • The first detection device may also be installed on or attached to the rearview mirror, the upper part of the windshield, or the like.
  • the first detection device detects an occupant seated in the front seat and detects whether or not a seat belt is worn.
  • The second detection device (human detection device) is installed on or attached to the vehicle 2, for example, on the console, the ceiling, the rearview mirror, or the upper part of the windshield.
  • the second detection device detects an occupant seated in the back seat and detects whether or not the seat belt is worn.
  • the detection device determines the presence or absence of a passenger based on the captured image. Further, the detection device detects whether or not the seat belt is worn based on whether or not an image of an area including the seat belt has been detected from the photographed image.
  • The image overlapping state is one of the following: a state in which the image of the first frame and the image of the second frame overlap in the image of the detection area and the width of the second frame has a predetermined length (FIG. 41); a state in which the two frames overlap but the width of the second frame does not have the predetermined length; a state in which the two frames overlap and the area of the first frame is equal to or larger than a predetermined area (FIG. 44); and a state in which the two frames overlap and the area of the first frame is less than the predetermined area (FIG. 44).
  • FIG. 34 is a diagram illustrating a configuration example of a detection device 1EE (human detection device) according to the present embodiment.
  • the detection apparatus 1EE includes an imaging unit 11E (detection unit), a determination unit 12E, a storage unit 13E, a communication unit 14E, and a notification unit 15E.
  • The detection device 1EE may transmit and receive information to and from the external device 3E via a network.
  • The detection device 1EE determines the presence or absence of an occupant based on the captured image. Further, the detection device 1EE detects whether or not the seat belt is worn based on whether or not an image of an area including the seat belt has been detected from the captured image.
  • the imaging unit 11E captures an image including the front seat or the back seat.
  • the imaging unit 11E performs imaging at a predetermined time interval.
  • the imaging unit 11E includes, for example, a CMOS imaging device or a CCD imaging device.
  • the photographing unit 11E detects information related to the occupant's head.
  • There are also cases where the imaging unit 11E cannot capture, that is, cannot detect, information regarding the position of the occupant's head.
  • the determination unit 12E includes an extraction unit 121E, an occupant determination unit 122E, a belt determination unit 123E, and a region setting unit 124E.
  • the determination unit 12E acquires an image captured by the imaging unit 11E. Based on the result determined by the belt determination unit 123E, the determination unit 12E generates warning information and outputs the warning information to the notification unit 15E when the occupant does not wear the seat belt. Note that the determination unit 12E captures an area including a seat when no occupant is seated, and extracts an image of the seat from the captured image. Then, the determination unit 12E causes the storage unit 13E to store the seat position based on the extracted seat image.
  • the detection device 1EE when the detection device 1EE is applied to a roller coaster (roller coaster), a parachute, a paraglider, or the like, a part of a person (for example, the chest) and a fixing device may be included in the photographed image. In such a case, the entire captured image may be handled as the first frame image.
  • the extraction unit 121E extracts an image of the area including the occupant from the acquired image using the determination unit stored in the storage unit 13E and the detection area set for each seat.
  • The determination means is, for example, an algorithm constructed to make a determination using a threshold value, or artificial intelligence (AI) constructed by machine learning or the like.
  • the extraction unit 121E extracts an image of the region including the occupant using the determination unit stored in the storage unit 13E for each detection region, that is, for each seat.
  • the extracted area is referred to as a first frame.
  • The size of the first frame differs for each occupant depending on the occupant's physique and the size of the face. If luggage is placed on the seat, the determination means judges that no occupant is seated there, and the extraction unit 121E does not extract a first frame for that seat.
  • the passenger determination unit 122E determines that the passenger is not seated when the image of the first frame is not extracted by the extraction unit 121E.
  • the occupant determination unit 122E determines that the occupant is seated when the image of the first frame is extracted by the extraction unit 121E.
  • the belt determination unit 123E extracts an image of an area including the seat belt from the first frame image extracted by the extraction unit 121E by the determination unit stored in the storage unit 13E.
  • the extracted area is referred to as a second frame.
  • The belt determination unit 123E determines whether the occupant is wearing the seat belt based on the extracted second frame. If the second frame can be extracted, the belt determination unit 123E determines that the occupant is wearing the seat belt; if it cannot be extracted, the belt determination unit 123E determines that the occupant is not wearing the seat belt. Even if the second frame can be extracted, the belt determination unit 123E may determine that the occupant is not wearing the seat belt when, for example, the width of the second frame is equal to or less than a predetermined value.
  • the area setting unit 124E captures an image when the occupant is not seated on the seat, and sets a detection area for each seat based on the captured image.
  • the area setting unit 124E stores information indicating each of the set detection areas in the storage unit 13E.
  • the region setting unit 124E may, for example, capture an image when an adult occupant is seated on the seat, and set the second frame for each seat based on the captured image.
  • The area setting unit 124E may store information indicating each set second frame in the storage unit 13E. The detection area may also be set in advance for each vehicle type.
  • the area setting unit 124E may select a detection area corresponding to the vehicle type in accordance with a user instruction or operation when the detection device 1EE is installed in the vehicle 2 (FIG. 1). The user sets the detection area by operating the external device 3E (FIG. 34).
  • the storage unit 13E stores the position of the seat.
  • The storage unit 13E also stores information indicating the detection area set for each seat by the area setting unit 124E.
  • the storage unit 13E stores determination means for extracting an image of an area including an occupant from the image.
  • the storage unit 13E stores determination means for detecting an image of the seat belt from the image.
  • the storage unit 13E stores a threshold value of the area of the region for determining whether or not the vehicle is an occupant.
  • the storage unit 13E stores a learning model for extracting an image of a region including an occupant from the image.
  • the storage unit 13E stores a learning model for detecting a seat belt image from the image.
  • the communication unit 14E transmits information output from the determination unit 12E to the external device 3E via the network.
  • the communication unit 14E receives the information transmitted by the external device 3E via the network, and outputs the received information to the determination unit 12E.
  • the information transmitted by the external device 3E includes, for example, determination means used by the extraction unit 121E, information indicating the determination position, and the like.
  • the information output by the determination unit 12E is, for example, an image captured by the imaging unit 11E or an image within a frame extracted by the extraction unit 121E.
  • The notification unit 15E notifies the warning information output from the determination unit 12E.
  • the notification unit 15E is, for example, a speaker, an LED (light emitting diode) display device, a liquid crystal display device, or the like.
  • FIG. 35 is a diagram illustrating an example of a state in which an occupant is seated on a seat.
  • the longitudinal direction of the seat is the x-axis direction
  • the height direction is the y-axis direction.
  • occupants P1 to P3 are seated on the seat in the x-axis direction.
  • Reference numeral 304 denotes a seat.
  • Reference numeral 341 denotes a headrest.
  • Reference numeral 342 denotes a seat back.
  • Reference numeral 343 denotes a seat cushion.
  • Reference numeral 344 denotes a seat belt.
  • FIG. 36 is a diagram illustrating an example of an image captured by the imaging unit 11E according to the present embodiment.
  • FIG. 36 is an example of photographing the front seat.
  • Reference numeral g501 is an example of an image photographed by the photographing unit 11E.
  • the passengers P1 and P2 have overlapping hands in the region g511.
  • the passengers P2 and P3 have overlapping hands.
  • the conventional technique may erroneously detect that five occupants are seated in the back seat.
  • FIG. 37 is a diagram illustrating an example of a region for detecting an occupant according to the present embodiment.
  • FIG. 37 shows the same captured image g501 as in FIG. 36.
  • the occupant P1 is seated on the first seat.
  • the occupant P2 is seated on the second seat.
  • the occupant P3 is seated in the third seat.
  • the detection area Ar1 is a first detection area set for the first seat.
  • the detection area Ar2 is a second detection area set for the second seat.
  • the detection area Ar3 is a third detection area set for the third seat.
  • Each detection area is set by dividing the seat in the width direction (x-axis direction) according to the number of people who can be seated on the seat, one detection area per seat.
  • the determination unit 12E detects an occupant for each detection region set in advance from the image g501 captured by the imaging unit 11E.
  • Alternatively, the determination unit 12E may first detect occupants from the image g501 and then determine, for each detection area, whether an occupant's frame has been detected within it.
  • FIG. 38 is a diagram illustrating a setting example of the detection area for each seat according to the present embodiment.
  • FIG. 38 is an image g502 captured when no occupant is seated in the back seat.
  • the seat 304 is a space that can accommodate three passengers.
  • Reference numerals 341a to 341c denote images of the headrest area.
  • FIG. 39 is a flowchart illustrating an example of a processing procedure for setting a detection region performed by the region setting unit 124E according to the present embodiment.
  • Step S501 The photographing unit 11E performs photographing when no occupant is seated on the seat.
  • Step S502 The region setting unit 124E performs image processing such as feature amount extraction and contour extraction on the captured image g502 to obtain left and right widths of the seat.
  • The region setting unit 124E obtains the width with reference to the end of the seat 304.
  • Step S503 The region setting unit 124E performs image processing such as feature amount extraction and contour extraction from the image g502 to extract headrest region images (reference numerals 341a to 341c).
  • Step S504 The region setting unit 124E divides the seat width obtained in Step S502 by the number of headrests that can be extracted. Subsequently, the area setting unit 124E sets an area for the number of divisions in the captured image g502. Note that the region setting unit 124E sets each detection region so as to include, for example, an image of the upper part of the seat (upper part of the seat back), an image of the headrest, and an image of the lower part of the seat (lower part of the seat back). Subsequently, the region setting unit 124E causes the storage unit 13E to store information indicating the set detection region. That is, the region setting unit 124E divides the detection region in the seat width direction with the end of the seat 304 as a reference.
  • adjacent detection areas Ar1 and Ar2 and adjacent detection areas Ar2 and Ar3 may partially overlap.
  • the detection area has been described by taking the back seat as an example, but the detection area is also set in advance for the front seat.
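Steps S501–S504 amount to dividing the seat width evenly by the number of extracted headrests. A sketch of that division follows; the tuple representation of a detection area is an assumption for illustration.

```python
def set_detection_areas(seat_left: float, seat_right: float,
                        num_headrests: int) -> list:
    # S502: seat width obtained from the seat ends; S503: headrest count from
    # image processing; S504: divide the width into one detection area per seat.
    width = (seat_right - seat_left) / num_headrests
    return [(seat_left + i * width, seat_left + (i + 1) * width)
            for i in range(num_headrests)]
```

As noted above, adjacent areas may additionally be widened so that they partially overlap.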
  • FIG. 40 is an image example of the detection area Ar1 when the occupant P1 does not wear the seat belt in FIG.
  • the extraction unit 121E extracts the image of the detection area Ar1 from the image captured by the imaging unit 11E.
  • The extraction unit 121E extracts an image using the detection area Arn (where n is an integer from 1 to the number of seats) for each seat.
  • the extraction unit 121E extracts an image of the region (first frame Fr1) including the occupant using the determination unit stored in the storage unit 13E with respect to the image of the detection region Ar1.
  • the occupant determination unit 122E determines that the occupant is seated in the seat because the image of the first frame Fr1 is extracted.
  • The belt determination unit 123E detects an image of the region including the seat belt (second frame) from the image in the first frame Fr1 using an image recognition technique. Since the second frame cannot be extracted, the belt determination unit 123E determines that the occupant P1 is not wearing the seat belt.
  • FIG. 41 is an example in which, following FIG. 40, the occupant P1 has fastened the seat belt and an image of the region including the occupant P1 (first frame) is extracted.
  • the belt determination unit 123E extracts an image of the region including the seat belt (second frame Frb1) from the image in the first frame Fr1 using an image recognition method.
  • the second frame Frb1 is extracted inside the image of the first frame Fr1.
  • The belt determination unit 123E may determine whether or not the width of the second frame Frb1 in the x-axis direction has a predetermined length, and may determine that the seat belt is worn if it has the predetermined length. For example, when the seat belt is not fastened, the width of the second frame Frb1 in the x-axis direction is narrow. In the example of FIG. 41, since the image of the seat belt has been extracted, the belt determination unit 123E determines that the occupant is wearing the seat belt.
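The width check of the belt determination unit 123E can be sketched as follows. This is illustrative only: the second frame is assumed to be represented by its x-extent, and `min_width` stands for the predetermined length.

```python
from typing import Optional, Tuple

def belt_worn(second_frame: Optional[Tuple[float, float]],
              min_width: float) -> bool:
    # No second frame extracted: the belt is judged not worn. A frame narrower
    # than the predetermined length (e.g. a belt hanging unfastened) is also
    # judged not worn.
    if second_frame is None:
        return False
    x_left, x_right = second_frame
    return (x_right - x_left) >= min_width
```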
  • FIG. 42 is a diagram illustrating an example of an image including a passer-by outside the vehicle.
  • the example shown in FIG. 42 is an example in which a passerby through the rear glass is photographed.
  • Reference numeral g601 is a rear glass image.
  • when the passerby P11 is walking near the vehicle, an image of the passerby P11 is also captured through the window of the vehicle. In such a case, only a part of the passerby P11 is captured, as shown in FIG. 42. Therefore, when an image of the passerby P11 is extracted in the detection area Ar2, for example, the image area of the passerby P11 extracted in the detection area Ar2 is smaller than the image area of an occupant seated on a seat in the vehicle. Accordingly, the occupant determination unit 122E determines that the person is not an occupant when the image area of the person extracted in the detection region is less than a predetermined area.
  • the occupant determination unit 122E may determine whether or not the base Fr11U of the first frame Fr11 (its lower edge in the y-axis direction) corresponding to the extracted passerby P11 lies below the boundary line Rf set at a predetermined position. In this case, the occupant determination unit 122E determines that the person is an occupant when the base Fr11U of the first frame Fr11 lies below the boundary line Rf, and determines that the person is not an occupant when the base Fr11U of the first frame Fr11 lies above the boundary line Rf.
  • alternatively, the occupant determination unit 122E may determine that the person is an occupant when the first frame Fr11 straddles the boundary line Rf, and determine that the person is not an occupant when it does not straddle the boundary line Rf.
  • in such a case, the human image is not extracted as an occupant by the determination means stored in the storage unit 13E.
  • the boundary line Rf may be set in the y-axis direction.
  • the boundary line Rf may be a straight line or a curved line.
  • the image of the detection area Ar1 may include an image including a passerby P11 who is not an occupant.
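The two checks above (comparing the frame's area with a predetermined area, and comparing the frame's base with the boundary line Rf) could be combined as in the following sketch. The coordinate convention (y increasing downward, as in typical camera images) and the threshold values are assumptions:

```python
def is_occupant(first_frame, min_area_px, boundary_y):
    """Return True if the extracted first frame is judged to be an
    occupant rather than a person outside the vehicle.

    first_frame -- (x, y, w, h) bounding box of the extracted person.
    min_area_px -- minimum area for an occupant; a passerby seen
                   through the window yields a smaller frame.
    boundary_y  -- y coordinate of the boundary line Rf
                   (y grows downward in image coordinates).
    """
    x, y, w, h = first_frame
    if w * h < min_area_px:
        return False            # frame too small: likely outside the vehicle
    base_y = y + h              # base (bottom edge) of the first frame
    return base_y > boundary_y  # occupant only if the base lies below Rf
```

A large frame whose base lies below Rf is accepted; a small frame, or a large frame seen entirely above Rf, is rejected as a person outside the vehicle.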
  • FIG. 43 is a diagram illustrating an example of an image including a person outside the vehicle in the detection area. The image of such an example may also occur when the detection device 1EE is installed on the dashboard and takes a picture of the front seat, or when an occupant is seated in the back seat.
  • the extraction unit 121E extracts an image of the region (first frame Fr11) including a person using the determination unit stored in the storage unit 13E with respect to the image in the detection region Ar1.
  • the image of the first frame Fr11 is an image extracted in the detection area Ar1 in this way.
  • the occupant determination unit 122E compares the area of the first frame Fr11 with the predetermined area stored in the storage unit 13E and, because the area of the first frame Fr11 is less than the predetermined area, determines that the person corresponding to the image within the first frame Fr11 is not an occupant.
  • FIG. 44 is a diagram illustrating an example in which an image of an occupant and an image of a person outside the vehicle are included in one detection area.
  • the extraction unit 121E, using the determination means stored in the storage unit 13E, extracts from the image in the detection area Ar1 the image of the region including the occupant (first frame Fr1) and the image of the region including the person outside the vehicle (first frame Fr11).
  • the occupant determination unit 122E compares the extracted area of the first frame Fr1 with the predetermined area stored in the storage unit 13E and, since the area of the first frame Fr1 is equal to or greater than the predetermined area, determines that the person is an occupant.
  • the occupant determination unit 122E compares the extracted area of the first frame Fr11 with the predetermined area stored in the storage unit 13E and, since the area of the first frame Fr11 is less than the predetermined area, determines that the person is not an occupant.
  • FIG. 45 is a flowchart illustrating an example of a processing procedure performed by the detection apparatus 1EE according to the present embodiment. It is assumed that an area for detecting an occupant is set for each seat.
  • Step S601 The imaging unit 11E captures an image including a seat.
  • Step S602 The extraction unit 121E extracts an image of the detection area set for each seat from the image captured by the imaging unit 11E.
  • the determination unit 12E performs the processing of steps S603 to S612 for each seat.
  • Step S603 The extraction unit 121E extracts an image of the region (first frame) including the occupant for each seat from the extracted image of the detection region.
  • Step S604 The extraction unit 121E determines whether or not the image of the area including the occupant has been extracted. If it is determined that the image of the region including the occupant cannot be extracted (step S604; NO), the extraction unit 121E proceeds to the process of step S606. If it is determined that the image of the region including the occupant has been extracted (step S604; YES), the extraction unit 121E proceeds to the process of step S605.
  • Step S605 The occupant determination unit 122E compares the extracted area of the first frame with the predetermined area stored in the storage unit 13E, and determines whether the area of the first frame is equal to or larger than the predetermined area. If it is determined that the area of the first frame is less than the predetermined area (step S605; NO), the occupant determination unit 122E proceeds to the process of step S606. If it is determined that the area of the first frame is equal to or larger than the predetermined area (step S605; YES), the occupant determination unit 122E proceeds to the process of step S607.
  • Step S606 The occupant determination unit 122E determines that the occupant is not seated in the seat. After the process, the determination unit 12E ends the occupant determination process.
  • Step S607 The occupant determination unit 122E determines that the occupant is seated on the seat. After the processing, the occupant determination unit 122E proceeds to the process of step S608.
  • Step S608 The belt determination unit 123E extracts an image of the region (second frame) including the seat belt from the image of the region extracted in Step S602. After the process, the belt determination unit 123E proceeds to the process of step S609.
  • Step S609 The belt determination unit 123E determines whether an image of an area including the seat belt has been extracted. If the belt determination unit 123E determines that the image of the region including the seat belt has not been extracted (step S609; NO), the process proceeds to step S610. If it is determined that the image of the region including the seat belt has been extracted (step S609; YES), the belt determination unit 123E advances the process to step S612.
  • Step S610 The belt determination unit 123E determines that the occupant seated in the seat does not wear the seat belt. After the process, the belt determination unit 123E proceeds to the process of step S611.
  • Step S611 The belt determination unit 123E generates warning information.
  • the notification unit 15E notifies the warning information generated by the belt determination unit 123E.
  • the determination unit 12E ends the occupant determination process.
  • Step S612 The belt determination unit 123E determines that the occupant seated in the seat is wearing the seat belt. After the process, the determination unit 12E ends the occupant determination process.
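The per-seat flow of steps S603 to S612 can be summarized as follows. The extractor callables stand in for the determination means stored in the storage unit, and the return labels and area threshold are illustrative only:

```python
def determine_seat(detection_image, extract_occupant, extract_belt,
                   min_area_px=5000):
    """One pass of steps S603-S612 for a single seat.

    extract_occupant / extract_belt -- stand-ins for the determination
    means stored in the storage unit; each returns an (x, y, w, h)
    bounding box or None when nothing is extracted.
    """
    first = extract_occupant(detection_image)   # S603: extract first frame
    if first is None:                           # S604: NO
        return "no occupant"                    # S606
    x, y, w, h = first
    if w * h < min_area_px:                     # S605: NO (too small)
        return "no occupant"                    # S606
    # S607: occupant seated; S608: extract second frame (seat belt)
    second = extract_belt(detection_image)
    if second is None:                          # S609: NO
        return "warn: belt not worn"            # S610-S611: warning
    return "belt worn"                          # S612
```

The same function would be called once per detection area Arn, mirroring the per-seat loop of steps S603 to S612.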
  • the extraction unit 121E extracts the region including the occupant P1 as a first frame, and extracts the region including the arm of the occupant P2 as another first frame.
  • the extraction unit 121E separates the image of the occupant P1 and the image of the occupant P2 using an image processing technique such as pattern matching or clustering. Since the area of the first frame of the occupant P2 extracted in the detection area Ar1 in this way is less than the predetermined area, the occupant determination unit 122E detects only the occupant P1 in the detection area Ar1.
  • the occupant determination unit 122E determines that no occupant is seated in the detection area Ar2.
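A minimal sketch of this per-region filtering, assuming each extracted first frame is an (x, y, w, h) box and the area threshold is hypothetical:

```python
def occupants_per_region(frames_by_region, min_area_px):
    """For each detection region, keep only extracted frames large
    enough to count as an occupant; a neighbouring occupant's arm
    reaching into the region yields a small frame and is discarded.

    frames_by_region -- {region_name: [(x, y, w, h), ...]}
    """
    result = {}
    for region, frames in frames_by_region.items():
        kept = [f for f in frames if f[2] * f[3] >= min_area_px]
        result[region] = kept   # empty list: the seat is judged empty
    return result
```

With this filter, only the frame of occupant P1 survives in Ar1, and Ar2 with no frames at all is judged to have no seated occupant.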
  • as described above, in the present embodiment, a detection area for detecting an occupant is set for each seat in the seat width direction, and the presence or absence of an occupant and whether or not the seat belt is worn are determined based on the image in the detection area.
  • in the present embodiment, an occupant image is extracted from the image in the detection area.
  • in the present embodiment, when the area of the occupant image extracted from the image in the detection area is less than the threshold, the person is determined not to be an occupant. Thereby, according to the present embodiment, it is possible to prevent a person outside the vehicle, such as a passerby photographed through the rear glass, from being erroneously detected as an occupant.
  • in the above, the determination unit 12E extracts, from the detection area, an image of the region including the occupant (first frame) and an image of the region including the seat belt (second frame). However, the present invention is not limited to this.
  • the determination unit 12E may transmit the captured image to the external device 3E via the communication unit 14E.
  • the external device 3E may extract each image of the detection areas from the received image, extract the first frame from the extracted image of the detection area, extract the second frame, and perform at least one of the determination of whether or not an occupant is seated in the seat and the determination of whether or not the occupant is wearing the seat belt.
  • the external device 3E may then transmit the determination result to the detection device 1EE. In this way, the processing performed by the determination unit 12E may be performed on the cloud.
  • the external device 3E may be a smartphone or the like in which a program of determination means used by the determination unit 12E is installed.
  • the detection device 1EE may be installed on the dashboard and capture images that include the rear seats.
  • the determination unit 12E may also use the detection result detected by a buckle sensor attached to the buckle, and may issue a warning when it detects that the image of the occupant sitting on the seat is below the reference position.
  • the external device 3E may also use the received image to update, by learning, the determination means for extracting the first frame, the determination means for extracting the second frame, the determination means for extracting an image including an occupant, and the determination means for determining the reference position.
  • the external device 3E may transmit the learned determination means to the detection device 1EE.
  • the detection device 1EE stores the determination unit received from the external device 3E in the storage unit 13E.
  • in the above, the occupant and the seat belt have been described as separate learning models.
  • the learning model may instead be an image in which the occupant wears the seat belt.
  • the determination unit 12E may generate determination means for occupant detection and seat belt detection by learning images in which an occupant wears a seat belt, using a learning model of images in which an occupant wears a seat belt.
  • in the above, an example has been described in which it is determined that the seat belt is worn when the image of the seat belt can be extracted after the image of the occupant is detected, but the present invention is not limited thereto.
  • the determination unit 12E may determine that the seat belt is worn when the extracted image of the seat belt straddles a boundary line set at a predetermined position, and determine that the seat belt is not worn when it does not straddle the boundary line.
  • the predetermined position is, for example, the center position of the seat in the x-axis direction, the center position in the y-axis direction, or the lower position of the headrest in the y-axis direction.
  • the determination unit 12E may determine that the seat belt is worn when the ratio of the area of the extracted image of the seat belt to the area of the extracted image of the occupant is equal to or greater than a predetermined ratio, and determine that the seat belt is not worn when the ratio is less than the predetermined ratio.
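This ratio test might be sketched as follows for axis-aligned bounding boxes; the ratio threshold is a placeholder:

```python
def belt_worn_by_ratio(first_frame, second_frame, min_ratio=0.05):
    """Judge belt wearing from the ratio of the overlap area between
    the belt frame (second frame) and the occupant frame (first frame)
    to the occupant frame's area.
    """
    fx, fy, fw, fh = first_frame
    sx, sy, sw, sh = second_frame
    # intersection of the two axis-aligned bounding boxes
    ox = max(0, min(fx + fw, sx + sw) - max(fx, sx))
    oy = max(0, min(fy + fh, sy + sh) - max(fy, sy))
    overlap = ox * oy
    return overlap / (fw * fh) >= min_ratio
```

A belt frame lying inside the occupant frame yields a ratio well above the threshold; a non-overlapping pair yields a ratio of zero and is judged not worn.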
  • the determination unit 12E may determine that the fixing device (seat belt) is worn when the image of the first frame and the image of the second frame overlap and the image of the second frame is formed across a boundary line set at a predetermined position.
  • all or part of the processing performed by the determination unit 12E in the present invention may be performed by recording a program for realizing all or part of the functions of the determination unit 12E on a computer-readable recording medium, and causing a computer system to read and execute the program recorded on the recording medium.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a WWW system having a homepage providing environment (or display environment).
  • the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, and a storage device such as a hard disk incorporated in a computer system.
  • furthermore, the “computer-readable recording medium” also includes a medium that holds a program for a certain period of time, such as a volatile memory (RAM) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
  • the program may be one for realizing a part of the functions described above. Furthermore, the program may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
  • the present invention may be a fixing device detection device having the following characteristics.
  • (1) A fixing device detection device comprising: an imaging unit that captures an image; an extraction unit that extracts, from the captured image, an image of a region including a person using a first frame, and extracts, from the captured image, an image of a region including a fixing device that fixes the person using a second frame; and a determination unit that determines that the person is wearing the fixing device when the first frame and the second frame have been extracted and the image of the first frame and the image of the second frame overlap.
  • (2) The fixing device detection device according to (1), wherein the image of the second frame is extracted from within the image of the first frame.
  • (3) A fixing device detection device comprising: an imaging unit that captures an image; an extraction unit that extracts, from the captured image, an image of a region including a person using a first frame, and extracts, from the captured image, an image of a region including a fixing device that fixes the person using a second frame; and a determination unit that determines whether or not the person is wearing the fixing device based on the state in which the image of the first frame and the image of the second frame overlap when the first frame and the second frame have been extracted. (4) The fixing device detection device according to (3), wherein the determination unit determines that the person is wearing the fixing device when the area where the image of the first frame and the image of the second frame overlap is larger than a predetermined area.
  • (5) The fixing device detection device according to (3), wherein the determination unit determines that the person is wearing the fixing device when the ratio of the area of the portion where the image of the second frame overlaps the image of the first frame to the area of the first frame is larger than a predetermined ratio, and determines that the person is not wearing the fixing device when the ratio of the area of the portion where the image of the second frame overlaps the image of the first frame to the area of the first frame is equal to or less than the predetermined ratio.
  • (6) The fixing device detection device according to any one of (3) to (5), wherein the determination unit determines that the person is wearing the fixing device when the image of the second frame overlaps the image of the first frame and the image of the second frame straddles a boundary line set at a predetermined position, and determines that the person is not wearing the fixing device when the image of the second frame overlaps the image of the first frame but does not straddle the boundary line.
  • (7) The fixing device detection device according to any one of (1) to (6), wherein the determination unit determines whether or not the person is wearing the fixing device based on a width value of the image of the second frame in at least one of a horizontal direction and a vertical direction.
  • (8) A fixing device detection device comprising: an imaging unit that captures an image; an extraction unit that extracts, from the captured image, an image of a region including a fixing device that fixes a person using a second frame; and a determination unit that determines that the person is wearing the fixing device when the second frame has been extracted and the image of the second frame is formed across a boundary line set at a predetermined position.
  • (9) The fixing device detection device according to (8), wherein the extraction unit extracts, from the captured image, an image of a region including the person using a first frame, and the determination unit determines whether or not the person is wearing the fixing device based on the overlap between the image of the first frame and the image of the second frame when the first frame and the second frame have been extracted.
  • the present invention may be a fixing device detection method having the following characteristics.
  • (10) A fixing device detection method including: a procedure in which a photographing unit captures an image; a procedure in which an extraction unit extracts, from the captured image, an image of a region including a person using a first frame and an image of a region including a fixing device that fixes the person using a second frame, using the algorithm stored in the storage unit; and a procedure in which a determination unit determines that the person is wearing the fixing device when the first frame and the second frame have been extracted and the image of the first frame and the image of the second frame overlap.
  • (11) A fixing device detection method including: a procedure in which a photographing unit captures an image; a procedure in which an extraction unit extracts, from the photographed image, an image of a region including a person using a first frame, and extracts, from the photographed image, an image of a region including a fixing device that fixes the person using a second frame; and a procedure in which a determination unit determines whether or not the person is wearing the fixing device based on the state in which the image of the first frame and the image of the second frame overlap when the first frame and the second frame have been extracted.
  • the present invention may be an occupant detection device having the following characteristics.
  • (13) An occupant detection device comprising: a detection unit that detects information related to at least the position of the head of an occupant; and a determination unit that determines, based on the result detected by the detection unit, that the person is an occupant when the position of the head of the occupant is above a reference position set with respect to the seat.
  • (15) The occupant detection device according to (13) or (14), wherein the detection unit includes an imaging unit, the determination unit includes an extraction unit that extracts an image including at least the head of the occupant from the image captured by the imaging unit, and the determination unit determines that the occupant can wear the fixing device when, in the captured image, the position of the head in the extracted image including at least the head of the occupant is above the reference position.
  • (16) The occupant detection device, wherein the extraction unit extracts an image including the occupant's head from the image captured by the imaging unit using a first frame and detects the reference position from the image captured by the imaging unit, and the determination unit determines that the occupant is seated in a seat when the extraction unit can extract the first frame, determines that the occupant can wear the fixing device when the image extracted using the first frame and the reference position overlap, and determines that the occupant cannot wear the fixing device when the image extracted using the first frame and the reference position do not overlap.
  • (17) The occupant detection device according to (16), wherein the extraction unit extracts an image including the seat belt from the image captured by the imaging unit using a second frame, and the determination unit determines whether or not the fixing device is worn based on the state of the image extracted using the first frame and the image extracted using the second frame.
  • (18) The occupant detection device, wherein the determination unit determines that the occupant is wearing the fixing device when the image extracted using the first frame and the image extracted using the second frame overlap, and determines that the occupant is not wearing the fixing device when the image extracted using the first frame and the image extracted using the second frame do not overlap.
  • (19) The occupant detection device according to any one of (16) to (18), wherein the determination unit determines that the occupant can wear the fixing device when the upper side of the first frame is above the reference position.
  • (20) The occupant detection device, wherein the determination unit determines that the person is not an occupant when the image extracted using the first frame is not detected below a second reference position.
  • the present invention may be an occupant detection method having the following characteristics.
  • (21) An occupant detection method including: a detection step of detecting information related to at least the position of the head of an occupant; and a determination step of determining that the occupant is capable of wearing the fixing device when the position of the head is above the set reference position.
  • the present invention may be a human detection device having the following characteristics.
  • (22) A human detection device comprising: an imaging unit that captures an image; an extraction unit that, in the image captured by the imaging unit, divides a space in which a plurality of people are accommodated into regions corresponding to the number of people that can be accommodated in the space and extracts an image of a detection region set for each of the divided regions; and a determination unit that determines whether or not a person is present based on the image of the detection region extracted by the extraction unit.
  • (23) The human detection device according to (22), wherein the extraction unit extracts an image including the person from the image of the detection region using a first frame, and the determination unit determines that the person is present when the extraction unit can extract the first frame.
  • (24) The human detection device according to (23), wherein the determination unit determines that the person is not present when the extraction unit can extract the image of the first frame and the area of the image of the first frame is less than a threshold value.
  • (25) The human detection device, wherein the extraction unit extracts, from the image of the detection region, an image including a fixing device that fixes the person using a second frame, and the determination unit determines that the person is wearing the fixing device when the image of the second frame is extracted.
  • (26) The human detection device, wherein the cases in which the determination unit determines that the person is not present include a case where the area of the human image included in the detection region is less than a threshold value.
  • (27) The human detection device according to any one of (22) to (26), wherein an image corresponding to a person outside the detection region is not extracted.
  • the present invention may be a human detection method having the following characteristics. (28) A human detection method including: an imaging step in which an imaging unit captures an image; an extraction step of, in the image captured in the imaging step, dividing a space in which a plurality of people are accommodated into regions corresponding to the number of people that fit in the space and extracting an image of a detection region set for each of the divided regions; and a determination step of determining whether or not a person is present based on the image of the detection region extracted in the extraction step.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Automotive Seat Belt Assembly (AREA)

Abstract

The present invention relates to a detection device (1, 1AA, 1BB, 1CC, 1EE) comprising: a photographing unit (11, 11C, 11E) that captures an image; an extraction unit (121, 121A, 121B, 121C, 121E) that uses a first frame to extract, from the captured image, an image of a region that includes a person, and uses a second frame to extract, from the captured image, an image of a region that includes a securing device for securing the person; and a determination unit (12, 12A, 12B, 12C, 12E) that, when the first frame and the second frame have been extracted, determines, on the basis of the overlap between the image for the first frame and the image for the second frame, whether the person is wearing the securing device.
PCT/JP2019/008767 2018-03-30 2019-03-06 Dispositif de détection et procédé de détection WO2019188060A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2018-070093 2018-03-30
JP2018070092A JP2019177852A (ja) 2018-03-30 2018-03-30 乗員検知装置および乗員検知方法
JP2018070093A JP2019177853A (ja) 2018-03-30 2018-03-30 固定器具検知装置および固定器具検知方法
JP2018-070092 2018-03-30
JP2018070091A JP2019177851A (ja) 2018-03-30 2018-03-30 人検知装置および人検知方法
JP2018-070091 2018-03-30

Publications (1)

Publication Number Publication Date
WO2019188060A1 true WO2019188060A1 (fr) 2019-10-03

Family

ID=68060211

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/008767 WO2019188060A1 (fr) 2018-03-30 2019-03-06 Dispositif de détection et procédé de détection

Country Status (1)

Country Link
WO (1) WO2019188060A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11210539B2 (en) * 2019-04-04 2021-12-28 Joyson Safety Systems Acquisition Llc Detection and monitoring of active optical retroreflectors

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09240428A (ja) * 1995-12-27 1997-09-16 Omron Corp シートベルト未装着警告装置及びシートベルト装着管理装置
JP2000211475A (ja) * 1999-01-22 2000-08-02 Nsk Ltd シ―トベルト緊張装置
JP2006117046A (ja) * 2004-10-20 2006-05-11 Fujitsu Ten Ltd 乗員保護システム、及び乗員保護装置
JP2008129948A (ja) * 2006-11-22 2008-06-05 Takata Corp 乗員検出装置、作動装置制御システム、シートベルトシステム、車両
WO2013021707A1 (fr) * 2011-08-10 2013-02-14 本田技研工業株式会社 Dispositif de détection de passager de véhicule
JP2014211734A (ja) * 2013-04-18 2014-11-13 日産自動車株式会社 乗員検出装置
CN107330378A (zh) * 2017-06-09 2017-11-07 湖北天业云商网络科技有限公司 一种基于嵌入式图像处理的驾驶员行为检测系统


Similar Documents

Publication Publication Date Title
US7243945B2 (en) Weight measuring systems and methods for vehicles
JP6303907B2 (ja) 運転者監視装置
US7415126B2 (en) Occupant sensing system
CN113459982B (zh) 用于乘员分类和基于此调节气囊展开的系统和方法
JP4898261B2 (ja) 対象物検出システム、作動装置制御システム、車両、対象物検出方法
JP2007022401A (ja) 乗員情報検出システム、乗員拘束装置、車両
EP1870295A1 (fr) Système de détection d'occupant d'un véhicule, système de contrôle d'un dispositif fonctionnel, et véhicule
JP2016027452A (ja) ドライバの運転不能状態検出装置
EP2743141A1 (fr) Agencement de commande pour véhicule
JP2008129948A (ja) 乗員検出装置、作動装置制御システム、シートベルトシステム、車両
JP2008261749A (ja) 乗員検出装置、作動装置制御システム、シートベルトシステム、車両
US20180268230A1 (en) Vehicle display system and method of controlling vehicle display system
JP2016009255A (ja) ドライバの運転不能状態検出装置
JP2009113621A (ja) 乗員画像撮像装置、運転支援装置
JP6361312B2 (ja) ドライバの運転不能状態検出装置
JP2007153035A (ja) 乗員着座判定システム
DE102019110429A1 (de) Steuerung des airbagaktivierungsstatus an einem kraftfahrzeug
JP2019055759A (ja) 車載機器の制御システム
CN115675353A (zh) 用于使用基于乘员的大小和形状的座椅安全带绕行区带评估座椅安全带绕行的系统和方法
CN110588562A (zh) 一种儿童安全乘车提醒方法、装置、车载设备及存储介质
JP2016009258A (ja) ドライバの運転不能状態検出装置
WO2019188060A1 (fr) Dispositif de détection et procédé de détection
JP2010203836A (ja) 車室内状態の認識装置
JP2013252863A (ja) 乗員拘束制御装置および乗員拘束制御方法
JP2010195139A (ja) 乗員拘束制御装置および乗員拘束制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19776831

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19776831

Country of ref document: EP

Kind code of ref document: A1