WO2019207625A1 - Occupant detection device, occupant detection method, and occupant detection system - Google Patents

Occupant detection device, occupant detection method, and occupant detection system

Info

Publication number
WO2019207625A1
Authority
WO
WIPO (PCT)
Prior art keywords
occupant detection
calibration
occupant
processing unit
vehicle
Prior art date
Application number
PCT/JP2018/016455
Other languages
French (fr)
Japanese (ja)
Inventor
洸暉 安部
太郎 熊谷
Original Assignee
三菱電機株式会社
Priority date
Filing date
Publication date
Application filed by 三菱電機株式会社
Priority to PCT/JP2018/016455
Priority to JP2020515326A
Publication of WO2019207625A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/015Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use

Definitions

  • the present invention relates to an occupant detection device, an occupant detection method, and an occupant detection system.
  • Occupant detection includes, for example, detecting the presence or absence of occupants in the vehicle, detecting the seating position of each occupant when one or more occupants are present, estimating the physique of each occupant when one or more occupants are present, and determining whether a child seat is installed in each seat of the vehicle. The result of occupant detection is used, for example, to control airbag operation.
  • Occupant detection performed when the ignition switch is turned on is preferably executed before the vehicle starts.
  • However, the time T1 from when the ignition switch is turned on until the vehicle starts may be short, for example when the driver is in a hurry.
  • If occupant detection requires prior calibration, the time T2 required for the calibration may be longer than the time T1.
  • The present invention has been made to solve the above problem, and an object thereof is to provide an occupant detection device, an occupant detection method, and an occupant detection system capable of suppressing a delay in the execution timing of occupant detection relative to the occurrence timing of an event such as the start of the vehicle.
  • The occupant detection device includes: a calibration processing unit that executes a calibration process for a first occupant detection process, using image data representing an image captured by a camera for vehicle-interior imaging; an end determination unit that determines whether the calibration process has ended; a first occupant detection processing unit that executes the first occupant detection process using the image data and the result of the calibration process after the calibration process has ended; and a second occupant detection processing unit that executes, before the calibration process has ended, a second occupant detection process that requires no calibration process, using the image data.
  • According to the present invention, configured as described above, a delay in the execution timing of occupant detection relative to the occurrence timing of an event such as the start of the vehicle can be suppressed.
  • FIG. 2A is an explanatory diagram illustrating an example of an arrangement position of a camera in the vehicle and an example of an imageable range by the camera, and is an explanatory diagram illustrating a state viewed from above the vehicle.
  • FIG. 2B is an explanatory diagram illustrating an example of a camera arrangement position in the vehicle and an example of an imageable range by the camera, and is an explanatory diagram illustrating a state viewed from the side of the vehicle.
  • FIG. 3A is a block diagram illustrating a hardware configuration of the control device according to the first embodiment.
  • FIG. 3B is a block diagram illustrating another hardware configuration of the control device according to the first embodiment.
  • FIG. 4A is a flowchart showing an operation of the occupant detection device according to Embodiment 1.
  • FIG. 4B is a flowchart showing another operation of the occupant detection device according to Embodiment 1.
  • FIG. 1 is a block diagram illustrating a state in which an occupant detection system according to Embodiment 1 is provided in a vehicle.
  • FIG. 2 is an explanatory diagram illustrating an example of the arrangement position of the camera in the vehicle and an example of the camera's imageable range. The occupant detection system 300 of Embodiment 1 is described with reference to FIG. 1 and FIG. 2.
  • the vehicle 1 has a camera 2 for imaging in the passenger compartment.
  • the camera 2 is configured by, for example, an infrared camera, a visible light camera, or a distance image sensor.
  • the camera 2 is provided, for example, on a dashboard (more specifically, a center cluster) of the vehicle 1.
  • FIG. 2 shows an example of an arrangement position of the camera 2 in the vehicle 1 and an example of an imageable range A by the camera 2.
  • all the seats in the vehicle 1 are included in the imageable range A. For this reason, all seats in the vehicle 1 are to be imaged by the camera 2.
  • the vehicle 1 has sensors 3.
  • The sensors 3 are constituted by at least one of, for example, a sensor that detects on/off of the ignition switch of the vehicle 1, a sensor that detects opening and closing of each door of the vehicle 1, a sensor that detects the traveling speed of the vehicle 1, a load sensor (a so-called "seating sensor") provided in the seat surface of each seat of the vehicle 1, or a sensor that detects the range position of the transmission of the vehicle 1.
  • The image data acquisition unit 11 acquires image data representing images captured by the camera 2 from the camera 2 at predetermined time intervals. The interval is set according to, for example, the imaging frame rate of the camera 2.
  • the image data acquisition unit 11 outputs the acquired image data to the calibration processing unit 13, the first occupant detection processing unit 15, and the second occupant detection processing unit 16.
  • the start determination unit 12 uses the output signals from the sensors 3 to determine whether or not a predetermined event (hereinafter referred to as “start event”) has occurred in the vehicle 1.
  • Specifically, for example, the start determination unit 12 determines that a start event has occurred when the ignition switch of the vehicle 1 is turned on.
  • The start determination unit 12 also determines that a start event has occurred when any one or more doors of the vehicle 1 are detected to have been opened and then closed.
  • The start determination unit 12 also determines that a start event has occurred when the traveling speed of the vehicle 1 exceeds a predetermined speed (a "reference speed").
  • The start determination unit 12 also determines that a start event has occurred when the seating of an occupant on any one or more of the seats in the vehicle 1 is detected.
  • The start determination unit 12 also determines that a start event has occurred when the transmission of the vehicle 1 is switched from another range to the drive range.
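  • The combination of these conditions can be illustrated with a minimal sketch. The following Python fragment is a hypothetical illustration only: the signal names and the reference speed are assumptions, not part of the disclosure.

    # Hypothetical sketch of the start-event logic of the start determination
    # unit 12; field names and the reference speed are illustrative assumptions.
    REFERENCE_SPEED_KMH = 5.0  # assumed reference speed

    def start_event_occurred(sensors: dict) -> bool:
        """Return True if any of the described start events is detected."""
        return (
            sensors["ignition_on"]                          # ignition switch turned on
            or sensors["door_opened_then_closed"]           # a door opened, then closed
            or sensors["speed_kmh"] > REFERENCE_SPEED_KMH   # speed exceeds the reference
            or sensors["seat_occupied"]                     # a seating sensor detects an occupant
            or sensors["shifted_to_drive"]                  # transmission switched to drive range
        )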
  • The calibration processing unit 13 executes, using the image data output by the image data acquisition unit 11, a calibration process for the occupant detection process performed by the first occupant detection processing unit 15 (hereinafter referred to as the "first occupant detection process").
  • the calibration processing by the calibration processing unit 13 is started when the start determination unit 12 determines that a start event has occurred.
  • the end determination unit 14 determines whether the calibration processing by the calibration processing unit 13 has ended after the calibration processing unit 13 has started the calibration processing.
  • When the end determination unit 14 determines that the calibration process has ended, the first occupant detection processing unit 15 executes the first occupant detection process using the image data output by the image data acquisition unit 11 and the result of the calibration process. That is, the first occupant detection process is based on a method that requires a prior calibration process, and is executed after the calibration process has ended.
  • When the end determination unit 14 determines that the calibration process has not ended (that is, that the calibration process is still in progress), the second occupant detection processing unit 16 executes an occupant detection process (hereinafter referred to as the "second occupant detection process") using the image data output by the image data acquisition unit 11. That is, the second occupant detection process is based on a method that does not require a prior calibration process, and is executed before the calibration process ends.
  • Each of the first occupant detection process and the second occupant detection process includes, for example, detecting the presence or absence of occupants in the vehicle 1 (hereinafter referred to as "occupant detection"), detecting the seating position of each occupant when one or more occupants are in the vehicle 1 (hereinafter referred to as "boarding position detection"), estimating the physique of each occupant when one or more occupants are in the vehicle 1 (hereinafter referred to as "physique detection"), and determining whether a child seat is installed in each seat of the vehicle 1 (hereinafter referred to as "child seat detection").
  • The calibration processing unit 13 sets a plurality of reference values R used for occupant detection, boarding position detection, physique detection, and child seat detection by executing image recognition processing on the image captured by the camera 2.
  • The reference values R include, for example, a value R1 corresponding to the position of each seat, a value R2 corresponding to the position of each detection target (an occupant, a child seat, etc.) in the depth direction, and a value R3 corresponding to the position of the seat surface of each seat.
  • The first occupant detection processing unit 15 executes image recognition processing on the image captured by the camera 2 to calculate a plurality of values (hereinafter referred to as "element values" E) corresponding to the elements used for occupant detection, boarding position detection, physique detection, and child seat detection.
  • The element values E include, for example, a value E1 corresponding to the shoulder width of each occupant, a value E2 corresponding to the face width of each occupant, a value E3 corresponding to the sitting height of each occupant, and a value E4 corresponding to features such as the contour shape of each seat.
  • The first occupant detection processing unit 15 then uses the reference values R set by the calibration processing unit 13 to calculate a coefficient α for converting each element value E from a pixel-unit value (that is, a value in the captured image) to a meter-unit value E′ (that is, a value in real space).
  • Using the coefficient α, the first occupant detection processing unit 15 converts each element value E from the value E in the captured image to the value E′ in real space.
  • The first occupant detection processing unit 15 performs occupant detection, boarding position detection, physique detection, and child seat detection using the converted element values E′.
  • Boarding position detection and physique detection are executed only when one or more occupants are detected by occupant detection (that is, when one or more occupants are in the vehicle 1).
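  • A minimal sketch of this pixel-to-real-space conversion follows. It assumes, purely for illustration, that α is derived by dividing a known real-world seat width by the calibrated seat width in pixels (the reference value R1); the patent does not specify how α is computed from R.

    # Illustrative sketch: convert element values E (pixels) to E' (meters).
    # Deriving alpha from a single seat-width reference is an assumption here.
    KNOWN_SEAT_WIDTH_M = 0.50  # assumed real-world seat width in meters

    def compute_alpha(seat_width_px: float) -> float:
        """Meters-per-pixel scale factor from the calibrated seat width (R1)."""
        return KNOWN_SEAT_WIDTH_M / seat_width_px

    def to_real_space(element_values_px: dict, alpha: float) -> dict:
        """Convert element values E (pixel units) to E' (meter units)."""
        return {name: px * alpha for name, px in element_values_px.items()}

    # Example: E1 = shoulder width, E2 = face width, E3 = sitting height (pixels)
    alpha = compute_alpha(seat_width_px=200.0)
    e_prime = to_real_space({"E1": 180.0, "E2": 60.0, "E3": 340.0}, alpha)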
  • For physique detection, for example, the element value E1′ corresponding to the shoulder width of each occupant, the element value E2′ corresponding to the face width of each occupant, and the element value E3′ corresponding to the sitting height of each occupant are used.
  • For example, assume that an occupant's physique is classified into three types: the physique of an adult (for example, a person 14 years old or older), the physique of a child (for example, a person 6 years old or older and under 13), and the physique of an infant (for example, a person under 12 years old).
  • The first occupant detection processing unit 15 stores in advance a database indicating the ranges of the element values E1′, E2′, and E3′ corresponding to an adult's physique, the ranges corresponding to a child's physique, and the ranges corresponding to an infant's physique.
  • This database is generated by, for example, a statistical method.
  • The first occupant detection processing unit 15 estimates whether the physique of each occupant is an adult's, a child's, or an infant's by determining within which range in the database each element value E1′ falls, within which range each element value E2′ falls, and within which range each element value E3′ falls.
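  • A compact sketch of this range-database lookup follows. The numeric ranges are invented placeholders; the patent states only that such a database exists and is generated statistically.

    # Sketch of range-database physique classification; all ranges are placeholders.
    PHYSIQUE_RANGES = {
        # class: (E1' shoulder range, E2' face range, E3' sitting-height range), meters
        "adult":  ((0.38, 0.60), (0.14, 0.20), (0.85, 1.05)),
        "child":  ((0.28, 0.38), (0.12, 0.16), (0.65, 0.85)),
        "infant": ((0.15, 0.28), (0.09, 0.13), (0.40, 0.65)),
    }

    def classify_physique(e1: float, e2: float, e3: float) -> str:
        """Return the physique class whose ranges contain all three element values."""
        for physique, ranges in PHYSIQUE_RANGES.items():
            (l1, h1), (l2, h2), (l3, h3) = ranges
            if l1 <= e1 <= h1 and l2 <= e2 <= h2 and l3 <= e3 <= h3:
                return physique
        return "unknown"  # the patent does not specify handling of out-of-range values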
  • For child seat detection, for example, the element value E2′ corresponding to the face width of each occupant and the element value E4 corresponding to features such as the contour shape of each seat are used. That is, the first occupant detection processing unit 15 determines whether the face corresponding to each element value E2′ is an infant's face by comparing each element value E2′ with a predetermined threshold. The first occupant detection processing unit 15 also determines whether the contour shape corresponding to each element value E4 is an enveloping edge shape (that is, the edge shape that a typical child seat has). Based on these determination results, the first occupant detection processing unit 15 determines whether a child seat is installed in each seat.
  • the dictionary data storage unit 17 stores a plurality of dictionary data in advance.
  • the plurality of dictionary data is generated by machine learning, for example.
  • Each of the dictionary data serves as a comparison target for the image data representing the image captured by the camera 2.
  • The dictionary data correspond to mutually different states of the interior of the vehicle 1. More specifically, the dictionary data correspond to various states that differ from one another in the number of occupants in the vehicle 1, the seating position of each occupant, the physique of each occupant, the presence or absence of a child seat in each seat, and the like.
  • the second occupant detection processing unit 16 acquires a plurality of dictionary data stored in the dictionary data storage unit 17 from the dictionary data storage unit 17.
  • the second occupant detection processing unit 16 compares the image data output by the image data acquisition unit 11 with each of the acquired plurality of dictionary data.
  • the second occupant detection processing unit 16 selects one dictionary data having the highest similarity to the image data among the plurality of dictionary data. Thereby, occupant detection, boarding position detection, physique detection, and child seat detection are realized.
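  • The selection step can be sketched as follows. The similarity measure shown (normalized cross-correlation) is an assumption for illustration; the patent does not specify how similarity is computed.

    # Sketch of dictionary-data selection in the second occupant detection
    # process; normalized cross-correlation as the similarity is an assumption.
    import numpy as np

    def similarity(image: np.ndarray, template: np.ndarray) -> float:
        """Normalized cross-correlation between the image and one dictionary entry."""
        a = (image - image.mean()) / (image.std() + 1e-9)
        b = (template - template.mean()) / (template.std() + 1e-9)
        return float((a * b).mean())

    def second_occupant_detection(image: np.ndarray, dictionary: dict) -> str:
        """Return the state label of the most similar dictionary entry,
        e.g. 'driver only' or 'driver + child seat on rear-left seat'."""
        return max(dictionary, key=lambda label: similarity(image, dictionary[label]))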
  • That is, the second occupant detection process is based on so-called "pattern recognition". For this reason, the second occupant detection process is less robust against changes in the posture of each occupant, and less accurate, than the first occupant detection process. Conversely, the first occupant detection process is more robust against posture changes of each occupant, and more accurate, than the second occupant detection process.
  • the result information storage unit 18 stores information indicating the result of the first occupant detection process when the first occupant detection processing unit 15 executes the first occupant detection process.
  • the result information storage unit 18 stores information indicating the result of the second occupant detection process when the second occupant detection processing unit 16 executes the second occupant detection process.
  • information stored in the result information storage unit 18 is collectively referred to as “result information”.
  • the airbag control device 4 acquires the result information stored in the result information storage unit 18 from the result information storage unit 18.
  • the airbag control device 4 controls the operation of the airbag in the vehicle 1 using the acquired result information.
  • For example, the airbag control device 4 disables the airbag corresponding to a seat on which an infant is seated, based on the result of physique detection.
  • Likewise, the airbag control device 4 disables the airbag corresponding to a seat in which a child seat is installed, based on the result of child seat detection.
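  • A minimal sketch of this use of the result information follows; the per-seat result schema is an assumption introduced for illustration.

    # Sketch of the airbag enable/disable decision; the result schema is assumed.
    def airbag_enabled(seat_result: dict) -> bool:
        """Disable the airbag for a seat with an installed child seat or an infant."""
        if seat_result.get("child_seat_installed"):
            return False
        if seat_result.get("physique") == "infant":
            return False
        return True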
  • the calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16 constitute a main part of the occupant detection device 100.
  • the image data acquisition unit 11, the start determination unit 12, the dictionary data storage unit 17, the result information storage unit 18, and the occupant detection device 100 constitute a main part of the control device 200.
  • the camera 2 and the control device 200 constitute the main part of the occupant detection system 300.
  • the control device 200 is configured by a computer, and the computer includes a processor 31 and memories 32 and 33.
  • The memory 32 stores a program that causes the computer to function as the image data acquisition unit 11, the start determination unit 12, the calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16.
  • The processor 31 reads and executes the program stored in the memory 32, thereby realizing the functions of the image data acquisition unit 11, the start determination unit 12, the calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16.
  • the functions of the dictionary data storage unit 17 and the result information storage unit 18 are realized by the memory 33.
  • Alternatively, the control device 200 may include the memory 33 and a processing circuit 34. In this case, the functions of the image data acquisition unit 11, the start determination unit 12, the calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16 may be realized by the processing circuit 34.
  • Alternatively, the control device 200 may include the processor 31, the memories 32 and 33, and the processing circuit 34 (not shown). In this case, some of the functions of the image data acquisition unit 11, the start determination unit 12, the calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16 may be realized by the processor 31 and the memory 32, and the remaining functions may be realized by the processing circuit 34.
  • The processor 31 is, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a microprocessor, a microcontroller, or a DSP (Digital Signal Processor).
  • The memories 32 and 33 are, for example, semiconductor memories or magnetic disks. More specifically, the memories 32 and 33 use RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), an SSD (Solid State Drive), an HDD (Hard Disk Drive), or the like.
  • The processing circuit 34 is, for example, an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), an FPGA (Field-Programmable Gate Array), an SoC (System-on-a-Chip), or a system LSI (Large-Scale Integration).
  • When the start determination unit 12 determines that a start event has occurred, the calibration processing unit 13 starts the calibration process and continues it until the calibration process is completed (step ST1). Since a specific example of the calibration process has already been described, its description is omitted.
  • the end determination unit 14 determines whether the calibration processing by the calibration processing unit 13 has ended (step ST2).
  • When the end determination unit 14 determines that the calibration process has ended (step ST2 "YES"), the first occupant detection processing unit 15 executes the first occupant detection process (step ST3). Since a specific example of the first occupant detection process has already been described, its description is omitted.
  • When the end determination unit 14 determines that the calibration process has not ended (step ST2 "NO"), the second occupant detection processing unit 16 executes the second occupant detection process (step ST4). Since a specific example of the second occupant detection process has already been described, its description is omitted.
  • If the determination result in step ST2 is "NO", the calibration process continues in the background of the processes in steps ST2 and ST4. Therefore, after the process of step ST4 is executed, the end determination unit 14 may repeat the process of step ST2 until the determination becomes "YES". When the determination in step ST2 becomes "YES", the first occupant detection processing unit 15 executes the first occupant detection process (step ST3).
  • That is, the occupant detection device 100 may operate such that the second occupant detection processing unit 16 executes the second occupant detection process before the calibration process ends, and the first occupant detection processing unit 15 then executes the first occupant detection process after the calibration process ends.
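  • The overall flow of steps ST1 to ST4 can be sketched as follows; the callables stand in for the units described above and are assumptions introduced for illustration.

    # Sketch of the FIG. 4 flow: calibration (ST1) runs in the background while
    # the second process (ST4) covers the gap; the first process (ST3) takes
    # over once the end determination (ST2) succeeds. Callables are placeholders.
    def occupant_detection_loop(images, calibration, detect_first, detect_second, store):
        for image in images:
            if calibration.done():                                # ST2
                store(detect_first(image, calibration.result()))  # ST3
            else:
                store(detect_second(image))                       # ST4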
  • As described above, the occupant detection device 100 of Embodiment 1 includes the second occupant detection processing unit 16, which executes the second occupant detection process before the calibration process ends. This shortens the time from the occurrence of a start event until an occupant detection process is executed, compared with a conventional occupant detection device that has no functional unit corresponding to the second occupant detection processing unit 16. As a result, the execution timing of occupant detection can be prevented from lagging behind the occurrence timing of an event such as the start of the vehicle 1.
  • The arrangement position of the camera 2 in the vehicle 1 is not limited to the example shown in FIG. 2, and the imageable range A of the camera 2 is not limited to the example shown in FIG. 2.
  • only a part of the plurality of seats in the vehicle 1 may be included in the imageable range A. That is, only a part of the plurality of seats in the vehicle 1 may be an object to be imaged by the camera 2.
  • the sensors 3 are not limited to the above specific example, and the start event is not limited to the above specific example.
  • the sensors 3 may include any sensors as long as they are sensors provided in the vehicle 1.
  • any event may be set as a start event as long as the event can be determined using an output signal from the sensors 3.
  • the contents of the first occupant detection process are not limited to occupant detection, boarding position detection, physique detection, and child seat detection.
  • the first occupant detection process may include one or more of these detections.
  • the first occupant detection process may include other detections different from these detections.
  • the content of the second occupant detection process is not limited to occupant detection, boarding position detection, physique detection, and child seat detection.
  • the second occupant detection process may include one or more of these detections.
  • the second occupant detection process may include other detections different from these detections.
  • the first occupant detection process may be performed by a method that requires a prior calibration process, and the first occupant detection process method is not limited to the above specific example.
  • the second occupant detection process may be performed by a method that does not require a prior calibration process, and the method of the second occupant detection process is not limited to the above specific example.
  • The contents and method of the calibration process may be anything that suits the contents and method of the first occupant detection process, and are not limited to the above specific examples.
  • the use of the results of the first occupant detection process and the second occupant detection process is not limited to the operation control of the airbag.
  • the result information may be used for any control in the vehicle 1.
  • As described above, the occupant detection device 100 of Embodiment 1 includes the calibration processing unit 13, which executes the calibration process for the first occupant detection process using image data representing an image captured by the camera 2 for vehicle-interior imaging; the end determination unit 14, which determines whether the calibration process has ended; the first occupant detection processing unit 15, which executes the first occupant detection process using the image data and the result of the calibration process after the calibration process has ended; and the second occupant detection processing unit 16, which executes, before the calibration process has ended, the second occupant detection process that requires no calibration process, using the image data. This suppresses a delay in the execution timing of occupant detection relative to the occurrence timing of an event such as the start of the vehicle 1.
  • In the first occupant detection process, a plurality of values (element values) E′ corresponding to a plurality of elements are calculated by executing image recognition processing on the captured image.
  • The calculation of each value E′ is based on the plurality of reference values R, and the calibration process sets those reference values R by executing image recognition processing on the captured image.
  • The second occupant detection process is based on pattern recognition, that is, on comparison of the image data with each of a plurality of dictionary data generated by machine learning. This makes it possible to realize a second occupant detection process that is less accurate than the first occupant detection process but requires no prior calibration process.
  • In the occupant detection method of Embodiment 1, the calibration processing unit 13 executes the calibration process for the first occupant detection process using image data representing an image captured by the camera 2 for vehicle-interior imaging (step ST1); the end determination unit 14 determines whether the calibration process has ended (step ST2); after the calibration process has ended, the first occupant detection processing unit 15 executes the first occupant detection process using the image data and the result of the calibration process (step ST3); and before the calibration process has ended, the second occupant detection processing unit 16 executes the second occupant detection process, which requires no calibration process, using the image data (step ST4). This provides the same advantages as those of the occupant detection device 100 described above.
  • The occupant detection system 300 of Embodiment 1 includes the camera 2 for vehicle-interior imaging and the occupant detection device 100, which has the calibration processing unit 13 that executes the calibration process for the first occupant detection process using image data representing an image captured by the camera 2, the end determination unit 14 that determines whether the calibration process has ended, the first occupant detection processing unit 15 that executes the first occupant detection process using the image data and the result of the calibration process after the calibration process has ended, and the second occupant detection processing unit 16 that executes the second occupant detection process, which requires no calibration process, using the image data before the calibration process has ended. This provides the same advantages as those of the occupant detection device 100 described above.
  • FIG. 5 is a block diagram illustrating a state in which the occupant detection system according to Embodiment 2 is provided in a vehicle.
  • The occupant detection system 300a according to Embodiment 2 is described with reference to FIG. 5.
  • In FIG. 5, the same blocks as those shown in FIG. 1 are denoted by the same reference numerals, and their description is omitted.
  • The occupant detection device 100a has a function of executing personal authentication processing for each occupant in the vehicle 1, using the image data output by the image data acquisition unit 11.
  • the result information storage unit 18a stores information indicating the result of the personal authentication process (that is, result information) when the occupant detection device 100a executes the personal authentication process.
  • the result information storage unit 18a stores information indicating the result of the first occupant detection process (that is, result information) when the first occupant detection processing unit 15 executes the first occupant detection process.
  • the first occupant detection process includes, for example, occupant detection, boarding position detection, physique detection, and child seat detection.
  • Information indicating the result of physique detection in the first occupant detection process is stored in the result information storage unit 18a in association with information indicating the result of the personal authentication process. That is, information indicating the result of physique detection in the first occupant detection process is stored in the result information storage unit 18a for each individual.
  • the second occupant detection processing unit 16a executes a second occupant detection process.
  • the second occupant detection process includes, for example, occupant detection, boarding position detection, physique detection, and child seat detection.
  • The methods of occupant detection, boarding position detection, and child seat detection used by the second occupant detection processing unit 16a are the same as those used by the second occupant detection processing unit 16 shown in FIG. 1 (that is, they are based on pattern recognition), so their description is omitted.
  • The physique detection method used by the second occupant detection processing unit 16a, however, differs from that of the second occupant detection processing unit 16 shown in FIG. 1. Specifically, the second occupant detection processing unit 16a acquires, from the result information stored in the result information storage unit 18a, the information indicating the results of physique detection in first occupant detection processes executed in the past.
  • Using the acquired information, the second occupant detection processing unit 16a adopts, as the result of physique detection in the second occupant detection process, the past first-process physique detection result for the same person as each occupant currently in the vehicle 1. This is because the physique of the same person is unlikely to change greatly between the past execution of the first occupant detection process and the current execution of the second occupant detection process.
  • When the second occupant detection processing unit 16a executes the second occupant detection process, the result information storage unit 18a stores information indicating the result of that process (that is, result information); however, information indicating the result of physique detection in the second occupant detection process is excluded from storage.
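  • A minimal sketch of this reuse follows; keying the stored results by the personal-authentication result follows the description above, while the store layout itself is an assumption.

    # Sketch of Embodiment 2's physique handling: reuse the physique obtained by
    # a past first occupant detection process for the authenticated occupant.
    def physique_for_occupant(person_id: str, result_store: dict):
        """Return the stored first-process physique for this person, if any."""
        past = result_store.get(person_id)  # keyed by personal-authentication result
        if past is not None:
            return past["physique"]  # a person's physique rarely changes between trips
        return None  # no past result; fall back to pattern recognition if desired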
  • the calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16a constitute a main part of the occupant detection device 100a.
  • the image data acquisition unit 11, the start determination unit 12, the dictionary data storage unit 17, the result information storage unit 18a, and the occupant detection device 100a constitute a main part of the control device 200a.
  • the camera 2 and the control device 200a constitute the main part of the occupant detection system 300a.
  • the function of the second occupant detection processing unit 16a may be realized by the processor 31 and the memory 32, or may be realized by the processing circuit 34.
  • the function of the result information storage unit 18a is realized by the memory 33.
  • In step ST4, the second occupant detection processing unit 16a uses the result of physique detection in a first occupant detection process executed in the past as the result of physique detection in the second occupant detection process.
  • Alternatively, the second occupant detection processing unit 16a may execute physique detection by pattern recognition, in the same manner as the second occupant detection processing unit 16 shown in FIG. 1.
  • In that case, information indicating the result of physique detection in the second occupant detection process may also be stored.
  • the contents of the first occupant detection process are not limited to occupant detection, boarding position detection, physique detection, and child seat detection.
  • the contents of the second occupant detection process are not limited to occupant detection, boarding position detection, physique detection, and child seat detection.
  • the second occupant detection processing unit 16a may use the result of the first occupant detection process executed in the past as the result of the second occupant detection process for another detection different from the physique detection.
  • the occupant detection device 100a, the control device 200a, and the occupant detection system 300a can employ various modifications similar to those described in the first embodiment.
  • As described above, in the occupant detection device 100a of Embodiment 2, the second occupant detection process uses the result of a first occupant detection process executed in the past as its own result.
  • For physique detection, this realizes a second occupant detection process that requires no prior calibration process yet achieves accuracy comparable to that of the first occupant detection process.
  • the occupant detection device, occupant detection method, and occupant detection system of the present invention can be applied to, for example, operation control of an airbag in a vehicle.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Air Bags (AREA)
  • Chair Legs, Seat Parts, And Backrests (AREA)
  • Seats For Vehicles (AREA)

Abstract

An occupant detection device (100) provided with: a calibration processing unit (13) for using image data representing a photographic image from a vehicle interior imaging camera (2) to execute a calibration process for a first occupant detection process; an end determination unit (14) for determining whether the calibration process has ended; a first occupant detection processing unit (15) for using the image data and the result of the calibration process to execute a first occupant detection process after the calibration process has ended; and a second occupant detection processing unit (16) for using the image data to execute a second occupant detection process in which the calibration process is not required, before the calibration process has ended.

Description

乗員検知装置、乗員検知方法及び乗員検知システムOccupant detection device, occupant detection method, and occupant detection system
 本発明は、乗員検知装置、乗員検知方法及び乗員検知システムに関する。 The present invention relates to an occupant detection device, an occupant detection method, and an occupant detection system.
 従来、車室内撮像用のカメラによる撮像画像に対する画像認識処理を実行することにより、いわゆる「乗員検知」を実現するシステムが開発されている(例えば、特許文献1参照。)。乗員検知は、例えば、車両内の乗員の有無の検知、車両内に1人以上の乗員がいる場合における各乗員の乗車位置の検知、車両内に1人以上の乗員がいる場合における各乗員の体格の推定、及び車両内の各座席におけるチャイルドシートの設置有無の判定を含むものである。乗員検知の結果は、例えば、エアバッグの動作制御に用いられる。 Conventionally, a system that realizes so-called “occupant detection” by executing image recognition processing on an image captured by a camera for imaging in a vehicle interior has been developed (see, for example, Patent Document 1). The occupant detection includes, for example, detection of the presence or absence of an occupant in the vehicle, detection of the position of each occupant when there are one or more occupants in the vehicle, and detection of each occupant when there are one or more occupants in the vehicle. It includes estimation of physique and determination of the presence or absence of child seats in each seat in the vehicle. The result of occupant detection is used, for example, for controlling the operation of the airbag.
特開2009-107527号公報JP 2009-107527 A
 エアバッグの動作制御等を適切に実行する観点から、例えばイグニッションスイッチがオンされたときの乗員検知は、車両が発進するよりも先に実行されるのが好適である。 From the viewpoint of appropriately executing the operation control of the airbag, for example, it is preferable that the occupant detection when the ignition switch is turned on is executed before the vehicle starts.
 ここで、例えば運転者が急いでいることにより、イグニッションスイッチがオンされてから車両が発進するまでの時間T1が短いことがある。これに対して、乗員検知が事前のキャリブレーションを要する方法によるものである場合、キャリブレーションにかかる時間T2が時間T1よりも長いことがある。これにより、乗員検知の実行タイミングが車両の発進タイミングに対して遅れて、エアバッグの動作制御等を適切に実行することができなくなるという問題があった。 Here, for example, when the driver is in a hurry, the time T1 from when the ignition switch is turned on until the vehicle starts may be short. On the other hand, when occupant detection is based on a method that requires prior calibration, the time T2 required for calibration may be longer than the time T1. As a result, there is a problem in that the execution timing of the occupant detection is delayed with respect to the start timing of the vehicle, and the operation control of the airbag cannot be executed properly.
 本発明は、上記のような課題を解決するためになされたものであり、車両の発進等のイベントの発生タイミングに対する乗員検知の実行タイミングの遅れを抑制することができる乗員検知装置、乗員検知方法及び乗員検知システムを提供することを目的とする。 The present invention has been made to solve the above-described problems, and an occupant detection device and an occupant detection method capable of suppressing a delay in the execution timing of occupant detection with respect to an event generation timing such as vehicle start. And an occupant detection system.
 本発明の乗員検知装置は、車室内撮像用のカメラによる撮像画像を示す画像データを用いて、第1乗員検知処理用のキャリブレーション処理を実行するキャリブレーション処理部と、キャリブレーション処理が終了したか否かを判定する終了判定部と、キャリブレーション処理の終了後に、画像データ及びキャリブレーション処理の結果を用いて第1乗員検知処理を実行する第1乗員検知処理部と、キャリブレーション処理の終了前に、画像データを用いてキャリブレーション処理が不要な第2乗員検知処理を実行する第2乗員検知処理部とを備えるものである。 The occupant detection device according to the present invention includes a calibration processing unit that executes a calibration process for a first occupant detection process using image data indicating an image captured by a camera for imaging in a vehicle interior, and the calibration process is completed. An end determination unit that determines whether or not, a first occupant detection processing unit that executes the first occupant detection process using the image data and the result of the calibration process after the end of the calibration process, and the end of the calibration process A second occupant detection processing unit that executes a second occupant detection process that does not require a calibration process using image data is provided.
 本発明によれば、上記のように構成したので、車両の発進等のイベントの発生タイミングに対する乗員検知の実行タイミングの遅れを抑制することができる。 According to the present invention, since it is configured as described above, it is possible to suppress a delay in the execution timing of occupant detection with respect to the occurrence timing of an event such as vehicle start.
実施の形態1に係る乗員検知システムが車両に設けられている状態を示すブロック図である。It is a block diagram which shows the state by which the passenger | crew detection system which concerns on Embodiment 1 is provided in the vehicle. 図2Aは、車両におけるカメラの配置位置の一例及びカメラによる撮像可能範囲の一例を示す説明図であって、車両の上方から見た状態を示す説明図である。図2Bは、車両におけるカメラの配置位置の一例及びカメラによる撮像可能範囲の一例を示す説明図であって、車両の側方から見た状態を示す説明図である。FIG. 2A is an explanatory diagram illustrating an example of an arrangement position of a camera in the vehicle and an example of an imageable range by the camera, and is an explanatory diagram illustrating a state viewed from above the vehicle. FIG. 2B is an explanatory diagram illustrating an example of a camera arrangement position in the vehicle and an example of an imageable range by the camera, and is an explanatory diagram illustrating a state viewed from the side of the vehicle. 図3Aは、実施の形態1に係る制御装置のハードウェア構成を示すブロック図である。図3Bは、実施の形態1に係る制御装置の他のハードウェア構成を示すブロック図である。FIG. 3A is a block diagram illustrating a hardware configuration of the control device according to the first embodiment. FIG. 3B is a block diagram illustrating another hardware configuration of the control device according to the first embodiment. 図4Aは、実施の形態1に係る乗員検知装置の動作を示すフローチャートである。図4Bは、実施の形態1に係る乗員検知装置の他の動作を示すフローチャートである。FIG. 4A is a flowchart showing an operation of the occupant detection device according to Embodiment 1. FIG. 4B is a flowchart showing another operation of the occupant detection device according to Embodiment 1. 実施の形態2に係る乗員検知システムが車両に設けられている状態を示すブロック図である。It is a block diagram which shows the state by which the passenger | crew detection system which concerns on Embodiment 2 is provided in the vehicle.
 以下、この発明をより詳細に説明するために、この発明を実施するための形態について、添付の図面に従って説明する。 Hereinafter, in order to explain the present invention in more detail, modes for carrying out the present invention will be described with reference to the accompanying drawings.
実施の形態1.
 図1は、実施の形態1に係る乗員検知システムが車両に設けられている状態を示すブロック図である。図2は、車両におけるカメラの配置位置の一例及びカメラによる撮像可能範囲の一例を示す説明図である。図1及び図2を参照して、実施の形態1の乗員検知システム300について説明する。
Embodiment 1 FIG.
FIG. 1 is a block diagram illustrating a state in which an occupant detection system according to Embodiment 1 is provided in a vehicle. FIG. 2 is an explanatory diagram illustrating an example of the arrangement position of the camera in the vehicle and an example of an imageable range by the camera. With reference to FIG.1 and FIG.2, the passenger | crew detection system 300 of Embodiment 1 is demonstrated.
 車両1は車室内撮像用のカメラ2を有している。カメラ2は、例えば、赤外線カメラ、可視光カメラ又は距離画像センサにより構成されている。カメラ2は、例えば、車両1のダッシュボード(より具体的にはセンタークラスター)に設けられている。 The vehicle 1 has a camera 2 for imaging in the passenger compartment. The camera 2 is configured by, for example, an infrared camera, a visible light camera, or a distance image sensor. The camera 2 is provided, for example, on a dashboard (more specifically, a center cluster) of the vehicle 1.
 図2は、車両1におけるカメラ2の配置位置の一例及びカメラ2による撮像可能範囲Aの一例を示している。図2に示す例においては、車両1内の全座席(より具体的には運転席、助手席及び後部座席)が撮像可能範囲Aに含まれている。このため、車両1内の全座席がカメラ2による撮像対象となる。 FIG. 2 shows an example of an arrangement position of the camera 2 in the vehicle 1 and an example of an imageable range A by the camera 2. In the example shown in FIG. 2, all the seats in the vehicle 1 (more specifically, the driver's seat, the passenger seat, and the rear seat) are included in the imageable range A. For this reason, all seats in the vehicle 1 are to be imaged by the camera 2.
 車両1はセンサ類3を有している。センサ類3は、例えば、車両1におけるイグニッションスイッチのオンオフを検出するセンサ、車両1における各ドアの開閉を検出するセンサ、車両1の走行速度を検出するセンサ、車両1における各座席の座面部に設けられている荷重センサ(いわゆる「着座センサ」)、又は車両1における変速機のレンジ位置を検出するセンサのうちの少なくとも一つにより構成されている。 The vehicle 1 has sensors 3. The sensors 3 are, for example, a sensor that detects on / off of an ignition switch in the vehicle 1, a sensor that detects opening / closing of each door in the vehicle 1, a sensor that detects the traveling speed of the vehicle 1, and a seat surface portion of each seat in the vehicle 1. It is configured by at least one of a load sensor (a so-called “sitting sensor”) provided or a sensor for detecting a range position of the transmission in the vehicle 1.
 画像データ取得部11は、カメラ2による撮像画像を示す画像データを所定の時間間隔にてカメラ2から取得するものである。この時間間隔は、例えば、カメラ2による撮像フレームレート等に応じて異なる値に設定されている。画像データ取得部11は、当該取得された画像データをキャリブレーション処理部13、第1乗員検知処理部15及び第2乗員検知処理部16に出力するものである。 The image data acquisition unit 11 acquires image data indicating an image captured by the camera 2 from the camera 2 at a predetermined time interval. This time interval is set to a different value according to, for example, the imaging frame rate by the camera 2. The image data acquisition unit 11 outputs the acquired image data to the calibration processing unit 13, the first occupant detection processing unit 15, and the second occupant detection processing unit 16.
 開始判定部12は、センサ類3による出力信号を用いて、車両1における所定のイベント(以下「開始イベント」という。)が発生したか否かを判定するものである。 The start determination unit 12 uses the output signals from the sensors 3 to determine whether or not a predetermined event (hereinafter referred to as “start event”) has occurred in the vehicle 1.
 具体的には、例えば、車両1におけるイグニッションスイッチがオンされたとき、開始判定部12は開始イベントが発生したと判定する。 Specifically, for example, when an ignition switch in the vehicle 1 is turned on, the start determination unit 12 determines that a start event has occurred.
 または、例えば、車両1における複数個のドアのうちのいずれか1個以上のドアの開閉が検出されたとき(より具体的には、まず、当該1個以上のドアが開き、次いで、当該1個以上のドアが閉じたとき)、開始判定部12は開始イベントが発生したと判定する。 Alternatively, for example, when the opening / closing of any one or more of the plurality of doors in the vehicle 1 is detected (more specifically, the one or more doors are opened first, and then the 1 When more than one door is closed), the start determination unit 12 determines that a start event has occurred.
 または、例えば、車両1の走行速度が所定速度(以下「基準速度」という。)を超えたとき、開始判定部12は開始イベントが発生したと判定する。 Alternatively, for example, when the traveling speed of the vehicle 1 exceeds a predetermined speed (hereinafter referred to as “reference speed”), the start determination unit 12 determines that a start event has occurred.
 または、例えば、車両1内の複数個の座席のうちのいずれか1個以上の座席に対する乗員の着座が検出されたとき、開始判定部12は開始イベントが発生したと判定する。 Or, for example, when the seating of an occupant on any one or more of a plurality of seats in the vehicle 1 is detected, the start determination unit 12 determines that a start event has occurred.
 または、例えば、車両1における変速機が他のレンジからドライブレンジに切り替えられたとき、開始判定部12は開始イベントが発生したと判定する。 Alternatively, for example, when the transmission in the vehicle 1 is switched from another range to the drive range, the start determination unit 12 determines that a start event has occurred.
 キャリブレーション処理部13は、画像データ取得部11により出力された画像データを用いて、第1乗員検知処理部15による乗員検知処理(以下「第1乗員検知処理」という。)用のキャリブレーション処理を実行するものである。キャリブレーション処理部13によるキャリブレーション処理は、開始判定部12により開始イベントが発生したと判定されたときに開始される。 The calibration processing unit 13 uses the image data output from the image data acquisition unit 11 to perform calibration processing for occupant detection processing (hereinafter referred to as “first occupant detection processing”) by the first occupant detection processing unit 15. Is to execute. The calibration processing by the calibration processing unit 13 is started when the start determination unit 12 determines that a start event has occurred.
 終了判定部14は、キャリブレーション処理部13がキャリブレーション処理を開始した後、キャリブレーション処理部13によるキャリブレーション処理が終了したか否かを判定するものである。 The end determination unit 14 determines whether the calibration processing by the calibration processing unit 13 has ended after the calibration processing unit 13 has started the calibration processing.
 第1乗員検知処理部15は、終了判定部14によりキャリブレーション処理が終了したと判定された場合、画像データ取得部11により出力された画像データ及びキャリブレーション処理の結果を用いて第1乗員検知処理を実行するものである。すなわち、第1乗員検知処理は事前のキャリブレーション処理を要する方法によるものであり、キャリブレーション処理の終了後に実行されるものである。 When the end determination unit 14 determines that the calibration process has ended, the first occupant detection processing unit 15 detects the first occupant detection using the image data output from the image data acquisition unit 11 and the result of the calibration process. The process is executed. That is, the first occupant detection process is based on a method that requires a prior calibration process, and is executed after the calibration process is completed.
 第2乗員検知処理部16は、終了判定部14によりキャリブレーション処理が終了していないと判定された場合(すなわちキャリブレーション処理が実行中であると判定された場合)、画像データ取得部11により出力された画像データを用いて乗員検知処理(以下「第2乗員検知処理」という。)を実行するものである。すなわち、第2乗員検知処理は事前のキャリブレーション処理が不要な方法によるものであり、キャリブレーション処理の終了前に実行されるものである。 When the end determination unit 14 determines that the calibration process has not ended (that is, when it is determined that the calibration process is being performed), the second occupant detection processing unit 16 uses the image data acquisition unit 11 to The occupant detection process (hereinafter referred to as “second occupant detection process”) is executed using the output image data. That is, the second occupant detection process is based on a method that does not require a prior calibration process, and is executed before the end of the calibration process.
 第1乗員検知処理及び第2乗員検知処理の各々は、例えば、車両1内の乗員の有無の検知(以下「乗員検知」という。)、車両1内に1人以上の乗員がいる場合における各乗員の乗車位置の検知(以下「乗車位置検知」という。)、車両1内に1人以上の乗員がいる場合における各乗員の体格の推定(以下「体格検知」という。)、及び車両1内の各座席におけるチャイルドシートの設置有無の判定(以下「チャイルドシート検知」という。)を含むものである。 Each of the first occupant detection process and the second occupant detection process includes, for example, detection of the presence or absence of an occupant in the vehicle 1 (hereinafter referred to as “occupant detection”), Detection of the occupant's boarding position (hereinafter referred to as “boarding position detection”), estimation of the physique of each occupant (hereinafter referred to as “physique detection”) when there are one or more occupants in the vehicle 1, This includes the determination of whether or not a child seat is installed in each seat (hereinafter referred to as “child seat detection”).
 ここで、キャリブレーション処理、第1乗員検知処理及び第2乗員検知処理の具体例について説明する。 Here, specific examples of the calibration process, the first occupant detection process, and the second occupant detection process will be described.
〈キャリブレーション処理の具体例〉
 キャリブレーション処理部13は、カメラ2による撮像画像に対する画像認識処理を実行することにより、乗員検知、乗車位置検知、体格検知及びチャイルドシート検知に用いられる複数個の基準値Rを設定する。基準値Rは、例えば、各座席の位置に対応する値R、奥行き方向に対する個々の検知対象(乗員又はチャイルドシートなど)の位置に対応する値R、及び各座席の座面部の位置に対応する値Rを含むものである。
<Specific examples of calibration processing>
The calibration processing unit 13 sets a plurality of reference values R used for occupant detection, boarding position detection, physique detection, and child seat detection by executing image recognition processing on a captured image by the camera 2. The reference value R corresponds to, for example, the value R 1 corresponding to the position of each seat, the value R 2 corresponding to the position of each detection target (such as an occupant or a child seat) in the depth direction, and the position of the seat surface portion of each seat it is intended to include the value R 3 to.
〈第1乗員検知処理の具体例〉
 第1乗員検知処理部15は、カメラ2による撮像画像に対する画像認識処理を実行することにより、乗員検知、乗車位置検知、体格検知及びチャイルドシート検知に用いられる複数個の要素に対応する複数個の値(以下「要素値」という。)Eを算出する。要素値Eは、例えば、各乗員の肩幅に対応する値E、各乗員の顔幅に対応する値E、各乗員の座高に対応する値E、及び各座席における凹凸形状などの特徴に対応する値Eを含むものである。
<Specific example of first occupant detection process>
The first occupant detection processing unit 15 executes image recognition processing on an image captured by the camera 2, thereby a plurality of values corresponding to a plurality of elements used for occupant detection, boarding position detection, physique detection, and child seat detection. (Hereinafter referred to as “element value”) E is calculated. The element value E is, for example, a value E 1 corresponding to the shoulder width of each occupant, a value E 2 corresponding to the face width of each occupant, a value E 3 corresponding to the seat height of each occupant, and a feature such as an uneven shape in each seat. it is intended to include the value E 4 corresponding to.
 次いで、第1乗員検知処理部15は、キャリブレーション処理部13により設定された基準値Rを用いて、個々の要素値Eをピクセル単位の値(すなわち撮像画像における値)Eからメートル単位の値(すなわち実空間における値)E’に変換するための係数αを算出する。第1乗員検知処理部15は、係数αを用いて、個々の要素値Eを撮像画像における値Eから実空間における値E’に変換する。 Next, the first occupant detection processing unit 15 uses the reference value R set by the calibration processing unit 13 to convert each element value E from a pixel unit value (that is, a value in a captured image) E to a meter unit value. A coefficient α for conversion to E ′ (that is, a value in real space) is calculated. The first occupant detection processing unit 15 converts each element value E from the value E in the captured image to the value E ′ in the real space using the coefficient α.
 次いで、第1乗員検知処理部15は、変換後の要素値E’を用いて、乗員検知、乗車位置検知、体格検知及びチャイルドシート検知を実行する。ただし、乗車位置検知及び体格検知は、乗員検知により1人以上の乗員が検知された場合(すなわち車両1内に1人以上の乗員がいる場合)にのみ実行される。 Next, the first occupant detection processing unit 15 performs occupant detection, boarding position detection, physique detection, and child seat detection using the converted element value E ′. However, the boarding position detection and the physique detection are executed only when one or more passengers are detected by the passenger detection (that is, when one or more passengers are present in the vehicle 1).
 体格検知には、例えば、各乗員の肩幅に対応する要素値E’、各乗員の顔幅に対応する要素値E’及び各乗員の座高に対応する要素値E’が用いられる。例えば、乗員の体格が大人(例えば14歳以上の者)の体格、子供(例えば6歳以上13歳未満の者)の体格及び乳幼児(例えば12歳未満の者)の体格の3種類に分類されるものとする。第1乗員検知処理部15には、大人の体格に対応する要素値E’,E’,E’の各々の範囲と、子供の体格に対応する要素値E’,E’,E’の各々の範囲と、乳幼児の体格に対応する要素値E’,E’,E’の各々の範囲とを示すデータベースが予め記憶されている。このデータベースは、例えば、統計的手法により生成されたものである。第1乗員検知処理部15は、個々の要素値E’がデータベースにおけるいずれの範囲内の値であるかを判定し、かつ、個々の要素値E’がデータベースにおけるいずれの範囲内の値であるかを判定し、かつ、個々の要素値E’がデータベースにおけるいずれの範囲内の値であるかを判定することにより、各乗員の体格が大人の体格、子供の体格又は乳幼児の体格のうちのいずれであるかを推定する。 For physique detection, for example, an element value E 1 ′ corresponding to the shoulder width of each occupant, an element value E 2 ′ corresponding to the face width of each occupant, and an element value E 3 ′ corresponding to the seat height of each occupant are used. For example, the occupant's physique is classified into three types: the physique of an adult (for example, a person over 14 years old), the physique of a child (for example, a person over 6 years old and under 13 years old), and the physique of an infant (for example, a person under 12 years old). Shall be. The first passenger detection processing unit 15, an element value E 1 corresponding to the adult body size ', E 2', E 3 ' and each of the range, the element values E 1 corresponding to the child's physique', E 2 ' , E 3 ′, and a database indicating each range of element values E 1 ′, E 2 ′, E 3 ′ corresponding to the infant's physique are stored in advance. This database is generated by, for example, a statistical method. The first occupant detection processing unit 15 determines in which range each element value E 1 ′ is in the database, and each element value E 2 ′ is in which range in the database. And whether each element value E 3 ′ is a value in the database, the physique of each occupant is an adult physique, a child physique or an infant physique Which of the two is estimated.
 チャイルドシート検知には、例えば、各乗員の顔幅に対応する要素値E’及び各座席における凹凸形状などの特徴に対応する要素値Eが用いられる。すなわち、第1乗員検知処理部15は、個々の要素値E’を所定の閾値と比較することにより、個々の要素値E’に対応する顔が乳幼児の顔であるか否かを判定する。また、第1乗員検知処理部15は、個々の要素値Eに対応する凹凸形状が包み込むようなエッジ形状(すなわち一般的なチャイルドシートが有するエッジ形状)であるか否かを判定する。第1乗員検知処理部15は、これらの判定結果に基づき、各座席におけるチャイルドシートの設置有無を判定する。 In the child seat detection, for example, an element value E 2 ′ corresponding to the face width of each occupant and an element value E 4 corresponding to features such as the uneven shape in each seat are used. That is, the first occupant detection processing unit 15 determines whether the face corresponding to each element value E 2 ′ is an infant's face by comparing each element value E 2 ′ with a predetermined threshold value. To do. The first passenger detecting processor 15 determines whether the edge shape encompassing uneven shape corresponding to the individual element values E 4 (i.e. edge shape typical child seat has). Based on these determination results, the first occupant detection processing unit 15 determines whether a child seat is installed in each seat.
<Specific example of the second occupant detection process>
 The dictionary data storage unit 17 stores a plurality of pieces of dictionary data in advance. The pieces of dictionary data are generated by, for example, machine learning. Each piece of dictionary data serves as a comparison target for the image data indicating the image captured by the camera 2. The pieces of dictionary data correspond to mutually different states in the vehicle 1. More specifically, they correspond to various states that differ from one another in, for example, the number of occupants in the vehicle 1, the boarding position of each occupant, the physique of each occupant, and the presence or absence of a child seat in each seat.
 The second occupant detection processing unit 16 acquires the pieces of dictionary data stored in the dictionary data storage unit 17, compares the image data output by the image data acquisition unit 11 with each of them, and selects the one piece of dictionary data with the highest similarity to the image data. Occupant detection, boarding position detection, physique detection, and child seat detection are thereby realized.
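 The selection step amounts to a nearest-template search. A sketch, assuming each dictionary entry pairs a machine-learned feature vector with the in-vehicle state it represents, and using cosine similarity as the measure (the patent does not fix one):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def second_detection(image_features, dictionary):
    """dictionary: list of (feature_vector, state) pairs; return the state of
    the single entry with the highest similarity to the image data."""
    _, state = max(dictionary, key=lambda e: cosine_similarity(image_features, e[0]))
    return state

dictionary = [
    ([1.0, 0.0, 0.2], {"occupants": 1, "child_seat": False}),
    ([0.1, 0.9, 0.8], {"occupants": 2, "child_seat": True}),
]
print(second_detection([0.9, 0.1, 0.3], dictionary))  # -> {'occupants': 1, 'child_seat': False}
```

 In practice the feature vectors would come from whatever representation the machine-learned dictionary was trained on; the point is only that the state attached to the most similar entry is reported as the detection result.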
 That is, the second occupant detection process is based on so-called “pattern recognition.” For this reason, compared with the first occupant detection process, it is less robust against changes in each occupant's posture and lower in detection accuracy. In other words, the first occupant detection process is more robust against posture changes and more accurate than the second occupant detection process.
 When the first occupant detection processing unit 15 executes the first occupant detection process, the result information storage unit 18 stores information indicating the result of that process. Likewise, when the second occupant detection processing unit 16 executes the second occupant detection process, the result information storage unit 18 stores information indicating the result of that process. Hereinafter, the information stored in the result information storage unit 18 is collectively referred to as “result information.”
 The airbag control device 4 acquires the result information stored in the result information storage unit 18 and uses it to control the operation of the airbags in the vehicle 1.
 Specifically, for example, the airbag control device 4 disables the operation of the airbag corresponding to a seat in which an infant is seated, based on the result of the physique detection. Likewise, for example, it disables the operation of the airbag corresponding to a seat in which a child seat is installed, based on the result of the child seat detection.
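 Expressed as a rule over the result information, this control reduces to a sketch like the following; the field names are illustrative, not from the patent:

```python
def airbag_enabled(seat_result: dict) -> bool:
    """Illustrative rule: disable the airbag for a seat occupied by an infant
    or fitted with a child seat; otherwise leave it enabled."""
    if seat_result.get("physique") == "infant":
        return False
    if seat_result.get("child_seat"):
        return False
    return True

print(airbag_enabled({"physique": "adult", "child_seat": False}))   # True
print(airbag_enabled({"physique": "infant", "child_seat": False}))  # False
```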
 The calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16 constitute the main part of the occupant detection device 100. The image data acquisition unit 11, the start determination unit 12, the dictionary data storage unit 17, the result information storage unit 18, and the occupant detection device 100 constitute the main part of the control device 200. The camera 2 and the control device 200 constitute the main part of the occupant detection system 300.
 Next, the hardware configuration of the main part of the control device 200 will be described with reference to FIG. 3.
 As shown in FIG. 3A, the control device 200 is configured by a computer, and the computer includes a processor 31 and memories 32 and 33. The memory 32 stores a program for causing the computer to function as the image data acquisition unit 11, the start determination unit 12, the calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16. The functions of these units are realized by the processor 31 reading and executing the program stored in the memory 32. The functions of the dictionary data storage unit 17 and the result information storage unit 18 are realized by the memory 33.
 Alternatively, as shown in FIG. 3B, the control device 200 may include the memory 33 and a processing circuit 34. In this case, the functions of the image data acquisition unit 11, the start determination unit 12, the calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16 may be realized by the processing circuit 34.
 Alternatively, the control device 200 may include the processor 31, the memories 32 and 33, and the processing circuit 34 (not shown). In this case, some of the functions of these units may be realized by the processor 31 and the memory 32, and the remaining functions may be realized by the processing circuit 34.
 The processor 31 is, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a microprocessor, a microcontroller, or a DSP (Digital Signal Processor).
 The memories 32 and 33 are, for example, semiconductor memories or magnetic disks. More specifically, the memories 32 and 33 are, for example, a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), an SSD (Solid State Drive), or an HDD (Hard Disk Drive).
 The processing circuit 34 is, for example, an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), an FPGA (Field-Programmable Gate Array), an SoC (System-on-a-Chip), or a system LSI (Large-Scale Integration).
 Next, the operation of the occupant detection device 100 will be described with reference to the flowchart of FIG. 4.
 When the start determination unit 12 determines that a start event has occurred, the calibration processing unit 13 starts the calibration process and then continues executing it until it is completed (step ST1). A specific example of the calibration process has already been described, so its description is not repeated here.
 After the calibration processing unit 13 starts the calibration process, the end determination unit 14 determines whether the calibration process by the calibration processing unit 13 has been completed (step ST2).
 When the end determination unit 14 determines that the calibration process has been completed (step ST2 “YES”), the first occupant detection processing unit 15 executes the first occupant detection process (step ST3). A specific example of the first occupant detection process has already been described, so its description is not repeated here.
 On the other hand, when the end determination unit 14 determines that the calibration process has not been completed (step ST2 “NO”), the second occupant detection processing unit 16 executes the second occupant detection process (step ST4). A specific example of the second occupant detection process has already been described, so its description is not repeated here.
 When the determination result in step ST2 is “NO”, the calibration process continues to run in the background of the processing in steps ST2 and ST4. The end determination unit 14 may therefore repeat the determination in step ST2 after the processing in step ST4 is executed, until the result becomes “YES”, and the first occupant detection processing unit 15 may then execute the processing in step ST3. That is, the second occupant detection processing unit 16 may execute the second occupant detection process before the calibration process is completed, and the first occupant detection processing unit 15 may then execute the first occupant detection process after the calibration process is completed.
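 The flow of steps ST1 to ST4, including the variant in which the second process bridges the gap until calibration completes, can be sketched as follows; run_calibration, first_detection, and second_detection stand in for the corresponding processing units:

```python
import threading
import time

def occupant_detection_loop(run_calibration, first_detection, second_detection):
    done = threading.Event()

    def calibrate():          # ST1: calibration runs in the background
        run_calibration()
        done.set()

    threading.Thread(target=calibrate, daemon=True).start()

    while not done.is_set():  # ST2 "NO": calibration not yet complete
        second_detection()    # ST4: calibration-free occupant detection
        time.sleep(0.1)       # next image frame

    first_detection()         # ST2 "YES" -> ST3: calibrated detection

occupant_detection_loop(lambda: time.sleep(0.5),
                        lambda: print("first detection"),
                        lambda: print("second detection"))
```

 Running calibration on a background thread is one way to realize the described behavior; the patent only requires that the second process can answer before calibration ends, not a particular concurrency scheme.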
 As described above, the occupant detection device 100 of Embodiment 1 includes, in addition to the first occupant detection processing unit 15 that executes the first occupant detection process after the calibration process is completed, the second occupant detection processing unit 16 that executes the second occupant detection process before the calibration process is completed. Compared with a conventional occupant detection device that has no functional unit corresponding to the second occupant detection processing unit 16, this shortens the time from the occurrence of a start event until an occupant detection process is executed for the first time. As a result, a delay in the execution timing of the occupant detection process relative to the occurrence timing of an event such as the start of the vehicle 1 can be suppressed.
 The position of the camera 2 in the vehicle 1 is not limited to the example shown in FIG. 2, and neither is the range A that the camera 2 can capture. For example, only some of the seats in the vehicle 1 (for example, the driver's seat and the front passenger's seat) may be included in the range A. That is, only some of the seats in the vehicle 1 may be subject to imaging by the camera 2.
 The sensors 3 are not limited to the specific examples above, and neither is the start event. The sensors 3 may include any sensor provided in the vehicle 1, and any event whose occurrence can be determined using the output signals of the sensors 3 may be set as the start event.
 The content of the first occupant detection process is not limited to occupant detection, boarding position detection, physique detection, and child seat detection. The first occupant detection process may include any one or more of these detections, and may also include other detections different from these.
 Likewise, the content of the second occupant detection process is not limited to occupant detection, boarding position detection, physique detection, and child seat detection. The second occupant detection process may include any one or more of these detections, and may also include other detections different from these.
 The first occupant detection process need only be based on a method that requires a prior calibration process, and its method is not limited to the specific example above. Similarly, the second occupant detection process need only be based on a method that requires no prior calibration process, and its method is not limited to the specific example above.
 The content and method of the calibration process need only correspond to the content and method of the first occupant detection process, and are not limited to the specific example above.
 The use of the results of the first and second occupant detection processes (that is, the use of the result information) is not limited to airbag operation control. The result information may be used for any control in the vehicle 1.
 As described above, the occupant detection device 100 of Embodiment 1 includes: the calibration processing unit 13, which executes the calibration process for the first occupant detection process using image data indicating an image captured by the camera 2 for imaging the vehicle interior; the end determination unit 14, which determines whether the calibration process has been completed; the first occupant detection processing unit 15, which executes the first occupant detection process using the image data and the result of the calibration process after the calibration process is completed; and the second occupant detection processing unit 16, which executes, before the calibration process is completed, the second occupant detection process, which requires no calibration process, using the image data. This suppresses a delay in the execution timing of the occupant detection process relative to the occurrence timing of an event such as the start of the vehicle 1.
 The first occupant detection process calculates a plurality of values (element values) E′ corresponding to a plurality of elements by executing image recognition processing on the captured image; the calculation of the element values E′ is based on a plurality of reference values R; and the calibration process sets the plurality of reference values R by executing image recognition processing on the captured image. This realizes a first occupant detection process that is highly robust against changes in each occupant's posture and highly accurate.
 The second occupant detection process is based on pattern recognition by comparing the image data with each of a plurality of pieces of dictionary data, and the pieces of dictionary data are generated by machine learning. This realizes a second occupant detection process that, although less accurate than the first occupant detection process, requires no prior calibration process.
 The occupant detection method of Embodiment 1 includes: step ST1, in which the calibration processing unit 13 executes the calibration process for the first occupant detection process using image data indicating an image captured by the camera 2 for imaging the vehicle interior; step ST2, in which the end determination unit 14 determines whether the calibration process has been completed; step ST3, in which the first occupant detection processing unit 15 executes the first occupant detection process using the image data and the result of the calibration process after the calibration process is completed; and step ST4, in which the second occupant detection processing unit 16 executes, before the calibration process is completed, the second occupant detection process, which requires no calibration process, using the image data. This provides the same effects as those of the occupant detection device 100 described above.
 The occupant detection system 300 of Embodiment 1 includes the camera 2 for imaging the vehicle interior and the occupant detection device 100, which has: the calibration processing unit 13, which executes the calibration process for the first occupant detection process using image data indicating an image captured by the camera 2; the end determination unit 14, which determines whether the calibration process has been completed; the first occupant detection processing unit 15, which executes the first occupant detection process using the image data and the result of the calibration process after the calibration process is completed; and the second occupant detection processing unit 16, which executes, before the calibration process is completed, the second occupant detection process, which requires no calibration process, using the image data. This provides the same effects as those of the occupant detection device 100 described above.
Embodiment 2.
 FIG. 5 is a block diagram showing a state in which the occupant detection system according to Embodiment 2 is provided in a vehicle. The occupant detection system 300a of Embodiment 2 will be described with reference to FIG. 5. In FIG. 5, blocks similar to those shown in FIG. 1 are given the same reference numerals, and their description is omitted.
 The occupant detection device 100a has a function of executing personal authentication processing for each occupant in the vehicle 1 using the image data output by the image data acquisition unit 11, for example when the start determination unit 12 determines that a start event has occurred.
 When the occupant detection device 100a executes the personal authentication process, the result information storage unit 18a stores information indicating the result of that process (that is, result information). Likewise, when the first occupant detection processing unit 15 executes the first occupant detection process, the result information storage unit 18a stores information indicating the result of that process.
 Here, the first occupant detection process includes, for example, occupant detection, boarding position detection, physique detection, and child seat detection. Information indicating the result of the physique detection in the first occupant detection process is stored in the result information storage unit 18a in association with information indicating the result of the personal authentication process. That is, the result of the physique detection in the first occupant detection process is stored in the result information storage unit 18a for each individual.
 The second occupant detection processing unit 16a executes the second occupant detection process, which includes, for example, occupant detection, boarding position detection, physique detection, and child seat detection. The methods of occupant detection, boarding position detection, and child seat detection used by the second occupant detection processing unit 16a are the same as those used by the second occupant detection processing unit 16 shown in FIG. 1 (that is, they are based on pattern recognition), so their description is not repeated here.
 Here, the method of physique detection used by the second occupant detection processing unit 16a differs from that used by the second occupant detection processing unit 16 shown in FIG. 1. That is, the second occupant detection processing unit 16a acquires, from the result information stored in the result information storage unit 18a, information indicating the results of physique detection in first occupant detection processes executed in the past. Using the acquired information, the second occupant detection processing unit 16a adopts, as the physique detection result of the second occupant detection process, the past physique detection result for the same person as each occupant currently in the vehicle 1. This is because the probability that the physique of the same person changes greatly between the execution of a past first occupant detection process and the execution of the current second occupant detection process is low.
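 A sketch of this reuse rule, assuming the result information is keyed by an ID produced by the personal authentication process; the store layout and names are illustrative:

```python
def physique_for_occupant(person_id, physique_store, pattern_recognition):
    """Prefer the physique recorded by a past first occupant detection process
    for the same authenticated person; otherwise fall back to pattern recognition."""
    stored = physique_store.get(person_id)
    if stored is not None:
        return stored                      # Embodiment 2: reuse the past, calibrated result
    return pattern_recognition(person_id)  # no record yet (see the fallback note below)

physique_store = {"driver-042": "adult"}   # filled in by past first detection processes
print(physique_for_occupant("driver-042", physique_store, lambda _: "unknown"))
```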
 When the second occupant detection processing unit 16a executes the second occupant detection process, the result information storage unit 18a stores information indicating the result of that process (that is, result information). However, information indicating the result of the physique detection in the second occupant detection process is excluded from what is stored.
 The calibration processing unit 13, the end determination unit 14, the first occupant detection processing unit 15, and the second occupant detection processing unit 16a constitute the main part of the occupant detection device 100a. The image data acquisition unit 11, the start determination unit 12, the dictionary data storage unit 17, the result information storage unit 18a, and the occupant detection device 100a constitute the main part of the control device 200a. The camera 2 and the control device 200a constitute the main part of the occupant detection system 300a.
 The hardware configuration of the main part of the control device 200a is the same as that described in Embodiment 1 with reference to FIG. 3, so its illustration and description are omitted. That is, the function of the second occupant detection processing unit 16a may be realized by the processor 31 and the memory 32, or by the processing circuit 34. The function of the result information storage unit 18a is realized by the memory 33.
 The operation of the occupant detection device 100a is the same as that described in Embodiment 1 with reference to the flowchart of FIG. 4, so its illustration and description are omitted. However, in step ST4, the second occupant detection processing unit 16a uses the result of physique detection in a first occupant detection process executed in the past as the result of the physique detection in the second occupant detection process.
 When information indicating a physique detection result for the same person as an occupant currently in the vehicle 1 is not stored in the result information storage unit 18a, the second occupant detection processing unit 16a may execute physique detection by pattern recognition in the same manner as the second occupant detection processing unit 16 shown in FIG. 1. In this case, information indicating the result of the physique detection in the second occupant detection process may be included in what is stored.
 The contents of the first and second occupant detection processes are not limited to occupant detection, boarding position detection, physique detection, and child seat detection. For detections other than physique detection, too, the second occupant detection processing unit 16a may use the result of a first occupant detection process executed in the past as the result of the second occupant detection process.
 In addition, the occupant detection device 100a, the control device 200a, and the occupant detection system 300a can adopt various modifications similar to those described in Embodiment 1.
 As described above, in the occupant detection device 100a of Embodiment 2, the second occupant detection process uses the result of a first occupant detection process executed in the past as its own result. For physique detection, for example, this realizes a second occupant detection process that requires no prior calibration process and has accuracy equivalent to that of the first occupant detection process.
 Within the scope of the present invention, the embodiments may be freely combined, any component of each embodiment may be modified, and any component of each embodiment may be omitted.
 The occupant detection device, occupant detection method, and occupant detection system of the present invention can be applied to, for example, airbag operation control in a vehicle.
1 vehicle, 2 camera, 3 sensors, 4 airbag control device, 11 image data acquisition unit, 12 start determination unit, 13 calibration processing unit, 14 end determination unit, 15 first occupant detection processing unit, 16, 16a second occupant detection processing unit, 17 dictionary data storage unit, 18, 18a result information storage unit, 31 processor, 32 memory, 33 memory, 34 processing circuit, 100, 100a occupant detection device, 200, 200a control device, 300, 300a occupant detection system.

Claims (15)

  1.  An occupant detection device comprising:
     a calibration processing unit that executes a calibration process for a first occupant detection process, using image data indicating an image captured by a camera for imaging a vehicle interior;
     an end determination unit that determines whether the calibration process has been completed;
     a first occupant detection processing unit that executes the first occupant detection process using the image data and a result of the calibration process after the calibration process is completed; and
     a second occupant detection processing unit that executes, before the calibration process is completed, a second occupant detection process that requires no calibration process, using the image data.
  2.  The occupant detection device according to claim 1, wherein the calibration processing unit starts the calibration process when an ignition switch of the vehicle is turned on.
  3.  The occupant detection device according to claim 1, wherein the calibration processing unit starts the calibration process when opening or closing of a door of the vehicle is detected.
  4.  The occupant detection device according to claim 1, wherein the calibration processing unit starts the calibration process when the traveling speed of the vehicle exceeds a reference speed.
  5.  The occupant detection device according to claim 1, wherein the calibration processing unit starts the calibration process when seating of an occupant in a seat in the vehicle is detected.
  6.  The occupant detection device according to claim 1, wherein the calibration processing unit starts the calibration process when a transmission of the vehicle is switched from another range to a drive range.
  7.  The occupant detection device according to claim 1, wherein:
     the first occupant detection process calculates a plurality of values corresponding to a plurality of elements by executing image recognition processing on the captured image;
     the calculation of the plurality of values is based on a plurality of reference values; and
     the calibration process sets the plurality of reference values by executing image recognition processing on the captured image.
  8.  The occupant detection device according to claim 1, wherein:
     the second occupant detection process is based on pattern recognition by comparison of the image data with each of a plurality of pieces of dictionary data; and
     the plurality of pieces of dictionary data are generated by machine learning.
  9.  The occupant detection device according to claim 1, wherein the second occupant detection process uses a result of the first occupant detection process executed in the past as a result of the second occupant detection process.
  10.  The occupant detection device according to claim 1, wherein each of the first occupant detection process and the second occupant detection process includes detection of the presence or absence of an occupant in the vehicle.
  11.  The occupant detection device according to claim 1, wherein each of the first occupant detection process and the second occupant detection process includes detection of the boarding position of each occupant when one or more occupants are present in the vehicle.
  12.  The occupant detection device according to claim 1, wherein each of the first occupant detection process and the second occupant detection process includes estimation of the physique of each occupant when one or more occupants are present in the vehicle.
  13.  The occupant detection device according to claim 1, wherein each of the first occupant detection process and the second occupant detection process includes determination of whether a child seat is installed in each seat in the vehicle.
  14.  An occupant detection method comprising the steps of:
     a calibration processing unit executing a calibration process for a first occupant detection process, using image data indicating an image captured by a camera for imaging a vehicle interior;
     an end determination unit determining whether the calibration process has been completed;
     a first occupant detection processing unit executing the first occupant detection process using the image data and a result of the calibration process after the calibration process is completed; and
     a second occupant detection processing unit executing, before the calibration process is completed, a second occupant detection process that requires no calibration process, using the image data.
  15.  An occupant detection system comprising:
     a camera for imaging a vehicle interior; and
     an occupant detection device having: a calibration processing unit that executes a calibration process for a first occupant detection process, using image data indicating an image captured by the camera; an end determination unit that determines whether the calibration process has been completed; a first occupant detection processing unit that executes the first occupant detection process using the image data and a result of the calibration process after the calibration process is completed; and a second occupant detection processing unit that executes, before the calibration process is completed, a second occupant detection process that requires no calibration process, using the image data.
PCT/JP2018/016455 2018-04-23 2018-04-23 Occupant detection device, occupant detection method, and occupant detection system WO2019207625A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2018/016455 WO2019207625A1 (en) 2018-04-23 2018-04-23 Occupant detection device, occupant detection method, and occupant detection system
JP2020515326A JPWO2019207625A1 (en) 2018-04-23 2018-04-23 Occupant detection device, occupant detection method, and occupant detection system

Publications (1)

Publication Number Publication Date
WO2019207625A1 (en) 2019-10-31

Family ID: 68293608

Country Status (2)

Country Link
JP (1) JPWO2019207625A1 (en)
WO (1) WO2019207625A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023084738A1 (en) * 2021-11-12 2023-05-19 三菱電機株式会社 Physique determination device and physique determination method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010029416A1 (en) * 1992-05-05 2001-10-11 Breed David S. Vehicular component control systems and methods
JP2001338282A (en) * 2000-05-24 2001-12-07 Tokai Rika Co Ltd Crew-member detecting system
JP2002008021A (en) * 2000-06-16 2002-01-11 Tokai Rika Co Ltd Occupant detection system
JP2006527354A * 2003-04-24 2006-11-30 Robert Bosch GmbH Apparatus and method for calibration of image sensor
JP2007153035A (en) * 2005-12-01 2007-06-21 Auto Network Gijutsu Kenkyusho:Kk Occupant sitting judgement system
JP2007198929A (en) * 2006-01-27 2007-08-09 Hitachi Ltd In-vehicle situation detection system, in-vehicle situation detector, and in-vehicle situation detection method
JP2009107527A (en) * 2007-10-31 2009-05-21 Denso Corp Occupant detection device of vehicle
JP2010195139A (en) * 2009-02-24 2010-09-09 Takata Corp Occupant restraint control device and occupant restraint control method
WO2012172865A1 * 2011-06-17 2012-12-20 Honda Motor Co., Ltd. Occupant sensing device
JP2014040198A (en) * 2012-08-23 2014-03-06 Nissan Motor Co Ltd Occupant detection device for vehicle and occupant detection method for vehicle


Also Published As

Publication number Publication date
JPWO2019207625A1 (en) 2020-12-03

Similar Documents

Publication Publication Date Title
CN111469802B (en) Seat belt state determination system and method
US11120283B2 (en) Occupant monitoring device for vehicle and traffic system
US7983475B2 (en) Vehicular actuation system
US6493620B2 (en) Motor vehicle occupant detection system employing ellipse shape models and bayesian classification
JP5059551B2 (en) Vehicle occupant detection device
JP2007022401A (en) Occupant information detection system, occupant restraint device and vehicle
EP3560770A1 (en) Occupant information determination apparatus
WO2021240777A1 (en) Occupant detection device and occupant detection method
US20180147955A1 (en) Passenger detection device and passenger detection program
JP2019057247A (en) Image processing device and program
JP6667743B2 (en) Occupant detection device, occupant detection system and occupant detection method
US11983952B2 (en) Physique determination apparatus and physique determination method
WO2019207625A1 (en) Occupant detection device, occupant detection method, and occupant detection system
JP2018147329A (en) Image processing device, image processing system, and image processing method
US11146784B2 (en) Abnormality detection device and abnormality detection method
US11673559B2 (en) Disembarkation action determination device, vehicle, disembarkation action determination method, and non-transitory storage medium stored with program
US20230408679A1 (en) Occupant determination apparatus and occupant determination method
JP2019074964A (en) Driving disabled condition prediction device and driving disabled condition prediction system
JP7363758B2 (en) Condition monitoring device and condition monitoring program
JP2019074963A (en) Device and system for detecting specific body part
JP7359084B2 (en) Emotion estimation device, emotion estimation method and program
JP7259550B2 (en) Object position detector
WO2024048185A1 (en) Occupant authentication device, occupant authentication method, and computer-readable medium
WO2020166002A1 (en) Occupant state detection device
JP2008254734A (en) Vehicle occupant protection system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18916228; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020515326; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18916228; Country of ref document: EP; Kind code of ref document: A1)