WO2021210041A1 - Face detection device and face detection method - Google Patents

Face detection device and face detection method

Info

Publication number
WO2021210041A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
search area
person
area
search
Prior art date
Application number
PCT/JP2020/016277
Other languages
French (fr)
Japanese (ja)
Inventor
和樹 國廣
太郎 熊谷
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date
Filing date
Publication date
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to PCT/JP2020/016277
Publication of WO2021210041A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • This disclosure relates to a technique for detecting the face of a vehicle occupant.
  • Various in-vehicle devices have been proposed that analyze an image of the inside of a vehicle (hereinafter referred to as an "in-vehicle image") to recognize occupants' faces and use the recognition results for occupant authentication and condition monitoring (for example, detection of driver drowsiness or inattention).
  • Patent Document 1 discloses an authentication device capable of recognizing the faces of a plurality of occupants from an in-vehicle image taken by one wide-angle camera.
  • When an in-vehicle device recognizes an occupant's face from the in-vehicle image, it must first search the in-vehicle image for the occupant's face.
  • When the in-vehicle image is taken by a wide-angle camera, the image is large, so analyzing the entire image to search for occupants' faces imposes a heavy processing load. This problem is particularly noticeable in applications that must search for occupants' faces repeatedly, such as occupant condition monitoring.
  • In Patent Document 1, an area to be analyzed (for example, an area corresponding to the periphery of the headrest of each seat) is preset in the in-vehicle image, and performing the face search only within that area keeps the processing load of the search low. However, the position of an occupant's face in the in-vehicle image is not constant, depending on the occupant's physique, posture, seating position, and the like, so the position of the preset area is not always appropriate.
  • The present disclosure has been made to solve the above problems, and an object thereof is to provide a face detection device that appropriately sets the area in which an occupant's face is searched for in an image of the vehicle interior, thereby reducing the processing load of the face search.
  • The face detection device according to the present disclosure includes: an in-vehicle image acquisition unit that acquires an in-vehicle image, which is an image of the inside of the vehicle; a face detection unit that searches the in-vehicle image for a person's face and identifies a face area, which is the area of the in-vehicle image in which the person's face is detected; and a search area setting unit that sets, based on the position of the face area, a search area, which is the area of the in-vehicle image in which a person's face is to be searched for. For in-vehicle images acquired after the search area is set, the face detection unit searches for the person's face within the search area.
  • According to the present disclosure, the search area is set based on the position of the previously identified face area, so the search area is placed at an appropriate position according to the occupant's physique, posture, seating position, and the like. Further, because the face detection unit searches only within the search area for in-vehicle images acquired after the search area is set, the processing load of the face search is reduced.
  • FIG. 1 is a block diagram showing the configuration of the face detection device according to Embodiment 1.
  • FIG. 2 is a diagram showing an example of the in-vehicle image taken by the in-vehicle photographing device.
  • FIG. 3 is a diagram showing an example of the face areas identified by the face detection unit.
  • FIG. 4 is a diagram showing an example of the search areas set by the search area setting unit.
  • FIG. 5 is a flowchart showing the operation of the face detection device according to Embodiment 1.
  • FIG. 6 is a diagram showing an example of the hardware configuration of the face detection device.
  • FIG. 7 is a diagram showing an example of the hardware configuration of the face detection device.
  • FIG. 8 is a flowchart showing the operation of the face detection device according to Embodiment 2.
  • FIG. 9 is a flowchart showing the operation of the face detection device according to Embodiment 3.
  • FIG. 10 is a flowchart showing the operation of the face detection device according to Embodiment 4.
  • FIG. 1 is a block diagram showing the configuration of the face detection device 10 according to the first embodiment.
  • The face detection device 10 is mounted on a vehicle.
  • However, the face detection device 10 need not be permanently installed in the vehicle; it may be realized on a portable device that can be brought into the vehicle, such as a mobile phone, a smartphone, or a PND (Portable Navigation Device). Further, some or all of the functions of the face detection device 10 may be realized on a server installed outside the vehicle.
  • The face detection device 10 is connected to an in-vehicle photographing device 20 provided in the vehicle in which the face detection device 10 is mounted.
  • The in-vehicle photographing device 20 is a camera that photographs the inside of the vehicle.
  • The in-vehicle photographing device 20 is preferably installed at a position from which persons (occupants) seated in a plurality of seats, including at least the driver's seat, can be photographed at the same time, such as near the center panel of the vehicle or the rear-view mirror.
  • FIG. 2 shows an example of an in-vehicle image taken when the in-vehicle photographing device 20 is installed near the rear-view mirror.
  • The in-vehicle image of FIG. 2 shows five people: the driver's seat occupant P1, the passenger seat occupant P2, the right rear seat occupant P3, the left rear seat occupant P4, and the rear seat center occupant P5.
  • The face detection device 10 includes an in-vehicle image acquisition unit 11, a face detection unit 12, and a search area setting unit 13.
  • The in-vehicle image acquisition unit 11 acquires the in-vehicle image taken by the in-vehicle photographing device 20.
  • The face detection unit 12 analyzes the in-vehicle image acquired by the in-vehicle image acquisition unit 11 to search for a person's face, and identifies a "face area", which is the area of the in-vehicle image in which the person's face is detected.
  • Hereinafter, the process of searching for the face of a person (occupant) in the in-vehicle image may be simply referred to as a "face search".
  • The face detection unit 12 may identify the face area by any method.
  • In the present embodiment, the face detection unit 12 identifies as the face area a rectangle surrounding the detected face, specifically, a rectangular region whose four sides touch the contour of the face.
  • For example, the face detection unit 12 identifies the face areas F1 to F5 of the occupants P1 to P5 as shown in FIG. 3.
  • The face detection unit 12 acquires the coordinate values (in the coordinate system of the in-vehicle image) of each vertex of the face area, and outputs those coordinate values as "face area information" indicating the position and size of the face area.
  • In the example of FIG. 3, the face detection unit 12 outputs the face area information of each of the face areas F1 to F5.
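As a concrete illustration of the face area information described above, the rectangle and its vertex coordinates can be sketched as follows. The function name and the dictionary layout are assumptions made for this sketch; the publication specifies only that the vertex coordinate values convey the position and size of the face area.

```python
def face_area_info(left, top, right, bottom):
    """Return face area information for a rectangular face area.

    The face area is the rectangle whose four sides touch the contour of
    the detected face; its vertex coordinates (in the pixel coordinate
    system of the in-vehicle image) indicate both the position and the
    size of the area.
    """
    return {
        "vertices": [(left, top), (right, top), (right, bottom), (left, bottom)],
        "width": right - left,    # the size follows from the vertices
        "height": bottom - top,
    }
```

For the face area F1 of occupant P1, for instance, this would be called with F1's pixel coordinates.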
  • The face area information output by the face detection unit 12 is output from the face detection device 10 and is also input to the search area setting unit 13.
  • Based on the position of the face area indicated by the face area information input from the face detection unit 12, the search area setting unit 13 sets a "search area", which is the area of the in-vehicle image that the face detection unit 12 targets when searching for a person's face.
  • The search area is a region wider than the face area that contains the face area detected by the face detection unit 12.
  • The search area may be set by any method; here, it is assumed that a rectangular area slightly larger than the face area is set as the search area. For example, when the face areas F1 to F5 of the occupants P1 to P5 are identified as shown in FIG. 3, the search area setting unit 13 sets the search areas S1 to S5 for the face areas F1 to F5, respectively, as shown in FIG. 4.
  • Once a search area is set, the face detection unit 12 performs the face search within the search area for in-vehicle images acquired thereafter. Therefore, the smaller the search area, the smaller the processing load of the face search. However, if the search area is too small, the face is likely to leave the search area when the person moves, and the face detection accuracy decreases. The size of the search area is therefore preferably as small as possible within the range in which the required face detection accuracy can be ensured.
  • The size of the search area may be a fixed value, but in the present embodiment the search area setting unit 13 sets the size of the search area according to the size of the face area. For example, in FIG. 4, the face areas F1 and F2 of the front seat occupants P1 and P2, who appear in the foreground, are large, while the face areas F3 to F5 of the rear seat occupants P3 to P5, who appear farther back, are small; accordingly, large search areas S1 and S2 are set for the large face areas F1 and F2, and small search areas S3 to S5 are set for the small face areas F3 to F5.
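The search-area setting described above can be sketched as a rectangular expansion of the face area whose margin scales with the face size, so that large face areas receive large search areas and small face areas receive small ones. The function name and the margin ratio are illustrative assumptions, not values from the publication.

```python
def set_search_area(face_box, image_size, margin_ratio=0.5):
    """Derive a search area from a face area (in the style of step ST105).

    The search area is a rectangle slightly larger than the face area;
    because the margin is proportional to the face size, front-seat faces
    (large) get large search areas and rear-seat faces (small) get small
    ones. The result is clamped to the bounds of the in-vehicle image.
    """
    left, top, right, bottom = face_box
    img_w, img_h = image_size
    margin_x = int((right - left) * margin_ratio)
    margin_y = int((bottom - top) * margin_ratio)
    return (max(0, left - margin_x), max(0, top - margin_y),
            min(img_w, right + margin_x), min(img_h, bottom + margin_y))
```

A fixed-size margin would also satisfy the first embodiment; scaling with the face size follows the variant described in the text.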
  • As described above, the search area setting unit 13 sets the search area based on the position of the face area identified by the face detection unit 12, so the position of the search area is set appropriately according to the occupant's physique, posture, seating position, and the like. Further, since the face detection unit 12 performs the face search within the search area for in-vehicle images acquired after the search area is set, the processing load of the face search is reduced.
  • The face area information (information on the position and size of the face area in the in-vehicle image) output by the face detection unit 12 is the output of the face detection device 10.
  • The face area information output from the face detection device 10 is provided to any application that requires it.
  • Such applications include, for example: an occupant authentication device that authenticates occupants by their faces; an occupant monitoring device that monitors the degree of arousal and the direction of the line of sight from an occupant's face and issues a warning against, for example, driver drowsiness or inattention or a change in an occupant's physical condition; and an automatic driving device that moves the vehicle to a safe place when the driver becomes unable to drive because of a physical condition.
  • FIG. 5 is a flowchart showing the operation of the face detection device 10 according to the first embodiment.
  • The operation of the face detection device 10 will be described below with reference to FIG. 5. In the following, for simplicity of explanation, it is assumed that only one person's face is detected from the in-vehicle image.
  • First, the in-vehicle image acquisition unit 11 acquires an in-vehicle image from the in-vehicle photographing device 20 (step ST101), and the face detection unit 12 searches the in-vehicle image for a person's face (step ST102). If no face is detected at this time (NO in step ST103), the process returns to step ST101, and steps ST101 and ST102 are repeated. If a person's face is detected from the in-vehicle image (YES in step ST103), the face detection unit 12 identifies the face area corresponding to the detected face, and face area information indicating the position and size of the face area is output from the face detection device 10 (step ST104). The face area information is also output to the search area setting unit 13, and the search area setting unit 13 sets a search area containing the face area based on the position and size of the face area (step ST105).
  • Next, the in-vehicle image acquisition unit 11 acquires a new in-vehicle image from the in-vehicle photographing device 20 (step ST106), and the face detection unit 12 searches for the person's face within the search area of that in-vehicle image (step ST107). Since the search area is only a part of the in-vehicle image, the processing load of the face search in step ST107 is smaller than that of the face search in step ST102.
  • If a person's face is detected in step ST107 (YES in step ST108), the face detection unit 12 identifies the face area corresponding to the detected face and outputs face area information indicating the position and size of the face area from the face detection device 10 (step ST109). If no face is detected in step ST107 (NO in step ST108), the search area setting unit 13 cancels the setting of the search area (step ST110), and the process returns to step ST101.
  • Thus, once the search area is set, face detection is performed by the low-load search of step ST107. Since the search area is set to contain the position of the face (face area) detected in step ST102, as long as the person does not move the face significantly, the face continues to be detected in step ST107, and the processing load of the face detection device 10 can be expected to be greatly reduced.
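The flow of steps ST101 to ST110 can be summarized in a short sketch. All five callables are interfaces assumed for illustration only; the publication defines the steps themselves, not this API.

```python
def detection_loop(get_image, search_full, search_within, make_area, emit):
    """Minimal sketch of the Embodiment-1 flow (steps ST101 to ST110).

    get_image() returns the next in-vehicle image (None ends the sketch);
    search_full(img) and search_within(img, area) return a face area or
    None; make_area(face) sets a search area around a face area; emit(face)
    outputs the face area information.
    """
    search_area = None
    while True:
        img = get_image()                     # ST101 / ST106: acquire an image
        if img is None:
            return
        if search_area is None:
            face = search_full(img)           # ST102: search the whole image
            if face is None:                  # ST103 NO: try the next image
                continue
            emit(face)                        # ST104: output face area info
            search_area = make_area(face)     # ST105: set the search area
        else:
            face = search_within(img, search_area)  # ST107: low-load search
            if face is None:
                search_area = None            # ST108 NO -> ST110: cancel area
            else:
                emit(face)                    # ST109: output face area info
```

For multiple occupants, the publication notes that a search area is kept per face; this sketch covers the single-face case assumed in the text.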
  • A person's face may fail to be detected in step ST107 not only when the person moves and the face leaves the search area, but also when the face is within the search area but is covered by a hand or hair.
  • Hereinafter, the failure to detect the person's face in step ST107, that is, the face detection unit 12 losing sight of a face it once detected, is referred to as the face being "lost".
  • In the above description, it is assumed that only one person's face is detected from the in-vehicle image in step ST102, but as in the examples of FIGS. 2 to 4, the faces of a plurality of people may be detected.
  • In that case, a plurality of search areas are set in step ST105, and steps ST106 to ST110 are executed for each of the search areas.
  • Note that step ST110 may be executed for all of the search areas when the face is lost in any one of them.
  • The process of setting the search area for the first time may be triggered by detection of a specific state of the vehicle.
  • The specific state of the vehicle referred to here is, for example, a state in which the doors are locked, a state in which the doors are closed, a state in which a seat belt is fastened, a state in which the vehicle has started running, or a state in which the gear is set to drive.
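The trigger condition above can be sketched as a simple membership check. The state names are illustrative labels for the vehicle states listed in the text, not identifiers from the publication.

```python
def should_start_search(vehicle_states):
    """Sketch: trigger the first search-area setup (steps ST101 onward)
    when any of the specific vehicle states listed in the text is
    detected. vehicle_states is a set of currently detected state labels.
    """
    triggers = {"doors_locked", "door_closed", "seat_belt_fastened",
                "running", "gear_in_drive"}
    return bool(triggers & vehicle_states)
```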
  • FIGS. 6 and 7 are diagrams each showing an example of the hardware configuration of the face detection device 10.
  • Each function of the components of the face detection device 10 shown in FIG. 1 is realized by, for example, the processing circuit 50 shown in FIG. 6. That is, the face detection device 10 includes the processing circuit 50 for: acquiring an in-vehicle image, which is an image of the inside of the vehicle; searching the in-vehicle image for a person's face and identifying a face area, which is the area of the in-vehicle image in which the person's face is detected; setting, based on the position of the face area, a search area, which is the area of the in-vehicle image in which a person's face is to be searched for; and, for in-vehicle images acquired after the search area is set, searching for the person's face within the search area.
  • The processing circuit 50 may be dedicated hardware, or may be configured using a processor that executes a program stored in memory (also called a CPU (Central Processing Unit), processing unit, arithmetic unit, microprocessor, microcomputer, or DSP (Digital Signal Processor)).
  • The processing circuit 50 may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
  • The functions of the components of the face detection device 10 may each be realized by an individual processing circuit, or may be realized collectively by one processing circuit.
  • FIG. 7 shows an example of the hardware configuration of the face detection device 10 when the processing circuit 50 is configured by using the processor 51 that executes the program.
  • In this case, the functions of the components of the face detection device 10 are realized by software or the like (software, firmware, or a combination of software and firmware).
  • The software or the like is described as a program and stored in the memory 52.
  • The processor 51 realizes the function of each unit by reading and executing the program stored in the memory 52. That is, the face detection device 10 includes the memory 52 for storing a program that, when executed by the processor 51, results in the execution of: a process of acquiring an in-vehicle image, which is an image of the inside of the vehicle; a process of searching the in-vehicle image for a person's face and identifying a face area, which is the area of the in-vehicle image in which the person's face is detected; a process of setting, based on the position of the face area, a search area, which is the area of the in-vehicle image in which a person's face is to be searched for; and a process of searching for the person's face within the search area for in-vehicle images acquired after the search area is set. In other words, it can be said that this program causes a computer to execute the procedures and methods of operation of the components of the face detection device 10.
  • Here, the memory 52 may be, for example, a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read Only Memory), or EEPROM (Electrically Erasable Programmable Read Only Memory); an HDD (Hard Disk Drive); a magnetic disk, flexible disk, optical disc, compact disc, mini disc, or DVD (Digital Versatile Disc) and its drive device; or any storage medium to be used in the future.
  • The above describes configurations in which the functions of the components of the face detection device 10 are realized entirely by hardware or entirely by software or the like; however, the present disclosure is not limited to these, and a configuration in which some of the components are realized by dedicated hardware and others are realized by software or the like may also be used.
  • For example, the function of some components can be realized by the processing circuit 50 as dedicated hardware, while the function of some other components can be realized by the processing circuit 50 as the processor 51 reading and executing the program stored in the memory 52.
  • As described above, the face detection device 10 can realize each of the above functions by hardware, software, or the like, or a combination thereof.
  • In the first embodiment, the position of the search area set by the search area setting unit 13 is fixed unless the setting of the search area is canceled. In the second embodiment, the search area setting unit 13 changes the position of the search area so as to follow changes in the position of the face area identified by the face detection unit 12.
  • The configuration of the face detection device 10 in this embodiment is the same as that in FIG. 1.
  • FIG. 8 is a flowchart showing the operation of the face detection device 10 according to the second embodiment.
  • The flow of FIG. 8 is obtained by adding step ST111 after step ST109 to the flow shown in FIG. 5.
  • In step ST111, the search area setting unit 13 adjusts the position of the search area so as to follow the position of the face area identified by the face detection unit 12 in step ST107. Since the other processes are the same as those described in the first embodiment, their description is omitted here.
  • According to this embodiment, the position of the search area moves following the movement of the person's face, which suppresses the loss of the face caused by the face leaving the search area. That is, the period during which the face continues to be detected in step ST107 of FIG. 8 can be lengthened, and the processing load can be further reduced. Moreover, since the face is less likely to leave the search area, the search area can be made smaller, which also reduces the processing load.
  • In the above description, the search area setting unit 13 changes the position of the search area following changes in the position of the face area; in addition, the size of the search area may be changed following changes in the size of the face area.
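The follow-up adjustment of step ST111 can be sketched as recentering the search area on the newly identified face area while keeping its size. The function name and the recentering rule are assumptions made for this sketch; the publication does not specify how the search area follows the face area.

```python
def follow_face(search_area, face_box):
    """Embodiment-2 style adjustment (step ST111): move the search area
    so that it is centered on the newly identified face area, keeping
    the search area's width and height unchanged.
    """
    sl, st, sr, sb = search_area
    fl, ft, fr, fb = face_box
    w, h = sr - sl, sb - st                  # preserve the area's size
    cx, cy = (fl + fr) // 2, (ft + fb) // 2  # new center = face center
    new_left, new_top = cx - w // 2, cy - h // 2
    return (new_left, new_top, new_left + w, new_top + h)
```

Following the face-area size as well, as the text suggests, would additionally rescale `w` and `h` here.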
  • In the flow of FIG. 5, when the face is lost, step ST110 is immediately executed and the setting of the search area is canceled. In the third embodiment, however, the search area setting unit 13 does not immediately cancel the setting of the search area when the face is lost; for a certain period of time, it fixes the position of the search area at the position where the person's face was last detected (the position immediately before the face was lost).
  • The configuration of the face detection device 10 in this embodiment is also the same as that in FIG. 1.
  • FIG. 9 is a flowchart showing the operation of the face detection device 10 according to the third embodiment.
  • The flow of FIG. 9 is obtained by adding the following steps ST112 to ST116 to the flow shown in FIG. 8. Since the other steps are the same as those described in the first and second embodiments, only steps ST112 to ST116 are described here.
  • Step ST112 is executed when the face detection unit 12 loses the face, that is, when NO is determined in step ST108. In step ST112, the search area setting unit 13 fixes the position of the search area at the position at which the person's face was last detected in step ST102 or ST107.
  • In step ST113, the in-vehicle image acquisition unit 11 acquires a new in-vehicle image from the in-vehicle photographing device 20.
  • In step ST114, the face detection unit 12 searches for a person's face within the search area of the in-vehicle image acquired in step ST113. If a person's face is detected at this time (YES in step ST115), the face detection unit 12 determines that the same face as the lost face has been detected, and the process proceeds to step ST109.
  • Hereinafter, the face search of step ST114 is referred to as the "face search after lost".
  • If no face is detected in step ST114 (NO in step ST115), the search area setting unit 13 confirms whether a certain period of time has elapsed without the face being detected, that is, whether the face detection unit 12 has kept losing the face for a certain period of time (step ST116). If the certain period has not elapsed (NO in step ST116), the process returns to step ST113. If it has elapsed (YES in step ST116), the search area setting unit 13 cancels the setting of the search area (step ST110), and the process returns to step ST101.
  • According to this embodiment, the frequency with which the setting of the search area is canceled can be reduced. That is, the frequency of performing the high-load face search of step ST102 can be reduced, which contributes to reducing the processing load.
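The fixed-area waiting of steps ST112 to ST116 amounts to running a timer on the lost state. A rough sketch follows; the class name, the 3-second timeout, and the injectable clock are assumptions made for illustration, since the publication only speaks of "a certain period of time".

```python
import time

class LostFaceTimer:
    """Embodiment-3 style lost-face handling: after a face is lost, keep
    the search area fixed and keep searching it until the face reappears
    or `timeout` seconds elapse, at which point the search-area setting
    should be canceled (steps ST112 to ST116).
    """
    def __init__(self, timeout=3.0, now=time.monotonic):
        self.timeout = timeout
        self.now = now
        self.lost_since = None

    def face_detected(self):
        """Call on YES in step ST115: tracking resumes, timer resets."""
        self.lost_since = None

    def face_lost(self):
        """Call on NO in step ST115. Returns True when the certain
        period has elapsed (YES in step ST116), i.e. when the search
        area should be canceled (step ST110)."""
        if self.lost_since is None:
            self.lost_since = self.now()     # ST112: fix area, start timer
        return self.now() - self.lost_since >= self.timeout
```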
  • In the above description, it is assumed that only one person's face is detected from the in-vehicle image in step ST102; as in the examples of FIGS. 2 to 4, however, the faces of a plurality of people may be detected.
  • In that case, a plurality of search areas are set in step ST105, and steps ST106 to ST116 are executed for each of the search areas.
  • Note that step ST110 may be executed for all of the search areas when a certain period of time elapses while the face is lost in any one of them.
  • Further, it is desirable for the face detection unit 12 to execute the face search after lost (step ST114) in the search area corresponding to the driver's seat with a higher priority than the face searches in the other search areas. For example, it is conceivable to execute the face search after lost in the search area corresponding to the driver's seat at a higher frequency (a shorter cycle) than the face searches in the other search areas.
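The frequency-based prioritization just described can be sketched as a scheduling rule over the search areas. Treating the first area as the driver's-seat area and the `others_every` cycle count are assumptions introduced for this sketch.

```python
def areas_to_search(tick, areas, others_every=3):
    """Sketch of prioritizing the driver's-seat search area: the
    driver's area (assumed to be areas[0]) is searched on every cycle,
    while the other areas are searched only every `others_every` cycles,
    giving the driver's seat a shorter search cycle.
    """
    return [area for i, area in enumerate(areas)
            if i == 0 or tick % others_every == 0]
```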
  • In an in-vehicle image, the face of a front seat occupant and the face of a rear seat occupant tend to appear close to each other. Therefore, two or more faces may be included in one search area.
  • While the faces continue to be detected, the face detection unit 12 can distinguish the face area corresponding to each face by tracking the position of each face area; this prevents the face detection unit 12 from confusing the faces even when two or more faces are included in one search area.
  • However, when a face is lost, the tracking of the position of its face area is interrupted, so the face search after lost (step ST114) may erroneously detect the face of a person other than the lost face.
  • For example, suppose that, while the face of the driver's seat occupant P1 is lost and the face search after lost is being performed, the right rear seat occupant P3 moves and the face of the occupant P3 enters the search area S1. In that case, the face of the occupant P3 may be erroneously detected as the face of the occupant P1.
  • Therefore, in the fourth embodiment, the face detection unit 12 compares the size of the lost face with the size of the face detected in the face search after lost, to determine whether the detected face is the same as the lost face. Specifically, when a person's face is newly detected from a search area after a face has ceased to be detected from that search area, the face detection unit 12 invalidates the detection result of the newly detected face if the difference between the size of the face area before the face was lost and the size of the face area corresponding to the newly detected face is equal to or greater than a predetermined threshold.
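The size comparison of this embodiment can be sketched as follows. Treating the face-area size as a pixel area and using a relative threshold of 30% are assumptions made for this sketch; the publication requires only that a size difference at or above a predetermined threshold invalidate the detection.

```python
def same_face(size_before_lost, size_after, threshold=0.3):
    """Return True if the face newly detected in the face search after
    lost should be treated as the lost face, based on face-area size.

    If the relative size difference is at or above the threshold, the
    detection is judged to be a different person's face and should be
    invalidated (steps ST117 and ST118).
    """
    return abs(size_after - size_before_lost) / size_before_lost < threshold
```

In the P1/P3 example above, the rear-seat face area would be much smaller than the lost front-seat face area, so the comparison would reject it.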
  • The configuration of the face detection device 10 in this embodiment is also the same as that in FIG. 1.
  • FIG. 10 is a flowchart showing the operation of the face detection device 10 according to the fourth embodiment.
  • The flow of FIG. 10 is obtained by adding the following steps ST117 and ST118 to the flow shown in FIG. 9. Since the other steps are the same as those described in the first to third embodiments, only steps ST117 and ST118 are described here.
  • Step ST117 is executed when a person's face is detected in the face search after lost (step ST114), that is, when YES is determined in step ST115.
  • In step ST117, the face detection unit 12 compares the size of the face area before the face was lost (the size of the face area last identified in step ST102 or ST107) with the size of the face area corresponding to the face newly detected in step ST114, and determines whether the difference between the two is equal to or greater than a predetermined threshold. If the difference is smaller than the threshold (NO in step ST117), the face detection unit 12 determines that the same face as the lost face has been detected, and the process proceeds to step ST109.
  • Step ST118 is executed when the difference between the size of the face area before the face was lost and the size of the face area corresponding to the face newly detected in step ST114 is equal to or greater than the threshold (when YES is determined in step ST117).
  • In step ST118, the face detection unit 12 determines that a face different from the lost face was erroneously detected in step ST114, and invalidates the face detection result of step ST114; that is, the face is treated as not having been detected in step ST114.
  • After step ST118, the process proceeds to step ST116.
  • As described above, in this embodiment, based on the difference between the size of the face area identified before the face was lost and the size of the face area newly identified after the face was lost, the face detection unit 12 can prevent erroneous detection of a face different from the lost face.
  • 10: face detection device, 11: in-vehicle image acquisition unit, 12: face detection unit, 13: search area setting unit, 20: in-vehicle photographing device, 50: processing circuit, 51: processor, 52: memory, P1 to P5: occupants, F1 to F5: face areas, S1 to S5: search areas.


Abstract

A face detection device (10) is provided with: a vehicle interior image obtaining unit (11) that obtains a vehicle interior image, which is a photographed image of the interior of a vehicle; a face detection unit (12) that searches the vehicle interior image for a person's face and specifies a face area in the vehicle interior image, the face area being an area in which the person's face has been detected; and a search area setting unit (13) that sets, on the basis of the position of the face area, a search area in the vehicle interior image, the search area being an area in which the person's face is to be searched for. The face detection unit (12) searches the search area of the vehicle interior image for the person's face, with respect to the vehicle interior image obtained after the search area has been set.

Description

顔検出装置および顔検出方法Face detection device and face detection method
 本開示は、車両の乗員の顔を検出する技術に関するものである。 This disclosure relates to a technique for detecting the face of a vehicle occupant.
 車両内を撮影した画像(以下「車内画像」という)を解析して乗員の顔を認識し、その認識結果を用いて乗員の認証や状態監視(例えば、運転者の居眠りや脇見の検出など)を行う車載装置が種々提案されている。例えば、下記の特許文献1には、1台の広角なカメラで撮影した車内画像から、複数の乗員の顔を認識できる認証装置が開示されている。 An image of the inside of the vehicle (hereinafter referred to as "in-vehicle image") is analyzed to recognize the occupant's face, and the recognition result is used to authenticate the occupant and monitor the condition (for example, detection of the driver's doze or inattentiveness). Various in-vehicle devices have been proposed. For example, Patent Document 1 below discloses an authentication device capable of recognizing the faces of a plurality of occupants from an in-vehicle image taken by one wide-angle camera.
国際公開第2018/116373号International Publication No. 2018/116373
 車載装置が、車内画像から乗員の顔を認識する場合、まず車内画像から乗員の顔を探索する必要がある。車内画像が広角なカメラで撮影されたものである場合、車内画像のサイズが大きいため、車内画像の全体を解析して乗員の顔を探索すると、その処理負荷は大きいものとなる。この問題は、特に、乗員の状態監視など、乗員の顔の探索を繰り返し行う必要があるアプリケーションで顕著となる。 When the in-vehicle device recognizes the occupant's face from the in-vehicle image, it is first necessary to search for the occupant's face from the in-vehicle image. When the vehicle interior image is taken by a wide-angle camera, the size of the vehicle interior image is large. Therefore, when the entire vehicle interior image is analyzed to search for the occupant's face, the processing load becomes large. This problem is particularly noticeable in applications that require repeated searches for the occupant's face, such as occupant condition monitoring.
In Patent Document 1, regions to be analyzed (for example, regions corresponding to the vicinity of each seat's headrest) are preset in the in-vehicle image, and if the face search is performed only within those regions, the processing load of the search can be suppressed. However, the position of an occupant's face in the in-vehicle image depends on the occupant's physique, posture, seating position, and so on, and is not constant, so the positions of the preset regions are not always appropriate.
The present disclosure has been made to solve the above problem, and its object is to provide a face detection device that appropriately sets the region in which an occupant's face is searched for in an image of the vehicle interior, thereby reducing the processing load of the face search.
The face detection device according to the present disclosure includes: an in-vehicle image acquisition unit that acquires an in-vehicle image, which is an image of the vehicle interior; a face detection unit that searches the in-vehicle image for a person's face and identifies a face area, which is the area of the in-vehicle image in which the person's face is detected; and a search area setting unit that sets, based on the position of the face area, a search area, which is the area of the in-vehicle image in which the person's face is to be searched for. For in-vehicle images acquired after the search area has been set, the face detection unit searches for the person's face within the search area.
According to the face detection device of the present disclosure, the search area is set based on the position of the previously identified face area, so the position of the search area is appropriate for the occupant's physique, posture, seating position, and so on. Furthermore, because the face detection unit searches within the search area for in-vehicle images acquired after the search area is set, the processing load of the face search is reduced.
The objects, features, aspects, and advantages of the present disclosure will become more apparent from the following detailed description and the accompanying drawings.
FIG. 1 is a block diagram showing the configuration of the face detection device according to Embodiment 1.
FIG. 2 is a diagram showing an example of an in-vehicle image taken by the in-vehicle photographing device.
FIG. 3 is a diagram showing an example of face areas identified by the face detection unit.
FIG. 4 is a diagram showing an example of search areas set by the search area setting unit.
FIG. 5 is a flowchart showing the operation of the face detection device according to Embodiment 1.
FIG. 6 is a diagram showing an example of the hardware configuration of the face detection device.
FIG. 7 is a diagram showing an example of the hardware configuration of the face detection device.
FIG. 8 is a flowchart showing the operation of the face detection device according to Embodiment 2.
FIG. 9 is a flowchart showing the operation of the face detection device according to Embodiment 3.
FIG. 10 is a flowchart showing the operation of the face detection device according to Embodiment 4.
<Embodiment 1>
FIG. 1 is a block diagram showing the configuration of the face detection device 10 according to Embodiment 1. In the present embodiment, the face detection device 10 is assumed to be mounted on a vehicle. However, the face detection device 10 does not have to be permanently installed in the vehicle, and may be realized on a portable device that can be brought into the vehicle, such as a mobile phone, a smartphone, or a PND (Portable Navigation Device). Furthermore, some or all of the functions of the face detection device 10 may be realized on a server installed outside the vehicle.
As shown in FIG. 1, the face detection device 10 is connected to an in-vehicle photographing device 20 provided in the vehicle on which the face detection device 10 is mounted. The in-vehicle photographing device 20 is a camera that photographs the vehicle interior. It is preferably installed at a position, such as near the vehicle's center panel or rear-view mirror, from which persons (occupants) seated in a plurality of seats including at least the driver's seat can be photographed simultaneously. FIG. 2 shows an example of an in-vehicle image taken when the in-vehicle photographing device 20 is installed near the rear-view mirror. The in-vehicle image of FIG. 2 shows five persons: the driver's-seat occupant P1, the passenger-seat occupant P2, the right rear-seat occupant P3, the left rear-seat occupant P4, and the rear-seat-center occupant P5.
As also shown in FIG. 1, the face detection device 10 includes an in-vehicle image acquisition unit 11, a face detection unit 12, and a search area setting unit 13.
The in-vehicle image acquisition unit 11 acquires the in-vehicle image taken by the in-vehicle photographing device 20. The face detection unit 12 analyzes the in-vehicle image acquired by the in-vehicle image acquisition unit 11 to search the image for a person's face, and identifies a "face area", which is the area of the in-vehicle image in which the person's face is detected. Hereinafter, the process of searching an in-vehicle image for a person's (occupant's) face is sometimes simply called a "face search".
The face detection unit 12 may identify the face area by any method. In the present embodiment, the face detection unit 12 identifies as the face area the region inside a rectangle surrounding the detected face image, specifically a rectangle whose four sides touch the contour of the face. For example, when occupants P1 to P5 appear in the in-vehicle image as shown in FIG. 2, the face detection unit 12 identifies the face areas F1 to F5 of the occupants P1 to P5, respectively, as shown in FIG. 3. The face detection unit 12 also acquires the coordinate values of the vertices of each face area (coordinate values in the in-vehicle image) and outputs them as "face area information" indicating the position and size of the face area. In the example of FIG. 3, the face detection unit 12 outputs face area information for each of the face areas F1 to F5. The face area information output by the face detection unit 12 is output from the face detection device 10 and is also input to the search area setting unit 13.
Based on the position of the face area indicated by the face area information input from the face detection unit 12, the search area setting unit 13 sets a "search area", which is the area of the in-vehicle image in which the face detection unit 12 searches for a person's face, that is, the area subject to the face search. The search area is an area wider than the face area that encloses the face area detected by the face detection unit 12. The search area may be set by any method; here, a rectangular area slightly larger than the face area is set as the search area. For example, when the face areas F1 to F5 of the occupants P1 to P5 have been identified as shown in FIG. 3, the search area setting unit 13 sets search areas S1 to S5 corresponding to the face areas F1 to F5, respectively, as shown in FIG. 4.
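As an illustration, the enclosing-rectangle computation described above can be sketched as follows. The margin ratio, default image size, and function name are assumptions for illustration only; the patent does not specify how much larger than the face area the search area should be.

```python
def set_search_area(face_box, margin_ratio=0.5, image_size=(1280, 720)):
    """face_box = (x, y, w, h) of a detected face area; returns a search
    area that encloses it with a margin, clipped to the image bounds."""
    x, y, w, h = face_box
    # Margin scales with the face size, so nearer (larger) faces get
    # proportionally larger search areas.
    mx, my = int(w * margin_ratio), int(h * margin_ratio)
    sx, sy = max(0, x - mx), max(0, y - my)
    ex = min(image_size[0], x + w + mx)
    ey = min(image_size[1], y + h + my)
    return (sx, sy, ex - sx, ey - sy)
```

Because the margin is proportional to the face size, large face areas in the foreground automatically receive large search areas and small face areas receive small ones.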
Once the search area setting unit 13 has set a search area, the face detection unit 12 performs the face search within the search area for in-vehicle images acquired thereafter. Therefore, setting the search area small reduces the processing load of the face search. However, if the search area is too small, the face is more likely to fall outside it when the person moves, lowering the face detection accuracy. The size of the search area is therefore preferably as small as possible within a range that ensures the required face detection accuracy.
The size of the search area may be a fixed value, but in the present embodiment, the search area setting unit 13 sets the size of the search area according to the size of the face area. For example, in FIG. 4, the face areas F1 and F2 of the front-seat occupants P1 and P2, who appear in the foreground, are large, while the face areas F3 to F5 of the rear-seat occupants P3 to P5, who appear farther back, are small; accordingly, large search areas S1 and S2 are set for the face areas F1 and F2, and small search areas S3 to S5 are set for the small face areas F3 to F5.
As described above, in the face detection device 10 according to Embodiment 1, the search area setting unit 13 sets the search area based on the position of the face area identified by the face detection unit 12. The position of the search area is therefore set appropriately for the occupant's physique, posture, seating position, and so on. Furthermore, because the face detection unit 12 performs the face search within the search area for in-vehicle images acquired after the search area is set, the processing load of the face search is reduced.
As described above, the face area information output by the face detection unit 12 (the position and size of the face area in the in-vehicle image) is the output of the face detection device 10. The face area information output from the face detection device 10 is provided to any application that requires it. Examples of such applications include an occupant authentication device that performs face authentication of occupants; an occupant monitoring device that monitors the degree of alertness and the direction of the line of sight from an occupant's face and issues warnings against, for example, the driver dozing or looking aside or a sudden change in an occupant's physical condition; and an automated driving device that moves the vehicle to a safe place when the driver becomes unable to drive due to a change in physical condition.
FIG. 5 is a flowchart showing the operation of the face detection device 10 according to Embodiment 1. The operation of the face detection device 10 is described below with reference to FIG. 5. In the following, to simplify the description, it is assumed that only one person's face is detected from the in-vehicle image.
When the face detection device 10 starts, the in-vehicle image acquisition unit 11 acquires an in-vehicle image from the in-vehicle photographing device 20 (step ST101), and the face detection unit 12 searches the in-vehicle image for a person's face (step ST102). If no face is detected at this point (NO in step ST103), the process returns to step ST101, and steps ST101 and ST102 are executed repeatedly. If a person's face is detected from the in-vehicle image (YES in step ST103), the face detection unit 12 identifies the face area corresponding to the detected face and outputs face area information indicating the position and size of the face area from the face detection device 10 (step ST104). The face area information is also output to the search area setting unit 13, which sets a search area enclosing the face area based on the position and size of the face area (step ST105).
When the search area is set in step ST105, the in-vehicle image acquisition unit 11 acquires a new in-vehicle image from the in-vehicle photographing device 20 (step ST106), and the face detection unit 12 searches for a person's face within the search area of that image (step ST107). Because the search area is only a part of the in-vehicle image, the processing load of the face search in step ST107 is smaller than that of the face search in step ST102.
If a person's face is detected in step ST107 (YES in step ST108), the face detection unit 12 identifies the face area corresponding to the detected face and outputs face area information indicating its position and size from the face detection device 10 (step ST109). If no face is detected in step ST107 (NO in step ST108), the search area setting unit 13 cancels the setting of the search area (step ST110), and the process returns to step ST101.
Thus, in the face detection device 10 according to Embodiment 1, once the search area is set in step ST105, the face search of step ST107, which has a small processing load, is executed as long as a person's face continues to be detected in step ST107. Because the search area is set to enclose the position of the face (face area) detected in step ST102, as long as the person does not move the face greatly, the face will generally continue to be detected in step ST107, and a large reduction in the processing load of the face detection device 10 can be expected.
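The loop of steps ST101 to ST110 can be sketched as follows. This is a minimal illustration with the camera and detector stubbed out: `detect_face` and `make_search_area` are hypothetical callables, not part of the patent, and `detect_face` is assumed to return a face box (x, y, w, h) or None when no face is found in the given region (None meaning the whole image).

```python
def run_detection_loop(frames, detect_face, make_search_area):
    search_area = None          # set in ST105, cleared in ST110
    results = []
    for frame in frames:        # ST101 / ST106: acquire an in-vehicle image
        # ST102 (whole image) when search_area is None, else ST107 (area only)
        face = detect_face(frame, search_area)
        if face is not None:
            results.append(face)                    # ST104 / ST109: output face-area info
            if search_area is None:
                search_area = make_search_area(face)  # ST105: set the search area
        elif search_area is not None:
            search_area = None                      # ST110: face lost, clear the setting
    return results
```

On the frame after a clearing in step ST110, `search_area` is None again, so the loop naturally falls back to the whole-image search, matching the return to step ST101 in the flowchart.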
A person's face may cease to be detected in step ST107 not only when the person moves and the face leaves the search area, but also when the face remains within the search area but is covered by a hand, hair, or the like. Hereinafter, the situation in which a person's face is no longer detected in step ST107, that is, in which the face detection unit 12 loses sight of a face it had once detected, is referred to as the face being "lost".
The above description assumes that only one person's face is detected from the in-vehicle image in step ST102, but as in the examples of FIGS. 2 to 4, the faces of a plurality of persons may be detected. In that case, a plurality of search areas are set in step ST105, and steps ST106 to ST110 are executed for each of the search areas. Step ST110 may be executed for all of the search areas when a face is lost in any one of them.
The process for initially setting the search area (steps ST101 to ST105) may be triggered by detection of a specific vehicle state. Examples of such a state include the doors being locked, the vehicle doors being closed, a seat belt being fastened, the vehicle starting to travel, and the gear being set to drive.
FIGS. 6 and 7 are diagrams each showing an example of the hardware configuration of the face detection device 10. Each function of the components of the face detection device 10 shown in FIG. 1 is realized by, for example, the processing circuit 50 shown in FIG. 6. That is, the face detection device 10 includes the processing circuit 50 for: acquiring an in-vehicle image, which is an image of the vehicle interior; searching the in-vehicle image for a person's face and identifying a face area, which is the area of the in-vehicle image in which the person's face is detected; setting, based on the position of the face area, a search area, which is the area of the in-vehicle image in which the person's face is to be searched for; and, for in-vehicle images acquired after the search area has been set, searching for the person's face within the search area. The processing circuit 50 may be dedicated hardware, or may be configured using a processor (also called a central processing unit (CPU), a processing unit, an arithmetic unit, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor)) that executes a program stored in a memory.
When the processing circuit 50 is dedicated hardware, the processing circuit 50 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these. The functions of the components of the face detection device 10 may be realized by separate processing circuits, or may be realized collectively by a single processing circuit.
FIG. 7 shows an example of the hardware configuration of the face detection device 10 when the processing circuit 50 is configured using a processor 51 that executes a program. In this case, the functions of the components of the face detection device 10 are realized by software or the like (software, firmware, or a combination of software and firmware). The software or the like is described as a program and stored in a memory 52. The processor 51 realizes the functions of the components by reading and executing the program stored in the memory 52. That is, the face detection device 10 includes the memory 52 for storing a program that, when executed by the processor 51, results in the execution of: a process of acquiring an in-vehicle image, which is an image of the vehicle interior; a process of searching the in-vehicle image for a person's face and identifying a face area, which is the area of the in-vehicle image in which the person's face is detected; a process of setting, based on the position of the face area, a search area, which is the area of the in-vehicle image in which the person's face is to be searched for; and a process of searching, for in-vehicle images acquired after the search area has been set, for the person's face within the search area. In other words, this program can be said to cause a computer to execute the procedures and methods of operation of the components of the face detection device 10.
Here, the memory 52 may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory); an HDD (Hard Disk Drive); a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, a DVD (Digital Versatile Disc), or a drive device therefor; or any storage medium to be used in the future.
The above describes configurations in which the functions of the components of the face detection device 10 are realized by either hardware or software or the like. However, the configuration is not limited to these; some components of the face detection device 10 may be realized by dedicated hardware while other components are realized by software or the like. For example, the functions of some components can be realized by the processing circuit 50 as dedicated hardware, while the functions of other components can be realized by the processing circuit 50 as the processor 51 reading and executing the program stored in the memory 52.
As described above, the face detection device 10 can realize each of the above functions by hardware, software, or the like, or a combination thereof.
<Embodiment 2>
In Embodiment 1, the position of a search area set by the search area setting unit 13 is fixed unless the setting of the search area is canceled. In Embodiment 2, the search area setting unit 13 changes the position of the search area to follow changes in the position of the face area identified by the face detection unit 12. The configuration of the face detection device 10 in this embodiment is the same as in FIG. 1.
FIG. 8 is a flowchart showing the operation of the face detection device 10 according to Embodiment 2. The flow of FIG. 8 adds step ST111 after step ST109 of the flow shown in FIG. 5. In step ST111, the search area setting unit 13 adjusts the position of the search area so that it follows the position of the face area identified by the face detection unit 12 in step ST107. The other steps are the same as those described in Embodiment 1, so their description is omitted here.
According to the face detection device 10 of Embodiment 2, the position of the search area moves to follow the movement of the person's face, which makes it less likely that the face leaves the search area and is lost. That is, the period during which a person's face continues to be detected in step ST107 of FIG. 8 can be lengthened, further reducing the processing load. In addition, because the person's face is less likely to leave the search area, the search area can be made smaller to reduce the processing load.
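A minimal sketch of the step ST111 adjustment, assuming the search area keeps its size and is simply re-centered on the newly identified face area (the patent does not prescribe a specific adjustment rule, so this centering policy is an assumption):

```python
def follow_face(search_area, face_box):
    """Shift the search area so it stays centered on the detected face.
    Both arguments are (x, y, w, h) rectangles in image coordinates."""
    sx, sy, sw, sh = search_area
    fx, fy, fw, fh = face_box
    face_cx, face_cy = fx + fw / 2, fy + fh / 2   # center of the new face area
    new_sx = int(face_cx - sw / 2)                # keep the same size, move position
    new_sy = int(face_cy - sh / 2)
    return (new_sx, new_sy, sw, sh)
```

Calling this after each successful detection makes the search area track the face frame by frame; a production implementation would also clip the result to the image bounds.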
In the present embodiment, an example was shown in which the search area setting unit 13 changes the position of the search area to follow changes in the position of the face area; the search area setting unit 13 may additionally change the size of the search area to follow changes in the size of the face area.
<Embodiment 3>
In Embodiments 1 and 2, when a face is lost and NO is determined in step ST108, step ST110 is executed immediately and the setting of the search area is canceled. In Embodiment 3, even when a face is lost, the search area setting unit 13 does not immediately cancel the setting of the search area; instead, for a certain period of time, it fixes the position of the search area at the position where the person's face was last detected (the position immediately before the face was lost). The configuration of the face detection device 10 in this embodiment is also the same as in FIG. 1.
FIG. 9 is a flowchart showing the operation of the face detection device 10 according to Embodiment 3. The flow of FIG. 9 adds the following steps ST112 to ST116 to the flow shown in FIG. 8. The other steps are the same as those described in Embodiments 1 and 2, so only steps ST112 to ST116 are described here.
Step ST112 is executed when the face detection unit 12 loses a face and NO is determined in step ST108. In step ST112, the search area setting unit 13 fixes the position of the search area at the position where the person's face was last detected in step ST102 or ST107.
In step ST113, the in-vehicle image acquisition unit 11 acquires a new in-vehicle image from the in-vehicle photographing device 20. In step ST114, the face detection unit 12 searches for a person's face within the search area of the in-vehicle image acquired in step ST113. If a person's face is detected at this point (YES in step ST115), the face detection unit 12 determines that the same face as the lost one has been detected, and the process proceeds to step ST109. Hereinafter, the face search in step ST114 is referred to as the "face search after loss".
If no face is detected in step ST114 (NO in step ST115), the search area setting unit 13 checks whether a certain period of time has elapsed with no face detected, that is, with the face detection unit 12 still having lost the face (step ST116). If the period has not elapsed (NO in step ST116), the process returns to step ST113. If the period has elapsed (YES in step ST116), the search area setting unit 13 cancels the setting of the search area (step ST110), and the process returns to step ST101.
 For example, when a face is lost because the person moved and the face left the search area, or because the face was covered by a hand or hair, the face is likely to become detectable again at almost the same position once the person returns to the original posture. Therefore, even when a face is lost, fixing the search area for a fixed period makes it likely that the same person's face will be detected again within that search area. According to the present embodiment, the frequency with which the search area setting is cancelled can therefore be reduced. In other words, the frequency of the computationally expensive face search of step ST102 can be reduced, which contributes to lowering the processing load.
 The above description assumes that only one person's face is detected from the in-vehicle image in step ST102; however, as in the examples of FIGS. 2 to 4, faces of a plurality of persons may be detected. In that case, a plurality of search areas are set in step ST105, and steps ST106 to ST116 are executed for each of the search areas. Step ST110 may then be executed for all of the search areas when the fixed period has elapsed with the face lost in any one of them.
 When the search area setting unit 13 has set a plurality of search areas and the face detection unit 12 loses the face in the search area set at the position corresponding to the driver's seat, it is desirable that the face detection unit 12 execute the face search after loss (step ST114) for the driver's-seat search area with a higher priority than the face searches for the other search areas. For example, the face search after loss for the driver's-seat search area may be executed more frequently (with a shorter period) than the face searches for the other search areas.
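One way to realize this prioritization, assuming a simple tick-based retry scheduler (the area identifiers and period values below are illustrative assumptions, not taken from the embodiment), is to give the driver's-seat search area a shorter retry period than the others:

```python
def build_retry_schedule(areas, driver_area_id, base_period=5,
                         driver_period=1, horizon=10):
    """Return a list of (tick, area_id) retry events: the driver's-seat
    area is retried every `driver_period` ticks, the other areas every
    `base_period` ticks, so the driver's seat is searched more often."""
    events = []
    for tick in range(1, horizon + 1):
        for area_id in areas:
            period = driver_period if area_id == driver_area_id else base_period
            if tick % period == 0:
                events.append((tick, area_id))
    return events
```

With the illustrative periods above, the driver's-seat area would be retried every tick while the remaining areas are retried only every fifth tick, matching the "higher frequency (shorter period)" behavior described in the text.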
 <Embodiment 4>
 As can be seen from the examples of FIGS. 2 to 4, the face of a front-seat occupant and the face of a rear-seat occupant tend to appear close to each other in the in-vehicle image. Consequently, two or more faces may fall within a single search area. By tracking the position of each face region, the face detection unit 12 can distinguish the face regions corresponding to the individual faces, so that even when two or more faces fall within one search area, the face detection unit 12 is prevented from confusing them.
 However, when a face is lost in Embodiment 3, tracking of the face-region position is interrupted, so the face search after loss (step ST114) may erroneously detect the face of a person other than the one whose face was lost. In the example of FIG. 4, suppose that the face detection unit 12 loses the face of the driver's-seat occupant P1 and, while the face search after loss is being performed, the right-rear-seat occupant P3 moves so that the face of occupant P3 enters the search area S1; the face of occupant P3 may then be erroneously detected as the face of the driver's-seat occupant P1.
 In Embodiment 4, therefore, the face detection unit 12 compares the size of the lost face with the size of the face detected in the face search after loss, and thereby determines whether the newly detected face is the same as the lost face. Specifically, when a person's face is newly detected from a search area after a person's face has ceased to be detected from that search area, the face detection unit 12 invalidates the detection result of the newly detected face if the difference between the size of the face region before the face ceased to be detected (before the face was lost) and the size of the face region corresponding to the newly detected face is equal to or greater than a predetermined threshold. The configuration of the face detection device 10 in this embodiment is also the same as that of FIG. 1.
 FIG. 10 is a flowchart showing the operation of the face detection device 10 according to Embodiment 4. The flow of FIG. 10 adds the following steps ST117 and ST118 to the flow shown in FIG. 9. Since the other steps are the same as those described in Embodiments 1 to 3, only steps ST117 and ST118 are described here.
 Step ST117 is executed when a person's face is detected in the face search after loss (step ST114), that is, when YES is determined in step ST115. In step ST117, the face detection unit 12 compares the size of the face region before the face was lost (the size of the face region last identified in step ST103 or ST107) with the size of the face region corresponding to the face newly detected in step ST114, and determines whether the difference between the two is equal to or greater than a predetermined threshold. If the difference is smaller than the threshold (NO in step ST117), the face detection unit 12 determines that the same face as the lost face has been detected, and the process proceeds to step ST109.
 Step ST118 is executed when the difference between the two face-region sizes is equal to or greater than the threshold (YES in step ST117). In step ST118, the face detection unit 12 determines that a face different from the lost face was erroneously detected in step ST114, and invalidates the detection result of step ST114; that is, the face is treated as not having been detected in step ST114. After step ST118, the process proceeds to step ST116.
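The size comparison of steps ST117 and ST118 can be sketched as a threshold test on the face-region sizes. This is a minimal illustration only: representing a face region as a width/height pair and using a relative threshold are assumptions made for the sketch, not details taken from the embodiment.

```python
def is_same_face(last_region, new_region, threshold=0.3):
    """ST117/ST118 (sketch): accept the newly detected face only if its
    face-region size is close to the size recorded before the face was
    lost; otherwise treat it as a different person and invalidate the
    detection. Regions are (width, height) pairs in pixels; the
    threshold is relative to the pre-loss region size."""
    last_size = last_region[0] * last_region[1]
    new_size = new_region[0] * new_region[1]
    # Difference at or above the threshold -> different face (ST118)
    return abs(new_size - last_size) < threshold * last_size
```

A `False` return corresponds to the YES branch of step ST117: the newly detected face is considered a different person (for example, a rear-seat occupant whose face region would be noticeably smaller), and its detection result is discarded.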
 According to Embodiment 4, the face detection unit 12 can prevent erroneous detection of a face different from the lost face, based on the difference between the size of the face region identified before the face was lost and the size of the face region newly identified after the loss.
 The embodiments may be freely combined, and each embodiment may be modified or omitted as appropriate.
 The above description is, in all aspects, illustrative, and it is understood that innumerable variations not illustrated herein can be envisaged.
 10 face detection device, 11 in-vehicle image acquisition unit, 12 face detection unit, 13 search area setting unit, 20 in-vehicle imaging device, 50 processing circuit, 51 processor, 52 memory, P1 to P5 occupants, F1 to F5 face regions, S1 to S5 search areas.

Claims (11)

  1.  A face detection device comprising:
     an in-vehicle image acquisition unit that acquires an in-vehicle image, which is an image capturing the interior of a vehicle;
     a face detection unit that searches for a person's face in the in-vehicle image and identifies a face region, which is a region in which the person's face is detected in the in-vehicle image; and
     a search area setting unit that sets, based on the position of the face region, a search area, which is a region of the in-vehicle image in which a person's face is to be searched for,
     wherein, for an in-vehicle image acquired after the search area has been set, the face detection unit searches for a person's face within the search area of that in-vehicle image.
  2.  The face detection device according to claim 1, wherein the search area setting unit sets the size of the search area in accordance with the size of the face region.
  3.  The face detection device according to claim 1, wherein the search area setting unit changes the position of the search area so as to follow changes in the position of the face region.
  4.  The face detection device according to claim 3, wherein, when a person's face ceases to be detected in the search area, the search area setting unit fixes the position of the search area at the position where the person's face was last detected.
  5.  The face detection device according to claim 4, wherein, when a fixed period of time elapses with no person's face detected in the search area, the search area setting unit cancels the setting of the search area and the face detection unit searches for a person's face in the entire in-vehicle image.
  6.  The face detection device according to claim 1, wherein, when a person's face is newly detected in the search area after a person's face has ceased to be detected in that search area, the face detection unit invalidates the detection result of the newly detected face if the difference between the size of the face region before the face ceased to be detected and the size of the face region corresponding to the newly detected face is equal to or greater than a predetermined threshold.
  7.  The face detection device according to claim 1, wherein, when faces of a plurality of persons are detected in the in-vehicle image, the face detection unit identifies a plurality of face regions respectively corresponding to the faces of the plurality of persons, the search area setting unit sets a plurality of search areas respectively corresponding to the plurality of face regions, and, for an in-vehicle image acquired after the plurality of search areas have been set, the face detection unit searches for a person's face within each of the search areas.
  8.  The face detection device according to claim 7, wherein, when a person's face ceases to be detected in the search area corresponding to the driver's seat of the vehicle, the face detection unit searches for a person's face in the search area corresponding to the driver's seat with a higher priority than in the other search areas.
  9.  The face detection device according to claim 1, wherein the search area setting unit sets the search area triggered by detection of a specific state of the vehicle.
  10.  The face detection device according to claim 9, wherein the specific state of the vehicle includes one or more of: a state in which a door is locked, a state in which a door is closed, a state in which a seat belt is fastened, a state in which the vehicle has started traveling, and a state in which the gear is set to drive.
  11.  A face detection method comprising:
     acquiring, by an in-vehicle image acquisition unit of a face detection device, an in-vehicle image, which is an image capturing the interior of a vehicle;
     searching, by a face detection unit of the face detection device, for a person's face in the in-vehicle image and identifying a face region, which is a region in which the person's face is detected in the in-vehicle image; and
     setting, by a search area setting unit of the face detection device, based on the position of the face region, a search area, which is a region of the in-vehicle image in which a person's face is to be searched for,
     wherein, for an in-vehicle image acquired after the search area has been set, the face detection unit searches for a person's face within the search area of that in-vehicle image.
PCT/JP2020/016277 2020-04-13 2020-04-13 Face detection device and face detection method WO2021210041A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/016277 WO2021210041A1 (en) 2020-04-13 2020-04-13 Face detection device and face detection method


Publications (1)

Publication Number Publication Date
WO2021210041A1 true WO2021210041A1 (en) 2021-10-21

Family

ID=78083761


Country Status (1)

Country Link
WO (1) WO2021210041A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010103947A (en) * 2008-10-27 2010-05-06 Canon Inc Image processing apparatus, imaging apparatus, and image processing method
WO2017203769A1 (en) * 2016-05-23 2017-11-30 アルプス電気株式会社 Sight line detection method
JP2019074965A (en) * 2017-10-17 2019-05-16 株式会社デンソーテン Device and system for detecting driving incapacity
JP2019185557A (en) * 2018-04-13 2019-10-24 オムロン株式会社 Image analysis device, method, and program



Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20931314; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 20931314; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)