WO2021005702A1 - Face detection processing device and face detection processing method - Google Patents

Face detection processing device and face detection processing method

Info

Publication number
WO2021005702A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
person
face detection
detected
Prior art date
Application number
PCT/JP2019/027103
Other languages
French (fr)
Japanese (ja)
Inventor
翔悟 甫天
Original Assignee
三菱電機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社 filed Critical 三菱電機株式会社
Priority to PCT/JP2019/027103 priority Critical patent/WO2021005702A1/en
Priority to DE112019007033.9T priority patent/DE112019007033T5/en
Priority to JP2021530389A priority patent/JP7051014B2/en
Publication of WO2021005702A1 publication Critical patent/WO2021005702A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Definitions

  • The present invention relates to a face detection processing device and a face detection processing method.
  • Driver monitoring systems, which have been actively developed in recent years, are a technology necessary for deciding when to transfer the driving initiative from the vehicle to a human in a partially automated vehicle.
  • Among their elements, the process of detecting characteristic regions of a passenger's face or of facial parts is a core technology of a driver monitoring system.
  • This feature detection process is required not only to be accurate but also to reduce the amount of calculation and run at high speed.
  • Patent Document 1 describes a vehicle control device and discloses a face detection processing technique that reduces the amount of calculation and increases the processing speed.
  • Even when the camera is fixed, the position of the face may shift from the center of the image as the driver moves.
  • However, the face is usually located at the position of the headrest.
  • The vehicle control device of Patent Document 1 therefore grasps the position of the headrest in the captured image in advance and searches for the face starting from the region where the driver's face is most likely to exist. As a result, the search time is shortened.
  • Limiting the search area reduces the amount of calculation, but when the image contains both a person to be detected and a person who is not a detection target, the face detection device may detect the face of the non-target person. In that case, the face of the person to be detected is excluded from the subsequent search area, and the detection accuracy drops.
  • The present invention has been made to solve this problem, and its object is to provide a face detection processing device that enables a face detection device to accurately detect the face of the person to be detected.
  • The face detection processing device outputs images to the face detection device.
  • The face detection device detects a person's face within a search area set in at least a partial region of each sequentially input image, and changes the search area in the next input image based on the success or failure of that detection.
  • The face detection processing device includes a validity determination unit and an image insertion unit.
  • The validity determination unit determines, based on a predetermined condition regarding the face detected by the face detection device, whether the detected face is the face of a predetermined person to be detected.
  • Based on the determination result of the validity determination unit, the image insertion unit outputs to the face detection device, as the image, a dummy image that causes the face detection device to fail to detect a face in the search area and thereby change the search area.
  • According to the present invention, a face detection processing device is provided that enables the face detection device to accurately detect the face of the person to be detected.
  • FIG. 1 is a block diagram showing the configuration of the face detection processing device in Embodiment 1.
  • FIG. 2 is a diagram showing an example of the configuration of the processing circuit included in the face detection processing device.
  • FIG. 3 is a diagram showing another example of the configuration of the processing circuit included in the face detection processing device.
  • FIG. 4 is a flowchart showing the face detection processing method in Embodiment 1.
  • FIG. 5 is a block diagram showing the configurations of the face detection processing device and the face detection processing system in Embodiment 2.
  • FIG. 6 is a flowchart showing the face detection processing method in Embodiment 2.
  • FIG. 7 is a diagram showing an example of an image captured by the image acquisition device in Embodiment 2.
  • FIG. 8 is a diagram showing another example of an image captured by the image acquisition device in Embodiment 2.
  • FIG. 9 is a flowchart showing the face detection processing method in Embodiment 3.
  • FIG. 10 is a diagram showing an example of the simulated image in Embodiment 3.
  • FIG. 11 is a diagram showing an example of an image captured by the image acquisition device in Embodiment 3.
  • FIG. 12 is a block diagram showing the configuration of the face detection processing device in Embodiment 4 and the devices that operate in connection with it.
  • FIG. 1 is a block diagram showing the configuration of the face detection processing device 100 according to the first embodiment.
  • The face detection processing device 100 outputs, to the face detection device 110 that detects a person's face in an image, the images on which the face detection device 110 performs detection.
  • The face detection device 110 has a function of detecting a person's face within a search area set in at least a partial region of each sequentially input image. Further, the face detection device 110 has a function of changing the search area in the next input image based on the success or failure of the face detection.
  • The face detection processing device 100 includes a validity determination unit 10 and an image insertion unit 20.
  • The validity determination unit 10 determines, based on a predetermined condition regarding the face detected by the face detection device 110, whether the detected face is the face of a predetermined person to be detected. In other words, the validity determination unit 10 determines the validity of the detected face.
  • The predetermined condition is, for example, a condition on the position of the face or of a facial feature portion in the image, a condition on the size of the face in the image, or a condition on facial features.
  • Alternatively, the predetermined condition may be, for example, a condition on the relationship between the traveling state of the vehicle in which the person rides and the orientation of the person's face.
  • The image insertion unit 20 outputs a dummy image to the face detection device 110 based on the determination result of the validity determination unit 10.
  • The dummy image is an image that causes the face detection device 110 to fail to detect the previously detected face in the search area and to change the search area.
  • In other words, the dummy image is an image for resetting the search area.
  • When a dummy image is input, the face detection device 110 fails to detect the previously detected face in it.
  • The face detection device 110 therefore changes the search area in the image input after the dummy image, and then detects the person's face in the changed search area.
  • The changed search area is likely to contain the face of the person to be detected. Therefore, the face detection device 110 can accurately detect the face of the person to be detected.
  • FIG. 2 is a diagram showing an example of the configuration of the processing circuit 90 included in the face detection processing device 100.
  • Each function of the validity determination unit 10 and the image insertion unit 20 is realized by the processing circuit 90. That is, the processing circuit 90 has a validity determination unit 10 and an image insertion unit 20.
  • The processing circuit 90 may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a circuit combining these.
  • Each function of the validity determination unit 10 and the image insertion unit 20 may be individually realized by a plurality of processing circuits, or may be collectively realized by one processing circuit.
  • FIG. 3 is a diagram showing another example of the configuration of the processing circuit included in the face detection processing device 100.
  • In this case, the processing circuit includes a processor 91 and a memory 92.
  • Each function of the validity determination unit 10 and the image insertion unit 20 is realized by software or firmware written as a program and executed by the processor 91.
  • That is, the face detection processing device 100 has a memory 92 for storing the program and a processor 91 for executing it.
  • The program describes functions by which the face detection processing device 100 determines, based on the predetermined condition regarding the face detected by the face detection device 110, whether the detected face is the face of the predetermined person to be detected, and, based on the determination result, outputs to the face detection device 110 a dummy image that causes it to fail to detect a face in the search area and to change the search area.
  • In other words, the program causes a computer to execute the procedures and methods of the validity determination unit 10 and the image insertion unit 20.
  • The processor 91 is, for example, a CPU (Central Processing Unit), an arithmetic unit, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).
  • The memory 92 is, for example, a non-volatile or volatile semiconductor memory such as RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read Only Memory), or EEPROM (Electrically Erasable Programmable Read Only Memory).
  • Alternatively, the memory 92 may be a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, a DVD, or any storage medium that may come into use in the future.
  • Some of the functions of the validity determination unit 10 and the image insertion unit 20 described above may be realized by dedicated hardware, and some may be realized by software or firmware. In this way, the processing circuit realizes each of the above-mentioned functions by hardware, software, firmware, or a combination thereof.
  • FIG. 4 is a flowchart showing the face detection processing method according to the first embodiment.
  • In step S1, the validity determination unit 10 determines, based on the predetermined condition regarding the face detected by the face detection device 110, whether the detected face is the face of the predetermined person to be detected. If it is, the face detection processing method ends. If it is not, step S2 is executed.
  • In step S2, the image insertion unit 20 outputs a dummy image to the face detection device 110.
  • When the dummy image is input, the face detection device 110 fails to detect the previously detected face in it.
  • The face detection device 110 therefore changes the search area in the image input after the dummy image, and then detects the person's face in the changed search area.
  • The changed search area is likely to contain the face of the person to be detected. Therefore, the face detection device 110 can accurately detect the face of the person to be detected.
  • As described above, the face detection processing device 100 in the first embodiment outputs images to the face detection device 110.
  • The face detection device 110 detects a person's face within a search area set in at least a partial region of each sequentially input image, and changes the search area in the next input image based on the success or failure of that detection.
  • The face detection processing device 100 includes a validity determination unit 10 and an image insertion unit 20.
  • The validity determination unit 10 determines, based on a predetermined condition regarding the face detected by the face detection device 110, whether the detected face is the face of a predetermined person to be detected.
  • The image insertion unit 20 outputs a dummy image to the face detection device 110 based on the determination result of the validity determination unit 10.
  • The dummy image is an image that causes the face detection device 110 to fail to detect a face in the search area and to change the search area.
  • Such a face detection processing device 100 enables the face detection device 110 to accurately detect the face of the person to be detected.
  • Similarly, the face detection processing method in the first embodiment outputs images to the face detection device 110.
  • The face detection device 110 detects a person's face within a search area set in at least a partial region of each sequentially input image, and changes the search area in the next input image based on the success or failure of that detection.
  • The method determines, based on the predetermined condition regarding the face detected by the face detection device 110, whether the detected face is the face of the predetermined person to be detected, and outputs a dummy image to the face detection device 110 based on the determination result.
  • The dummy image is an image that causes the face detection device 110 to fail to detect a face in the search area and to change the search area.
  • Such a face detection processing method enables the face detection device 110 to accurately detect the face of the person to be detected. A minimal code sketch of this insertion logic follows.
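Below is a minimal Python sketch of the insertion logic of the first embodiment. The detector object, its detect() method, and the validity predicate are illustrative assumptions, not interfaces defined in the patent.

```python
import numpy as np

# Black dummy image; the 640x480 size is an arbitrary assumption.
DUMMY = np.zeros((480, 640, 3), dtype=np.uint8)

def process_frame(frame, detector, is_valid_face):
    """Steps S1/S2 of FIG. 4: if the detected face is not the target
    person's, feed the detector a dummy image so that detection fails
    and the detector resets its search area for the next real frame."""
    result = detector.detect(frame)           # detection in the search area
    if result is not None and not is_valid_face(result):
        detector.detect(DUMMY)                # S2: forced failure
        return None                           # reject the invalid result
    return result
```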
  • Next, the face detection processing device and the face detection processing method according to the second embodiment will be described.
  • The second embodiment is a more specific form of the first embodiment, and the face detection processing device in the second embodiment includes each component of the face detection processing device 100 in the first embodiment. Description of the same configurations and operations as in the first embodiment is omitted.
  • FIG. 5 is a block diagram showing the configurations of the face detection processing device 101 and the face detection processing system 200 according to the second embodiment.
  • The face detection processing system 200 includes an image acquisition device 120, a face detection processing device 101, and a face detection device 110.
  • The face detection processing device 101 includes a storage unit 30, an image insertion unit 20, and a validity determination unit 10.
  • The face detection device 110 includes a feature portion detection unit 111.
  • The image acquisition device 120 sequentially acquires images that include a person's face.
  • The image acquisition device 120 is, for example, a camera that is mounted on a moving body and captures images of the passengers of the moving body.
  • The moving body is, for example, a vehicle, an airplane, a train, a bus, or a motorcycle. The passengers include the driver.
  • For example, the camera is installed at the front center of the vehicle interior, such as on the center console, and captures both the driver and the passenger in the front passenger seat.
  • The camera includes a lens, an aperture, a shutter, and an image sensor.
  • The storage unit 30 stores the images sequentially acquired by the image acquisition device 120 as time-series data. Further, the storage unit 30 stores a dummy image for causing the face detection device 110 to fail to detect a person's face in the search area.
  • The dummy image is, for example, a uniform image in which all pixels have the same brightness.
  • The uniform image in the second embodiment is a black image in which the brightness of every pixel is zero; a sketch of its construction follows.
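As a concrete illustration, such a uniform dummy image can be built in a few lines of NumPy; the resolution used here is an assumed value, not one given in the patent.

```python
import numpy as np

HEIGHT, WIDTH = 800, 1280   # assumed camera resolution

# Black image: every pixel has brightness zero, as in the second embodiment.
dummy_black = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)

# Any other uniform image (all pixels the same brightness) would equally
# contain nothing for the pattern matching to latch onto.
dummy_gray = np.full((HEIGHT, WIDTH, 3), 128, dtype=np.uint8)
```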
  • Based on the determination result of the validity determination unit 10, the image insertion unit 20 reads either the dummy image or an image acquired by the image acquisition device 120 from the storage unit 30 and outputs it to the face detection device 110.
  • The validity determination unit 10 determines, based on a predetermined condition regarding the face detected by the face detection device 110, whether the detected face is the face of a predetermined person to be detected. In other words, the validity determination unit 10 determines the validity of the detected face.
  • The predetermined conditions include, for example, a condition on the position of the face, or of a feature portion of the face, in the image.
  • In this case, the validity determination unit 10 determines whether the position of the detected face, or of its detected feature portion, falls within a predetermined range.
  • The predetermined range covers the position where the face, or the facial feature portion, of the person to be detected should appear.
  • Alternatively, the validity determination unit 10 may determine whether the amount of variation between the face position detected in the previously input image and the face position detected in the next input image falls within a statistically determined range of motion of the person to be detected. In this case, it is assumed that the face position detected in the previously input image is that of the person to be detected.
  • The range of motion is statistically determined based on, for example, information on the seat layout in the vehicle and on the physique of the driver.
  • The predetermined conditions also include, for example, a condition on the size of the face.
  • In this case, the validity determination unit 10 determines whether the size of the detected face falls within a predetermined size range.
  • The predetermined size range is determined based on, for example, the average size of the face of the person to be detected in the image.
  • As above, the validity determination unit 10 may instead determine whether the amount of variation between the face size detected in the previously input image and that detected in the next input image falls within the statistically determined range of motion of the person to be detected. In this case, it is assumed that the face size detected in the previously input image is that of the person to be detected. A sketch of these position and size checks follows.
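The following sketch combines the position, size, and frame-to-frame variation checks described above. All ranges and the detection-result fields are assumed values for illustration; the patent only states that such ranges are predetermined or statistically derived.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    x: float      # face centre x in image coordinates
    y: float      # face centre y in image coordinates
    size: float   # face width in pixels

# Assumed range in which the driver's face should appear.
DRIVER_X = (80.0, 420.0)
DRIVER_Y = (120.0, 520.0)
SIZE_RANGE = (90.0, 220.0)   # around the target's average face size
MAX_MOVE = 35.0              # statistically determined motion per frame

def is_valid(det: Detection, prev: Optional[Detection]) -> bool:
    in_position = (DRIVER_X[0] <= det.x <= DRIVER_X[1]
                   and DRIVER_Y[0] <= det.y <= DRIVER_Y[1])
    in_size = SIZE_RANGE[0] <= det.size <= SIZE_RANGE[1]
    if prev is None:
        return in_position and in_size
    # Variation check: assumes the previous detection was the target person.
    moved = ((det.x - prev.x) ** 2 + (det.y - prev.y) ** 2) ** 0.5
    return in_position and in_size and moved <= MAX_MOVE
```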
  • Predetermined conditions include, for example, conditions related to facial features.
  • In this case, the validity determination unit 10 determines whether the facial features of the detected person match the pre-registered facial features of the person to be detected.
  • The predetermined conditions shown above relate the face detected by the face detection device 110 to predetermined information about the face of the person to be detected. However, the validity determination does not necessarily require such predetermined information about the target person's face. An example is shown below.
  • The predetermined conditions include, for example, a condition on the relationship between the traveling state of the vehicle in which the detected person rides and the orientation of the detected face.
  • In this case, the validity determination unit 10 judges that the detected face is the face of the person to be detected when the angle between the traveling direction of the vehicle and the orientation of the detected face falls within a predetermined range. Here, the validity determination unit acquires information on the steering angle of the vehicle from the vehicle control device or the like as information on the traveling direction.
  • In this way, the validity determination unit 10 judges validity based on a predetermined condition relating information on the traveling state of the vehicle to information on the detected face, without using predetermined face information about the person to be detected. A sketch of this heading-based check follows.
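A minimal sketch of this heading-based condition, assuming the face yaw and steering angle are both available in degrees in a common reference frame; the tolerance is an assumed value, not one specified in the patent.

```python
ANGLE_TOLERANCE_DEG = 30.0   # assumed "predetermined angle range"

def is_target_by_heading(face_yaw_deg: float, steering_angle_deg: float) -> bool:
    """True when the detected face is oriented roughly along the vehicle's
    traveling direction, inferred here from the steering angle."""
    return abs(face_yaw_deg - steering_angle_deg) <= ANGLE_TOLERANCE_DEG
```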
  • When the detected face satisfies the predetermined condition, the validity determination unit 10 determines that the face of the person to be detected has been detected and outputs the face detection result of the face detection device 110 to the outside. When the face of a non-target person has been detected, the validity determination unit 10 rejects the detection result of the face detection device 110 and reports a detection failure to the outside.
  • The face detection processing device 101 also has a counter for determining whether the face of the person to be detected has been detected.
  • When the face of the person to be detected is detected, the face detection processing device 101 resets the counter.
  • When the face of a non-target person is detected, the face detection processing device 101 counts up the counter.
  • The feature portion detection unit 111 detects a person's face within the search area set in at least a partial region of each image sequentially input as time-series data. In doing so, the feature portion detection unit 111 detects the face, or feature portions of the face, by pattern matching. For the pattern matching, the feature portion detection unit 111 has, for example, a plurality of classifiers, each of which has learned as a model an image of a face at a specific angle or of a specific facial feature portion. The feature portion detection unit 111 evaluates the degree of coincidence between the learned model images and the image input from the image insertion unit 20 and, based on that evaluation, detects the person's face or its feature portions.
  • The feature portion detection unit 111 performs the pattern matching in the search area even when the image input from the image insertion unit 20 is a dummy image. In that case, however, the feature portion detection unit 111 fails to detect a person's face.
  • Based on the success or failure of the face detection, the feature portion detection unit 111 changes the search area in the image input next as time-series data. For example, when face detection succeeds, the search area in the next input image is set based on the amount of movement expected from the time interval of the time-series data. When the time-series data is captured at 10 fps, for instance, the interval between images is 100 msec, and the range over which a human face can plausibly move in that interval is statistically determined. The feature portion detection unit 111 sets as the search area the estimated face existence range, based on that expected movement range and on the distance from the face position detected in the previous image to the camera.
  • The distance from the detected face position to the camera can be estimated from the previous face detection result.
  • Alternatively, the feature portion detection unit 111 may set as the new search area the previous search area expanded by a fixed amount around the face position detected in the previous image.
  • When face detection fails, the feature portion detection unit 111 sets the entire area of the next input image as the search area. As described above, when the image input from the image insertion unit 20 is a dummy image, the feature portion detection unit 111 fails to detect a face; therefore, the search area in the image input after the dummy image is set to the entire image. A sketch of this search-area update follows.
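The search-area update can be sketched as follows. The box arithmetic, image size, and fixed margin are simplified assumptions; the patent derives the expected range from the frame interval and the estimated face-to-camera distance.

```python
from typing import Optional, Tuple

Box = Tuple[int, int, int, int]   # (x0, y0, x1, y1)
W, H = 1280, 800                  # assumed image size

def next_search_area(prev_box: Optional[Box], detected: bool,
                     margin: int = 60) -> Box:
    """On success, expand the previous face box by the range a face can
    plausibly move in one frame interval (e.g. 100 msec at 10 fps).
    On failure (including on a dummy image), search the whole image."""
    if not detected or prev_box is None:
        return (0, 0, W, H)
    x0, y0, x1, y1 = prev_box
    return (max(0, x0 - margin), max(0, y0 - margin),
            min(W, x1 + margin), min(H, y1 + margin))
```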
  • The face detection processing device 101 includes the processing circuit shown in FIG. 2 or FIG. 3, and the functions of the validity determination unit 10 and the image insertion unit 20 are realized by the processing circuit. The function of the storage unit 30 is realized by, for example, the memory shown in FIG. 3. Similarly, the face detection device 110 also includes the processing circuit shown in FIG. 2 or FIG. 3, and the function of the feature portion detection unit 111 is realized by the processing circuit.
  • FIG. 6 is a flowchart showing the face detection processing method according to the second embodiment.
  • The flowchart shown in FIG. 6 covers the face detection operation that the face detection processing system 200 executes for one image in the time-series data.
  • It is assumed that the images captured by the image acquisition device 120 are already stored in the storage unit 30.
  • In step S10, the image insertion unit 20 determines whether the value of the counter is equal to or greater than a predetermined value. If it is, step S20 is executed; if it is less than the predetermined value, step S30 is executed.
  • In step S20, the image insertion unit 20 reads the dummy image from the storage unit 30 and outputs it to the feature portion detection unit 111 of the face detection device 110.
  • In step S30, the image insertion unit 20 reads from the storage unit 30 and outputs to the feature portion detection unit 111 an image captured by the image acquisition device 120. At this time, among the images constituting the time-series data, the image insertion unit 20 outputs the image following the one previously output to the face detection device 110.
  • In step S40, the feature portion detection unit 111 determines whether face detection succeeded in the previously input image. If it did, step S50 is executed; if it did not, step S60 is executed.
  • In step S50, the feature portion detection unit 111 sets the search area in the image input this time from the image insertion unit 20 based on the face detection result in the previous image.
  • In step S60, the feature portion detection unit 111 sets the search area to the entire image input this time from the image insertion unit 20.
  • In step S70, the feature portion detection unit 111 detects a person's face within the set search area by pattern matching.
  • In step S80, the validity determination unit 10 determines whether the face detection device 110 succeeded in detecting a person's face. If it did, step S90 is executed; if it did not, step S120 is executed.
  • In step S90, the validity determination unit 10 compares the face detected by the face detection device 110 with the predetermined condition to check its validity. For example, the validity determination unit 10 compares the position of the detected face in the image with a predetermined range in the image.
  • In step S100, the validity determination unit 10 determines whether the face detected by the face detection device 110 satisfies the predetermined condition, in other words, whether the face detection result is valid. If the detected face satisfies the condition, that is, the detection result is valid, step S120 is executed. If it does not, that is, the detection result is not valid, step S110 is executed.
  • In step S110, the face detection processing device 101 counts up the counter. This is the state in which the face detection device 110 succeeded in detecting a face but failed to detect the face of the person to be detected.
  • In step S120, the face detection processing device 101 resets the counter. This is the state in which the face detection device 110 succeeded in detecting the face of the person to be detected, or failed to detect any face in the first place.
  • Steps S10 to S120 constitute the processing for one image in the time-series data. In practice, therefore, after step S110 or S120, step S10 is executed again for the next image. An end-to-end sketch of this loop follows.
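Putting the steps together, the loop of FIG. 6 can be sketched as below. The detector API, the validity predicate, and the counter threshold are illustrative assumptions; the patent only says the counter is compared with a predetermined value.

```python
COUNTER_THRESHOLD = 3   # assumed "predetermined value" for the counter

def run(frames, detector, is_valid, dummy):
    """frames: iterable of images (the time-series data in storage unit 30).
    Yields detection results judged to be the person to be detected."""
    counter = 0
    frame_iter = iter(frames)
    while True:
        # S10/S20/S30: output a dummy image or the next real frame.
        if counter >= COUNTER_THRESHOLD:
            image = dummy
        else:
            image = next(frame_iter, None)
            if image is None:
                break
        det = detector.detect(image)   # S40-S70 happen inside the detector
        # S80-S120: validity judgement and counter update.
        if det is None:
            counter = 0                # S120: no face detected at all
        elif is_valid(det):
            counter = 0                # S120: target person detected
            yield det
        else:
            counter += 1               # S110: a non-target face detected
```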
  • In the following, the image acquisition device 120 is a camera that is mounted in the vehicle and images the passengers, and the predetermined person to be detected is the driver among the passengers of the vehicle.
  • FIG. 7 is an example of an image captured by the image acquisition device 120, and is a diagram showing an image output by the image insertion unit 20 to the feature portion detection unit 111. The image shows four passengers 2, including the driver of the vehicle. Further, in the following, the case where the face detection device 110 succeeds in detecting the face in the image immediately before the image of FIG. 7 will be described.
  • In step S40, the feature portion detection unit 111 determines that face detection succeeded in the previous image.
  • Therefore, step S50 is executed.
  • In step S50, the feature portion detection unit 111 sets the search area based on the face detection result in the previous image.
  • For example, the feature portion detection unit 111 sets the search area 3 in the region on the left side of the image corresponding to the driver's seat of the vehicle, as shown in FIG. 7.
  • In step S70, the feature portion detection unit 111 detects a person's face within the search area 3.
  • In the search area 3, part of the face of the front-seat passenger 2A, who is the driver, and the entire face of the rear-seat passenger 2B appear, producing the state shown in FIG. 7.
  • In this case, the feature portion detection unit 111 preferentially detects the face of the rear-seat passenger 2B.
  • In step S80, the validity determination unit 10 determines that the feature portion detection unit 111 succeeded in detecting a face.
  • Step S90 is executed.
  • In step S90, the validity determination unit 10 checks validity by comparing the coordinates, in the image, of the face of the rear-seat passenger 2B detected by the feature portion detection unit 111 with the coordinate range in which the driver's face should exist.
  • In step S100, the validity determination unit 10 determines that the coordinates of the face of the rear-seat passenger 2B are not included in the coordinate range in which the driver's face should exist. That is, the validity determination unit 10 determines that detection of the face of the driver, the person to be detected, has failed, and step S110 is executed.
  • In step S110, the face detection processing device 101 counts up the counter. Subsequently, step S10 is executed.
  • In step S10, the image insertion unit 20 determines whether the value of the counter is equal to or greater than the predetermined value.
  • Here, the case where the counter has reached the predetermined value will be described.
  • In step S20, the image insertion unit 20 reads the dummy image from the storage unit 30 and outputs it to the feature portion detection unit 111 of the face detection device 110.
  • As described above, the dummy image here is a black image.
  • In step S40, the feature portion detection unit 111 determines that the face of the rear-seat passenger 2B was successfully detected in the previous image, that is, the image of FIG. 7.
  • Therefore, step S50 is executed.
  • In step S50, the feature portion detection unit 111 sets the search area 3 in the region on the left side of the dummy image, based on the face detection result in the image of FIG. 7.
  • In step S70, the feature portion detection unit 111 attempts to detect a person's face within the search area 3 of the dummy image. However, since the dummy image is a black image, the feature portion detection unit 111 fails to detect any face in it.
  • In step S80, the validity determination unit 10 determines that the feature portion detection unit 111 failed to detect a face.
  • Therefore, step S120 is executed.
  • In step S120, the face detection processing device 101 resets the counter. Step S10 is executed again.
  • In step S10, the image insertion unit 20 determines whether the value of the counter is equal to or greater than the predetermined value.
  • Since the counter has been reset, its value is less than the predetermined value.
  • Therefore, step S30 is executed.
  • In step S30, the image insertion unit 20 reads, from the time-series data stored in the storage unit 30, the image following the one previously output to the face detection device 110, and outputs it to the face detection device 110.
  • At this time, the image insertion unit 20 may skip the frames for which the dummy image was output and read the next image in the time-series data stored in the storage unit 30.
  • FIG. 8 is an example of an image captured by the image acquisition device 120, and is a diagram showing an image output by the image insertion unit 20 to the feature portion detection unit 111 in this step S30.
  • The image shown in FIG. 8 was captured by the image acquisition device 120 after the image shown in FIG. 7.
  • In step S40, the feature portion detection unit 111 determines that face detection failed in the previous image, the dummy image.
  • Therefore, step S60 is executed.
  • In step S60, the feature portion detection unit 111 sets the search area 3 to the entire image shown in FIG. 8.
  • In step S70, the feature portion detection unit 111 detects the faces of the four passengers 2 appearing in the search area 3.
  • In step S80, the validity determination unit 10 determines that the feature portion detection unit 111 succeeded in detecting faces.
  • Step S90 is executed.
  • In step S90, the validity determination unit 10 checks validity by comparing the coordinates, in the image, of the faces of the four passengers 2 detected by the feature portion detection unit 111 with the coordinate range in which the driver's face should exist.
  • In step S100, the validity determination unit 10 determines that the coordinates of the face of the front-seat passenger 2A, one of the four passengers 2, are included in the coordinate range in which the driver's face should exist. That is, the validity determination unit 10 determines that the face of the driver, the person to be detected, was successfully detected, and step S120 is executed.
  • In step S120, the face detection processing device 101 resets the counter.
  • In normal operation, the face detection device 110 limits the search area 3 and detects the face of the person to be detected there, which reduces the amount of calculation for face detection and improves the processing speed.
  • When the face of a non-target person is detected, the face detection processing device 101 rejects that face detection result and outputs a dummy image to the face detection device 110. Since the dummy image in the second embodiment is a black image, the face detection device 110 fails to detect a face in it and therefore changes the search area 3 in the subsequently input images.
  • Since the face detection device 110 then performs face detection in the changed search area 3, it can detect the face of a person to be detected who satisfies the predetermined condition.
  • In this way, the face detection processing device 101 and the face detection processing system 200 of the second embodiment improve the face detection speed of the face detection device 110 and also improve the detection accuracy for the face of the person to be detected.
  • The predetermined conditions in the second embodiment include a condition on the position of the face, or of a feature portion of the face, in the image, a condition on the size of the face in the image, a condition on facial features, and a condition on the relationship between the traveling state of the vehicle in which the person rides and the orientation of the face.
  • The face detection processing device 101 therefore enables the face detection device 110 to accurately detect the face of the person to be detected under a variety of conditions.
  • In the above, the face detection processing device 101 executes the determination process regarding the validity of the detection result of the face detection device 110, but the determination process could also be incorporated into the face detection device 110 itself. However, such incorporation requires modifying or redesigning the face detection device 110.
  • In contrast, the face detection processing device 101 according to the second embodiment achieves the above effects merely by being connected to the existing image acquisition device 120 and face detection device 110. That is, the face detection processing device 101 enables the face detection device 110 to accurately detect the face of the person to be detected without changing the input/output interfaces of the existing image acquisition device 120 and face detection device 110.
  • The face detection processing device 101 and the face detection processing system 200 can be incorporated into a driver monitoring system mounted on a moving body.
  • In that case, the driver monitoring system can output warnings to the driver about drowsiness and inattention based on the detection result output by the face detection processing device 101.
  • Further, when a driving support device that supports driving of the vehicle, or an automatic driving control device that controls automatic driving of the vehicle, is linked with the driver monitoring system, it becomes possible to accurately determine, based on the detection result output by the face detection processing device 101, when to transfer the driving initiative from the vehicle to a human in a partially automated vehicle.
  • The face detection processing device 101 and the face detection processing system 200 can also be incorporated into a personal authentication system that authenticates people based on their faces.
  • In that case, the personal authentication system can perform personal authentication accurately based on the detection result output by the face detection processing device 101.
  • As a modification, the storage unit 30 may store a processed image as the dummy image.
  • The processed image is an image in which the face of the person whose validity was denied by the validity determination unit 10 has been filled in.
  • For example, the processed image is an image in which the face of the rear-seat passenger 2B shown in FIG. 7 has been filled in.
  • In step S20 of the second embodiment, the image insertion unit 20 then outputs the processed image, instead of the black image, to the feature portion detection unit 111 of the face detection device 110. As with the black image, in step S70 the feature portion detection unit 111 fails to detect a person's face in the search area 3. The other processing is the same as in the second embodiment. Such a face detection processing device 101 has the same effects as the second embodiment.
  • Alternatively, the storage unit 30 may store a mask image as the dummy image.
  • The mask image is an image to be superimposed on an image acquired by the image acquisition device 120.
  • The mask image makes opaque the pixels corresponding to the face of the person whose validity was denied by the validity determination unit 10. For example, superimposing the mask image on the image shown in FIG. 7 produces a state in which the pixels corresponding to the face of the rear-seat passenger 2B do not show through, so that face is not detected by the face detection device 110.
  • In step S20, the image insertion unit 20 may then output, instead of the black image, an image in which the mask image is superimposed on the next image acquired by the image acquisition device 120. As with the black image, in step S70 the feature portion detection unit 111 fails to detect that person's face in the search area 3. The other processing is the same as in the second embodiment. Such a face detection processing device 101 also has the same effects as the second embodiment. A sketch of both variants follows.
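Both variants can be sketched with OpenCV and NumPy as below. The rejected-face box is an assumed input taken from the rejected detection result; the patent does not prescribe a representation.

```python
import cv2
import numpy as np

def processed_image(frame, rejected_box):
    """Processed-image variant: fill in the face region whose validity
    was denied (e.g. rear-seat passenger 2B in FIG. 7)."""
    x0, y0, x1, y1 = rejected_box
    out = frame.copy()
    cv2.rectangle(out, (x0, y0), (x1, y1), (0, 0, 0), thickness=-1)
    return out

def masked_image(next_frame, rejected_box):
    """Mask-image variant: superimpose an opaque mask on the pixels of
    the rejected face in the next acquired image."""
    x0, y0, x1, y1 = rejected_box
    out = next_frame.copy()
    out[y0:y1, x0:x1] = 0   # the masked face no longer shows through
    return out
```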
  • Next, the face detection processing device 101 and the face detection processing method according to the third embodiment will be described.
  • The third embodiment is a more specific form of the first embodiment, and the face detection processing device 101 in the third embodiment includes each component of the face detection processing device 100 in the first embodiment.
  • Description of the configurations and operations that are the same as in the first or second embodiment is omitted.
  • The configurations of the face detection processing device 101 and the face detection processing system 200 in the third embodiment are the same as in the block diagram shown in FIG. 5.
  • The storage unit 30 in the third embodiment stores a black image and a simulated image as dummy images.
  • The simulated image is an image in which the face of a dummy person imitating the predetermined person to be detected is drawn so as to satisfy the predetermined condition. For example, when the person to be detected is the driver of the vehicle, the dummy person's face is drawn within the range where the driver's face should exist in the images acquired by the image acquisition device 120. A sketch of constructing such an image follows.
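A sketch of constructing such a simulated image, assuming a pre-prepared face template file and an assumed coordinate range for the driver's face; neither asset nor coordinates come from the patent.

```python
import cv2
import numpy as np

def simulated_image(width=1280, height=800,
                    template_path="dummy_face.png",       # assumed asset
                    driver_box=(140, 180, 340, 420)):     # assumed range
    """Draw a dummy person's face inside the range where the driver's
    face should exist, so the drawn face satisfies the predetermined
    condition used by the validity determination unit."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    face = cv2.imread(template_path)          # pre-prepared face image
    x0, y0, x1, y1 = driver_box
    canvas[y0:y1, x0:x1] = cv2.resize(face, (x1 - x0, y1 - y0))
    return canvas
```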
  • FIG. 9 is a flowchart showing the face detection processing method according to the third embodiment. Step S115 and step S130 are added to the flowchart shown in FIG. 6 of the second embodiment. The detailed operation by which the face detection device 110 resets the search area 3 using the dummy images is described below based on the flowchart of FIG. 9. The processing up to step S70, in which the feature portion detection unit 111 fails to detect a person's face in the search area 3 of the black dummy image, is the same as in the second embodiment, so its description is omitted.
  • In step S80, the validity determination unit 10 determines that the feature portion detection unit 111 failed to detect a face.
  • Therefore, step S115 is executed.
  • In step S115, the validity determination unit 10 determines whether the image output to the feature portion detection unit 111 was a dummy image.
  • Here, since a dummy image was output, step S130 is executed.
  • In step S130, the face detection processing device 101 holds the counter value. Step S10 is executed again.
  • In step S10, the image insertion unit 20 determines whether the value of the counter is equal to or greater than the predetermined value. Since the counter value was held, it is still at or above the predetermined value, so step S20 is executed.
  • In step S20, the image insertion unit 20 reads a dummy image from the storage unit 30 and outputs it to the feature portion detection unit 111 of the face detection device 110. At this time, the image insertion unit 20 remembers that the black image was output as the dummy image last time.
  • Therefore, the image insertion unit 20 next reads the simulated image as the dummy image and outputs it to the feature portion detection unit 111.
  • FIG. 10 is a diagram showing an example of a simulated image according to the third embodiment. The position of the dummy passenger 2C's face in the simulated image corresponds to the position of the driver's face when the driver is seated in the driver's seat and is facing the front.
  • In step S40, the feature portion detection unit 111 determines that face detection failed in the previous image, the black image.
  • Therefore, step S60 is executed.
  • In step S60, the feature portion detection unit 111 sets the search area 3 to the entire image shown in FIG. 10.
  • In step S70, the feature portion detection unit 111 detects the face of the dummy passenger 2C appearing in the search area 3.
  • In step S80, the validity determination unit 10 determines that the feature portion detection unit 111 succeeded in detecting a face.
  • Therefore, step S90 is executed.
  • In step S90, the validity determination unit 10 checks validity by comparing the coordinates, in the image, of the face of the dummy passenger 2C detected by the feature portion detection unit 111 with the coordinate range in which the driver's face should exist.
  • In step S100, the validity determination unit 10 determines that the coordinates of the face of the dummy passenger 2C are included in the coordinate range in which the driver's face should exist. That is, the validity determination unit 10 determines that the face of the driver, the person to be detected, was successfully detected, and step S120 is executed.
  • In step S120, the face detection processing device 101 resets the counter. Step S10 is executed again.
  • In step S10, the image insertion unit 20 determines whether the value of the counter is equal to or greater than the predetermined value.
  • Since the counter has been reset, its value is less than the predetermined value.
  • Therefore, step S30 is executed.
  • In step S30, the image insertion unit 20 reads, from the time-series data stored in the storage unit 30, the image following the one previously output to the face detection device 110, and outputs it to the face detection device 110.
  • In step S40, the feature portion detection unit 111 determines that face detection succeeded in the previous image, the simulated image.
  • Therefore, step S50 is executed.
  • In step S50, the feature portion detection unit 111 sets the search area 3 based on the face detection result in the simulated image.
  • For example, in the image of FIG. 11, the feature portion detection unit 111 sets the search area 3 in the region corresponding to the face of the dummy passenger 2C shown in FIG. 10.
  • In step S70, the feature portion detection unit 111 detects the face of a passenger 2 within the search area 3.
  • In the search area 3, the entire face of the front-seat passenger 2A and part of the face of the rear-seat passenger 2B appear.
  • In this case, the feature portion detection unit 111 preferentially detects the face of the front-seat passenger 2A.
  • In step S80, the validity determination unit 10 determines that the feature portion detection unit 111 succeeded in detecting a face.
  • Step S90 is executed.
  • In step S90, the validity determination unit 10 checks validity by comparing the coordinates, in the image, of the face of the front-seat passenger 2A detected by the feature portion detection unit 111 with the coordinate range in which the driver's face should exist.
  • In step S100, the validity determination unit 10 determines that the coordinates of the face of the front-seat passenger 2A are included in the coordinate range in which the driver's face should exist. That is, the validity determination unit 10 determines that the face of the driver, the person to be detected, was successfully detected, and step S120 is executed.
  • In step S120, the face detection processing device 101 resets the counter.
  • As described above, the dummy images in the third embodiment are a black image and a simulated image in which the face of a dummy person is drawn.
  • After failing to detect a face in the black dummy image, the face detection device 110 is guided by the simulated image to change the search area to an appropriate one. Since the face detection device 110 then performs face detection in the appropriately changed search area 3, it can detect the face of a person to be detected who satisfies the predetermined condition.
  • In this way, the face detection processing device 101 and the face detection processing system 200 of the third embodiment improve the face detection speed of the face detection device 110 and also improve the detection accuracy for the face of the person to be detected.
  • The face detection processing device shown in each of the above embodiments can also be applied to a system constructed by appropriately combining a navigation device, a communication terminal, a server, and the functions of applications installed on them.
  • The navigation device includes, for example, a PND (Portable Navigation Device).
  • The communication terminal includes, for example, mobile terminals such as mobile phones, smartphones, and tablets.
  • FIG. 12 is a block diagram showing the configuration of the face detection processing device 100 and the device operating in connection therewith according to the fourth embodiment.
  • In the fourth embodiment, the face detection processing device 100, the face detection device 110, and the communication device 130 are provided in a server 300.
  • The face detection processing device 100 acquires images of the interior of the vehicle 1 from the image acquisition device 120 provided in the vehicle 1 via the communication device 140 and the communication device 130.
  • The face detection processing device 100 determines whether the face detected by the face detection device 110 satisfies the predetermined condition.
  • Based on the determination result, the face detection processing device 100 outputs either a dummy image or an image acquired from the image acquisition device 120 to the face detection device 110. Further, the face detection processing device 100 outputs the face detection result of the face detection device 110, via the communication devices, to the control device 150 provided in the vehicle 1.
  • The control device 150 provides notifications and driving support for the vehicle 1 based on the detection result.
  • Arranging the face detection processing device 100 on the server 300 in this way simplifies the configuration of the in-vehicle device.
  • Alternatively, some of the functions of the face detection processing device 100 may be provided in the server 300 and the others distributed in the vehicle 1. A sketch of the vehicle-to-server round trip follows.
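A minimal sketch of the vehicle-to-server exchange in the fourth embodiment. The transport, endpoint, and payload format are assumptions; the patent specifies only that images and results pass through the communication devices 140 and 130.

```python
import json
import urllib.request

SERVER_URL = "http://server300.example/detect"   # hypothetical endpoint

def send_frame(jpeg_bytes: bytes) -> dict:
    """Upload one camera frame from vehicle 1 and receive the face
    detection result produced on the server 300."""
    req = urllib.request.Request(
        SERVER_URL, data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())   # e.g. {"valid": true, "box": [...]}
```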

Abstract

The purpose of the present invention is to provide a face detection processing device capable of accurately detecting the face of a person to be detected. The face detection processing device outputs an image to a face detection device. The face detection device detects the face of a person captured in a search area set in a region of at least a portion of sequentially inputted images, and changes a search area in an image to be inputted next on the basis of the success or failure of detecting the face of the person. The face detection processing device includes a validity determination unit and an image insertion unit. The validity determination unit determines, on the basis of a predetermined condition regarding the face of the person detected by the face detection device, whether the face of the person detected by the face detection device is the face of a person to be detected who is decided in advance. On the basis of the determination result of the validity determination unit, the image insertion unit outputs, as an image to the face detection device, a dummy image for causing the face detection device to fail in detection of the face of the person in the search area and change the search area.

Description

顔検出処理装置および顔検出処理方法Face detection processing device and face detection processing method
 本発明は、顔検出処理装置および顔検出処理方法に関する。 The present invention relates to a face detection processing device and a face detection processing method.
 近年、開発が盛んであるドライバーモニタリングシステムは、部分的自動運転車両における車両から人間への運転主導権の委譲判断に必要な技術である。中でも、搭乗者の顔または顔のパーツの特徴部位を検出する処理は、ドライバーモニタリングシステムにおける要素技術である。特徴部位検出処理には、その判定精度だけでなく、計算量を削減して高速に処理することが求められている。 The driver monitoring system, which has been actively developed in recent years, is a technology necessary for determining the transfer of driving initiative from a vehicle to a human in a partially autonomous vehicle. Above all, the process of detecting the featured part of the passenger's face or facial parts is an elemental technology in the driver monitoring system. The feature site detection process is required not only to determine the accuracy but also to reduce the amount of calculation and perform the process at high speed.
 特許文献1に記載の車輌制御装置には、計算量を削減して処理速度を高速化する顔検出処理技術が開示されている。カメラが固定されている場合、ドライバーが動くことにより、顔の位置が画面の中心からずれることがある。しかし、その顔の位置は、通常ヘッドレストの位置にある。特許文献1の車輌制御装置は、予め撮影画面におけるヘッドレストの位置を把握することで、ドライバーの顔が存在する可能性の高い部分から顔を探索する。それにより、探索時間の短縮を実現している。 The vehicle control device described in Patent Document 1 discloses a face detection processing technique that reduces the amount of calculation and increases the processing speed. When the camera is fixed, the position of the face may shift from the center of the screen due to the movement of the driver. However, the position of the face is usually the position of the headrest. The vehicle control device of Patent Document 1 searches for a face from a portion where the driver's face is likely to exist by grasping the position of the headrest on the shooting screen in advance. As a result, the search time is shortened.
特開2005―205943号公報Japanese Unexamined Patent Publication No. 2005-205943
 顔検出対象の画像における探索領域の限定は、顔検出のための計算量を削減する。しかし、画像内に検出対象の人物と非検出対象の人物の2名が含まれる場合、顔検出装置が非検出対象の人物の顔を検出することがある。その場合、検出対象の人物の顔が、その後の探索領域から除外されるため、検出精度が低下する。 Limiting the search area in the image to be detected by face reduces the amount of calculation for face detection. However, when the image includes two persons, a person to be detected and a person to be non-detected, the face detection device may detect the face of the person to be non-detected. In that case, the face of the person to be detected is excluded from the subsequent search area, so that the detection accuracy is lowered.
 本発明は、以上のような課題を解決するためになされたものであり、顔検出装置が検出対象の人物の顔を正確に検出することを可能にする顔検出処理装置の提供を目的とする。 The present invention has been made to solve the above problems, and an object of the present invention is to provide a face detection processing device that enables a face detection device to accurately detect the face of a person to be detected. ..
 本発明に係る顔検出処理装置は、顔検出装置に対し画像を出力する。顔検出装置は、順次入力される画像の少なくとも一部の領域に設定される探索領域に写っている人物の顔を検出し、かつ、人物の顔の検出の成否に基づいて次に入力される画像における探索領域を変更する。顔検出処理装置は、妥当性判断部および画像挿入部を含む。妥当性判断部は、顔検出装置によって検出された人物の顔に関する予め定められた条件に基づいて、顔検出装置によって検出された人物の顔が、予め定められた検出対象の人物の顔であるか否かを判定する。画像挿入部は、妥当性判断部の判定結果に基づいて、探索領域における人物の顔の検出を顔検出装置に失敗させて探索領域を変更させるためのダミー画像を、画像として、顔検出装置に出力する。 The face detection processing device according to the present invention outputs an image to the face detection device. The face detection device detects the face of the person in the search area set in at least a part of the sequentially input images, and is input next based on the success or failure of the detection of the face of the person. Change the search area in the image. The face detection processing device includes a validation unit and an image insertion unit. In the validity determination unit, the face of the person detected by the face detection device is the face of the person to be detected in advance based on the predetermined conditions regarding the face of the person detected by the face detection device. Judge whether or not. Based on the judgment result of the validity judgment unit, the image insertion unit uses a dummy image as an image for the face detection device to cause the face detection device to fail to detect the face of a person in the search area and change the search area. Output.
According to the present invention, it is possible to provide a face detection processing device that enables a face detection device to accurately detect the face of a detection target person.
The objects, features, aspects, and advantages of the present invention will become more apparent from the following detailed description and the accompanying drawings.
FIG. 1 is a block diagram showing the configuration of the face detection processing device according to Embodiment 1.
FIG. 2 is a diagram showing an example of the configuration of a processing circuit included in the face detection processing device.
FIG. 3 is a diagram showing another example of the configuration of the processing circuit included in the face detection processing device.
FIG. 4 is a flowchart showing the face detection processing method according to Embodiment 1.
FIG. 5 is a block diagram showing the configurations of the face detection processing device and the face detection processing system according to Embodiment 2.
FIG. 6 is a flowchart showing the face detection processing method according to Embodiment 2.
FIG. 7 is a diagram showing an example of an image captured by the image acquisition device according to Embodiment 2.
FIG. 8 is a diagram showing another example of an image captured by the image acquisition device according to Embodiment 2.
FIG. 9 is a flowchart showing the face detection processing method according to Embodiment 3.
FIG. 10 is a diagram showing an example of a simulated image according to Embodiment 3.
FIG. 11 is a diagram showing an example of an image captured by the image acquisition device according to Embodiment 3.
FIG. 12 is a block diagram showing the configuration of the face detection processing device according to Embodiment 4 and devices operating in connection with it.
<Embodiment 1>
FIG. 1 is a block diagram showing the configuration of the face detection processing device 100 according to Embodiment 1.
The face detection processing device 100 outputs, to a face detection device 110 that detects a person's face in an image, the images on which the face detection device 110 performs face detection. The face detection device 110 has a function of detecting a person's face within a search area set in at least a partial region of each sequentially input image. Furthermore, the face detection device 110 has a function of changing the search area in the next input image based on the success or failure of that face detection.
The face detection processing device 100 includes a validity determination unit 10 and an image insertion unit 20.
Based on a predetermined condition concerning the face detected by the face detection device 110, the validity determination unit 10 determines whether the detected face is the face of a predetermined detection target person. In other words, the validity determination unit 10 determines the validity of the detected face. The predetermined condition is, for example, a condition on the position of the face or of a facial feature part within the image, a condition on the size of the face within the image, or a condition on the features of the face. Alternatively, the predetermined condition may be, for example, a condition on the relationship between the traveling state of the vehicle in which the person rides and the orientation of the person's face.
The image insertion unit 20 outputs a dummy image to the face detection device 110 based on the determination result of the validity determination unit 10. The dummy image is an image that causes the face detection device 110 to fail to detect the previously detected face in the search area and thereby to change the search area. In other words, the dummy image is an image for resetting the search area.
When a dummy image is input to the face detection device 110, the face detection device 110 fails to detect the previously detected face in the dummy image. The face detection device 110 then changes the search area in the image input after the dummy image, and detects a person's face within the changed search area. Since the changed search area is likely to contain the face of the detection target person, the face detection device 110 can accurately detect the face of the detection target person.
FIG. 2 is a diagram showing an example of the configuration of a processing circuit 90 included in the face detection processing device 100. Each function of the validity determination unit 10 and the image insertion unit 20 is realized by the processing circuit 90; that is, the processing circuit 90 includes the validity determination unit 10 and the image insertion unit 20.
When the processing circuit 90 is dedicated hardware, the processing circuit 90 is, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these. The functions of the validity determination unit 10 and the image insertion unit 20 may be realized individually by a plurality of processing circuits or collectively by a single processing circuit.
FIG. 3 is a diagram showing another example of the configuration of the processing circuit included in the face detection processing device 100. The processing circuit includes a processor 91 and a memory 92. Each function of the validity determination unit 10 and the image insertion unit 20 is realized by the processor 91 executing a program stored in the memory 92; for example, each function is realized when the processor 91 executes software or firmware described as a program. The face detection processing device 100 thus includes the memory 92, which stores the program, and the processor 91, which executes it.
The program describes the functions by which the face detection processing device 100 determines, based on a predetermined condition concerning the face detected by the face detection device 110, whether the detected face is the face of the predetermined detection target person, and, based on the determination result, outputs to the face detection device 110 a dummy image that causes the face detection device 110 to fail to detect a face in the search area and thereby to change the search area. The program also causes a computer to execute the procedures or methods of the validity determination unit 10 and the image insertion unit 20.
The processor 91 is, for example, a CPU (Central Processing Unit), an arithmetic unit, a microprocessor, a microcomputer, a DSP (Digital Signal Processor), or the like. The memory 92 is, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory). Alternatively, the memory 92 may be any storage medium, including ones that come into use in the future, such as a magnetic disk, a flexible disk, an optical disc, a compact disc, a MiniDisc, or a DVD.
Some of the functions of the validity determination unit 10 and the image insertion unit 20 described above may be realized by dedicated hardware and others by software or firmware. The processing circuit thus realizes each of the above functions by hardware, software, firmware, or a combination thereof.
FIG. 4 is a flowchart showing the face detection processing method according to Embodiment 1.
In step S1, the validity determination unit 10 determines, based on a predetermined condition concerning the face detected by the face detection device 110, whether the detected face is the face of the predetermined detection target person. If the detected face is the face of the predetermined detection target person, the face detection processing method ends. If the detected face is not the face of the predetermined detection target person, step S2 is executed.
In step S2, the image insertion unit 20 outputs a dummy image to the face detection device 110.
When the dummy image is input to the face detection device 110, the face detection device 110 fails to detect the previously detected face in the dummy image. The face detection device 110 then changes the search area in the image input after the dummy image, and detects a person's face within the changed search area. Since the changed search area is likely to contain the face of the detection target person, the face detection device 110 can accurately detect the face of the detection target person.
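As a minimal illustration of steps S1 and S2, the following Python sketch shows one way the validity check and the dummy-image output could be wired together. The interface (a detector object with a detect method, the is_valid_face condition, and the frame size) is a hypothetical assumption for illustration, not part of the embodiment.

```python
import numpy as np

# Hypothetical sketch of steps S1 and S2. "detector" stands in for the
# face detection device 110; its detect() interface is assumed.

DUMMY_IMAGE = np.zeros((480, 640), dtype=np.uint8)  # e.g., an all-black frame

def is_valid_face(detection, expected_region):
    """Step S1: one possible 'predetermined condition' -- the detected
    face center must lie where the detection target's face should be."""
    x, y, w, h = detection
    rx, ry, rw, rh = expected_region
    cx, cy = x + w / 2, y + h / 2
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh

def handle_detection(detector, detection, expected_region):
    if detection is not None and not is_valid_face(detection, expected_region):
        # Step S2: output a dummy image so the detector fails and
        # resets its search area for the next real image.
        detector.detect(DUMMY_IMAGE)
```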
To summarize, the face detection processing device 100 according to Embodiment 1 outputs images to the face detection device 110. The face detection device 110 detects a person's face within a search area set in at least a partial region of each sequentially input image, and changes the search area in the next input image based on the success or failure of that detection. The face detection processing device 100 includes the validity determination unit 10 and the image insertion unit 20. The validity determination unit 10 determines, based on a predetermined condition concerning the face detected by the face detection device 110, whether the detected face is the face of the predetermined detection target person. Based on the determination result of the validity determination unit 10, the image insertion unit 20 outputs to the face detection device 110 a dummy image, that is, an image that causes the face detection device 110 to fail to detect a face in the search area and thereby to change the search area.
Such a face detection processing device 100 enables the face detection device 110 to accurately detect the face of the detection target person.
Likewise, the face detection processing method according to Embodiment 1 outputs images to the face detection device 110, which detects a person's face within a search area set in at least a partial region of each sequentially input image and changes the search area in the next input image based on the success or failure of that detection. The method determines, based on a predetermined condition concerning the face detected by the face detection device 110, whether the detected face is the face of the predetermined detection target person, and, based on the determination result, outputs to the face detection device 110 a dummy image that causes the face detection device 110 to fail to detect a face in the search area and thereby to change the search area.
Such a face detection processing method enables the face detection device 110 to accurately detect the face of the detection target person.
<Embodiment 2>
The face detection processing device and the face detection processing method according to Embodiment 2 will now be described. Embodiment 2 is a subordinate concept of Embodiment 1, and the face detection processing device according to Embodiment 2 includes each component of the face detection processing device 100 according to Embodiment 1. Descriptions of configurations and operations identical to those of Embodiment 1 are omitted.
FIG. 5 is a block diagram showing the configurations of the face detection processing device 101 and the face detection processing system 200 according to Embodiment 2.
The face detection processing system 200 includes an image acquisition device 120, the face detection processing device 101, and the face detection device 110. The face detection processing device 101 includes a storage unit 30, the image insertion unit 20, and the validity determination unit 10. The face detection device 110 includes a feature part detection unit 111.
The image acquisition device 120 sequentially acquires images including a person's face. The image acquisition device 120 is, for example, a camera that is mounted on a moving body and captures images of its occupants. The moving body is, for example, a vehicle, an airplane, a train, a bus, or a motorcycle. The occupants include the driver. When the moving body is a vehicle, the camera is installed at the front center of the cabin, such as on the center console, and captures both the driver and the front passenger. The camera includes a lens, an aperture, a shutter, and an image sensor.
The storage unit 30 stores the images sequentially acquired by the image acquisition device 120 as time-series data. The storage unit 30 also stores a dummy image for causing the face detection device 110 to fail to detect a face in the search area. The dummy image is, for example, a uniform image in which all pixels have the same luminance. The uniform image in Embodiment 2 is a black image in which all pixels have a luminance of 0.
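For concreteness, the black dummy image can be sketched as follows, assuming 8-bit grayscale frames; the 640x480 resolution is an illustrative assumption.

```python
import numpy as np

FRAME_SHAPE = (480, 640)  # assumed frame size (height, width)
black_dummy = np.zeros(FRAME_SHAPE, dtype=np.uint8)  # all pixels at luminance 0
```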
Based on the determination result of the validity determination unit 10, the image insertion unit 20 reads either the dummy image or an image acquired by the image acquisition device 120 from the storage unit 30 and outputs it to the face detection device 110.
The validity determination unit 10 determines, based on a predetermined condition concerning the face detected by the face detection device 110, whether the detected face is the face of the predetermined detection target person. In other words, the validity determination unit 10 determines the validity of the detected face.
The predetermined condition includes, for example, a condition on the position of the face or of a facial feature part within the image. In this case, the validity determination unit 10 determines whether the position of the detected face or of the detected facial feature part falls within a predetermined range. The predetermined range contains the position where the face or facial feature of the detection target person should appear.
Alternatively, the validity determination unit 10 may determine whether the variation between the face position detected in the immediately preceding input image and the face position detected in the next input image is within the statistically determined movement range of the detection target person. In this case, it is assumed that the face position detected in the preceding image is that of the detection target person. When the moving body is a vehicle and the detection target person is the driver, the movement range is determined statistically based on, for example, the seat layout in the vehicle and the driver's physique information.
The predetermined condition includes, for example, a condition on the size of the face. In this case, the validity determination unit 10 determines whether the size of the detected face falls within a predetermined size range. The predetermined size range is determined based on, for example, the average face size of the detection target person within the image.
Alternatively, as above, the validity determination unit 10 may determine whether the variation between the face size detected in the immediately preceding input image and the face size detected in the next input image is within the statistically determined movement range of the detection target person. In this case, it is assumed that the face size detected in the preceding image is that of the detection target person.
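A minimal sketch of these position and size variation conditions might look as follows; the thresholds are illustrative assumptions, not values taken from the embodiment.

```python
MAX_CENTER_SHIFT_PX = 40  # assumed per-frame movement limit of the target
MAX_SIZE_RATIO = 1.3      # assumed per-frame face-size change limit

def within_movement_range(prev_box, curr_box):
    """prev_box and curr_box are (x, y, w, h) rectangles; prev_box is
    assumed to belong to the detection target person."""
    px, py, pw, ph = prev_box
    cx, cy, cw, ch = curr_box
    shift = ((px + pw / 2 - cx - cw / 2) ** 2 +
             (py + ph / 2 - cy - ch / 2) ** 2) ** 0.5
    ratio = max(cw / pw, pw / cw)
    return shift <= MAX_CENTER_SHIFT_PX and ratio <= MAX_SIZE_RATIO
```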
The predetermined condition includes, for example, a condition on facial features. In this case, the validity determination unit 10 determines whether the facial features of the detected person match the pre-registered facial features of the detection target person.
The predetermined conditions shown above concern the relationship between the face detected by the face detection device 110 and the face of the predetermined detection target person; however, the validity determination does not necessarily require information on the face of the predetermined detection target person. One example follows.
The predetermined condition includes, for example, a condition on the relationship between the traveling state of the vehicle in which the detected person rides and the orientation of the detected person's face. For example, when the angle between the traveling direction of the vehicle and the orientation of the detected face falls within a predetermined angle range, the validity determination unit 10 determines that the detected face is the face of the detection target person. In doing so, the validity determination unit 10 acquires information on the vehicle's steering angle from the vehicle's control device or the like as information on the traveling direction. In this way, the validity determination unit 10 determines validity based on a predetermined condition concerning the vehicle's traveling state and the face detected by the face detection device 110, without using information on the face of the predetermined detection target person.
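As a sketch of this condition, assuming face orientation and vehicle heading are both given as yaw angles in degrees (an assumed convention) and an assumed angle threshold, the check might be:

```python
MAX_YAW_DEVIATION_DEG = 30.0  # assumed predetermined angle range

def face_matches_driving_state(vehicle_heading_deg, face_yaw_deg):
    # smallest absolute angular difference, wrapped to [0, 180]
    deviation = abs((face_yaw_deg - vehicle_heading_deg + 180.0) % 360.0 - 180.0)
    return deviation <= MAX_YAW_DEVIATION_DEG
```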
When the face detected by the face detection device 110 satisfies the predetermined condition above, the validity determination unit 10 determines that the face of the detection target person has been detected. In that case, the validity determination unit 10 outputs the face detection result of the face detection device 110 to the outside. When the face of a non-target person has been detected, the validity determination unit 10 rejects the face detection result of the face detection device 110 and outputs a detection failure to the outside.
The face detection processing device 101 according to Embodiment 2 has a counter related to the determination of whether the face of the detection target person has been detected. When the face of the detection target person is detected, or when the face detection device 110 fails to detect any face at all, the face detection processing device 101 resets the counter. On the other hand, when faces of non-target persons are detected consecutively in the time-series data, the face detection processing device 101 increments the counter.
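The counter rule reduces to a few lines; the sketch below assumes the two boolean inputs are produced by the detection and validity steps.

```python
def update_counter(counter, face_detected, is_target):
    # Reset on detecting the target person or on outright detection
    # failure; count up only while a non-target face keeps being detected.
    if not face_detected or is_target:
        return 0
    return counter + 1
```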
The feature part detection unit 111 detects a person's face within the search area set in at least a partial region of each image sequentially input as time-series data. To do so, the feature part detection unit 111 detects the face or facial feature parts by pattern matching. As the pattern matching process, the feature part detection unit 111, for example, trains each of a plurality of classifiers with model images of a face at a specific angle or of a specific facial feature part. The feature part detection unit 111 evaluates the degree of match between the learned model images and the image input from the image insertion unit 20, and detects the person's face or feature parts based on the evaluation result.
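The embodiment does not fix a particular classifier, so as a hedged stand-in the sketch below restricts OpenCV's pretrained Haar cascade, a classical pattern-matching face detector, to the search area.

```python
import cv2

# Stand-in for the feature part detection unit's pattern matching:
# OpenCV's pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray_image, search_area):
    x, y, w, h = search_area            # restrict matching to the search area
    roi = gray_image[y:y + h, x:x + w]
    faces = cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    # map detections back to full-image coordinates
    return [(x + fx, y + fy, fw, fh) for (fx, fy, fw, fh) in faces]
```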
The feature part detection unit 111 performs the pattern matching process in the search area even when the image input from the image insertion unit 20 is a dummy image. In that case, however, the feature part detection unit 111 fails to detect any face in the dummy image.
The feature part detection unit 111 also changes the search area in the next image of the time-series data based on the success or failure of face detection. For example, when face detection succeeds, the feature part detection unit 111 sets the search area in the next input image based on the amount of movement expected of a person within the time interval of the time-series data. For example, when the time-series data is captured at 10 fps, the interval between images is 100 ms, and the range over which a human face can move in that interval is determined statistically. The feature part detection unit 111 sets as the search area the region where the face can exist, estimated from this expected movement range and from the distance between the camera and the face position detected in the previous image; that distance can be estimated from the previous detection result. Alternatively, the feature part detection unit 111 may set a new search area by enlarging the previous search area by a fixed amount around the face position detected in the previous image.
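The variant that enlarges the previous detection by a fixed amount can be sketched as follows; the margin is an assumed stand-in for the statistically determined movement range.

```python
SEARCH_MARGIN_PX = 60  # assumed bound on face movement between frames

def next_search_area(face_box, frame_shape):
    """Enlarge the previously detected face box by a fixed margin,
    clipped to the frame, and use it as the next search area."""
    x, y, w, h = face_box
    height, width = frame_shape[:2]
    nx = max(0, x - SEARCH_MARGIN_PX)
    ny = max(0, y - SEARCH_MARGIN_PX)
    nw = min(width, x + w + SEARCH_MARGIN_PX) - nx
    nh = min(height, y + h + SEARCH_MARGIN_PX) - ny
    return (nx, ny, nw, nh)
```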
On the other hand, when face detection fails, the feature part detection unit 111 sets the entire next input image as the search area. As described above, when the image input from the image insertion unit 20 is a dummy image, the feature part detection unit 111 fails to detect any face; therefore, the search area in the image input after the dummy image is set to the entire image.
The face detection processing device 101 includes the processing circuit shown in FIG. 2 or FIG. 3, and the functions of the validity determination unit 10 and the image insertion unit 20 described above are realized by that processing circuit. The function of the storage unit 30 is realized by, for example, the memory shown in FIG. 3. Similarly, the face detection device 110 includes the processing circuit shown in FIG. 2 or FIG. 3, and the function of the feature part detection unit 111 is realized by that processing circuit.
Next, the operation of the face detection processing device 101 will be described. FIG. 6 is a flowchart showing the face detection processing method according to Embodiment 2. The flowchart in FIG. 6 shows the face detection operation that the face detection processing system 200 performs on one image in the time-series data. At its start, the images captured by the image acquisition device 120 are already stored in the storage unit 30.
In step S10, the image insertion unit 20 determines whether the counter value is equal to or greater than a predetermined value. If it is, step S20 is executed; if it is less than the predetermined value, step S30 is executed.
In step S20, the image insertion unit 20 reads the dummy image from the storage unit 30 and outputs it to the feature part detection unit 111 of the face detection device 110.
In step S30, the image insertion unit 20 reads from the storage unit 30 an image captured by the image acquisition device 120 and outputs it to the feature part detection unit 111 of the face detection device 110. In doing so, the image insertion unit 20 outputs the image that follows, in the time-series data, the image previously output to the face detection device 110.
In step S40, the feature part detection unit 111 determines whether face detection succeeded in the previously input image. If it succeeded, step S50 is executed; if not, step S60 is executed.
In step S50, the feature part detection unit 111 sets the search area in the image currently input from the image insertion unit 20 based on the face detection result in the previous image.
In step S60, the feature part detection unit 111 sets the entire image currently input from the image insertion unit 20 as the search area.
In step S70, the feature part detection unit 111 detects the face of a person in the set search area by pattern matching.
In step S80, the validity determination unit 10 determines whether the face detection device 110 succeeded in detecting a person's face. If it succeeded, step S90 is executed; if it failed, step S120 is executed.
In step S90, the validity determination unit 10 checks validity by comparing the face detected by the face detection device 110 against the predetermined condition. For example, the validity determination unit 10 compares the position of the detected face within the image against a predetermined range within the image.
In step S100, the validity determination unit 10 determines whether the face detected by the face detection device 110 satisfies the predetermined condition, that is, whether the detection result is valid. If the detected face satisfies the predetermined condition, meaning the detection result is valid, step S120 is executed. If it does not, meaning the detection result is not valid, step S110 is executed.
In step S110, the face detection processing device 101 increments the counter. This is the state in which the face detection device 110 has succeeded in detecting a person's face but has failed to detect the face of the detection target person.
In step S120, the face detection processing device 101 resets the counter. This is the state in which the face detection device 110 has succeeded in detecting the face of the detection target person, or has failed to detect any face in the first place.
This completes the face detection processing method for one image; as noted above, steps S10 to S120 process a single image in the time-series data. In practice, therefore, step S10 is executed again after step S110 or S120 as the processing of the next image.
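Tying the steps together, the following hedged sketch shows one possible per-frame loop corresponding to steps S10 through S120. It reuses the helper sketches given earlier (black_dummy, detect_faces, next_search_area, is_valid_face, update_counter); COUNTER_LIMIT and driver_region are illustrative assumptions, and the loop is a sketch of the flow rather than a definitive implementation.

```python
COUNTER_LIMIT = 3  # assumed threshold for inserting a dummy image

def run_pipeline(frames, frame_shape, driver_region):
    counter = 0
    prev_box = None
    for frame in frames:
        # S10/S20/S30: insert the dummy image once the counter is high enough
        image = black_dummy if counter >= COUNTER_LIMIT else frame
        # S40/S50/S60: narrow the search area after a success, else full frame
        if prev_box is not None:
            area = next_search_area(prev_box, frame_shape)
        else:
            area = (0, 0, frame_shape[1], frame_shape[0])
        faces = detect_faces(image, area)                                   # S70
        prev_box = faces[0] if faces else None                              # S80
        is_target = bool(faces) and is_valid_face(faces[0], driver_region)  # S90/S100
        counter = update_counter(counter, bool(faces), is_target)           # S110/S120
```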
Next, the detailed operation in which the face detection device 110 resets the search area with a dummy image will be described with reference to the flowchart of FIG. 6, starting from step S30. Here, the image acquisition device 120 is a camera that is mounted on a vehicle and captures the occupants of the cabin, and the predetermined detection target person is the driver among the occupants. FIG. 7 is an example of an image captured by the image acquisition device 120, showing the image output by the image insertion unit 20 to the feature part detection unit 111. The image shows four occupants 2, including the driver of the vehicle. In the following, it is assumed that the face detection device 110 succeeded in detecting a face in the image immediately preceding that of FIG. 7.
In step S40, the feature part detection unit 111 determines that face detection succeeded in the previous image. Step S50 is executed next.
In step S50, the feature part detection unit 111 sets the search area based on the face detection result in the previous image. Here, as shown in FIG. 7, the feature part detection unit 111 sets the search area 3 in the left region of the image, corresponding to the driver's seat of the vehicle.
In step S70, the feature part detection unit 111 detects the face of a person in the search area 3. As shown in FIG. 7, the search area 3 contains part of the face of the front-seat occupant 2A, who is the driver, and the whole face of the rear-seat occupant 2B. A state like that of FIG. 7 can arise, for example, when the driver temporarily takes a posture in which the face protrudes from the search area 3 for driving reasons. In this case, since the whole face of the rear-seat occupant 2B is contained in the search area 3, the feature part detection unit 111 preferentially detects the face of the rear-seat occupant 2B.
In step S80, the validity determination unit 10 determines that the feature part detection unit 111 succeeded in detecting a face. Step S90 is executed.
In step S90, the validity determination unit 10 checks validity by comparing the in-image coordinates of the face of the rear-seat occupant 2B detected by the feature part detection unit 111 against the coordinate range where the driver's face should exist.
In step S100, the validity determination unit 10 determines that the coordinates of the face of the rear-seat occupant 2B are not contained in the coordinate range where the driver's face should exist. That is, the validity determination unit 10 determines that detection of the driver, the detection target, has failed, and step S110 is executed.
In step S110, the face detection processing device 101 increments the counter. Step S10 is then executed.
In step S10, the image insertion unit 20 determines whether the counter value is equal to or greater than the predetermined value. Here, the case where the counter has reached the predetermined value is described.
In step S20, the image insertion unit 20 reads the dummy image from the storage unit 30 and outputs it to the feature part detection unit 111 of the face detection device 110. Here, the dummy image is a black image.
In step S40, the feature part detection unit 111 determines that it succeeded in detecting the face of the rear-seat occupant 2B in the image of FIG. 7, that is, in the previous image. Step S50 is executed next.
In step S50, the feature part detection unit 111 sets the search area 3 in the left region of the dummy image based on the face detection result in the image of FIG. 7.
In step S70, the feature part detection unit 111 attempts to detect the face of a person in the search area 3 of the dummy image. However, since the dummy image is a black image, the feature part detection unit 111 fails to detect any face in it.
In step S80, the validity determination unit 10 determines that the feature part detection unit 111 failed to detect a face. Step S120 is executed next.
In step S120, the face detection processing device 101 resets the counter. Step S10 is executed again.
In step S10, the image insertion unit 20 determines whether the counter value is equal to or greater than the predetermined value. Since the counter has been reset, its value is less than the predetermined value. Step S30 is executed next.
In step S30, the image insertion unit 20 reads, from the time-series data stored in the storage unit 30, the image that follows the image previously output to the face detection device 110, and outputs it to the face detection device 110. The image insertion unit 20 may instead skip as many frames of the stored time-series data as were replaced by dummy images and read the image after those. FIG. 8 is an example of an image captured by the image acquisition device 120, showing the image output by the image insertion unit 20 to the feature part detection unit 111 in this step S30. The image in FIG. 8 was captured by the image acquisition device 120 after the image in FIG. 7.
In step S40, the feature part detection unit 111 determines that face detection failed in the previous image, the dummy image. Step S60 is executed next.
In step S60, the feature part detection unit 111 sets the whole of the image shown in FIG. 8 as the search area 3.
In step S70, the feature part detection unit 111 detects the faces of the four occupants 2 shown in the search area 3.
In step S80, the validity determination unit 10 determines that the feature part detection unit 111 succeeded in detecting faces. Step S90 is executed next.
In step S90, the validity determination unit 10 checks validity by comparing the in-image coordinates of the faces of the four occupants 2 detected by the feature part detection unit 111 against the coordinate range where the driver's face should exist.
In step S100, the validity determination unit 10 determines that, among the four occupants 2, the coordinates of the face of the front-seat occupant 2A are contained in the coordinate range where the driver's face should exist. That is, the validity determination unit 10 determines that the face of the driver, the detection target, has been successfully detected, and step S120 is executed.
In step S120, the face detection processing device 101 resets the counter.
In this way, the face detection device 110 in principle detects the face of the detection target person within a limited search area 3, which reduces the amount of computation for face detection and improves the processing speed. When the face detection device 110 detects the face of a non-target person, the face detection processing device 101 rejects that face detection result and outputs a dummy image to the face detection device 110. Since the dummy image in Embodiment 2 is a black image, the face detection device 110 fails to detect a face in it and therefore changes the search area 3 in the subsequently input images. Because the face detection device 110 then performs its detection operation in the changed search area 3, it can detect the face of a detection target person that satisfies the predetermined condition. The face detection processing device 101 and the face detection processing system 200 of Embodiment 2 thus improve the detection accuracy for the face of the detection target person while also improving the face detection speed of the face detection device 110.
The predetermined condition in Embodiment 2 includes a condition on the position of the face or of a facial feature part within the image, a condition on the size of the face within the image, a condition on facial features, or a condition on the relationship between the traveling state of the vehicle in which the person rides and the orientation of the face. With this configuration, the face detection processing device 101 enables the face detection device 110 to accurately detect the face of the detection target person under a variety of conditions.
In Embodiment 2, the face detection processing device 101 performs the validity determination on the detection result of the face detection device 110, but that determination could also be built into the face detection device 110 itself. Such integration, however, would require modifying or redesigning the face detection device 110. In contrast, the face detection processing device 101 according to Embodiment 2 achieves the above effects merely by being connected to an existing image acquisition device 120 and an existing face detection device 110. That is, the face detection processing device 101 enables the face detection device 110 to accurately detect the face of the detection target person without changing the input/output interfaces of the existing image acquisition device 120 and face detection device 110.
The face detection processing device 101 and the face detection processing system 200 can also be incorporated into a driver monitoring system mounted on a moving body. For example, the driver monitoring system can output warnings to the driver about drowsiness or inattention based on the detection results output by the face detection processing device 101. Furthermore, when a driving assistance device that assists the driving of the vehicle or an automated driving control device that controls automated driving is linked with the driver monitoring system, the decision to transfer driving initiative from the vehicle to a human in a partially automated vehicle can be made accurately based on the detection results output by the face detection processing device 101. For example, even when the face of a rear-seat occupant appears in an image acquired by the driver monitoring system, the decision on transferring the driving initiative can be made based on the accurately detected state of the driver's face. The face detection processing device 101 and the face detection processing system 200 can also be incorporated into a personal authentication system that performs authentication based on a person's face; such a system can authenticate individuals accurately based on the detection results output by the face detection processing device 101.
(Modification 1 of Embodiment 2)
The storage unit 30 may store a processed image as the dummy image. The processed image is an image in which the face of the person whose validity was denied by the validity determination unit 10 is painted over. For example, the processed image is an image in which the face of the rear-seat occupant 2B shown in FIG. 7 is painted over.
In step S20 of Embodiment 2, the image insertion unit 20 outputs the processed image, instead of the black image, to the feature part detection unit 111 of the face detection device 110. As with the black image, the feature part detection unit 111 fails in step S70 to detect the face of the person in the search area 3. The rest of the processing is the same as in Embodiment 2. Such a face detection processing device 101 also achieves the same effects as Embodiment 2.
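A sketch of the processed image, assuming the rejected face region is painted over with black (the fill value is an assumption):

```python
def make_processed_image(frame, rejected_box):
    # Paint over the face region rejected by the validity determination
    # so the detector cannot find it again.
    x, y, w, h = rejected_box
    processed = frame.copy()
    processed[y:y + h, x:x + w] = 0
    return processed
```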
(Modification 2 of Embodiment 2)
The storage unit 30 may store a mask image as the dummy image. The mask image is an image to be superimposed on an image acquired by the image acquisition device 120, and it makes opaque the pixels corresponding to the face of the person whose validity was denied by the validity determination unit 10. For example, superimposing the mask image on the image shown in FIG. 7 produces a state in which the pixels corresponding to the face of the rear-seat occupant 2B do not show through, so that the face is not detected by the face detection device 110.
In step S20 of Embodiment 2, the image insertion unit 20 may output, instead of the black image, an image obtained by superimposing the mask image on the next image acquired by the image acquisition device 120. As with the black image, the feature part detection unit 111 fails in step S70 to detect the face of the person in the search area 3. The rest of the processing is the same as in Embodiment 2. Such a face detection processing device 101 also achieves the same effects as Embodiment 2.
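A sketch of the mask approach, assuming a binary mask that zeroes out the rejected face region when superimposed on the next acquired frame:

```python
import numpy as np

def make_mask(frame_shape, rejected_box):
    mask = np.ones(frame_shape, dtype=np.uint8)  # 1 = pass through
    x, y, w, h = rejected_box
    mask[y:y + h, x:x + w] = 0                   # 0 = opaque over the face
    return mask

def apply_mask(frame, mask):
    return frame * mask  # masked pixels become 0 and yield no face match
```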
<Embodiment 3>
The face detection processing device 101 and the face detection processing method according to Embodiment 3 will now be described. Embodiment 3 is a subordinate concept of Embodiment 1, and the face detection processing device 101 according to Embodiment 3 includes each component of the face detection processing device 100 according to Embodiment 1. Descriptions of configurations and operations identical to those of Embodiment 1 or 2 are omitted.
The configurations of the face detection processing device 101 and the face detection processing system 200 in Embodiment 3 are the same as in the block diagram shown in FIG. 5.
The storage unit 30 in Embodiment 3 stores a black image and a simulated image as dummy images. The simulated image is an image in which the face of a dummy person imitating the predetermined detection target person is drawn so as to satisfy the predetermined condition. For example, when the detection target person is the driver of a vehicle, the dummy person's face is drawn within the range where the driver's face should exist in images acquired by the image acquisition device 120.
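As a sketch, the simulated image could be composed by placing a prepared face template at the position where the driver's face should appear; face_template and driver_box are assumed inputs, not data from the embodiment.

```python
import numpy as np
import cv2

def make_simulated_image(frame_shape, face_template, driver_box):
    # Composite a grayscale face template into an otherwise empty frame
    # at the expected driver-face position, so that it satisfies the
    # predetermined position condition.
    x, y, w, h = driver_box
    simulated = np.zeros(frame_shape, dtype=np.uint8)
    simulated[y:y + h, x:x + w] = cv2.resize(face_template, (w, h))
    return simulated
```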
FIG. 9 is a flowchart showing the face detection processing method according to Embodiment 3. Steps S115 and S130 have been added to the flowchart of FIG. 6 of Embodiment 2. Based on the flowchart of FIG. 9, the detailed operation in which the face detection device 110 resets the search area 3 with the dummy images will be described. The processing up to step S70, in which the feature part detection unit 111 fails to detect the face of the person in the search area 3 of the black dummy image, is the same as in Embodiment 2, and its description is omitted.
In step S80, the validity determination unit 10 determines that the feature part detection unit 111 failed to detect a face. Step S115 is executed next.
In step S115, the validity determination unit 10 determines whether the image output to the feature part detection unit 111 was a dummy image. Here, since that image was a dummy image, step S130 is executed.
In step S130, the face detection processing device 101 holds the counter. Step S10 is executed again.
In step S10, the image insertion unit 20 determines whether the counter value is equal to or greater than the predetermined value. Here, since the counter has been held, its value still reaches the predetermined value or more, so step S20 is executed.
 In step S20, the image insertion unit 20 reads a dummy image from the storage unit 30 and outputs it to the feature site detection unit 111 of the face detection device 110. At this point, the image insertion unit 20 remembers that the black image was read as the dummy image the previous time. The image insertion unit 20 therefore reads the simulated image as the dummy image following the black image and outputs it to the feature site detection unit 111. FIG. 10 is a diagram showing an example of the simulated image in the third embodiment. The position of the face of the dummy passenger 2C in the simulated image corresponds to the position of the driver's face when the driver is seated in the driver's seat and facing forward.
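 The sequencing described here, with the black image emitted first and the simulated image on the following pass while the counter stays at or above the threshold, could be sketched as follows; the class and attribute names are illustrative, not taken from the patent:

```python
class ImageInserter:
    """Sketch of steps S10/S20/S30: emit dummy images in order while the
    failure counter is at or above the threshold, otherwise pass the
    camera frame through unchanged."""

    def __init__(self, dummies, threshold):
        self.dummies = dummies       # e.g. [black_image, simulated_image]
        self.threshold = threshold
        self.next_dummy = 0          # remembers which dummy was sent last time

    def select(self, counter, camera_frame):
        if counter >= self.threshold:                      # step S10
            img = self.dummies[self.next_dummy]            # step S20
            self.next_dummy = (self.next_dummy + 1) % len(self.dummies)
            return img
        self.next_dummy = 0                                # normal operation resumed
        return camera_frame                                # step S30
```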
 In step S40, the feature site detection unit 111 determines that face detection in the previous black image failed. Next, step S60 is executed.
 In step S60, the feature site detection unit 111 sets the entire image shown in FIG. 10 as the search area 3.
 In step S70, the feature site detection unit 111 detects the face of the dummy passenger 2C in the search area 3.
 In step S80, the validity determination unit 10 determines that the feature site detection unit 111 has succeeded in detecting a face. Next, step S90 is executed.
 In step S90, the validity determination unit 10 compares the coordinates, within the image, of the face of the dummy passenger 2C detected by the feature site detection unit 111 with the coordinate range in which the driver's face should exist, and confirms the validity.
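 A sketch of this validity check follows, assuming both the detected face and the expected driver-face range are expressed as (x, y, w, h) boxes in image coordinates and that containment of the face centre is the test; the patent itself only says the coordinates are compared:

```python
def face_is_valid(face_box, expected_range):
    """Step S90 sketch: accept the detected face when its centre lies
    inside the coordinate range where the driver's face should exist."""
    fx, fy, fw, fh = face_box
    ex, ey, ew, eh = expected_range
    cx, cy = fx + fw / 2.0, fy + fh / 2.0   # centre of the detected face
    return ex <= cx <= ex + ew and ey <= cy <= ey + eh
```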
 In step S100, the validity determination unit 10 determines that the coordinates of the face of the dummy passenger 2C fall within the coordinate range in which the driver's face should exist. That is, the validity determination unit 10 determines that the face of the driver, the person to be detected, has been successfully detected, and step S120 is executed.
 In step S120, the face detection processing device 101 resets the counter. Step S10 is then executed again.
 In step S10, the image insertion unit 20 determines whether the value of the counter is equal to or greater than the predetermined value. Because the counter has been reset, its value is below the predetermined value. Next, step S30 is executed.
 In step S30, the image insertion unit 20 reads, from the time-series data stored in the storage unit 30, the image following the image previously output to the face detection device 110, and outputs it to the face detection device 110. FIG. 11 is a diagram showing an example of an image captured by the image acquisition device 120, namely the image that the image insertion unit 20 outputs to the feature site detection unit 111 in this step S30.
 In step S40, the feature site detection unit 111 determines that face detection in the previous simulated image succeeded. Next, step S50 is executed.
 In step S50, the feature site detection unit 111 sets the search area 3 based on the face detection result for the previous simulated image. Here, the feature site detection unit 111 sets the search area 3 in the region of the image of FIG. 11 corresponding to the face of the dummy passenger 2C shown in FIG. 10.
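 One plausible way to derive the search area from the previous detection result, growing the previously detected face box by a margin and clipping it to the frame, is sketched below; the margin value is an assumption, since the patent does not specify how the area is sized:

```python
def search_area_from_result(prev_face, frame_shape, margin=0.5):
    """Step S50 sketch: the next search area is the previously detected
    face box grown by a margin, clipped to the image bounds."""
    x, y, w, h = prev_face
    dx, dy = int(w * margin), int(h * margin)
    rows, cols = frame_shape[:2]
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(cols, x + w + dx), min(rows, y + h + dy)
    return (x0, y0, x1 - x0, y1 - y0)
```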
 In step S70, the feature site detection unit 111 detects the face of a passenger 2 in the search area 3. As shown in FIG. 11, the search area 3 contains the entire face of the front-seat passenger 2A and part of the face of the rear-seat passenger 2B. In this case, because the entire face of the front-seat passenger 2A is included in the search area 3, the feature site detection unit 111 preferentially detects the face of the front-seat passenger 2A.
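 The preference for a face that lies entirely inside the search area could be implemented along these lines; the largest-face tie-break is an assumption added for completeness:

```python
def pick_face(candidates, search_area):
    """Step S70 sketch: prefer candidate face boxes lying entirely inside
    the search area over faces that are only partially visible."""
    sx, sy, sw, sh = search_area

    def fully_inside(box):
        x, y, w, h = box
        return x >= sx and y >= sy and x + w <= sx + sw and y + h <= sy + sh

    inside = [b for b in candidates if fully_inside(b)]
    pool = inside or candidates
    # Tie-break by face area (an assumption; the patent only states that
    # the fully visible front-seat passenger 2A is detected preferentially).
    return max(pool, key=lambda b: b[2] * b[3], default=None)
```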
 In step S80, the validity determination unit 10 determines that the feature site detection unit 111 has succeeded in detecting a face. Step S90 is executed.
 In step S90, the validity determination unit 10 compares the coordinates, within the image, of the face of the front-seat passenger 2A detected by the feature site detection unit 111 with the coordinate range in which the driver's face should exist, and confirms the validity.
 In step S100, the validity determination unit 10 determines that the coordinates of the face of the front-seat passenger 2A fall within the coordinate range in which the driver's face should exist. That is, the validity determination unit 10 determines that the face of the driver, the person to be detected, has been successfully detected, and step S120 is executed.
 In step S120, the face detection processing device 101 resets the counter.
 As described above, the dummy images in the third embodiment are a black image and a simulated image depicting the face of a dummy person. After failing to detect a face in the dummy image, the face detection device 110 changes the search area to an appropriate one by means of the simulated image. Because the face detection device 110 performs its face detection operation in the appropriately changed search area 3, it can detect the face of the person to be detected who satisfies the predetermined condition. The face detection device 110 and the face detection processing system 200 of the third embodiment improve the face detection speed of the face detection device 110 while also improving the accuracy of detecting the face of the person to be detected.
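 Tying the pieces together, the control flow of FIG. 9 might be sketched as a single loop; every name here is an illustrative stand-in for the patent's functional blocks, and the handling of an ordinary (non-dummy) detection failure is an assumption based on FIG. 6 of the second embodiment, which is not reproduced in this section:

```python
def processing_loop(inserter, detector, is_valid, frames):
    """End-to-end sketch of steps S10-S130: invalid detections raise the
    counter; once it reaches the threshold, the dummy sequence forces the
    detector to discard and re-seed its search area."""
    counter = 0
    for frame in frames:
        img = inserter.select(counter, frame)     # S10 / S20 / S30
        face = detector.detect(img)               # S40-S70
        if face is None:                          # S80: detection failed
            if img is not frame:
                pass                              # S115 / S130: dummy image, hold counter
            else:
                counter += 1                      # assumed: camera-frame failures also count
        elif is_valid(face):                      # S90 / S100: target person found
            counter = 0                           # S120: reset
        else:
            counter += 1                          # assumed incrementing step of FIG. 6
```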
 <Embodiment 4>
 The face detection processing device shown in each of the above embodiments can also be applied to a system constructed by appropriately combining a navigation device, a communication terminal, a server, and the functions of the applications installed on them. Here, the navigation device includes, for example, a PND (Portable Navigation Device). The communication terminal includes, for example, mobile terminals such as mobile phones, smartphones, and tablets.
 FIG. 12 is a block diagram showing the configuration of the face detection processing device 100 according to the fourth embodiment and the devices that operate in connection with it.
 The face detection processing device 100, the face detection device 110, and the communication device 130 are provided in the server 300. The face detection processing device 100 acquires an image of the interior of the vehicle 1 from the image acquisition device 120 provided in the vehicle 1 via the communication device 140 and the communication device 130. The face detection processing device 100 determines whether the face of the person detected by the face detection device 110 satisfies the predetermined condition. Based on the determination result, the face detection processing device 100 outputs either a dummy image or an image acquired from the image acquisition device 120 to the face detection device 110. The face detection processing device 100 also outputs the face detection result of the face detection device 110, via the communication devices, to the control device 150 provided in the vehicle 1. Based on the detection result, the control device 150 issues alerts and provides driving support for the vehicle 1.
 By arranging the face detection processing device 100 in the server 300 in this way, the configuration of the in-vehicle device can be simplified.
 Alternatively, the functions or components of the face detection processing device 100 may be distributed, with some provided in the server 300 and others provided in the vehicle 1.
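 A minimal sketch of the server-side round trip follows, assuming a JSON payload over the communication link and a pipeline callable that wraps the detection and validity check; the wire format and all names are assumptions, since the patent leaves the transport unspecified:

```python
import json

def handle_vehicle_request(payload, pipeline):
    """Embodiment 4 sketch: the server 300 receives a cabin image from
    the vehicle, runs the face detection processing, and returns the
    result that the control device 150 uses for alerts or driving support."""
    request = json.loads(payload)          # e.g. {"vehicle_id": ..., "frame": [[...]]}
    result = pipeline(request["frame"])    # validity-checked detection result
    return json.dumps({"vehicle_id": request["vehicle_id"], "face": result}).encode()
```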
 Within the scope of the invention, the embodiments may be freely combined, and each embodiment may be modified or omitted as appropriate.
 Although the present invention has been described in detail, the above description is illustrative in all aspects, and the present invention is not limited thereto. It is understood that innumerable variations not illustrated here can be envisaged without departing from the scope of the invention.
 2 passenger, 2A front-seat passenger, 2B rear-seat passenger, 3 search area, 10 validity determination unit, 20 image insertion unit, 30 storage unit, 100 face detection processing device, 101 face detection processing device, 110 face detection device, 111 feature site detection unit, 120 image acquisition device, 200 face detection processing system.

Claims (5)

  1.  A face detection processing device that outputs images to a face detection device, the face detection device detecting the face of a person appearing in a search area set in at least a partial region of sequentially input images and changing the search area in the next input image based on the success or failure of the detection of the face of the person, the face detection processing device comprising:
     a validity determination unit that determines, based on a predetermined condition relating to the face of the person detected by the face detection device, whether the face of the person detected by the face detection device is the face of a predetermined person to be detected; and
     an image insertion unit that outputs to the face detection device, as the image and based on the determination result of the validity determination unit, a dummy image for causing the face detection device to fail the detection of the face of the person in the search area and thereby change the search area.
  2.  The face detection processing device according to claim 1, wherein the dummy image is a uniform image in which all pixels have the same luminance.
  3.  The face detection processing device according to claim 1, wherein the dummy image is a simulated image depicting the face of a dummy person that satisfies the predetermined condition and simulates the predetermined person to be detected.
  4.  The face detection processing device according to claim 1, wherein the predetermined condition includes a condition relating to the position of the face in the image or the position of a feature site of the face in the image, a condition relating to the size of the face in the image, a condition relating to a feature of the face, or a condition relating to the relationship between the traveling state of the vehicle in which the person is riding and the orientation of the face.
  5.  A face detection processing method for outputting images to a face detection device, the face detection device detecting the face of a person appearing in a search area set in at least a partial region of sequentially input images and changing the search area in the next input image based on the success or failure of the detection of the face of the person, the method comprising:
     determining, based on a predetermined condition relating to the face of the person detected by the face detection device, whether the face of the person detected by the face detection device is the face of a predetermined person to be detected; and
     outputting to the face detection device, as the image and based on the determination result, a dummy image for causing the face detection device to fail the detection of the face of the person in the search area and thereby change the search area.
PCT/JP2019/027103 2019-07-09 2019-07-09 Face detection processing device and face detection processing method WO2021005702A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2019/027103 WO2021005702A1 (en) 2019-07-09 2019-07-09 Face detection processing device and face detection processing method
DE112019007033.9T DE112019007033T5 (en) 2019-07-09 2019-07-09 Face detection processing apparatus and face detection processing method
JP2021530389A JP7051014B2 (en) 2019-07-09 2019-07-09 Face detection processing device and face detection processing method

Publications (1)

Publication Number: WO2021005702A1

Family

ID=74114457

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009026146A (en) * 2007-07-20 2009-02-05 Canon Inc Image processing apparatus and image processing method
JP2011138388A (en) * 2009-12-28 2011-07-14 Canon Inc Data correction apparatus and method

Also Published As

Publication number Publication date
JP7051014B2 (en) 2022-04-08
JPWO2021005702A1 (en) 2021-10-14
DE112019007033T5 (en) 2021-12-09

Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document: 19936915; country: EP; kind code: A1)
ENP Entry into the national phase (ref document: 2021530389; country: JP; kind code: A)
122 EP: PCT application non-entry in European phase (ref document: 19936915; country: EP; kind code: A1)