WO2020213284A1 - Image processing device, image processing method, and program - Google Patents

Image processing device, image processing method, and program

Info

Publication number
WO2020213284A1
Authority
WO
WIPO (PCT)
Prior art keywords
image processing
image
processing target
success probability
target
Prior art date
Application number
PCT/JP2020/009561
Other languages
French (fr)
Japanese (ja)
Inventor
悠介 篠原 (Yusuke Shinohara)
Original Assignee
日本電気株式会社 (NEC Corporation)
Priority date
Filing date
Publication date
Application filed by 日本電気株式会社 (NEC Corporation)
Publication of WO2020213284A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • The present invention relates to an image processing apparatus, an image processing method, and a program.
  • Image data (video data) captured by a camera is collected, and image processing (information processing) is performed on the collected image data.
  • Specifically, a computer performs image processing on a person or an object included in the image data.
  • For example, a computer extracts a photographed face image of a person and uses the face image for person authentication processing.
  • Alternatively, a computer may use a face image to determine the age and gender of the person.
  • These processes (face recognition processing, age/gender determination processing) include a step of identifying the region of the image data in which a face appears, a step of cropping the identified face image, and a step of sequentially processing the cropped face images with predetermined content (face recognition, age/gender determination).
  • For example, Patent Document 1 discloses a face recognition process. In the face recognition process, feature amounts that characterize the face image, such as the shape and size of parts cropped from the face image (extracted parts; for example, the eyes, nose, mouth, and the entire face), are calculated.
  • The calculated face feature amount (a feature vector composed of a plurality of feature amounts) is then matched against the face feature amounts (feature vectors) registered in a database.
  • Such image processing (typified by face recognition and age/gender determination) usually requires a large amount of computation and places a heavy load on the computer.
  • If the quality of the image data to be processed (for example, the quality of an image captured by the camera) is low, the face recognition process or the like may fail.
  • For example, in the face recognition process, feature amounts are extracted from the face image. If the size and orientation of the captured face image are not adequate, feature amounts such as the shape and size of the relevant parts (eyes, nose, mouth, entire face) sometimes cannot be extracted.
  • Normally, image processing such as face recognition is performed by a server on the network (a server in a cloud environment) or by a server near the sensor (a server in an edge environment). Whether the face recognition processing is performed in the cloud environment or the edge environment is decided in consideration of cost and communication delay.
  • When face recognition processing is performed in an edge environment, a computer is placed near the sensor (for example, a camera), and the sensor sends data to an application on that nearby computer. Because an edge environment places computational resources near each sensor, those resources are expensive: if the system includes a plurality of sensors, each sensor requires its own computational resource (computer), and the cost rises accordingly.
  • Whether an application (for example, face recognition processing) is executed in the cloud environment or the edge environment is determined by the requirements imposed on the system. For example, when analyzing camera images in a small store, it is difficult to transmit the data to a cloud environment from the viewpoint of privacy and communication cost. In such a case, it is suitable to process the face images in an edge environment.
  • Patent Document 2 discloses adjusting image quality at the camera in order to reduce the network load. In that document, the same person is tracked, and when a face image detected more recently than previously detected face images is better suited to face image analysis, that image is detected as the best shot image.
  • In Patent Document 2, face image analysis is performed using only the best shot image so that image processing on a person's face image can be performed accurately with a single recognition pass. As indices for judging whether a face image is suitable for face recognition, Patent Document 2 uses whether the person is facing the front, whether the image is out of focus, and whether the person's eyes are open.
  • In Patent Document 2, however, high-load processing that relies on machine learning, such as face orientation determination, is executed to judge whether an image is suitable for authentication processing or the like (the process of determining the best shot image). Even with the technique of Patent Document 2, therefore, it remains difficult to process all the face images in an image in real time.
  • A main object of the present invention is to provide an image processing apparatus, an image processing method, and a program that contribute to executing image processing with a low load.
  • According to a first aspect of the present invention, there is provided an image processing apparatus comprising: a determination unit that determines whether or not to execute image processing on an image processing target, based on the success probability of executing the image processing on the image processing target for each imaging situation, an imaging situation being the situation at the time the image processing target is imaged; and an image processing unit that executes the image processing on the image processing target when it is determined that the image processing is to be executed.
  • According to a second aspect of the present invention, there is provided an image processing method performed in an image processing apparatus, the method comprising: determining whether or not to execute image processing on an image processing target, based on the success probability of executing the image processing on the image processing target for each imaging situation, an imaging situation being the situation at the time the image processing target is imaged; and executing the image processing on the image processing target when it is determined that the image processing is to be executed.
  • According to a third aspect of the present invention, there is provided a program that causes a computer mounted on an image processing apparatus to execute: a process of determining whether or not to execute image processing on an image processing target, based on the success probability of executing the image processing on the image processing target for each imaging situation, an imaging situation being the situation at the time the image processing target is imaged; and a process of executing the image processing on the image processing target when it is determined that the image processing is to be executed.
  • According to each aspect of the present invention, an image processing apparatus, an image processing method, and a program that contribute to executing image processing with a low load are provided.
  • The present invention may also produce other effects in place of, or in combination with, this effect.
  • FIG. 1 is a diagram for explaining an outline of one embodiment.
  • FIG. 2 is a diagram showing an example of a schematic configuration of an image processing system according to the first embodiment.
  • FIG. 3 is a diagram showing an example of the internal configuration of the image processing apparatus according to the first embodiment.
  • FIG. 4 is a diagram for explaining the operation of the acquisition unit according to the first embodiment.
  • FIG. 5 is a diagram for explaining division of still image data.
  • FIG. 6 is a diagram showing an example of information held by the storage unit according to the first embodiment.
  • FIG. 7 is a flowchart showing an example of the operation of the image processing apparatus according to the first embodiment.
  • FIG. 8 is a diagram showing an example of information held by the storage unit according to the second embodiment.
  • FIG. 9 is a diagram showing an example of information held by the storage unit according to the third embodiment.
  • FIG. 10 is a diagram showing an example of the hardware configuration of the image processing device.
  • FIG. 11 is a diagram showing an example of the internal configuration of the image processing apparatus according to the modified example.
  • The image processing device 100 according to one embodiment includes a determination unit 101 and an image processing unit 102 (see FIG. 1).
  • The determination unit 101 determines whether or not to execute image processing on an image processing target, based on the success probability of executing the image processing on that target for each imaging situation, i.e., the situation at the time the image processing target is imaged.
  • The image processing unit 102 executes the image processing on the image processing target when it is determined that the image processing is to be executed.
  • The image processing device 100 decides whether to execute processing on an image processing target (for example, a person) according to the success probability of the image processing under the situation (for example, the position of the person) at the time the target was imaged. For example, for an image of a person far away from the camera device 10, face recognition on that image is unlikely to succeed. The image processing device 100 therefore calculates the success probability of the image processing in advance for each situation in which images are acquired, and performs the image processing only when the processing is expected to complete normally. This avoids situations in which image processing is attempted but fails, and prevents waste of limited computational resources (particularly those placed in an edge environment). In other words, image processing can be realized with a low load, and a large number of targets can be processed with limited computational resources.
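  • The gating idea can be summarized in a few lines of code. The sketch below is purely illustrative: the threshold value and the names (maybe_process, success_probability, process) are assumptions, not anything prescribed by the patent.

```python
# Minimal sketch of the gating idea (all names hypothetical): heavy image
# processing runs only when the recorded success probability for the current
# imaging situation clears a threshold.
EXEC_THRESHOLD = 0.5  # assumed execution threshold

def maybe_process(face_image, situation, success_probability, process):
    """situation: key describing how the target was imaged (e.g. a grid cell).
    success_probability: dict mapping situation -> observed success rate.
    process: the heavy image-processing function (e.g. face recognition)."""
    if success_probability.get(situation, 0.0) >= EXEC_THRESHOLD:
        return process(face_image)  # expected to complete normally
    return None                     # skip: likely to fail, save resources
```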
  • FIG. 2 is a diagram showing an example of a schematic configuration of an image processing system according to the first embodiment.
  • As shown in FIG. 2, the image processing system includes a plurality of camera devices 10-1 to 10-n (n is a positive integer; the same applies hereinafter), an image processing device 20, and a result storage device 30.
  • In the following description, the term "camera device 10" is simply used when there is no particular need to distinguish the camera devices 10-1 to 10-n.
  • Each of the plurality of camera devices 10 is connected to the image processing device 20, and the image processing device 20 is connected to the result storage device 30.
  • The system configuration shown in FIG. 2 is an example and is not intended to limit the number of camera devices 10 and the like; the image processing system may include at least one camera device 10.
  • The image processing device 20 acquires video data from each camera device 10, performs image processing (data analysis) on the acquired video data, and stores the result in the result storage device 30.
  • The result storage device 30 stores the processing results of the image processing device 20.
  • The image processing device 20 decides whether or not to actually execute image processing on a person included in still image data SDj, based on the probability that the image processing will complete normally (hereinafter referred to as the success probability).
  • The success probability is calculated for each situation at the time an image processing target (a person appearing in an image, in the above example) is imaged (hereinafter referred to as an imaging situation), and the calculated results are accumulated inside the image processing device 20.
  • In the first embodiment, the imaging situation is the position of the person imaged by the camera device 10 (the coordinate position of the person in the image).
  • The image processing device 20 counts, for each imaging situation, the results (normal end or abnormal end) of image processing performed on the face image FPk.
  • The image processing device 20 calculates the success probability in each imaging situation based on these results and the number of times the image processing was attempted (hereinafter, the number of trials).
  • The image processing device 20 uses the accumulated success probabilities to determine whether or not to perform image processing. Specifically, the image processing device 20 applies threshold processing to the retrieved success probability, and does not execute the image processing (for example, face recognition processing) if the success probability is low.
  • As a result, image processing (for example, authentication processing) is performed only in situations where executing it on the image processing target (for example, the person appearing in the image) is meaningful, so the load on the computer that performs the face recognition processing and the like can be reduced.
  • FIG. 3 is a diagram showing an example of the internal configuration of the image processing device 20 according to the first embodiment.
  • The image processing device 20 includes an acquisition unit 201, a determination unit 202, a storage unit 203, an image cutout unit 204, and an image processing unit 205.
  • The acquisition unit 201 extracts still image data SDj from the video data DM acquired from the camera device 10 at a predetermined timing (predetermined sampling). For example, the acquisition unit 201 extracts (captures) the still image data SDj shown in FIG. 4.
  • The acquisition unit 201 then attempts to extract a person from the extracted still image data SDj. In the example of FIG. 4, the acquisition unit 201 extracts the person 301. If no person can be extracted from the still image data SDj, the acquisition unit 201 takes the next still image data SDj+1 of the video data DM as the processing target.
  • The acquisition unit 201 calculates the position of the extracted person in the still image data SDj. For example, the acquisition unit 201 takes the lower left corner of the still image data SDj as the origin and calculates the center of gravity of the extracted person and the center of the face as the position of the person. More specifically, the acquisition unit 201 converts the number of pixels from the origin to the center of gravity into XY coordinates and uses the result as the position of the person.
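  • As a concrete illustration of this coordinate convention (a hedged sketch; the box format x, y, w, h with y measured from the top, as in common detector APIs, is an assumption):

```python
# Derive a "person position" from a detection box, with the origin at the
# lower-left corner of the frame as described in the text.
def person_position(box, frame_height):
    x, y, w, h = box                   # top-left based pixel box (assumed)
    cx = x + w / 2.0                   # centroid X, in pixels from the origin
    cy = frame_height - (y + h / 2.0)  # flip Y so the origin is lower-left
    return (cx, cy)
```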
  • The acquisition unit 201 delivers the still image data SDj from which the person has been extracted and the calculated person position PPk to the determination unit 202. In the example of FIG. 4, the acquisition unit 201 provides the determination unit 202 with the still image data SDj and the person position PPk of the person 301 included therein.
  • Various methods can be used to extract a person included in the still image data SDj and to calculate the person position PPk.
  • For example, the acquisition unit 201 uses a learning model trained with a CNN (Convolutional Neural Network) to detect the target object (here, a person) from the still image data SDj. Alternatively, the acquisition unit 201 may extract a person by using a method such as template matching.
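  • As one concrete possibility (an illustrative assumption, not the patent's prescribed detector), OpenCV's stock HOG-based pedestrian detector can stand in for this step; a CNN detector would slot into the same place:

```python
import cv2

# HOG person detector shipped with OpenCV; purely an illustrative choice.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    """Return a list of (x, y, w, h) boxes for people found in the frame."""
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return list(boxes)
```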
  • The determination unit 202 determines whether or not to execute image processing on the image processing target, based on the success probability of executing the image processing on the target for each imaging situation. More specifically, upon acquiring the still image data SDj and the person position PPk, the determination unit 202 acquires the success probability at the person position PPk from the storage unit 203.
  • The storage unit 203 stores at least the success probability for each imaging situation. More specifically, the storage unit 203 stores, in association with one another, information about the imaging situation, the success probability, and the number of trials, i.e., the number of times the image processing has been attempted in that imaging situation.
  • The information regarding the imaging situation is information about the position, in the image, of the person to be processed. Specifically, it is information (for example, a coordinate range) identifying each of the small areas into which the still image data SDj is divided in a predetermined manner.
  • For example, the still image data SDj is divided into a plurality of small areas as shown in FIG. 5. Note that FIG. 5 is an example and does not mean that the division of the still image data SDj is limited to nine areas. The still image data SDj may be divided only in the row direction (horizontal direction) or only in the column direction (vertical direction), and the areas of the individual small regions may be equal or different.
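  • A mapping from a person position to its small area might look as follows (a hedged sketch; the uniform 3x3 grid is assumed purely for illustration, since the patent also allows unequal and one-dimensional divisions):

```python
# Map a lower-left-origin position to the grid cell ("small area") that
# serves as the imaging-situation key in the first embodiment.
def cell_of(position, frame_w, frame_h, rows=3, cols=3):
    x, y = position
    col = min(int(x / frame_w * cols), cols - 1)
    row = min(int(y / frame_h * rows), rows - 1)
    return (row, col)  # e.g. (0, 0) is the bottom-left cell
```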
  • FIG. 6 is a diagram showing an example of information held by the storage unit 203 according to the first embodiment.
  • As shown in FIG. 6, the success probability Pk and the number of trials Tk are stored for each small area (divided area) of the still image data SDj.
  • That is, the storage unit 203 stores, in association with one another, a small area of the still image data SDj, the success probability of image processing in that small area, and the number of times image processing has been attempted there (the number of trials).
  • The number of trials Tk is the number of times the image processing unit 205 has attempted image processing (for example, face recognition processing) in each small area of the still image data SDj.
  • The information (database) stored in the storage unit 203 is updated (added to) by the image processing unit 205.
  • The determination unit 202 accesses the storage unit 203 and acquires the success probability Pk and the number of trials Tk of the small area corresponding to the person position PPk. Specifically, the determination unit 202 identifies the small area containing the coordinates of the person position PPk and acquires the success probability Pk and the number of trials Tk from that small area's entry. For example, when the person position PPk of the person 301 in FIG. 4 falls within the area A shown in FIG. 5, the determination unit 202 acquires the success probability P01 and the number of trials T01 (see the first row of FIG. 6).
  • The determination unit 202 executes threshold processing on the acquired number of trials Tk and, according to the result, determines whether or not to execute image processing (for example, face recognition processing) on the image processing target (for example, the person appearing in the image). Specifically, when the number of trials Tk is smaller than a trial threshold value, the determination unit 202 determines that the processing is to be executed.
  • The information stored in the storage unit 203 is updated as the system operates. Therefore, at the start of system operation or the like, a sufficient number of image processing runs may not yet have been executed for the small area corresponding to a given person position PPk.
  • The index indicating whether a sufficient number of processing runs have been executed is the number of trials. If the number of trials is small, the corresponding success probability is judged to be unreliable, and it is judged that the reliability of the success probability needs to be raised.
  • In that case, the determination unit 202 determines that the subsequent processing (image cropping, image processing) is to be executed. That is, the image processing unit 205 needs to accumulate results of image processing at the person position PPk so that the success probability Pk of the corresponding small area becomes highly reliable information. Therefore, when the number of trials is smaller than the trial threshold value, the determination unit 202 decides on "processing execution".
  • When the number of trials Tk is equal to or greater than the trial threshold value, the determination unit 202 treats the success probability Pk stored in the storage unit 203 as highly reliable data. In this case, the determination unit 202 executes threshold processing on the acquired success probability Pk and decides, according to the result, whether or not to execute the subsequent processing (image cropping, image processing).
  • If the success probability Pk is equal to or greater than an execution threshold value, the determination unit 202 determines that the processing is to be executed. This is because a success probability at or above the threshold means the processing is highly likely to complete normally if image processing is executed at the person position PPk.
  • If the success probability Pk is smaller than the execution threshold value, the determination unit 202 decides on "processing not executed". A success probability below the threshold indicates that even if image processing were executed at the person position PPk, it would be unlikely to complete normally.
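  • This two-stage rule fits in a few lines (a hedged sketch; the threshold values are assumptions, as the patent does not fix them):

```python
TRIAL_THRESHOLD = 30  # assumed minimum sample size per imaging situation
EXEC_THRESHOLD = 0.5  # assumed execution threshold

def should_execute(trials, success_prob):
    # Too few trials: execute anyway so the statistics become reliable.
    if trials < TRIAL_THRESHOLD:
        return True
    # Enough trials: execute only if success is likely.
    return success_prob >= EXEC_THRESHOLD
```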
  • When the determination unit 202 decides on "processing not executed", it does not execute any special processing; it simply notifies the acquisition unit 201 to that effect. Upon receiving the notification, the acquisition unit 201 shifts the processing target to the next data (the next still image data, the next person).
  • When the determination unit 202 decides on "processing execution", it notifies the image cutout unit 204 of an image cutout request together with the still image data SDj and the person position PPk.
  • Upon acquiring the image cutout request and the accompanying data, the image cutout unit 204 cuts out the face image (face image area) of the person present at the person position PPk in the still image data SDj (extracts the face image). The image cutout unit 204 delivers the cut-out face image FPk and the corresponding person position PPk to the image processing unit 205.
  • As the method by which the image cutout unit 204 identifies the position of the face at the person position PPk, a method of extracting a face image using a CNN can be used, for example, as in the acquisition unit 201.
  • The image processing unit 205 executes the image processing on the image processing target when it has been determined that the image processing is to be executed. Specifically, the image processing unit 205 performs a predetermined process (for example, face recognition processing) using the cut-out face image FPk and the person position PPk. Existing techniques can be applied to the calculation of the feature amounts (feature vector) required for the face recognition process and of the similarity (distance between feature vectors) required for the matching process, so a detailed description is omitted.
  • The image processing unit 205 updates (adds to) the information stored in the storage unit 203 according to the processing result (normal end or abnormal end). Specifically, the image processing unit 205 updates the success probability Pk and number of trials Tk fields of the entry corresponding to the processed person position PPk (the small area of the still image data SDj).
  • The image processing unit 205 determines that the authentication process has ended abnormally when, for example, a predetermined number of feature points (for example, feature points such as the eyes and nose) cannot be extracted during calculation of the feature vector, or when many unreliable feature amounts are calculated.
  • The image processing unit 205 calculates the success probability Pk and the number of trials Tk of the corresponding small area according to the following equations (1) and (2), and updates the information held by the storage unit 203.
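  • The equations themselves did not survive extraction. A natural reconstruction, consistent with the surrounding definitions of Pk, Tk, and Res (an incremental average over trial outcomes), though not necessarily the patent's verbatim formulas, is:

    Pk ← (Pk × Tk + Res) / (Tk + 1)    (1)
    Tk ← Tk + 1    (2)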
  • Res in equation (1) denotes the processing result: "1" is assigned if the processing ends normally, and "0" if it ends abnormally (error end).
  • In this way, the success probability of image processing for each small area of the still image data SDj is accumulated in the storage unit 203.
  • First, the image processing device 20 extracts the person position PPk from the still image data SDj (step S101).
  • Next, the image processing device 20 reads out the success probability Pk and the number of trials Tk corresponding to the person position PPk from the storage unit 203 (step S102).
  • The image processing device 20 determines whether or not the acquired number of trials Tk is equal to or greater than the trial threshold value (step S103).
  • If the number of trials Tk is smaller than the trial threshold value (step S103, No branch), the image processing device 20 executes the processes from step S106 onward.
  • If the number of trials Tk is equal to or greater than the trial threshold value (step S103, Yes branch), the image processing device 20 determines whether or not the acquired success probability Pk is equal to or greater than the execution threshold value (step S104).
  • If the success probability Pk is equal to or greater than the execution threshold value (step S104, Yes branch), the image processing device 20 executes the processes from step S106 onward.
  • If the success probability Pk is smaller than the execution threshold value (step S104, No branch), the image processing device 20 sets image processing such as face recognition processing to "not executed" (step S105).
  • In step S106, the image processing device 20 sets image processing such as face recognition processing to "executed".
  • The image processing device 20 then executes the predetermined image processing (for example, face recognition processing or age/gender determination processing) (step S107).
  • Finally, the image processing device 20 reflects the processing result of step S107 in the database constructed in the storage unit 203 (step S108).
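  • Put together, the flow of steps S101 to S108 can be sketched as follows (a hedged illustration reusing cell_of and the thresholds from the earlier sketches; detect_person_position, crop_face, and recognize_face are hypothetical helpers, not the patent's API):

```python
stats = {}  # cell -> {"p": success probability Pk, "t": number of trials Tk}

def process_frame(frame, frame_w, frame_h):
    pos = detect_person_position(frame)        # S101
    cell = cell_of(pos, frame_w, frame_h)      # imaging situation
    entry = stats.setdefault(cell, {"p": 0.0, "t": 0})
    p, t = entry["p"], entry["t"]              # S102
    if t >= TRIAL_THRESHOLD and p < EXEC_THRESHOLD:
        return None                            # S103-S105: not executed
    face = crop_face(frame, pos)               # S106: set to "executed"
    res = 1 if recognize_face(face) else 0     # S107 (1: normal end)
    entry["p"] = (p * t + res) / (t + 1)       # S108: reconstructed eq. (1)
    entry["t"] = t + 1                         #        reconstructed eq. (2)
    return res
```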
  • In general, the success probability (accuracy) of image processing is known to depend on the size of the face in the still image data SDj, the orientation of the face, and the amount of light falling on the face. The factors (parameters) that affect the success probability often depend on the position in the image. For example, in a store the display layout is fixed, so many people look in a specific direction (for example, toward the products) when passing a specific place, and the face orientation shows a certain tendency.
  • That is, the success or failure of the image processing is biased according to the position of the person. When processing at a certain position is unlikely to succeed, processing for a person at that position is not performed, thereby preventing waste of resources.
  • As described above, the image processing device 20 according to the first embodiment uses the determination unit 202 to decide whether to execute image processing according to the success probability Pk for each imaging situation (person position PPk in the still image data SDj) stored in the storage unit 203. Specifically, the probability that image processing of a target will succeed is estimated by lightweight threshold processing, and whether to actually execute the image processing is decided accordingly. When sufficient data on the success or failure of the image processing has not yet been accumulated, the image processing is actually executed and its success or failure is reflected in the success probability Pk. As a result, the number of image processing trials per person can be reduced while the success probability of image processing is maintained, so the number of people the entire system can handle increases.
  • In the first embodiment, the person position PPk in the still image data SDj was used as the "imaging situation". In the second embodiment, a case will be described in which the time at which the image processing target is photographed, i.e., the time at which the still image data SDj is acquired (the current time), is used as the imaging situation.
  • FIG. 8 is a diagram showing an example of information held by the storage unit 203 according to the second embodiment. As shown in FIG. 8, the storage unit 203 stores the success probability Pk and the number of trials Tk for each time zone.
  • The acquisition unit 201 delivers the current time CT at which the still image data SDj was acquired to the determination unit 202, together with the still image data SDj and the person position PPk.
  • The determination unit 202 acquires the success probability Pk and the number of trials Tk corresponding to the current time CT from the storage unit 203, and processes them in the same manner as described in the first embodiment.
  • That is, if the number of trials Tk is smaller than the trial threshold value, the determination unit 202 decides on "processing execution". If the number of trials Tk is equal to or greater than the trial threshold value and the acquired success probability Pk is equal to or greater than the execution threshold value, the determination unit 202 likewise decides that the processing is to be executed.
  • If the number of trials Tk is equal to or greater than the trial threshold value and the success probability Pk is smaller than the execution threshold value, the determination unit 202 decides that the processing is not executed.
  • The operation of the image processing device 20 after execution or non-execution of the processing is decided can be the same as in the first embodiment, so a detailed description is omitted.
  • When face recognition processing or the like is performed, the image processing device 20 updates the corresponding entry of the storage unit 203 (the success probability Pk and the number of trials Tk for each time zone).
  • As described above, the success rate of image processing can depend on the amount of light falling on the face. Since the amount of light changes with the time of day, a face may be easily exposed to light only in specific time zones.
  • In the second embodiment, the success probability of image processing, which changes with the time zone, is therefore taken into account, and image processing is not performed in situations where it is likely to fail, such as at night. This prevents waste of resources.
  • The information regarding the imaging situation according to the third embodiment includes both the position of the person in the image in which the person to be processed is captured and the time at which the person was photographed.
  • FIG. 9 is a diagram showing an example of information held by the storage unit 203 according to the third embodiment. As shown in FIG. 9, the storage unit 203 stores the success probability Pk and the number of trials Tk in association with each combination of a small area of the still image data SDj and a time zone.
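  • In code, the only change from the first embodiment is the shape of the lookup key (a hedged sketch; bucketing by hour of day is an assumption, since the patent does not define the width of a time zone):

```python
from datetime import datetime

def situation_key(position, frame_w, frame_h, now=None):
    """Combine the grid cell and a time-of-day bucket into one key."""
    now = now or datetime.now()
    cell = cell_of(position, frame_w, frame_h)  # from the earlier sketch
    return (cell, now.hour)                     # assumed 1-hour time zones
```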
  • The acquisition unit 201 delivers the still image data SDj, the current time CT at which it was acquired, and the person position PPk to the determination unit 202.
  • The determination unit 202 acquires the success probability Pk and the number of trials Tk corresponding to the person position PPk and the current time CT from the storage unit 203, and processes them in the same manner as described in the first embodiment.
  • That is, if the number of trials Tk is smaller than the trial threshold value, the determination unit 202 decides on "processing execution". If the number of trials Tk is equal to or greater than the trial threshold value and the acquired success probability Pk is equal to or greater than the execution threshold value, the determination unit 202 likewise decides that the processing is to be executed.
  • If the number of trials Tk is equal to or greater than the trial threshold value and the success probability Pk is smaller than the execution threshold value, the determination unit 202 decides that the processing is not executed.
  • When face recognition processing or the like is performed, the image processing device 20 updates the corresponding entry of the storage unit 203 (small area, success probability Pk and number of trials Tk for each time zone).
  • As described above, the image processing device 20 according to the third embodiment decides whether or not to perform image processing according to the success probability for each combination of person position and time. Compared with the first and second embodiments, the third embodiment can therefore define the situation in which a person is photographed in more detail and set the success probability accordingly, enabling a more accurate decision on whether to perform image processing.
  • FIG. 10 is a diagram showing an example of the hardware configuration of the image processing device 20.
  • The image processing device 20 can be configured as an information processing device (a so-called computer) and has, for example, the configuration illustrated in FIG. 10.
  • For example, the image processing device 20 includes a processor 311, a memory 312, an input/output interface 313, a communication interface 314, and the like. The components such as the processor 311 are connected by an internal bus or the like so that they can communicate with each other.
  • The configuration shown in FIG. 10 is not meant to limit the hardware configuration of the image processing device 20. The image processing device 20 may include hardware not shown, or may omit the input/output interface 313 if it is not needed. The number of processors 311 and the like included in the image processing device 20 is also not limited to the example of FIG. 10; for example, a plurality of processors 311 may be included.
  • The processor 311 is a programmable device such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or a DSP (Digital Signal Processor). Alternatively, the processor 311 may be a device such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). The processor 311 executes various programs, including an operating system (OS).
  • The memory 312 is a RAM (Random Access Memory), a ROM (Read Only Memory), an HDD (Hard Disk Drive), an SSD (Solid State Drive), or the like. The memory 312 stores the OS program, application programs, and various data.
  • The input/output interface 313 is an interface to a display device and an input device (not shown). The display device is, for example, a liquid crystal display. The input device is, for example, a device that accepts user operations, such as a keyboard and a mouse.
  • The communication interface 314 is a circuit, module, or the like that communicates with other devices; for example, it includes a NIC (Network Interface Card).
  • The functions of the image processing device 20 are realized by various processing modules. Each processing module is realized, for example, by the processor 311 executing a program stored in the memory 312.
  • The program can be recorded on a computer-readable storage medium. The storage medium may be non-transitory, such as a semiconductor memory, a hard disk, a magnetic recording medium, or an optical recording medium; that is, the present invention can also be embodied as a computer program product. The program can be downloaded via a network or updated using a storage medium in which it is stored.
  • The processing modules may also be realized by semiconductor chips.
  • The configuration, operation, and the like of the image processing system described in the above embodiments are examples and are not intended to limit the configuration of the system.
  • For example, the processing result of the image processing device 20 need not be stored in the result storage device 30; it may instead be transmitted directly to a device that uses the result of the image processing. For example, the processing result may be transmitted to a device that controls the opening and closing of a gate according to the result of the face recognition.
  • The image processing system according to the above embodiments may be realized in a cloud environment or an edge environment. When realized in a cloud environment, the image processing device 20 and the result storage device 30 operate as servers on the network. When realized in an edge environment, the image processing device 20 operates as a server on the edge side, and the result storage device 30 operates as a server on the cloud side.
  • In the above embodiments, the image processing device 20 captures still image data from the moving image data acquired from the camera device 10, but the camera device 10 may instead transmit still image data to the image processing device 20 periodically (for example, at 1-second intervals).
  • In the above embodiments, the camera device 10 is assumed to be a fixed camera device such as a surveillance camera, but the image (moving image) data input to the image processing device 20 may be data acquired from a mobile camera device. That is, the camera device 10 includes surveillance cameras, digital cameras, mobile phones, smartphones, and the like; the "camera device" disclosed in the present application can be any electronic device having a photographing function.
  • In the above embodiments, a person is the target of image processing, but the target can be arbitrary: an animal may be the target, or a device (object) such as a robot may be the target.
  • In the above embodiments, authentication processing using a face image was described as the predetermined image processing, but the processing performed by the image processing unit 205 is not limited to authentication; for example, the image processing unit 205 may execute age/gender determination processing.
  • Likewise, the portion processed by the image processing unit 205 is not limited to the "face": the image processing unit 205 may process a portion such as a "hand" or a "foot", in which case the image cutout unit 204 cuts out the portion required by the image processing unit 205.
  • In the above embodiments, each of the plurality of camera devices 10 included in the image processing system executes the same image processing (for example, face recognition processing), but the processing may be changed depending on which camera device 10 supplies the moving image data: for example, face recognition processing may be performed on the moving image data (still image data) transmitted by the camera device 10-1, and age/gender determination processing on the moving image data transmitted by the camera device 10-2.
  • In that case, each camera device 10 transmits, together with the moving image data, an identifier that identifies itself to the image processing device 20. Alternatively, the image processing device 20 may change the content of the image processing according to an attribute of the camera device 10 (for example, its installation position).
  • In the above embodiments, the position of the person appearing in the image and the time of image acquisition (current time) were used as the "imaging situation", but other information may serve as the imaging situation. For example, information such as the "weather" or the "brightness" at the time of image acquisition may be used; in that case, the storage unit 203 stores a success probability Pk for each weather condition (sunny, rainy, cloudy) and for each brightness level. The weather at the time of image acquisition may be acquired from an external server, or it may be estimated using a brightness sensor or the like.
  • In the above embodiments, results of image processing for each imaging situation are accumulated from the start of system operation to raise the reliability of the success probability, but the success probability and the number of trials for each imaging situation may instead be obtained before the system goes into operation and stored in the storage unit 203 in advance. Storing them in advance makes it possible to base execution decisions on highly reliable data (success probabilities) from the very start of operation; that is, by setting the initial values in the storage unit 203 to values measured beforehand, the success probability converges quickly to an appropriate value.
  • For example, the image processing device 20 may be provided with a test mode in which a person stands at various places at various times, image processing is performed, and the results are used as the initial values stored in the storage unit 203.
  • In the above embodiments, the image processing unit 205 calculates the success probability and the number of trials and updates the contents of the storage unit 203. Alternatively, the image processing unit 205 may store only the results of the image processing in the storage unit 203, and the determination unit 202 may calculate the success probability from the stored results.
  • As a modified example, the image processing device 20 may further include a prediction unit 206 (see FIG. 11).
  • The prediction unit 206 predicts the success probability for the imaging situation being processed, based on the success probabilities of imaging situations different from it. For example, consider the case in FIG. 5 where the success probabilities of the small areas A and C are stored in the storage unit 203 as highly reliable data (the number of trials in each area is at or above the trial threshold value), but no success probability is stored for the small area B (its success probability is 0).
  • In this case, the prediction unit 206 calculates the average of the success probabilities of the small areas A and C adjacent to the small area B, and provides it to the determination unit 202 as the success probability of the small area B.
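  • The prediction unit's idea can be sketched as follows (hedged; averaging only the horizontal neighbours follows the A/B/C example above, and the table layout matches the earlier sketches):

```python
def predict_success_prob(cell, stats, trial_threshold=30):
    """Estimate a cell's unknown success probability from reliable neighbours."""
    row, col = cell
    neighbours = [(row, col - 1), (row, col + 1)]
    known = [stats[c]["p"] for c in neighbours
             if c in stats and stats[c]["t"] >= trial_threshold]
    return sum(known) / len(known) if known else 0.0
```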
  • By installing an image processing program in the storage unit of a computer, the computer can be made to function as an image processing device. Further, by causing the computer to execute the image processing program, the image processing method can be carried out by the computer.
  • [Appendix 6] The image processing apparatus according to Appendix 4 or 5, wherein the image processing unit (102, 205) updates the information stored in the storage unit (203) according to the result of attempting the image processing on the image processing target.
  • [Appendix 7] The image processing apparatus (20, 100) according to any one of Appendixes 4 to 6, wherein the information regarding the imaging situation includes the position of the image processing target in the image in which the image processing target is captured.
  • [Appendix 8] The image processing apparatus (20, 100) according to any one of Appendixes 4 to 7, wherein the information regarding the imaging situation includes the time when the image processing target was photographed.
  • [Appendix 9] The image processing apparatus (20, 100) according to any one of Appendixes 1 to 8, wherein the image processing target is a human face.
  • [Appendix 10] The image processing apparatus (20, 100) according to Appendix 9, further comprising an image cutout unit (204) that cuts out the face region to be processed from the image.
  • [Appendix 11] An image processing method performed in an image processing apparatus (20, 100), comprising: determining whether or not to execute image processing on an image processing target, based on the success probability of executing the image processing on the image processing target for each imaging situation, an imaging situation being the situation at the time the image processing target is imaged; and executing the image processing on the image processing target when it is determined that the image processing is to be executed.
  • [Appendix 12] A program that causes a computer (311) mounted on an image processing apparatus (20, 100) to execute: a process of determining whether or not to execute image processing on an image processing target, based on the success probability of executing the image processing on the image processing target for each imaging situation, an imaging situation being the situation at the time the image processing target is imaged; and a process of executing the image processing on the image processing target when it is determined that the image processing is to be executed.
  • Note that the forms of Appendixes 11 and 12 can be expanded into the forms of Appendixes 2 to 10 in the same manner as the form of Appendix 1.
  • The present invention contributes to executing image processing such as face recognition processing with a low load in environments where computational resources are limited.

Abstract

[Problem] To provide an image processing device that executes image processing with a low load. [Solution] This image processing device comprises a determination unit and an image processing unit. The determination unit determines whether to execute image processing on an image processing target on the basis of the success probability of executing the image processing on that target for each image capture situation, i.e., the situation under which the target is captured. The image processing unit executes the image processing on the target when it is determined that the image processing is to be performed.

Description

Image processing device, image processing method, and program
The present invention relates to an image processing apparatus, an image processing method, and a program.
Image data (video data) captured by a camera is collected, and image processing (information processing) is performed on the collected image data. Specifically, a computer performs image processing on a person or an object included in the image data. For example, a computer extracts a photographed face image of a person and uses the face image for person authentication processing. Alternatively, a computer may use a face image to determine the age and gender of the person.
These processes (face recognition processing, age/gender determination processing) include a step of identifying the region of the image data in which a face appears, a step of cropping the identified face image, and a step of sequentially processing the cropped face images with predetermined content (face recognition, age/gender determination).
For example, Patent Document 1 discloses a face recognition process. In the face recognition process, feature amounts that characterize the face image, such as the shape and size of parts cropped from the face image (extracted parts; for example, the eyes, nose, mouth, and the entire face), are calculated. The calculated face feature amount (a feature vector composed of a plurality of feature amounts) is then matched against the face feature amounts (feature vectors) registered in a database. Usually, such image processing (typified by face recognition and age/gender determination) requires a large amount of computation and places a heavy load on the computer.
If the quality of the image data to be processed (for example, the quality of an image captured by the camera) is low, the face recognition process or the like may fail. For example, in the face recognition process, feature amounts are extracted from the face image. If the size and orientation of the captured face image are not adequate, feature amounts such as the shape and size of the relevant parts (eyes, nose, mouth, entire face) sometimes cannot be extracted.
Thus, when feature extraction is executed on an image in which the size, orientation, and the like of the face are not appropriate, the feature amounts may not be extracted at all, or inaccurate feature amounts may be extracted. To avoid such problems, recognition processing is performed multiple times on the same person, and the many recognition results obtained are used together.
Normally, image processing such as face recognition is performed by a server on the network (a server in a cloud environment) or by a server near the sensor (a server in an edge environment). Whether the face recognition processing is performed in the cloud environment or the edge environment is decided in consideration of cost and communication delay.
When face recognition processing is performed in an edge environment, a computer is placed near the sensor (for example, a camera), and the sensor sends data to an application on that nearby computer. Because an edge environment places computational resources near each sensor, those resources are expensive: if the system includes a plurality of sensors, each sensor requires its own computational resource (computer), and the cost rises accordingly.
When face recognition processing or the like is performed in an edge environment, it is difficult to deploy a large amount of computational resources because of constraints on where computers can be placed. The available computational resources are therefore limited, and it is also difficult to scale them up or down according to the application load.
On the other hand, if face recognition processing or the like is performed in the edge environment, the processing is completed within the computer placed near the sensor. As a result, the communication between computers (servers) required in a cloud environment becomes unnecessary, and the communication cost decreases: communication occurs only when the computed results are aggregated.
Whether an application (for example, face recognition processing) is executed in the cloud environment or the edge environment is determined by the requirements imposed on the system. For example, when analyzing camera images in a small store, it is difficult to transmit the data to a cloud environment from the viewpoint of privacy and communication cost. In such a case, it is suitable to process the face images in an edge environment.
However, since it is difficult to deploy a large amount of computational resources (computers) inside a small store, a small-scale computer must execute applications such as the face recognition described above. As noted above, the load of face recognition processing and age/gender determination processing is heavy. It is therefore difficult to analyze all the face images contained in the images captured by the camera in real time, and only some of the face images may be processable.
Patent Document 2 discloses adjusting image quality at the camera in order to reduce the network load. In that document, the same person is tracked, and when a face image detected more recently than previously detected face images is better suited to face image analysis, that image is detected as the best shot image.
Patent Document 1: Japanese Unexamined Patent Publication No. H11-161790 (特開平11-161790号公報)
Patent Document 2: Japanese Unexamined Patent Publication No. 2017-163228 (特開2017-163228号公報)
In Patent Document 2, face image analysis is performed using only the best shot image so that image processing on a person's face image can be performed accurately with a single recognition pass. As indices for judging whether a face image is suitable for face recognition, Patent Document 2 uses whether the person is facing the front, whether the image is out of focus, and whether the person's eyes are open.
In Patent Document 2, high-load processing that relies on machine learning, such as face orientation determination, is executed to judge whether an image is suitable for authentication processing or the like (the process of determining the best shot image). Even with the technique of Patent Document 2, therefore, it remains difficult to process all the face images in an image in real time.
A main object of the present invention is to provide an image processing apparatus, an image processing method, and a program that contribute to executing image processing with a low load.
According to a first aspect of the present invention, there is provided an image processing apparatus comprising: a determination unit that determines whether or not to execute image processing on an image processing target, based on the success probability of executing the image processing on the image processing target for each imaging situation, an imaging situation being the situation at the time the image processing target is imaged; and an image processing unit that executes the image processing on the image processing target when it is determined that the image processing is to be executed.
According to a second aspect of the present invention, there is provided an image processing method performed in an image processing apparatus, the method comprising: determining whether or not to execute image processing on an image processing target, based on the success probability of executing the image processing on the image processing target for each imaging situation, an imaging situation being the situation at the time the image processing target is imaged; and executing the image processing on the image processing target when it is determined that the image processing is to be executed.
According to a third aspect of the present invention, there is provided a program that causes a computer mounted on an image processing apparatus to execute: a process of determining whether or not to execute image processing on an image processing target, based on the success probability of executing the image processing on the image processing target for each imaging situation, an imaging situation being the situation at the time the image processing target is imaged; and a process of executing the image processing on the image processing target when it is determined that the image processing is to be executed.
 本発明の各視点によれば、低負荷で画像処理を実行することに寄与する、画像処理装置、画像処理方法及びプログラムが提供される。なお、本発明により、当該効果の代わりに、又は当該効果と共に、他の効果が奏されてもよい。 According to each viewpoint of the present invention, an image processing apparatus, an image processing method, and a program that contribute to executing image processing with a low load are provided. In addition, according to the present invention, other effects may be produced in place of or in combination with the effect.
FIG. 1 is a diagram for explaining the outline of an embodiment.
FIG. 2 is a diagram showing an example of the schematic configuration of the image processing system according to the first embodiment.
FIG. 3 is a diagram showing an example of the internal configuration of the image processing apparatus according to the first embodiment.
FIG. 4 is a diagram for explaining the operation of the acquisition unit according to the first embodiment.
FIG. 5 is a diagram for explaining the division of still image data.
FIG. 6 is a diagram showing an example of the information held by the storage unit according to the first embodiment.
FIG. 7 is a flowchart showing an example of the operation of the image processing apparatus according to the first embodiment.
FIG. 8 is a diagram showing an example of the information held by the storage unit according to the second embodiment.
FIG. 9 is a diagram showing an example of the information held by the storage unit according to the third embodiment.
FIG. 10 is a diagram showing an example of the hardware configuration of the image processing device.
FIG. 11 is a diagram showing an example of the internal configuration of the image processing apparatus according to a modification.
First, an overview of an embodiment will be described. The drawing reference signs appended to this overview are added to elements merely as an example to aid understanding, and the description of this overview is not intended to be limiting in any way. In the present specification and drawings, elements that can be described in the same way may be given the same reference signs, and duplicate description may be omitted.
An image processing apparatus 100 according to an embodiment includes a determination unit 101 and an image processing unit 102 (see FIG. 1). The determination unit 101 determines whether to execute image processing on an image processing target, based on the success probability of executing the image processing on the target under each imaging situation, an imaging situation being the situation in which the target was imaged. The image processing unit 102 executes the image processing on the target when it is determined that the image processing is to be executed.
The image processing apparatus 100 determines whether to process an image processing target (for example, a person) according to the success probability of the image processing under the situation in which the target was imaged (for example, the person's position). For example, for an image of a person far away from the camera device 10, face recognition on that image is unlikely to succeed. The image processing apparatus 100 therefore computes in advance the success probability of the image processing for each situation in which an image is acquired, and performs the image processing only when it is expected to complete normally. As a result, attempts at image processing that would end in failure are avoided, and waste of limited computational resources (in particular, resources deployed in an edge environment) is prevented. That is, image processing can be realized with a low load, and a larger number of targets can be processed with limited computational resources.
Specific embodiments will be described below in more detail with reference to the drawings.
[First Embodiment]
The first embodiment will be described in more detail with reference to the drawings.
<System configuration>
FIG. 2 is a diagram showing an example of the schematic configuration of the image processing system according to the first embodiment. Referring to FIG. 2, the image processing system includes a plurality of camera devices 10-1 to 10-n (n is a positive integer; the same applies hereinafter), an image processing device 20, and a result storage device 30. In the following description, when there is no particular need to distinguish the camera devices 10-1 to 10-n, they are simply referred to as the "camera device 10".
In the image processing system, each of the plurality of camera devices 10 is connected to the image processing device 20, and the image processing device 20 is connected to the result storage device 30. The system configuration shown in FIG. 2 is an example and is not intended to limit the number of camera devices 10 or the like; the image processing system only needs to include at least one camera device 10.
The image processing device 20 acquires video data from each camera device 10, performs image processing (data analysis) on the acquired video data, and stores the result in the result storage device 30.
The result storage device 30 stores the processing results of the image processing device 20.
<Outline of system operation>
The image processing device 20 separates the video data (moving images) MD acquired from each camera device 10 into m still images SDj (j = 0 to m; m is a positive integer, the same applies hereinafter). The image processing device 20 attempts image processing (for example, face recognition) on the face image FPk (k = 0 to p; p is a positive integer, the same applies hereinafter) of the k-th person included in each separated still image SDj.
In doing so, the image processing device 20 decides whether to actually execute the image processing on a person included in the still image data SDj, based on the probability that the image processing will complete normally (hereinafter referred to as the success probability).
Here, the success probability is calculated for each situation in which the image processing target (in the above example, the person appearing in the image) was imaged (hereinafter referred to as the imaging situation), and the results are accumulated inside the image processing device 20. For example, the imaging situation is the position of the person imaged by the camera device 10 (the coordinate position of the person within the image).
The image processing device 20 counts, for each imaging situation, the result of performing image processing on the face image FPk (normal end or abnormal end). Based on the number of times the image processing has been attempted (hereinafter, the number of trials), the image processing device 20 calculates the success probability for each imaging situation.
Once sufficient data on the success probability has been accumulated, the image processing device 20 uses the accumulated success probabilities to decide whether to perform image processing. Specifically, the image processing device 20 applies threshold processing to the obtained success probability and does not execute the image processing (for example, face recognition) if the success probability is low.
With this approach, processing is performed only in situations where executing it is meaningful, so the load on the computer performing face recognition and the like can be reduced. In other words, in situations where the image processing is unlikely to succeed, the image processing (for example, authentication) is not performed on the target (for example, the person in the image), and computer resources are not wasted. As a result, the number of people that can be processed increases overall.
For example, in FIG. 2, if the success probability of image processing for a person appearing in still image data acquired from the camera device 10-1 is low, that processing is skipped. The resources of the image processing device 20 can then be allocated to still image data acquired from camera devices 10 other than the camera device 10-1, and as a result a larger number of face images FPk can be processed.
<Configuration of the image processing device>
FIG. 3 is a diagram showing an example of the internal configuration of the image processing device 20 according to the first embodiment. Referring to FIG. 3, the image processing device 20 includes an acquisition unit 201, a determination unit 202, a storage unit 203, an image cropping unit 204, and an image processing unit 205.
The acquisition unit 201 extracts still image data SDj at predetermined timings (predetermined sampling) from the video data MD acquired from the camera device 10. For example, the acquisition unit 201 extracts (captures) the still image data SDj shown in FIG. 4.
Next, the acquisition unit 201 attempts to extract a person from the extracted still image data SDj. In the example of FIG. 4, the acquisition unit 201 extracts a person 301. If no person can be extracted from the still image data SDj, the acquisition unit 201 moves on to the next still image data SDj+1 or the next video data MD.
Next, the acquisition unit 201 calculates the position of the extracted person in the still image data SDj. For example, the acquisition unit 201 sets the lower-left corner of the still image data SDj as the origin and takes the centroid of the extracted person, or the center of the face, as the person's position. More specifically, the acquisition unit 201 converts the pixel counts from the origin to the centroid into XY coordinates and uses them as the person's position.
The acquisition unit 201 passes the still image data SDj from which the person was extracted and the calculated person position PPk to the determination unit 202. In the example of FIG. 4, the acquisition unit 201 provides the determination unit 202 with the still image data SDj and the person position PPk of the person 301 included in it.
Various methods can be used by the acquisition unit 201 to extract a person from the still image data SDj and to calculate the person position PPk. For example, the acquisition unit 201 may use a learning model trained with a CNN (Convolutional Neural Network) to detect the target object (in this case, a person) in the still image data SDj, or may extract a person using a technique such as template matching.
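As one concrete illustration of the acquisition unit 201, the following is a minimal sketch in Python using OpenCV; the sampling interval, the bundled Haar-cascade detector (standing in for a CNN-based detector), and the bottom-left coordinate convention are assumptions for illustration, not part of this disclosure.

import cv2

# Assumed for illustration: OpenCV's bundled Haar cascade as a stand-in
# detector; a CNN-trained model or template matching could be used instead.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_positions(video_path, sample_every=30):
    """Sample still images SDj from video data MD and yield person positions PPk."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:  # predetermined sampling timing
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            height = frame.shape[0]
            for (x, y, w, h) in detector.detectMultiScale(gray):
                # Centroid of the detected region, converted to XY coordinates
                # with the origin at the lower-left corner, as described above.
                cx = x + w / 2.0
                cy = height - (y + h / 2.0)
                yield frame, (cx, cy)
        frame_idx += 1
    cap.release()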
Returning to FIG. 3, the determination unit 202 determines whether to execute image processing on the image processing target, based on the success probability of executing the image processing on the target in each imaging situation. More specifically, upon receiving the still image data SDj and the person position PPk, the determination unit 202 obtains the success probability for the person position PPk from the storage unit 203.
The storage unit 203 stores at least the success probability for each imaging situation. More specifically, the storage unit 203 stores, in association with one another, information on the imaging situation, the success probability, and the number of trials, i.e., the number of times execution of the image processing has been attempted in that imaging situation.
In the first embodiment, the information on the imaging situation is information on the person position in the image in which the person to be processed appears. Specifically, it is information identifying each of the small regions into which the still image data SDj is divided (for example, a coordinate range).
For example, the still image data SDj is divided into a plurality of small regions as shown in FIG. 5. Note that FIG. 5 is an example and does not limit the division of the still image data SDj to nine regions. The still image data SDj may be divided only in the row direction (horizontally) or only in the column direction (vertically), and the areas of the small regions may be equal or different.
FIG. 6 is a diagram showing an example of the information held by the storage unit 203 according to the first embodiment. Referring to FIG. 6, a success probability Pk and a number of trials Tk are stored for each small region (divided region) of the still image data SDj. In this way, the storage unit 203 associates each small region of the still image data SDj with the success probability of image processing in that region and the number of times image processing has been attempted there (the number of trials).
The number of trials Tk is the number of times the image processing unit 205 has attempted image processing (for example, face recognition) in each small region of the still image data SDj. The information (database) held in the storage unit 203 is updated (appended to) by the image processing unit 205.
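A minimal sketch of the table of FIG. 6, assuming a Python dictionary keyed by grid cell; the 3x3 grid and the field names are illustrative.

from dataclasses import dataclass

@dataclass
class CellStats:
    success_prob: float = 0.0  # success probability Pk
    trials: int = 0            # number of trials Tk

GRID_ROWS, GRID_COLS = 3, 3  # e.g., the nine small regions of FIG. 5

# One entry per small region of the still image data SDj.
table = {(r, c): CellStats() for r in range(GRID_ROWS) for c in range(GRID_COLS)}

def cell_of(position, frame_w, frame_h):
    """Map a person position PPk (origin at the lower left) to its small region."""
    cx, cy = position
    col = min(int(cx * GRID_COLS / frame_w), GRID_COLS - 1)
    row = min(int(cy * GRID_ROWS / frame_h), GRID_ROWS - 1)
    return (row, col)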
Returning to FIG. 3, the determination unit 202 accesses the storage unit 203 and obtains the success probability Pk and the number of trials Tk for the small region corresponding to the person position PPk. Specifically, the determination unit 202 identifies the small region containing the coordinates of the person position PPk and reads the success probability Pk and the number of trials Tk from the entry for that region. For example, if the person position PPk of the person 301 in FIG. 4 falls within region A shown in FIG. 5, the determination unit 202 obtains the success probability P01 and the number of trials T01 (see the first row of FIG. 6).
The determination unit 202 applies threshold processing to the obtained number of trials Tk and, depending on the result, determines whether to execute the image processing (for example, face recognition) on the image processing target (for example, the person in the image). Specifically, when the number of trials Tk is smaller than a trial threshold, the determination unit 202 decides to execute the processing.
As described above, the information stored in the storage unit 203 is updated as the system operates. Therefore, at the start of system operation and similar times, a sufficient number of image processing runs may not yet have been executed for the small region corresponding to a given person position PPk. The indicator of whether a sufficient number of runs have been executed is the number of trials: if the number of trials is small, the corresponding success probability is judged to be unreliable, and its reliability must be improved.
To improve the reliability of the success probability, the determination unit 202 decides to execute the subsequent processing (image cropping and image processing). That is, the image processing unit 205 needs to accumulate image processing results for the person position PPk so that the success probability Pk for the corresponding small region of the still image data SDj becomes reliable information. For this reason, when the number of trials is smaller than the trial threshold, the determination unit 202 decides that the subsequent processing is to be executed.
If the number of trials Tk is at or above the trial threshold, the determination unit 202 judges that the success probability Pk stored in the storage unit 203 is reliable data. In this case, the determination unit 202 applies threshold processing to the obtained success probability Pk and, depending on the result, decides whether to execute the subsequent processing (image cropping and image processing).
Specifically, if the obtained success probability Pk is at or above an execution threshold, the determination unit 202 decides to execute the processing: when the success probability Pk is at or above the threshold, image processing performed at the person position PPk is likely to complete normally.
Conversely, if the obtained success probability Pk is below the execution threshold, the determination unit 202 decides not to execute the processing: a success probability Pk below the threshold indicates that image processing performed at the person position PPk is unlikely to complete normally.
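Putting the two threshold checks together, a minimal sketch of the decision made by the determination unit 202; the trial threshold and execution threshold values are illustrative:

TRIAL_THRESHOLD = 30    # illustrative: minimum trials before Pk is trusted
EXEC_THRESHOLD = 0.5    # illustrative: minimum Pk for executing the processing

def should_process(stats: CellStats) -> bool:
    """Execute/skip decision for one imaging situation."""
    if stats.trials < TRIAL_THRESHOLD:
        # Too few trials: execute the processing to make Pk reliable.
        return True
    # Pk is reliable: apply the execution threshold.
    return stats.success_prob >= EXEC_THRESHOLD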
When the determination unit 202 decides not to execute the processing, it performs no further processing. It notifies the acquisition unit 201 to that effect, and upon receiving the notification, the acquisition unit 201 moves on to the next data (the next still image data, the next person).
When the determination unit 202 decides to execute the processing, it sends an image cropping request to the image cropping unit 204 together with the still image data SDj and the person position PPk.
Upon receiving the image cropping request, the image cropping unit 204 crops out the face image (face image region) of the person located at the person position PPk in the still image data SDj (i.e., extracts the face image). The image cropping unit 204 passes the cropped face image FPk and the corresponding person position PPk to the image processing unit 205.
As with the acquisition unit 201, the image cropping unit 204 can identify the position of the face at the person position PPk by, for example, extracting the face image using a CNN.
The image processing unit 205 executes the image processing on the image processing target when it has been determined that the processing is to be executed. Specifically, the image processing unit 205 performs a predetermined process (for example, face recognition) using the cropped face image FPk and the person position PPk. Existing techniques can be applied to the calculation of the feature values (feature vector) required for face recognition and the calculation of the similarity (distance between feature vectors) required for matching, so a detailed description is omitted.
The image processing unit 205 updates and appends to the information stored in the storage unit 203 according to the processing result (normal end or abnormal end). Specifically, the image processing unit 205 updates the success probability Pk and number of trials Tk fields of the entry corresponding to the processed person position PPk (the small region of the still image data SDj).
The image processing unit 205 judges the authentication process to have ended abnormally when, for example, a predetermined number of feature points (for example, points on the eyes and nose) cannot be extracted in computing the feature vector, or when many low-confidence feature values are computed.
The image processing unit 205 calculates the success probability Pk and the number of trials Tk for the corresponding small region according to equations (1) and (2) below, and updates the information held in the storage unit 203.
Pk ← (Pk × Tk + Res) / (Tk + 1) … (1)

Tk ← Tk + 1 … (2)
Res in equation (1) above denotes the processing result: "1" is substituted when the processing ends normally, and "0" when it ends abnormally (ends in error).
Through this processing, success probabilities of image processing for each small region of the still image data SDj accumulate in the storage unit 203.
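A minimal sketch of this update, applying equations (1) and (2) to a table entry; the running-average form follows from Pk being the fraction of trials that ended normally:

def record_result(stats: CellStats, success: bool) -> None:
    """Update one entry per equations (1) and (2)."""
    res = 1 if success else 0  # Res: 1 on normal end, 0 on abnormal end
    stats.success_prob = (stats.success_prob * stats.trials + res) / (stats.trials + 1)  # (1)
    stats.trials += 1  # (2)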
The operation of the image processing device 20 according to the first embodiment is summarized in the flowchart shown in FIG. 7.
The image processing device 20 extracts the person position PPk from the still image data SDj (step S101).
The image processing device 20 reads the success probability Pk and the number of trials Tk corresponding to the person position PPk from the storage unit 203 (step S102).
The image processing device 20 determines whether the obtained number of trials Tk is at or above the trial threshold (step S103).
If the number of trials Tk is below the trial threshold (step S103, No branch), the image processing device 20 proceeds to step S106.
If the number of trials Tk is at or above the trial threshold (step S103, Yes branch), the image processing device 20 determines whether the obtained success probability Pk is at or above the execution threshold (step S104).
If the success probability Pk is at or above the execution threshold (step S104, Yes branch), the image processing device 20 proceeds to step S106.
If the success probability Pk is below the execution threshold (step S104, No branch), the image processing device 20 determines that image processing such as face recognition is "not to be executed" (step S105).
In step S106, the image processing device 20 sets image processing such as face recognition to "execute".
The image processing device 20 executes the predetermined image processing (for example, face recognition or age/gender determination) (step S107).
The image processing device 20 reflects the processing result of step S107 in the database built in the storage unit 203 (step S108).
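Tying the steps together, a minimal sketch of the flow of FIG. 7 using the helpers sketched above; the face recognition step itself is left as a stub, since the disclosure relies on existing techniques for it:

def run_face_recognition(frame, position) -> bool:
    """Stub for the predetermined image processing (S107); an existing
    recognition technique goes here. Returns True on normal end."""
    raise NotImplementedError

def process_stream(video_path):
    for frame, position in extract_positions(video_path):    # S101
        h, w = frame.shape[:2]
        stats = table[cell_of(position, w, h)]               # S102
        if not should_process(stats):                        # S103-S105
            continue                                         # low Pk: skip
        success = run_face_recognition(frame, position)      # S106-S107
        record_result(stats, success)                        # S108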
Here, as a result of the inventors' intensive studies, it has been found that the success probability (accuracy) of image processing depends on the size of the face in the still image data SDj, the orientation of the face, and the amount of light falling on the face. The factors (parameters) that affect the success probability often depend on the position within the image. For example, inside a store the display layout is fixed, so when passing a particular spot many people look in a particular direction (for example, toward the merchandise), and face orientation shows a consistent tendency. There is therefore a correlation between the success or failure of the image processing and the position of the person in the image (for example, the position determined by the merchandise display layout), and the outcome of the image processing is biased according to the person's position. The first embodiment takes the person's position into account and, when the image processing is unlikely to succeed, does not process the person at that position, thereby preventing waste of resources.
As described above, the image processing device 20 according to the first embodiment uses the determination unit 202 to decide whether to execute image processing according to the success probability Pk accumulated in the storage unit 203 for each imaging situation (the person position PPk in the still image data SDj). Specifically, the lightweight operation of threshold processing estimates the success probability of image processing for the target and determines whether to actually execute it. When insufficient data on the success or failure of the image processing has been accumulated, the image processing is actually executed and its outcome is reflected in the success probability Pk. As a result, the number of image processing attempts per person can be reduced while the success probability of the image processing is maintained, so the number of people the system as a whole can process can be increased.
[Second Embodiment]
Next, the second embodiment will be described in detail with reference to the drawings.
In the first embodiment, the person position PPk in the still image data SDj was used as the imaging situation. The second embodiment describes the case where the time at which the image processing target was photographed, i.e., the time at which the still image data SDj was acquired (the current time), is used as the imaging situation.
Since the processing configuration of the image processing device 20 according to the second embodiment can be the same as in the first embodiment, the description corresponding to FIG. 3 is omitted.
FIG. 8 is a diagram showing an example of the information held by the storage unit 203 according to the second embodiment. As shown in FIG. 8, the storage unit 203 stores a success probability Pk and a number of trials Tk for each time slot.
The acquisition unit 201 passes the current time CT at which the still image data SDj was acquired to the determination unit 202, together with the still image data SDj and the person position PPk.
The determination unit 202 obtains the success probability Pk and the number of trials Tk corresponding to the current time CT from the storage unit 203, and processes them in the same way as described in the first embodiment.
Specifically, when the number of trials Tk is smaller than the trial threshold, the determination unit 202 decides to execute the processing. Likewise, when the number of trials Tk is at or above the trial threshold and the obtained success probability Pk is at or above the execution threshold, the determination unit 202 decides to execute the processing.
Even when the number of trials Tk is at or above the trial threshold, the determination unit 202 decides not to execute the processing if the obtained success probability Pk is below the execution threshold.
The operation of the image processing device 20 after the execute/skip decision can be the same as in the first embodiment, so a detailed description is omitted. When face recognition or the like is performed, the image processing device 20 according to the second embodiment updates the corresponding entry in the storage unit 203 (the success probability Pk and number of trials Tk for the relevant time slot).
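Relative to the first embodiment only the table key changes; a minimal sketch, assuming one-hour time slots for illustration (FIG. 8 does not fix the slot width):

from datetime import datetime

# One statistics entry per time slot; one-hour slots are assumed here.
time_table = {hour: CellStats() for hour in range(24)}

def slot_of(current_time: datetime) -> int:
    """Map the acquisition time CT to its time slot."""
    return current_time.hour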
As described above, in the second embodiment, whether to execute image processing is decided based on the success probability for the time at which the still image data SDj was acquired. As noted above, the success probability of image processing can depend on the amount of light falling on the face. Since the amount of light varies with the time of day, phenomena such as the face being well lit only in certain time slots can occur. The second embodiment takes this time-dependent variation in the success probability into account and prevents waste of resources by not performing image processing in situations where it is likely to fail, such as at night.
[Third Embodiment]
Next, the third embodiment will be described in detail with reference to the drawings.
The third embodiment describes the case where the first and second embodiments are combined.
Since the processing configuration of the image processing device 20 according to the third embodiment can be the same as in the first and second embodiments, the description corresponding to FIG. 3 is omitted.
In the third embodiment, the position of the person in the still image data SDj (the person position PPk) and the time at which the still image data SDj was acquired (the current time CT) are used as the imaging situation. That is, the information on the imaging situation according to the third embodiment includes both the position of the person in the image in which the person to be processed appears and the time at which the person was photographed.
FIG. 9 is a diagram showing an example of the information held by the storage unit 203 according to the third embodiment. As shown in FIG. 9, the storage unit 203 stores a success probability Pk and a number of trials Tk in association with each time slot of each small region of the still image data SDj.
The acquisition unit 201 passes the still image data SDj, the current time CT at which it was acquired, and the person position PPk to the determination unit 202.
The determination unit 202 obtains the success probability Pk and the number of trials Tk corresponding to the person position PPk and the current time CT from the storage unit 203, and processes them in the same way as described in the first embodiment.
Specifically, when the number of trials Tk is smaller than the trial threshold, the determination unit 202 decides to execute the processing. Likewise, when the number of trials Tk is at or above the trial threshold and the obtained success probability Pk is at or above the execution threshold, the determination unit 202 decides to execute the processing.
Even when the number of trials Tk is at or above the trial threshold, the determination unit 202 decides not to execute the processing if the obtained success probability Pk is below the execution threshold.
The operation of the image processing device 20 after the execute/skip decision can be the same as in the first and second embodiments, so a detailed description is omitted. When face recognition or the like is performed, the image processing device 20 according to the third embodiment updates the corresponding entry in the storage unit 203 (the success probability Pk and number of trials Tk for the relevant small region and time slot).
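The combined table of FIG. 9 simply keys on both factors; a minimal sketch under the same assumed grid and slot width:

# One statistics entry per (small region, time slot) pair.
combined_table = {
    ((r, c), hour): CellStats()
    for r in range(GRID_ROWS)
    for c in range(GRID_COLS)
    for hour in range(24)
}

def lookup(position, current_time, frame_w, frame_h) -> CellStats:
    """Entry for the person position PPk and the current time CT."""
    key = (cell_of(position, frame_w, frame_h), slot_of(current_time))
    return combined_table[key]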
As described above, the image processing device 20 according to the third embodiment decides whether to perform image processing according to the success probability for each combination of person position and time. Compared with the first and second embodiments, the third embodiment can characterize the situation in which the person was photographed more finely when setting the success probability, and can therefore make more accurate execute/skip decisions.
<Hardware configuration>
Next, the hardware of the devices constituting the image processing system will be described. FIG. 10 is a diagram showing an example of the hardware configuration of the image processing device 20.
The image processing device 20 can be configured as an information processing device (a so-called computer) and has the configuration illustrated in FIG. 10. For example, the image processing device 20 includes a processor 311, a memory 312, an input/output interface 313, a communication interface 314, and the like. Components such as the processor 311 are connected by an internal bus or the like and are configured to communicate with one another.
However, the configuration shown in FIG. 10 is not intended to limit the hardware configuration of the image processing device 20. The image processing device 20 may include hardware not shown and may omit the input/output interface 313 where it is not needed. The number of processors 311 and other components is also not limited to the example of FIG. 10; for example, a plurality of processors 311 may be included in the image processing device 20.
The processor 311 is, for example, a programmable device such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or a DSP (Digital Signal Processor). Alternatively, the processor 311 may be a device such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). The processor 311 executes various programs including an operating system (OS).
The memory 312 is a RAM (Random Access Memory), a ROM (Read Only Memory), an HDD (Hard Disk Drive), an SSD (Solid State Drive), or the like. The memory 312 stores the OS program, application programs, and various data.
The input/output interface 313 is an interface for a display device and an input device, neither of which is shown. The display device is, for example, a liquid crystal display. The input device is, for example, a device that accepts user operations, such as a keyboard or a mouse.
The communication interface 314 is a circuit, module, or the like that communicates with other devices. For example, the communication interface 314 includes a NIC (Network Interface Card).
The functions of the image processing device 20 are realized by various processing modules. A processing module is realized, for example, by the processor 311 executing a program stored in the memory 312. The program can be recorded on a computer-readable storage medium, which may be non-transitory, such as a semiconductor memory, a hard disk, a magnetic recording medium, or an optical recording medium. That is, the present invention can also be embodied as a computer program product. The program can be downloaded via a network or updated using a storage medium storing the program. A processing module may also be realized by a semiconductor chip.
Since the hardware configurations of the camera device 10 and the result storage device 30 are obvious to those skilled in the art, their description is omitted.
[Modifications]
The configuration, operation, and the like of the image processing system described in the above embodiments are examples and are not intended to limit the system configuration and the like. For example, the processing result of the image processing device 20 may be transmitted directly to a device that uses the result of the image processing, instead of being stored in the result storage device 30. For example, if the image processing is face recognition, the processing result may be transmitted to a device that controls the opening and closing of a gate according to the recognition result.
The image processing system according to the above embodiments may be realized in a cloud environment or in an edge environment. When the image processing system is realized in a cloud environment, the image processing device 20 and the result storage device 30 operate as servers on the network. When it is realized in an edge environment, the image processing device 20 operates as an edge-side server and the result storage device 30 as a cloud-side server.
The above embodiments describe the case where the image processing device 20 captures still image data from the video data acquired from the camera device 10, but the camera device 10 may instead transmit still image data to the image processing device 20 periodically (for example, at one-second intervals).
The above embodiments assume a fixed camera device, such as a surveillance camera, as the camera device 10, but the image (video) data input to the image processing device 20 may be data acquired from a mobile camera device. Examples of the camera device 10 include surveillance cameras, digital cameras, mobile phones, and smartphones. That is, the "camera device" in the present disclosure can be any electronic device with an imaging function.
In the above embodiments a person is set as the target of image processing, but the target of image processing can be arbitrary. For example, an animal may be the target of image processing, or a device (object) such as a robot may be.
In the above embodiments, authentication using a face image was described as the predetermined image processing, but the processing performed by the image processing unit 205 is not limited to authentication. For example, the image processing unit 205 may perform age/gender determination. The body part processed by the image processing unit 205 is also not limited to the face; for example, parts such as hands or feet may be processed. In that case, the image cropping unit 204 crops out the part required for the processing by the image processing unit 205.
The above embodiments assume that the plurality of camera devices 10 included in the image processing system all execute the same image processing (for example, face recognition), but the processing performed may be changed according to the camera device 10 from which the video data is acquired. For example, face recognition may be performed on the video data (still image data) transmitted by the camera device 10-1, while age/gender determination is performed on the video data transmitted by the camera device 10-2. In this case, each camera device 10 transmits an identifier identifying itself to the image processing device 20 together with the video data. In this way, the image processing device 20 may change the content of the image processing according to the attributes of the camera device 10 (for example, its installation position).
In the above embodiments, the position of the person in the image and the time of image acquisition (the current time) are used as the imaging situation, but other information may be set as the imaging situation. For example, information such as the weather or the brightness at the time of image acquisition may be used. In that case, the storage unit 203 stores a success probability Pk for each weather condition (sunny, rainy, cloudy) or for each brightness level. The weather at the time of image acquisition may be obtained from an external server, or may be estimated using a brightness sensor or the like.
In the above embodiments, the reliability of the success probabilities is raised by accumulating image processing results for each imaging situation from the start of system operation, but success probabilities and trial counts for each imaging situation may instead be obtained before the system goes into operation and stored in the storage unit 203 in advance. Storing the success probabilities and trial counts in advance in this way enables execute/skip decisions based on reliable data (success probabilities) from the start of system operation. That is, by setting the initial values in the storage unit 203 to values measured in advance, the success probabilities can be made to converge to appropriate values early. For example, the image processing device 20 may be provided with a test mode in which people stand in various places at various times of day, image processing is performed, and the results are used as the initial values stored in the storage unit 203.
In the above embodiments, the image processing unit 205 calculates the success probabilities and trial counts and updates the contents of the storage unit 203. However, the image processing unit 205 may instead store only the image processing results in the storage unit 203, and the determination unit 202 may calculate the success probabilities from the stored results.
In the above embodiments, when the success probability for the required imaging situation is not registered in the storage unit 203, the image processing is determined to be "not to be executed". Even in such a case, however, the required success probability may be predicted from the information stored in the storage unit 203. For example, as shown in FIG. 11, the image processing device 20 may further include a prediction unit 206. The prediction unit 206 predicts the success probability for the imaging situation to be processed, based on the success probabilities of imaging situations different from it. For example, in FIG. 5, consider the case where the success probabilities of small regions A and C are stored in the storage unit 203 as reliable data (the number of trials in each region is at or above the trial threshold), while no success probability is stored for small region B (its success probability is 0). In this case, the prediction unit 206 calculates the average of the success probabilities of the small regions A and C adjacent to small region B and provides it to the determination unit 202 as the success probability of small region B.
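A minimal sketch of the prediction unit 206, assuming averaging over the 4-neighborhood of the grid of FIG. 5 (the disclosure only specifies averaging adjacent small regions):

def predict_success_prob(cell, stats_table, trial_threshold=TRIAL_THRESHOLD):
    """Predict Pk for a cell with no reliable data from reliable adjacent cells."""
    r, c = cell
    neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    reliable = [stats_table[n].success_prob for n in neighbors
                if n in stats_table and stats_table[n].trials >= trial_threshold]
    if not reliable:
        return None  # no reliable neighbors: fall back to the default handling
    return sum(reliable) / len(reliable)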
By installing an image processing program in the storage unit of a computer, the computer can be made to function as an image processing device. By causing the computer to execute the image processing program, the image processing method can be executed by the computer.
In the flowcharts used in the above description, a plurality of steps (processes) are described in order, but the order in which the steps are executed in each embodiment is not limited to the order of description. In each embodiment, the order of the illustrated steps can be changed to the extent that the content is not affected, for example by executing processes in parallel.
Some or all of the above embodiments may also be described, but not limited to:
[Appendix 1]
Whether to execute the image processing on the image processing target based on the success probability when the image processing is executed on the image processing target for each imaging situation, which is the situation when the image processing target is imaged. Judgment units (101, 202) that determine whether or not,
When it is determined that the image processing is to be executed on the image processing target, the image processing unit (102, 205) that executes the image processing on the image processing target and
An image processing apparatus (20, 100).
[Appendix 2]
The image processing apparatus according to Appendix 1, wherein the determination unit (101, 202) determines whether or not to execute the image processing on the image processing target based on the result of the threshold value processing for the success probability. 20, 100).
[Appendix 3]
The image processing apparatus (20, 100) according to Appendix 1 or 2, further comprising a storage unit (203) for storing the success probability for each imaging situation.
[Appendix 4]
The storage unit (203) stores information about the imaging status, the success probability, and the number of trials, which is the number of attempts to execute the image processing in the imaging status, in association with each other, as described in Appendix 3. Image processing device (20, 100).
[Appendix 5]
The image processing apparatus (20, 20) according to Appendix 4, wherein the determination unit (101, 202) determines that the image processing is executed on the image processing target when the number of trials is smaller than the trial threshold value. 100).
[Appendix 6]
The description in Appendix 4 or 5, wherein the image processing unit (102, 205) updates the information stored in the storage unit (203) according to the result of attempting the image processing on the image processing target. Image processing device (20, 100).
[Appendix 7]
The image processing apparatus (20, 100) according to any one of Appendix 4 to 6, wherein the information regarding the imaging status includes the position of the image processing target in the image in which the image processing target is captured.
[Appendix 8]
The image processing apparatus (20, 100) according to any one of Supplementary note 4 to 7, wherein the information regarding the imaging status includes the time when the image processing target was photographed.
[Appendix 9]
The image processing apparatus (20, 100) according to any one of Appendix 1 to 8, wherein the image processing target is a human face.
[Appendix 10]
The image processing apparatus (20, 100) according to Appendix 9, further comprising an image cutting section (204) for cutting out the face region to be image processed from the image.
[Appendix 11]
In the image processing apparatus (20, 100)
Whether to execute the image processing on the image processing target based on the success probability when the image processing is executed on the image processing target for each imaging situation, which is the situation when the image processing target is imaged. Judge whether or not
An image processing method including executing the image processing on the image processing target when it is determined to execute the image processing on the image processing target.
[Appendix 12]
On the computer (311) mounted on the image processing device (20, 100),
Whether to execute the image processing on the image processing target based on the success probability when the image processing is executed on the image processing target for each imaging situation, which is the situation when the image processing target is imaged. The process of determining whether or not
When it is determined that the image processing is executed on the image processing target, the processing of executing the image processing on the image processing target and the processing of executing the image processing on the image processing target.
A program that executes.
Note that the form of Appendix 11 and the form of Appendix 12 can be expanded to the forms of Appendix 2 to the form of Appendix 10 in the same manner as the form of Appendix 1.
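To make the decision rule of Appendices 2, 4 and 5 concrete, the sketch below combines threshold processing on the success probability with the trial-count exception, together with the record update of Appendix 6. It is an illustrative sketch only: the record fields, the two threshold values, and the function names are assumptions and not part of the claims.

```python
from dataclasses import dataclass

SUCCESS_THRESHOLD = 0.5  # assumed; threshold processing of Appendix 2
TRIAL_THRESHOLD = 100    # assumed; trial threshold of Appendix 5

@dataclass
class SituationRecord:
    """Per-imaging-situation entry of the storage unit (203), Appendix 4:
    the imaging-situation information serves as the lookup key, and the
    record associates the success probability with the number of trials."""
    probability: float = 0.0
    trials: int = 0

def should_process(record: SituationRecord) -> bool:
    """Determination unit (101, 202): execute while data is still scarce
    (Appendix 5), otherwise gate on the success probability (Appendix 2)."""
    if record.trials < TRIAL_THRESHOLD:
        return True
    return record.probability >= SUCCESS_THRESHOLD

def update(record: SituationRecord, succeeded: bool) -> None:
    """Image processing unit (102, 205) updating the stored information
    after an attempt (Appendix 6), as an incremental running average."""
    total = record.probability * record.trials + (1.0 if succeeded else 0.0)
    record.trials += 1
    record.probability = total / record.trials
```

Keeping the average incremental avoids storing every raw outcome, which is consistent with the low-load aim of the invention in environments with limited computational resources.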
Although embodiments of the present invention have been described above, the present invention is not limited to these embodiments. It will be understood by those skilled in the art that these embodiments are merely illustrative and that various modifications are possible without departing from the scope and spirit of the present invention.
This application claims priority based on Japanese Patent Application No. 2019-076908 filed on April 15, 2019, the entire disclosure of which is incorporated herein.
The present invention contributes to executing image processing, such as face recognition processing, with a low load in an environment where computational resources are limited.
10, 10-1 to 10-n  Camera device
20, 100  Image processing apparatus
30  Result storage device
101, 202  Determination unit
102, 205  Image processing unit
201  Acquisition unit
203  Storage unit
204  Image cutting unit
206  Prediction unit
311  Processor
312  Memory
313  Input/output interface
314  Communication interface

Claims (12)

1. An image processing apparatus comprising:
a determination unit that determines whether to execute image processing on an image processing target, based on a success probability of the image processing executed on the image processing target for each imaging situation, the imaging situation being the situation under which the image processing target was imaged; and
an image processing unit that executes the image processing on the image processing target when it is determined that the image processing is to be executed on the image processing target.
2. The image processing apparatus according to claim 1, wherein the determination unit determines whether to execute the image processing on the image processing target based on a result of threshold processing applied to the success probability.
3. The image processing apparatus according to claim 1 or 2, further comprising a storage unit that stores the success probability for each imaging situation.
4. The image processing apparatus according to claim 3, wherein the storage unit stores information on the imaging situation, the success probability, and a number of trials, which is the number of times execution of the image processing has been attempted in that imaging situation, in association with one another.
5. The image processing apparatus according to claim 4, wherein the determination unit determines to execute the image processing on the image processing target when the number of trials is smaller than a trial threshold.
6. The image processing apparatus according to claim 4 or 5, wherein the image processing unit updates the information stored in the storage unit according to the result of attempting the image processing on the image processing target.
7. The image processing apparatus according to any one of claims 4 to 6, wherein the information on the imaging situation includes the position of the image processing target in the image in which the image processing target appears.
8. The image processing apparatus according to any one of claims 4 to 7, wherein the information on the imaging situation includes the time at which the image processing target was photographed.
9. The image processing apparatus according to any one of claims 1 to 8, wherein the image processing target is a human face.
10. The image processing apparatus according to claim 9, further comprising an image cutting unit that cuts out, from an image, the face region that is the image processing target.
11. An image processing method performed in an image processing apparatus, comprising:
determining whether to execute image processing on an image processing target, based on a success probability of the image processing executed on the image processing target for each imaging situation, the imaging situation being the situation under which the image processing target was imaged; and
executing the image processing on the image processing target when it is determined that the image processing is to be executed on the image processing target.
12. A program causing a computer mounted on an image processing apparatus to execute:
a process of determining whether to execute image processing on an image processing target, based on a success probability of the image processing executed on the image processing target for each imaging situation, the imaging situation being the situation under which the image processing target was imaged; and
a process of executing the image processing on the image processing target when it is determined that the image processing is to be executed on the image processing target.

PCT/JP2020/009561 2019-04-15 2020-03-06 Image processing device, image processing method, and program WO2020213284A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019076908 2019-04-15
JP2019-076908 2019-04-15

Publications (1)

Publication Number Publication Date
WO2020213284A1 (en)

Family

ID=72837355

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/009561 WO2020213284A1 (en) 2019-04-15 2020-03-06 Image processing device, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2020213284A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016194804A (en) * 2015-03-31 2016-11-17 Kddi株式会社 Person identifying apparatus and program
JP2018148367A (en) * 2017-03-03 2018-09-20 キヤノン株式会社 Image processing device, image processing system, image processing method, and program


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20791808

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20791808

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP