CN107038362B - Image processing apparatus, image processing method, and computer-readable recording medium


Info

Publication number
CN107038362B
Authority
CN
China
Prior art keywords: image, face, person, privacy level, unit
Legal status: Active
Application number
CN201610908844.5A
Other languages
Chinese (zh)
Other versions
CN107038362A (en)
Inventor
Masaaki Sasaki (佐佐木雅昭)
Current Assignee
Casio Computer Co Ltd
Original Assignee
Casio Computer Co Ltd
Priority date
Filing date
Publication date
Application filed by Casio Computer Co Ltd
Publication of CN107038362A
Application granted granted Critical
Publication of CN107038362B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching


Abstract

The invention provides an image processing apparatus and an image processing method. The problem addressed is to execute processing related to an image at a desired privacy level without making the image look unnatural. An image processing device (100) includes: a first determination unit (6d) that determines whether or not a privacy level, which indicates how difficult it is to recognize the face of a person included in an image as the face of a specific person, satisfies a predetermined condition; and an imaging control unit (6e) that controls execution of a predetermined process related to the image when the privacy level is determined to satisfy the predetermined condition.

Description

Image processing apparatus, image processing method, and computer-readable recording medium
The present application claims priority based on a Japanese patent application filed on December 1, 2015, and on Japanese Patent Application No. 2016-118545 filed on June 15, 2016, the entire contents of both of which are incorporated herein by reference.
Technical Field
The invention relates to an image processing apparatus and an image processing method.
Background
Conventionally, there are cases where one wishes to publish or record an image in which a person appears but in which it is difficult for others to identify that person even when viewing the image. To this end, a technique is known, as in Japanese Patent Application Laid-Open No. 2000-322660, in which mosaic processing or mask processing is applied to the face of a person so as not to compromise privacy.
However, with the technique of the above publication, image processing such as mosaic processing or mask processing must be applied to the image to be published or recorded, and the locally applied mosaic or mask makes the image unnatural and degrades its appearance.
Disclosure of Invention
The present invention has been made in view of such a problem, and an object of the present invention is to execute processing related to an image corresponding to a desired privacy level without producing an unnatural appearance.
An aspect of the present invention relates to an image processing apparatus including a processor that: determining whether or not a privacy level indicating a degree of difficulty in recognizing the face of the person included in the image as the face of the specific person satisfies a predetermined condition; and controlling execution of a predetermined process related to the image based on a result of the determination as to whether or not the privacy level satisfies a predetermined condition.
Another aspect of the present invention relates to an image processing apparatus including a processor that: calculating a privacy level indicating a degree of difficulty in recognizing the face of the person included in the image as the face of the specific person; the calculated privacy level is used to control execution of a prescribed process related to the image.
In accordance with another aspect of the present invention, there is provided an image processing method using an image processing apparatus, the image processing method including: a process of determining whether or not a privacy level indicating a degree of difficulty in recognizing a face of a person included in an image as a face of a specific person satisfies a predetermined condition; and a process of controlling execution of a predetermined process related to the image according to a determination result of whether or not the privacy level satisfies a predetermined condition.
Still another aspect of the present invention relates to an image processing method using an image processing apparatus, the image processing method including: a process of calculating a privacy level indicating a degree of difficulty in recognizing a face of a person included in the image as a face of a specific person; and a process of controlling execution of a prescribed process related to the image using the calculated privacy level.
The above and other objects and novel features of the present invention will become more apparent from the following description and the accompanying drawings. It should be noted that the drawings are only for illustrating the present invention, and the present invention is not limited thereto.
Drawings
The present application can be more fully understood when the following detailed description is considered in conjunction with the following drawings.
Fig. 1 is a block diagram showing a schematic configuration of an image processing apparatus according to embodiment 1 to which the present invention is applied.
Fig. 2 is a flowchart showing an example of an operation related to the automatic image capturing process performed by the image processing apparatus of fig. 1.
Fig. 3 is a flowchart showing an example of an operation related to the privacy level calculation process performed by the image processing apparatus of fig. 1.
Fig. 4 is a flowchart continuously showing the privacy level calculation process of fig. 3.
Fig. 5 is a flowchart showing an example of an operation related to the manual shooting process performed by the image processing apparatus of fig. 1.
Fig. 6 is a diagram for explaining the privacy level calculation processing.
Fig. 7A to 7C are diagrams for explaining the privacy level calculation processing.
Fig. 8A and 8B are diagrams for explaining the privacy level calculation processing.
Fig. 9A and 9B are diagrams for explaining the privacy level calculation processing.
Fig. 10 is a diagram schematically showing an example of a display mode of an image by the image processing apparatus of fig. 1.
Fig. 11 is a flowchart showing an example of an operation related to the determination value setting process performed by the image processing apparatus of fig. 1.
Fig. 12 is a block diagram showing a schematic configuration of an image processing apparatus according to embodiment 2 to which the present invention is applied.
Fig. 13 is a flowchart showing an example of an operation related to the image acquisition process performed by the image processing apparatus of fig. 12.
Fig. 14 is a flowchart showing an example of an operation related to the privacy level calculation processing performed by the image processing apparatus of fig. 12.
Fig. 15 is a flowchart continuously showing the privacy level calculation process of fig. 14.
Detailed Description
Hereinafter, specific embodiments of the present invention will be described with reference to the drawings. However, the scope of the invention is not limited to the examples of the figures.
[ embodiment 1]
Fig. 1 is a block diagram showing a schematic configuration of an image processing apparatus 100 according to embodiment 1 to which the present invention is applied.
As shown in fig. 1, the image processing apparatus 100 according to embodiment 1 specifically includes: a central control unit 1, a memory 2, an imaging unit 3, a signal processing unit 4, a motion detection unit 5, an operation control unit 6, an image processing unit 7, an image recording unit 8, a display unit 9, a communication control unit 10, and an operation input unit 11.
The central control unit 1, the memory 2, the imaging unit 3, the signal processing unit 4, the motion detection unit 5, the motion control unit 6, the image processing unit 7, the image recording unit 8, the display unit 9, and the communication control unit 10 are connected via a bus 12.
The image processing apparatus 100 may be constituted by, for example, a mobile station used in a mobile communication network, such as a mobile phone or smartphone having an imaging function, or a communication terminal such as a PDA (Personal Digital Assistant), or may be constituted by a digital camera having a communication function.
The central control unit 1 controls each unit of the image processing apparatus 100. Specifically, although not shown, the central control unit 1 includes a CPU (Central Processing Unit) or the like, and performs various control operations in accordance with various processing programs (not shown) for the image processing apparatus 100.
The memory 2 is configured by, for example, a DRAM (Dynamic Random Access Memory) or the like, and temporarily stores data and the like processed by the central control unit 1, the operation control unit 6, and the like. For example, the memory 2 temporarily stores a reference face image F (described later in detail) captured in the automatic shooting process.
The image pickup unit (image pickup means) 3 picks up an image of a predetermined object (e.g., a person) and generates a frame image. Specifically, the imaging unit 3 includes a lens unit 3a, an electronic imaging unit 3b, and an imaging control unit 3c.
The lens unit 3a is configured by a plurality of lenses such as a zoom lens and a focus lens, a diaphragm for adjusting the amount of light passing through the lenses, and the like. Further, the lens section 3a is configured to be exposed on the same side (object side) as the display panel 9b, so that so-called self-timer shooting is enabled.
The electronic imaging unit 3b is constituted by an image sensor (imaging element) such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor). The electronic imaging unit 3b converts an optical image passing through the various lenses of the lens unit 3a into a two-dimensional image signal.
The imaging control unit 3c scans and drives the electronic imaging unit 3b by, for example, a timing generator or a driver so that the electronic imaging unit 3b converts an optical image passing through the lens unit 3a into a two-dimensional image signal at predetermined intervals, and reads out frame images from the imaging area of the electronic imaging unit 3b on a screen-by-screen basis and outputs the frame images to the signal processing unit 4.
The signal processing unit 4 performs various image signal processes on the analog-value signal of the frame image transferred from the electronic image pickup unit 3b. Specifically, the signal processing unit 4 performs gain adjustment for each RGB color component on the analog-value signal of a frame image, performs sample and hold by a sample and hold circuit (not shown), converts the signal into digital data by an A/D converter (not shown), performs color process processing including pixel interpolation processing and γ correction processing by a color process circuit (not shown), and generates a luminance signal Y and color difference signals Cb and Cr (YUV data) as digital values. The signal processing unit 4 outputs the generated luminance signal Y and color difference signals Cb and Cr to the memory 2 used as a buffer memory.
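As a concrete illustration of the luminance/color-difference generation just described, the following minimal Python sketch applies a BT.601/JPEG-style full-range conversion. The specific coefficients are an assumption; the patent does not name the color matrix used by the color process circuit.

```python
# Hedged sketch: one common way to derive the luminance signal Y and the
# color difference signals Cb/Cr from gain-adjusted, A/D-converted RGB
# data. The BT.601/JPEG full-range coefficients are an assumption.

def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Convert 8-bit RGB components to (Y, Cb, Cr)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    return y, cb, cr
```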
The motion detection unit 5 detects the motion of the image processing apparatus 100.
That is, the motion detection unit 5 includes, for example, a 3-axis angular velocity sensor that detects the angular velocity of rotation about each of three mutually orthogonal axes (roll, pitch, and yaw) of the image processing apparatus 100. The motion detection unit 5 outputs the signals sequentially detected by the 3-axis angular velocity sensor to the operation control unit 6 as motion information, for example, when capturing a subject.
The operation control unit 6 includes a first image acquisition unit 6a, a detection processing unit 6b, a first calculation unit 6c, a first determination unit 6d, and an imaging control unit 6e.
Further, each part of the operation control unit 6 is configured by, for example, a predetermined logic circuit, but this configuration is merely an example and is not limited thereto.
The first image acquiring unit 6a acquires frame images sequentially picked up by the image pickup unit 3.
Specifically, the first image acquiring unit 6a sequentially acquires, from the memory 2, image data of frame images related to live view images sequentially captured by the imaging unit 3 and sequentially generated by the signal processing unit 4.
The detection processing unit 6b detects a face, the constituent parts of the face, and a line of sight from the frame image acquired by the first image acquisition unit 6a.
That is, the detection processing unit 6b performs face detection processing for each frame image sequentially captured by the imaging unit 3 and sequentially acquired by the first image acquisition unit 6a to detect a face area including the face of a person as a subject, and further detects a component of the face such as eyes or mouth from within the face area detected in the face detection processing. The detection processing unit 6b performs line-of-sight detection processing on each frame image sequentially captured by the imaging unit 3 and sequentially acquired by the first image acquisition unit 6a, and detects the line of sight of a person as an object.
Further, for each frame image sequentially captured by the imaging unit 3 and sequentially acquired by the first image acquisition unit 6a, the detection processing unit 6b detects the entire body of the person as the subject and determines its shape. From the change in the determined shape of the person, it can be determined whether the person has left the frame image or whether, although the person remains within the frame image, the direction of the body has changed.
The face detection processing, constituent part detection processing, line-of-sight detection processing, and person-shape determination described above are well-known techniques, so a detailed description is omitted here. For example, an AAM (Active Appearance Model) may be used for the face detection processing and the constituent part detection processing, and the line-of-sight detection processing may, for example, detect the positions of the eyes and estimate the line of sight from the area ratio of the white of the eye on either side of the iris.
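For the line-of-sight detection just mentioned, a minimal sketch of the sclera-area-ratio idea follows. The function name, inputs, and tolerance are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch: classify gaze from the white-of-eye (sclera) area on
# each side of the iris. A centred iris leaves roughly equal white on
# both sides; a shifted iris leaves more white on one side.

def gaze_direction(left_white: float, right_white: float,
                   tolerance: float = 0.15) -> str:
    total = left_white + right_white
    if total <= 0:
        return "unknown"
    ratio = left_white / total          # 0.5 means the iris is centred
    if abs(ratio - 0.5) <= tolerance:
        return "front"
    return "left" if ratio < 0.5 else "right"   # from the subject's viewpoint
```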
The first calculation section (calculation means) 6c calculates the privacy level.
Here, the privacy level indicates how difficult it is to recognize that the face of a person included in an image (e.g., a frame image or the like) is the face of a specific person, and the privacy level is relatively lower in the case of an image that is more easily recognized as the face of a specific person, and is relatively higher in the case of an image that is more difficult to recognize as the face of a specific person. Specifically, the privacy level varies depending on the state (for example, angle, size, whether or not the face is blocked, color, and the like) of the face of the person included in the image, the constituent parts (for example, eyes, mouth, and the like) of the face, and the state of an external blocking object (for example, sunglasses, a mask, and the like) that blocks the face of the person and the constituent parts of the face.
That is, a state in which the face of the person included in the frame image can be recognized as the face of a specific person, for example, a state in which the face of the person can be detected and the face and the line of sight of the person are in the frontal direction (a predetermined direction), is set as a reference state. The privacy level can then change based on: a relative change of the entire face of the person and of the constituent parts of the face with respect to the reference state; a change in the rate at which the face of the person or a constituent part of the face is blocked; a change in the size of the face of the person or in the imaging distance to the face of the person included in the frame image; a change of an external blocking object that blocks the face of the person or a constituent part of the face; and a change in the color of the face of the person.
Here, the relative change of the entire face of the person and of the constituent parts of the face can be caused, for example, by the following operations: an operation in which the person as the subject rotates the face about a predetermined axis (for example, the yaw axis or the pitch axis; see fig. 6); an operation in which the person rotates the whole body about the yaw axis, so that the face rotates about the yaw axis; an operation of moving the line of sight so that it deviates from the frontal direction; and an operation of displacing the image processing apparatus 100 in the up-down and left-right directions so that the optical axis direction of the lens unit 3a of the image processing apparatus 100 is inclined with respect to the front of the face of the person. The operation of rotating the face of the person about the roll axis and the operation of rotating the image processing apparatus 100 about the optical axis direction of the lens unit 3a are excluded because they have little influence on the face detection processing; however, these operations may also be included, for example, to set the privacy level of the image more finely.
Further, a change in the rate at which the face of the person or the constituent parts of the face are blocked can occur, for example, by the person being the subject blocking the entire face or the constituent parts of the face (e.g., eyes, mouth, etc.) with his or her hands, hair, or an external blocking object (e.g., sunglasses, a mask, etc.).
Further, the change in the size of the face of the person or the imaging distance to the face of the person included in the frame image can be caused by, for example: an operation of adjusting a physical distance between a person as a subject and the image processing apparatus 100; an operation of adjusting the zoom magnification (focal length) of the image pickup unit 3; and an operation of designating a region to be cut out from the captured image; and the like.
Further, a change of the external blocking object that blocks the face of the person and the constituent parts of the face can be caused by a difference in the type of external blocking object covering the eyes, such as glasses, sunglasses, an eye patch, or an eye shade. Likewise, even among masks, a difference can arise in which constituent part of the face is blocked, as between a mask covering the mouth and a mask covering the nose.
Further, a change in the color of the face of the person can be generated, for example, in the following cases: depending on the light striking the person as the subject and the exposure state of the image processing apparatus 100, the face may be crushed to black by backlighting or washed out to white by overexposure; or a color different from the skin color, such as makeup, may be applied to the face.
Specifically, the first calculation unit 6c determines, as the reference state, a state in which the face of the person can be detected and the face and the line of sight of the person are in the front direction (predetermined direction), based on the result of detection by the detection processing unit 6b on the frame images sequentially captured by the imaging unit 3 and sequentially acquired by the first image acquisition unit 6a. Then, the first calculation unit 6c calculates the privacy level by detecting a change in the frame image with respect to the reference state (the reference face image F), the change including: relative changes in the entire face of the person and in the constituent parts of the face; a change in the rate at which the face of the person or a constituent part of the face is blocked; a change in the size of the face of the person or the imaging distance to the face of the person included in the frame image; a change of an external blocking object that blocks the face of the person or a constituent part of the face; and a change in the color of the face of the person.
For example, the first calculation unit 6c compares the shape of the face detection frame corresponding to the face area of the person included in the reference face image F with the shape of the face detection frame corresponding to the face area of the person included in the frame image sequentially acquired by the first image acquisition unit 6a, and determines a change in the rotation angle of the face of the person as the subject with respect to the reference state about a predetermined axis (for example, the yaw axis or the pitch axis). That is, for example, when the face of the person is rotated about the yaw axis, the face detection frame becomes a vertically long rectangle, and when the face of the person is rotated about the pitch axis, the face detection frame becomes a horizontally long rectangle. The first calculation unit 6c calculates the central axis and the rotation angle of the rotation of the face of the person from the change in the shape of the face detection frame, for example, and specifies the amount or rate of change in the rotation angle of the face of the person about a predetermined axis with respect to the reference state. Here, when the person rotates the entire body by approximately 180 ° around the yaw axis, that is, when the person rotates such that the face faces rearward, the detection processing unit 6b may detect the rotation of the entire body of the person, and the first calculation unit 6c may determine the change in the rotation angle based on the detection result. The first calculation unit 6c may calculate the rotation angle of the image processing apparatus 100 based on the motion information output from the motion detection unit 5, and may determine the amount or rate of change in the direction of the optical axis of the lens unit 3a of the image processing apparatus 100 with respect to the front of the face of the person with respect to the reference state.
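The comparison of face-detection-frame shapes described above can be sketched as follows. The aspect-ratio heuristic and the simple cosine foreshortening model are assumptions made for illustration; the patent only states that the central axis and rotation angle are derived from the change in frame shape.

```python
import math

# Hedged sketch: infer the rotation axis and a rough angle from how the
# face detection frame changes relative to the reference frame. A face
# turning about the yaw axis foreshortens the frame width; turning about
# the pitch axis foreshortens its height. Assumes the shooting distance
# is unchanged (size changes are scored separately).

def estimate_rotation(ref_w: float, ref_h: float,
                      cur_w: float, cur_h: float) -> tuple[str, float]:
    w_ratio = min(cur_w / ref_w, 1.0)
    h_ratio = min(cur_h / ref_h, 1.0)
    if w_ratio < h_ratio:    # frame became vertically long -> yaw rotation
        return "yaw", math.degrees(math.acos(w_ratio))
    if h_ratio < w_ratio:    # frame became horizontally long -> pitch rotation
        return "pitch", math.degrees(math.acos(h_ratio))
    return "none", 0.0
```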
For example, the first calculation unit 6c compares the line of sight of the person included in the reference face image F with the line of sight of the person included in the frame images sequentially acquired by the first image acquisition unit 6a, and determines a change in the line of sight of the person as the subject from the reference state.
The first calculation unit 6c calculates the privacy level based on the amount or rate of change in the rotation angle around the predetermined axis with respect to the face of the person in the reference state, the amount or rate of change in the direction of the optical axis of the lens portion 3a of the image processing apparatus 100 with respect to the front of the face of the person in the reference state, and the change in the line of sight of the person as the subject with respect to the reference state. For example, when the face of the person is rotated to the left or right about the yaw axis or the line of sight is moved to the left or right, the first calculation unit 6c calculates the privacy level with a correlation in which the privacy level is the lowest in a state where the direction of the face is the front and the line of sight is only moved to the left (or right), and the privacy level gradually increases as the direction of the face is rotated to the left (or right), as shown in fig. 7A. Further, for example, in the case of rotating the face of the person up and down around the pitch axis or moving the line of sight up and down, the first calculation section 6c calculates the privacy level in a correlation relationship in which the privacy level in a state where the direction of the face is the front and only the line of sight moves down (or up) is the lowest, and the privacy level gradually increases as the direction of the face is rotated down (or up), as shown in fig. 7B. Further, for example, in the case of rotating the face of the person to the left and right and up and down around the yaw axis and the pitch axis or moving the line of sight to the left and right and up and down, the first calculation unit 6C calculates the privacy level in a correlation relationship in which the direction of the face is the front and the privacy level is the lowest in a state in which the line of sight is moved only to the left-down (or right-down, left-up, right-up) and the privacy level gradually increases as the direction of the face is rotated to the left-down (or right-down, left-up, right-up) as shown in fig. 7C.
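A minimal numeric sketch of the Fig. 7A-to-7C style correlation follows: the level is lowest when only the line of sight deviates from the front, and rises as the face itself rotates further. The linear shape, the 90° saturation point, and the small gaze-only offset are all assumptions.

```python
# Hedged sketch of the monotone correlation in Figs. 7A-7C: a small base
# score when only the gaze leaves the front, rising with face rotation.
# All constants are illustrative assumptions.

def face_direction_score(rotation_deg: float, gaze_off_front: bool,
                         max_deg: float = 90.0) -> float:
    base = 10.0 if gaze_off_front else 0.0   # gaze alone: lowest level
    turn = 90.0 * min(abs(rotation_deg), max_deg) / max_deg
    return min(base + turn, 100.0)
```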
The first calculation unit 6c may set the privacy level independently without making the line of sight of the person and the rotation angle of the face about the predetermined axis have a correlation.
The above-described method of specifying the change in the rotation angle of the face of the person about the predetermined axis is merely an example and is not limited to this; for example, a plurality of discriminators corresponding to different face rotation angles may be used in the face detection processing, and the result indicating which discriminator detected the face may be used. The above-described method of specifying the change of the line of sight is likewise merely an example, is not limited to this, and can be changed as appropriate.
For example, the first calculation unit 6c compares the number of components of the face detected from the reference face image F with the number of components of the face detected from the frame images sequentially acquired by the first image acquisition unit 6a, and determines a change in the ratio at which the components of the face of the person are blocked with respect to the reference state. Specifically, the first calculation unit 6c determines a change in the rate at which the constituent parts of the face of the person are blocked from the reference state based on a change in the number of constituent parts of the face such as eyes and mouth detected from the face region detected in the face detection processing.
The first calculation unit 6c calculates the privacy level based on the change in the ratio of the occlusion of the constituent parts of the face of the person with respect to the reference state. For example, as shown in fig. 8A, the first calculation section 6c calculates the privacy level in a correlation relationship in which the privacy level is lowest in a state in which any one of the constituent parts of the face (e.g., mouth, etc.) is blocked, and the privacy level gradually increases as the number of blocked constituent parts of the face increases.
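As an illustration of the Fig. 8A correlation, the sketch below scores the fraction of facial parts from the reference state that are no longer detected; the 0 to 100 scale is an assumption.

```python
# Hedged sketch: privacy score from the rate at which facial parts
# detected in the reference face image F are blocked in the current frame.

def occlusion_score(ref_parts: set[str], cur_parts: set[str]) -> float:
    if not ref_parts:
        return 0.0
    hidden = ref_parts - cur_parts       # parts such as "eyes", "mouth"
    return 100.0 * len(hidden) / len(ref_parts)
```

For example, occlusion_score({"left_eye", "right_eye", "mouth"}, {"left_eye", "right_eye"}) yields about 33, and the score grows as more parts are hidden, matching the monotone relationship in Fig. 8A.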
The above-described method of determining the change in the ratio of the occlusion of the component part of the face of the person is merely an example, and is not limited to this, and can be appropriately and arbitrarily changed.
For example, the first calculation unit 6c compares the number of pixels constituting the face of the person included in the reference face image F with the number of pixels constituting the face of the person included in the frame images sequentially acquired by the first image acquisition unit 6a, and determines a change in the size of the face of the person included in the frame image with respect to the reference state. That is, for example, the greater the physical distance between the person and the image processing apparatus 100, the smaller the number of pixels constituting the face of the person included in the frame image. The first calculation unit 6c determines a change in the size of the face of the person included in the frame image from the reference state, for example, based on a change in the number of pixels constituting a face area including the face of the person. The first calculation unit 6c may convert an imaging distance (object distance) to the face of the person based on the focal length of the imaging unit 3, and may determine a change in the size of the face of the person included in the frame image from the reference state based on the change in the imaging distance.
The first calculation unit 6c calculates the privacy level based on the change in the size of the face of the person included in the frame image from the reference state. For example, as shown in fig. 8B, the first calculation unit 6c calculates the privacy level in such a correlation that the larger the size of the face of the person included in the frame image, the lower the privacy level, and the smaller the size of the face of the person included in the frame image, the higher the privacy level.
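The Fig. 8B correlation (a larger face yields a lower level, a smaller face a higher one) can be sketched as below; the linear mapping relative to the reference face is an assumption.

```python
# Hedged sketch: the smaller the face is relative to the reference face
# image F, the higher the privacy score (cf. Fig. 8B).

def size_score(ref_face_pixels: int, cur_face_pixels: int) -> float:
    if ref_face_pixels <= 0:
        return 0.0
    ratio = min(cur_face_pixels / ref_face_pixels, 1.0)
    return 100.0 * (1.0 - ratio)
```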
The above-described method of specifying the change in the size of the face of the person is merely an example, and is not limited to this, and can be appropriately and arbitrarily changed.
For example, the first calculation unit 6c compares the types of external blocking objects detected from the reference face image F and from the frame images sequentially acquired by the first image acquisition unit 6a, and specifies the change of the external blocking object from the reference state. Specifically, using, for example, detectors trained on a large number of face images of people wearing sunglasses and on a large number of face images of people wearing masks, the first calculation unit 6c detects external blocking objects such as sunglasses and masks within the face region detected in the face detection processing, and determines the change of the external blocking object from the reference state based on a change in the type of the detected blocking object.
Then, the first calculation unit 6c calculates the privacy level based on the change in the external blocking object from the reference state. For example, the first calculation unit 6c calculates the privacy level corresponding to the type of the external blocking object with reference to the blocking object table ST shown in fig. 9A. The type of the external blocking object is stored in the blocking object table ST in association with the privacy level.
The method of identifying the change in the external blocking object is only an example, and may be appropriately and arbitrarily changed, and for example, an object different from a portion constituting the face may be recognized and detected as an external blocking object by using a known target recognition technique.
The type of the external blocking object to be detected is only an example, and is not limited to this, and may be, for example, a hat, a headband, or the like that blocks hair.
Further, a higher privacy level may also be calculated in the event that multiple external obstructions are detected.
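A stand-in for the blocking object table ST of Fig. 9A is sketched below. The entry names and scores are invented placeholders, and summing the scores when multiple obstructions are detected (per the preceding paragraph) is one possible reading, not the patent's stated rule.

```python
# Hedged sketch of the blocking object table ST (Fig. 9A). The concrete
# entries and values are placeholders, not taken from the patent.
BLOCKING_OBJECT_TABLE = {
    "glasses": 10,
    "sunglasses": 40,
    "eye_patch": 30,
    "mouth_mask": 30,
    "hat": 15,
}

def blocking_object_score(detected: list[str]) -> int:
    # Multiple obstructions raise the level further; summing with a cap
    # is an assumed realization of that behaviour.
    return min(sum(BLOCKING_OBJECT_TABLE.get(o, 0) for o in detected), 100)
```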
Further, for example, the first calculation unit 6c compares the color of the face of the person detected from the reference face image F with the color of the face of the person detected from the frame images sequentially acquired by the first image acquisition unit 6a to determine the change in the color of the face of the person from the reference state. Specifically, the first calculation unit 6c measures the average RGB value of the skin color region excluding the components having colors different from skin colors, such as eyes and mouth, from the face region detected in the face detection processing, and adds up the absolute values of the differences between the R value, G value, and B value of the face of the person detected from the reference face image F and the R value, G value, and B value of the face of the person detected from the frame image sequentially acquired by the first image acquisition unit 6a, thereby specifying the change in the color of the face of the person.
Then, the first calculation unit 6c calculates the privacy level based on the change in the color of the face of the person from the reference state. For example, as shown in fig. 9B, the first calculation unit 6c calculates the privacy level in such a correlation that the smaller the difference between the color of the face of the person detected from the reference face image F and the color of the face of the person included in the frame image, the lower the privacy level, and the larger the difference between the colors, the higher the privacy level.
The above-described method of specifying the change in the color of the face of the person is merely an example, and is not limited to this, and it is possible to measure the area ratio of a region having a color different from the average RGB value of the colors of the face of the person detected from the reference face image F by a predetermined value or more among the colors of the face of the person included in the frame image, and to calculate the privacy level from the size of the area ratio, for example, as appropriate and arbitrarily.
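The sum-of-absolute-differences comparison of mean skin colors described above, together with one possible mapping to a score per Fig. 9B, can be sketched as follows; the normalization constant is an assumption.

```python
# Hedged sketch: compare the mean RGB of the skin region against the
# reference face image F by summing absolute channel differences, then
# map the difference to a 0..100 score (cf. Fig. 9B).

def skin_color_difference(ref_rgb: tuple[float, float, float],
                          cur_rgb: tuple[float, float, float]) -> float:
    return sum(abs(r - c) for r, c in zip(ref_rgb, cur_rgb))

def color_score(diff: float, max_diff: float = 3 * 255.0) -> float:
    return 100.0 * min(diff, max_diff) / max_diff
```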
The first calculation unit 6c sequentially calculates the privacy level for each frame image of the live view image sequentially captured by the imaging unit 3 and sequentially acquired by the first image acquisition unit 6a in the automatic imaging process, and sequentially outputs the calculated privacy level to the first determination unit 6d.
The first calculation unit 6c may calculate the privacy level based on at least one of the following changes from the reference state, the changes including: relative changes in the entire face of a person and in the constituent parts of the face; a change in the rate at which the face of the person or a constituent part of the face is blocked; a change in the size of the face of the person or the imaging distance to the face of the person included in the frame image; changes in external shades; and a change in color of the face of the person. When these changes are used as a reference, the first calculation unit 6c may perform individual evaluation for each item and perform comprehensive evaluation based on the results of the evaluation to calculate the privacy level.
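Where the text says the first calculation unit may evaluate each item individually and then evaluate the results comprehensively, one plausible realization is a weighted average; the weights below are assumptions, since the patent does not specify the combination rule.

```python
# Hedged sketch: combine per-item scores (face direction, occlusion,
# size, blocking objects, color) into one privacy level by weighted
# average. Weights are illustrative assumptions.

def privacy_level(scores: dict[str, float],
                  weights: dict[str, float] | None = None) -> float:
    if not scores:
        return 0.0
    weights = weights or {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total
```

For example, privacy_level({"direction": 60, "occlusion": 33, "size": 20}) returns roughly 38 with equal weights.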
Further, the calculation of the privacy level is conditioned on the fact that the body of the person is within the frame image, and when the body of the person is outside the frame image, the privacy level is not calculated and the automatic photographing process is not performed.
The first determination unit (determination means) 6d determines whether or not the privacy level calculated by the first calculation unit 6c is higher than a predetermined determination value.
That is, the first determination unit 6d determines whether or not the privacy level sequentially calculated by the first calculation unit 6c is higher than the determination value (whether or not a predetermined condition is satisfied) for each live view image sequentially captured by the imaging unit 3 in the automatic imaging process. Specifically, the first determination unit 6d first acquires, as the determination value, a desired privacy level specified in advance by the user and stored in the memory 2. Then, the first determination unit 6d sequentially acquires the privacy levels sequentially calculated by the first calculation unit 6c, and determines whether or not each acquired privacy level is higher than the determination value.
The determination value may be set as a default value, for example, a value empirically obtained from a plurality of privacy levels calculated in the privacy level calculation process.
The imaging control unit (control means) 6e controls the imaging unit 3 to execute predetermined processing related to the image.
That is, when the first determination unit 6d determines that the privacy level is higher than the predetermined determination value, the imaging control unit 6e controls the imaging unit 3 to capture an image for recording. Specifically, in the automatic imaging process, for example, the imaging control unit 6e uses as a trigger the transition from a state in which the privacy level sequentially calculated by the first calculation unit 6c is equal to or less than the determination value to a state in which it is higher than the determination value; based on the determination result of the first determination unit 6d, it then outputs an instruction to the imaging unit 3 to capture the image for recording, and causes the imaging unit 3 to capture it.
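The rising-edge trigger just described (capture only when the level crosses from at-or-below to above the determination value, not merely while it is high) can be sketched as below; the opposite trigger discussed in the next paragraph would simply reverse the comparison.

```python
# Hedged sketch: fire the recording-image capture only on the transition
# across the determination value.

def should_capture(prev_level: float, cur_level: float,
                   determination_value: float) -> bool:
    return prev_level <= determination_value < cur_level
```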
Alternatively, the imaging control unit 6e may cause the imaging unit 3 to capture the image for recording upon the opposite transition, from a state in which the privacy level sequentially calculated by the first calculation unit 6c is higher than the determination value to a state in which it is equal to or lower than the determination value. That is, for example, when the privacy level of the live view image has become too high, the user may perform an operation that lowers it (for example, turning the face back toward the front), and in the course of that operation the imaging control unit 6e may cause the imaging unit 3 to capture the image for recording based on the determination result of the first determination unit 6d.
The imaging control unit 6e controls the imaging unit 3 to capture an image for recording in accordance with an imaging instruction operation by a user. Specifically, for example, in the manual shooting process, when a shooting instruction operation is performed by the user via the operation input unit 11 described later, the shooting control unit 6e outputs a shooting instruction of the image for recording to the imaging unit 3, and causes the imaging unit 3 to shoot the image for recording.
The image processing unit 7 encodes the image data of the still image generated by the signal processing unit 4 in a predetermined compression format (for example, JPEG format or the like) to generate image data for recording of the still image.
The image processing unit 7 decodes the image data of the still image to be displayed, which is read from the memory 2 or the image recording unit 8, in accordance with the corresponding predetermined encoding method, and outputs the decoded image data to the display unit 9. At this time, the image processing unit 7 may resize the image data to a predetermined size (for example, VGA or full high-definition) based on, for example, the display resolution of the display panel 9b described later, and output it to the display unit 9.
The image recording unit 8 is configured by, for example, a nonvolatile memory (flash memory) or the like, and records image data for recording of a still image encoded in a predetermined compression format by the image processing unit 7. Specifically, in both the automatic shooting process and the manual shooting process, the image recording unit (recording means) 8 acquires the image for recording captured under control of the imaging control unit 6e as the private image P, acquires the privacy level calculated in the privacy level calculation process performed before the private image P was captured, and records that privacy level as Exif (Exchangeable Image File Format) information in association with the image data of the private image P.
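How the privacy level might be written as Exif information is sketched below using the piexif library; the choice of the UserComment tag and the key-value text format are assumptions, since the patent only says the level is recorded as Exif information in association with the image data.

```python
import piexif

# Hedged sketch: store the calculated privacy level in a JPEG's Exif
# UserComment tag. The tag choice and the "privacy_level=..." format
# are assumptions, not the patent's specification.

def record_privacy_level(jpeg_path: str, level: int) -> None:
    exif_dict = piexif.load(jpeg_path)
    comment = b"ASCII\x00\x00\x00" + f"privacy_level={level}".encode("ascii")
    exif_dict["Exif"][piexif.ExifIFD.UserComment] = comment
    piexif.insert(piexif.dump(exif_dict), jpeg_path)
```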
The image recording unit 8 may be configured to be capable of detachably mounting a recording medium (not shown), and control reading of data from the mounted recording medium and writing of data to the recording medium.
The display unit 9 includes a display control unit 9a and a display panel 9b.
The display control unit 9a performs control to display a predetermined image in the display area of the display panel 9b based on image data of a predetermined size read from the memory 2 or the image recording unit 8 and decoded by the image processing unit 7. Specifically, the display control unit 9a includes a VRAM (Video Random Access Memory), a VRAM controller, a digital video encoder, and the like. The digital video encoder reads out, via the VRAM controller and at a predetermined playback frame rate, the luminance signal Y and the color difference signals Cb and Cr that were decoded by the image processing unit 7 and stored in the VRAM (not shown), generates a video signal based on these data, and outputs the video signal to the display panel 9b.
The display panel 9b displays an image or the like captured by the image capturing unit 3 in a display area based on a video signal from the display control unit 9 a. Specifically, the display panel 9b displays a live view image while sequentially updating a plurality of frame images generated by imaging a subject by the imaging unit 3 at a predetermined frame rate in the still image imaging mode.
Further, the display panel 9b displays the images recorded in the image recording unit 8 in the image reproduction mode. At this time, the display control unit 9a, as display control means, causes the display panel 9b to display the private images P (see fig. 10) recorded in the image recording unit 8, sorted or ordered by the privacy level associated with each image. For example, as shown in fig. 10, the display control unit 9a classifies the plurality of private images P recorded in the image recording unit 8 into image groups for predetermined privacy levels (for example, 10, 30, 50, 70, 100, etc.), and causes the display panel 9b to display each classified image group. At this time, for each image group corresponding to a privacy level, thumbnail images can be displayed with any one private image P serving as the representative image Ps.
The display control unit 9a may instead rearrange the plurality of private images P recorded in the image recording unit 8 according to their associated privacy levels, and cause the display panel 9b to display them in the rearranged order.
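The grouping into image groups per privacy level band (10, 30, 50, 70, 100 in the example above) might look like the following sketch; treating those example values as upper bounds of bands is an assumed interpretation.

```python
# Hedged sketch: bucket private images by privacy level, using the
# example levels from the text as band upper bounds (an assumption).

def group_by_level(images: list[tuple[str, float]],
                   bands: tuple[int, ...] = (10, 30, 50, 70, 100)) -> dict:
    groups: dict[int, list[str]] = {b: [] for b in bands}
    for path, level in images:
        for bound in bands:
            if level <= bound:
                groups[bound].append(path)
                break
        else:
            groups[bands[-1]].append(path)   # levels above the top band
    return groups
```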
Examples of the display panel 9b include a liquid crystal display panel, an organic EL (Electro-Luminescence) display panel, and the like, but this is merely an example and is not limited thereto.
The communication control unit 10 transmits and receives data to and from a communication network (not shown) via the communication antenna 10 a.
That is, the communication antenna 10a is an antenna capable of transmitting and receiving data under the predetermined communication system (for example, the W-CDMA (Wideband Code Division Multiple Access) system, the CDMA2000 system, or the GSM (Global System for Mobile Communications; registered trademark) system) that the image processing apparatus 100 uses for communication with a radio base station (not shown). The communication control unit 10 transmits and receives data to and from the radio base station via the communication antenna 10a over a communication channel set up in the predetermined communication system, according to a communication protocol corresponding to that system. Specifically, the communication control unit (transmission means) 10 transmits the private image P captured under control of the imaging control unit 6e to an external recording server (a predetermined external device) S via the communication antenna 10a.
The recording server S is, for example, a server constituting a cloud, and has a function of opening a Web page (e.g., an image publication page or the like) on the internet as a World Wide Web (www) server. The recording server S receives various images and the like transmitted from a communication terminal such as the image processing apparatus 100, and discloses the images and the like as contents on a Web page.
Thus, the content disclosed on the Web page opened by the recording server S is viewable by the user of the communication terminal that can access the Web page via the communication network.
The recording server S may be any computer capable of being connected to a communication network, and a detailed description thereof will be omitted.
The communication network is a communication network for connecting the image processing apparatus 100 to an external apparatus such as the recording server S via a wireless base station, a gateway server (not shown), or the like. The communication Network is a communication Network constructed by using a dedicated line or an existing general public line, and various line systems such as WAN (Wide Area Network) and LAN (Local Area Network) can be applied.
The communication network includes, for example, various communication networks such as a telephone line network, an ISDN line network, a private line, a mobile communication network, a communication satellite line, and a CATV line network, an IP network, a VoIP (Voice over Internet Protocol) gateway, an Internet service provider, and the like.
The configuration of the communication control unit 10 is merely an example and is not limited thereto; it can be changed as appropriate. For example, although not shown, the communication control unit may be configured to mount a wireless LAN module and access a communication network via an access point.
The operation input unit 11 is used to perform a predetermined operation of the image processing apparatus 100. Specifically, the operation input unit 11 includes operation units such as a shutter button for instructing to photograph an object, a selection decision button for instructing to select an image-taking mode, a function, and the like, and a zoom button for instructing to adjust a zoom amount (all of which are not shown), and outputs a predetermined operation signal to the central control unit 1 in accordance with an operation of each button of the operation units.
< automatic shooting processing >
Next, an automatic image capturing process performed by the image processing apparatus 100 will be described with reference to fig. 2.
Fig. 2 is a flowchart showing an example of the operation related to the automatic shooting process.
As shown in fig. 2, when the image pickup of the live view image of the subject by the image pickup unit 3 is started, the signal processing unit 4 performs various image signal processes on the analog-value signal of the frame image relating to the live view image transferred from the electronic image pickup unit 3b, and generates digital-value image data (step S1). The signal processing unit 4 outputs the generated image data to the memory 2, and the memory 2 temporarily stores the input image data.
The first image obtaining unit 6a of the operation control unit 6 reads out and obtains image data of a frame image related to the live view image from the memory 2 (step S2).
Next, the detection processing unit 6b performs line-of-sight detection processing on the frame image to be processed acquired by the first image acquisition unit 6a (step S3), and determines whether or not the line of sight of the person as the subject is directed to the front (step S4).
Here, when it is determined that the line of sight is not directed to the front (step S4; No), the first image acquisition unit 6a reads out and acquires image data of a new frame image related to the live view image from the memory 2 (step S5), and the process returns to step S3. Then, the detection processing unit 6b performs the line-of-sight detection processing on the new frame image acquired by the first image acquisition unit 6a in substantially the same manner as described above (step S3).
On the other hand, when it is determined in step S4 that the line of sight is directed to the front (step S4; Yes), the operation control unit 6 shifts to the standby state for automatic shooting (step S6).
That is, unless the line of sight of the person as the subject is directed to the front (looking at the camera), the apparatus does not shift to the automatic shooting standby state; this prevents, for example, unintended automatic shooting while the person is not looking at the camera.
Thereafter, the detection processing unit 6b performs detection processing on the frame images sequentially acquired by the first image acquisition unit 6a, and the first calculation unit 6c determines, as the reference state, a state in which the face of the person can be detected and the line of sight of the person is in the frontal direction (the predetermined direction) based on the detection result of the detection processing unit 6b (step S7). The first calculation unit 6c outputs the frame image in the reference state to the memory 2, which temporarily stores it as the reference face image F. The temporarily stored reference face image F is thus an image in which the face is oriented in the frontal direction (the predetermined direction).
Next, the first image obtaining unit 6a reads out and obtains image data of a new frame image related to the live view image from the memory 2 (step S8), and the operation control unit 6 performs privacy level calculation processing (see fig. 3 and 4) for calculating the privacy level of the new frame image (step S9; described in detail later).
Then, the first determination unit 6d of the operation control unit 6 determines whether or not the privacy level calculated in the privacy level calculation process of step S9 is higher than the determination value set in the determination value setting process (step S10). Specifically, the first judgment section 6d reads out and acquires the judgment value from the memory 2, and judges whether or not the privacy level calculated in the privacy level calculation process is higher than the judgment value.
When it is determined that the privacy level is not higher than the determination value (step S10; no), the operation control unit 6 returns the process to step S8 and executes the processes after step S8. That is, the first image obtaining section 6a obtains the image data of the new frame image in step S8, and performs the privacy level calculation process in step S9.
On the other hand, when it is determined in step S10 that the privacy level is higher than the determination value (step S10; YES), the image pickup control unit 6e controls the image pickup unit 3 to pick up an image for recording of the subject (step S11). Specifically, for example, the imaging control unit 6e sets a timer for automatically imaging the subject after a predetermined time has elapsed, and causes the imaging unit 3 to image the subject and the signal processing unit 4 to generate image data when the predetermined time set by the timer has elapsed. Then, the image processing unit 7 encodes the image data generated by the signal processing unit 4 in a predetermined compression format (for example, JPEG format or the like) to generate image data of an image for recording.
Thereafter, the image recording unit 8 acquires the image for recording from the image processing unit 7 as the private image P, acquires the privacy level calculated in the privacy level calculation process, and records the privacy level as Exif information in association with the image data of the private image P (step S12).
This ends the automatic shooting process.
In the above-described automatic image capturing process, one private image P is captured based on the privacy level, but this is merely an example, and for example, a plurality of private images P having privacy levels higher than a predetermined determination value may be captured and recorded in the image recording unit 8 so that the user selects a desired private image P from among the recorded private images P.
< privacy level calculation processing >
Next, the privacy level calculation process performed by the image processing apparatus 100 will be described with reference to fig. 3 and 4.
Fig. 3 and 4 are flowcharts showing one example of actions involved in the privacy-level calculation process.
As shown in fig. 3, first, the detection processing unit 6b performs face detection processing on a frame image to be processed (for example, a new frame image acquired in step S8 of fig. 2 or the like) (step S21), and determines whether or not a face area including the face of a person as a subject is detected (step S22).
Here, if it is determined that no face area is detected even though the person as the subject is within the frame image (step S22; No), the first calculation unit 6c sets the privacy level to its highest value, assuming, for example, a state in which the person has rotated the entire body by approximately 180° about the yaw axis so that the face faces rearward (step S23). The privacy level calculation processing then ends.
On the other hand, when it is determined in step S22 that a face region is detected (step S22; yes), the first calculation portion 6c detects a wearing article (external mask) such as sunglasses or a face mask in the face region detected from the frame image of the processing target (step S24), and determines whether or not the wearing article (external mask) is detected (step S25). When it is determined that the wearing article (external blocking object) is detected (step S25; yes), the score for calculating the privacy level is evaluated and determined according to the type of the wearing article (external blocking object) with reference to the blocking object table ST (fig. 9A) (step S26). On the other hand, when it is determined that no wearing article (external shield) is detected (step S25; NO), the process of step S26 is skipped.
Next, the first calculation unit 6c performs skin color detection processing (step S27): it measures the color of the skin region within the face area detected from the frame image to be processed, acquires the reference face image F from the memory 2, measures the color of the skin region within the face area of the reference face image F, and calculates the difference between the two measured skin colors. It then determines whether or not a skin color different from the usual one has been detected, that is, whether the calculated difference between the skin colors is equal to or larger than a predetermined value (step S28). Here, when it is determined that a skin color different from the usual one has been detected (step S28; Yes), a score for calculating the privacy level is evaluated and determined according to the skin color difference (step S29). On the other hand, when it is determined that no skin color different from the usual one has been detected (step S28; No), the process of step S29 is skipped.
Next, the first calculation unit 6c determines the number of pixels constituting the face region detected from the frame image to be processed as the size of the face (step S30). The first calculation unit 6c acquires the reference face image F from the memory 2, and also specifies the number of pixels of the face area constituting the face of the person with respect to the reference face image F.
Then, the first calculation unit 6c specifies a change in the size of the face of the person included in the frame image from the reference state based on a change in the number of pixels constituting the face area, and evaluates and determines a score for calculating the privacy level based on the specified change in the size of the face (step S31).
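The size evaluation of steps S30 and S31 could take the following form: the pixel counts of the two face areas give a relative size, and the score grows as the face becomes smaller than in the reference face image F, a smaller face being harder to recognize. The direction and weighting of the scoring are assumptions; the text states only that the score follows the specified change in size.

```python
def face_size_score(face_pixels, ref_face_pixels, weight=50.0):
    """Score the change in face size from the reference state (steps S30-S31).

    face_pixels / ref_face_pixels: number of pixels constituting the face
    area in the processed frame and in the reference face image F.
    """
    ratio = face_pixels / float(ref_face_pixels)
    # No contribution when the face is as large as (or larger than) in the
    # reference state; the score rises as the face shrinks.
    return weight * max(0.0, 1.0 - ratio)
```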
Next, the first calculation unit 6c compares the shape of the face detection frame corresponding to the face area of the person included in the reference face image F with the shape of the face detection frame corresponding to the face area of the person included in the new frame image, and calculates the central axis of rotation and the rotation angle of the face of the person from the change in the shape of the face detection frame (step S32).
Then, the first calculation unit 6c determines whether or not the face of the person detected from the frame image to be processed is directed to the front (step S33).
When it is determined in step S33 that the face is not directed to the front (step S33; no), the first calculation portion 6c determines the amount or rate of change of the rotation angle of the face of the person about the predetermined axis with respect to the reference state, and evaluates and determines a score for calculating the privacy level from the determined change in the rotation angle of the face (step S34).
On the other hand, when it is determined in step S33 that the face is directed to the front (step S33; yes), the first calculation portion 6c skips the process of step S34.
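Steps S32 to S34 derive a rotation axis and angle from the change in the shape of the face detection frame. A simple projection model consistent with that description is sketched below: the apparent width of the frame shrinks roughly with the cosine of the yaw angle, and the apparent height with the cosine of the pitch angle, relative to the frontal reference frame. This model and the clamping are assumptions; the passage does not specify the geometry used.

```python
import math

def estimate_face_rotation(ref_w, ref_h, cur_w, cur_h):
    """Estimate the rotation of the face from the change in the shape of the
    face detection frame (step S32), assuming a cosine projection model.

    Returns the dominant central axis ("yaw" or "pitch") and the rotation
    angle in degrees with respect to the reference state.
    """
    yaw = math.degrees(math.acos(min(1.0, cur_w / float(ref_w))))
    pitch = math.degrees(math.acos(min(1.0, cur_h / float(ref_h))))
    if yaw >= pitch:
        return "yaw", yaw
    return "pitch", pitch
```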
Moving to fig. 4, the detection processing unit 6b detects the constituent parts of the face such as the eyes and mouth from the face region detected in the face detection processing in the frame image to be processed (step S35). In addition, the detection processing unit 6b also detects the constituent parts of the face such as eyes and mouth from within the face region with respect to the reference face image F.
Then, the first calculation unit 6c evaluates and determines a score for calculating the privacy level based on a change in the number of components of the face, that is, based on a change in the rate at which components of the face of the person are blocked from the reference state (step S36).
Next, the detection processing unit 6b determines whether or not an eye is detected from the frame image to be processed (step S37).
When it is determined that the eyes are detected (step S37; yes), the detection processing unit 6b performs line-of-sight detection processing on the frame image to be processed (step S38), and determines whether or not the line of sight of the person as the subject is directed to the front (step S39).
When it is determined in step S39 that the line of sight is not directed to the front (step S39; no), the first calculation portion 6c determines the change in the line of sight of the person as the subject from the reference state, and evaluates and determines a score for calculating the privacy level based on the determined change in the line of sight (step S40).
On the other hand, when it is determined in step S39 that the line of sight is directed to the front (step S39; yes), the first calculation section 6c skips the process of step S40.
Further, in the case where it is determined in step S37 that no eye is detected (step S37; NO), the respective processes of steps S38 to S40 are skipped.
Then, the first calculation unit 6c calculates the privacy level using a predetermined conversion expression based on the result of the score evaluation corresponding to the wearing article in step S26, the result of the score evaluation corresponding to the color of the skin in step S29, the result of the score evaluation corresponding to the change in the size of the face in step S31, the result of the score evaluation corresponding to the change in the rotation angle of the face in step S34, the result of the score evaluation corresponding to the change in the rate at which the constituent parts of the face are blocked in step S36, and the result of the score evaluation corresponding to the change in the line of sight of the person in step S40 (step S41). That is, the first calculation unit 6c calculates the privacy level so that the privacy level is relatively lower for frame images in which the change of the person is relatively small with respect to the reference state, and the privacy level is relatively higher for frame images in which the change of the person is relatively large with respect to the reference state.
The conversion expression for calculating the privacy level performs a comprehensive evaluation over all of the evaluation items. It may also be possible, for example, to designate an evaluation item to be prioritized (for example, the direction of the face), and when a designated evaluation item has not been subjected to a score evaluation, the privacy level may be calculated from the results of the score evaluations of the other evaluation items.
Thereby, the privacy level calculation processing is ended.
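The "predetermined conversion expression" of step S41 is not written out in this passage; a weighted sum over the evaluated items, normalized so that skipped items are simply omitted, is one plausible form and is sketched below. The weights and the emphasis factor for a prioritized item are hypothetical.

```python
def privacy_level(scores, weights, priority_item=None):
    """Combine the per-item score evaluations of steps S26-S40 (step S41).

    scores: dict of item name -> evaluated score; items whose score
    evaluation was skipped are absent. weights: dict of item name -> weight.
    A designated priority item (for example, the direction of the face) is
    emphasized by a hypothetical factor of 2.
    """
    total = 0.0
    norm = 0.0
    for item, score in scores.items():
        w = weights.get(item, 1.0)
        if item == priority_item:
            w *= 2.0
        total += w * score
        norm += w
    return total / norm if norm else 0.0

# Example: a frame in which only the occluder and face-size items were scored.
level = privacy_level({"occluder": 30.0, "face_size": 12.0},
                      {"occluder": 1.5, "face_size": 1.0, "gaze": 1.0})
```

Larger changes of the person from the reference state raise the evaluated scores, so a combination of this form yields a relatively low level for frames close to the reference state and a relatively high level for frames far from it, consistent with the behavior described above.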
< Manual shooting processing >
Next, the manual image capturing process performed by the image processing apparatus 100 will be described with reference to fig. 5.
Fig. 5 is a flowchart showing an example of the operation related to the manual shooting process.
As shown in fig. 5, when the image pickup of the live view image of the subject by the image pickup unit 3 is started, the signal processing unit 4 performs various image signal processes on the analog-value signal of the frame image relating to the live view image transferred from the electronic image pickup unit 3b, and generates digital-value image data (step S13). The signal processing unit 4 outputs the generated image data to the memory 2, and the memory 2 temporarily stores the input image data.
Next, the central control unit 1 determines whether or not an imaging instruction (imaging instruction operation) has been given by the pressing operation of the shutter button of the operation input unit 11 (step S14).
When it is determined that the photographing instruction operation has been performed (step S14; yes), the photographing control unit 6e controls the image pickup unit 3 to photograph an image for recording of the subject (step S15). Specifically, for example, the imaging control unit 6e causes the imaging unit 3 to image the subject and causes the signal processing unit 4 to generate image data. Then, the image processing unit 7 encodes the image data generated by the signal processing unit 4 in a predetermined compression format (for example, JPEG format or the like) to generate image data of an image for recording.
Next, the first image obtaining unit 6a of the operation control unit 6 reads out and obtains the image data of the frame image related to the temporarily stored live view image from the memory 2 (step S16), and the operation control unit 6 performs the above-described privacy level calculation process (see fig. 3 and 4) (step S17).
Thereafter, the image recording unit 8 acquires the image for recording from the image processing unit 7 as the private image P, acquires the privacy level calculated in the privacy level calculation process, and records the privacy level as Exif information in association with the image data of the private image P (step S18).
This ends the manual shooting process.
As described above, according to the image processing apparatus 100 of embodiment 1, the privacy level of the face of the person included in the frame image captured by the image capturing unit 3 is calculated, and it is determined whether or not this privacy level, which indicates how difficult it is to recognize the face as the face of a specific person, satisfies a predetermined condition. When the calculated privacy level is determined to be higher than a predetermined determination value, the image capturing unit 3 is controlled to capture the image for recording (the private image P), which is an example of control for executing a predetermined process related to the image. Thus, for example, it is not necessary to perform image processing such as mosaic processing or blocking processing on an image to be published or recorded, and a more natural privacy image P corresponding to a desired privacy level can be obtained without the appearance being degraded by local mosaics or blocking.
Then, the image for recording (privacy image P) captured based on the privacy level is transmitted to a predetermined external device such as the recording server S, and the image can be disclosed as content on a Web page.
In addition, although in embodiment 1 the image pickup unit 3 is controlled to pick up the image for recording (the private image P) when it is determined that the calculated privacy level is higher than the predetermined determination value, the configuration may instead allow image pickup to be performed normally while suppressing the pickup of the private image P when it is not determined that the calculated privacy level is higher than the predetermined determination value (another example of control for executing a predetermined process related to an image). This provides the same effects as embodiment 1.
Further, the privacy level is sequentially calculated for each frame image sequentially picked up by the image pickup unit 3, and the image for recording is picked up based on the determination result as to whether or not each sequentially calculated privacy level is higher than the predetermined determination value, with the transition from a state in which the privacy level is equal to or less than the predetermined determination value to a state in which it is higher serving as the trigger. The user himself/herself can therefore, during the image pickup of the live view image, perform an operation that changes the privacy level of each frame image so as to bring about a transition to a state in which the privacy level is higher than the predetermined determination value, and thereby acquire an image corresponding to a desired privacy level.
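The trigger behavior described above can be summarized by the following sketch, where the recording image is captured on the transition from "at or below" to "above" the determination value. The callables stand in for the first calculation unit 6c and the imaging control unit 6e and are hypothetical.

```python
def watch_live_view(frames, calc_privacy_level, capture_recording_image,
                    determination_value):
    """Sequentially evaluate live-view frames and capture the private image P
    when the privacy level first rises above the determination value."""
    prev_above = False
    for frame in frames:
        above = calc_privacy_level(frame) > determination_value
        if above and not prev_above:
            capture_recording_image()  # capture and record the private image P (fig. 2)
        prev_above = above
```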
The privacy level can be calculated based on a relative change in the face of the person, or in a part constituting the face, with respect to a reference state in which the face of the person included in the frame image can be recognized as the face of a specific person (for example, a state in which the face can be detected and the face and the line of sight are in a predetermined direction). For example, the privacy level can be calculated based on the amount or rate of change in the rotation angle of the face about a predetermined axis, or the amount or rate of change in the direction of the face relative to the image processing apparatus 100. That is, by using the relative change of the face, or of the parts constituting the face, with respect to the reference state, usability can be improved and the expression of the privacy image P acquired with reference to the privacy level can be made more varied: there is no need, for example, to determine a rotation angle of the face in advance for each privacy level and have the subject rotate the face by exactly that angle or adjust the direction of the image processing apparatus 100.
Further, the privacy level may be calculated based on a change in the rate at which the face of the person, or a part constituting the face, is blocked in the frame image; based on a change in the size of the face or in the imaging distance to the face; based on a change in an external blocking object that blocks the face or a part constituting the face; or based on a change in the color of the face. The criteria for calculating the privacy level can thereby be expanded, and the expression of the privacy image P acquired based on the privacy level can be made more varied.
Further, by recording the private images P in association with the privacy levels, the display panel 9b can display the private images P in a category or an order based on the corresponding privacy levels, and even when a plurality of private images P are recorded, the private images P desired by the user can be easily selected, thereby improving the usability.
In the manual shooting process, a privacy level indicating the degree of difficulty in recognizing the face of the person included in the frame image as the face of the specific person is calculated, and the calculated privacy level is recorded in association with the image captured as the recording image in accordance with the user's shooting instruction (an example of control for executing a predetermined process related to the image). This makes it possible to call up and display the recorded image together with its associated privacy level, or to output the recorded image together with the associated privacy level to an external device. In addition, when the user refers to the privacy level associated with the displayed recorded image, or the external apparatus refers to the privacy level associated with the output recorded image (an example of determining whether or not the privacy level satisfies a predetermined condition), and the privacy level is determined to be higher than a predetermined determination value, it is possible to decide whether or not the image is to be a subject of disclosure, or to perform the disclosure processing.
Further, as one form of the control for executing a predetermined process related to the image, the calculated privacy level may be displayed together with the captured image during the playback display in which the image captured in accordance with the user's shooting instruction is displayed for a predetermined time, allowing the user to decide whether or not the image is to be a subject of disclosure or recording; the processing for disclosing or recording the captured image can then be executed in accordance with a predetermined operation during the playback display.
In the above embodiment, the determination value of the privacy level may be automatically set.
The determination value setting process performed by the image processing apparatus 100 will be described below with reference to fig. 11.
Fig. 11 is a flowchart showing an example of the operation related to the determination value setting process.
The determination value setting process is performed before the above-described automatic shooting process, in a state where a mode for setting the determination value used in the automatic shooting process is selected.
As shown in fig. 11, when the image pickup of the live view image of the subject by the image pickup unit 3 is started, the signal processing unit 4 performs various image signal processing on the analog value signal of the frame image relating to the live view image transferred from the electronic image pickup unit 3b, and generates digital value image data (step S51). The signal processing unit 4 outputs the generated image data to the memory 2, and the memory 2 temporarily stores the input image data.
The first image obtaining unit 6a of the operation control unit 6 reads out and obtains image data of a frame image related to the live view image from the memory 2 (step S52).
Next, the detection processing unit 6b performs face detection processing on the frame image acquired by the first image acquisition unit 6a to detect a face area including the face of a person as a subject, and performs line-of-sight detection processing to detect the line of sight of the person as the subject (step S53).
Next, the first calculation unit 6c determines, based on the results of the face detection processing and the line-of-sight detection processing performed by the detection processing unit 6b, whether or not the reference state, in which the face of the person can be detected and the face and the line of sight of the person are in the front direction, has been identified (step S54).
When it is determined in step S54 that the reference state has not been identified (step S54; no), the first image obtaining section 6a reads out and obtains image data of a new frame image related to the live view image from the memory 2 (step S55), and the process returns to step S53. Then, in step S53, the detection processing unit 6b performs the face detection processing and the line-of-sight detection processing on the new frame image acquired by the first image acquisition unit 6a in substantially the same manner as described above (step S53).
On the other hand, when it is determined in step S54 that the reference state has been identified (step S54; yes), the first image obtaining unit 6a reads out and obtains image data of a new frame image related to the live view image from the memory 2 (step S56), and the operation control unit 6 performs the privacy level calculation processing (see fig. 3 and 4) for calculating the privacy level of the new frame image (step S57). In addition, the image data of the frame image in the reference state (the reference face image F) may be temporarily stored in the memory 2.
Then, the operation control section 6 determines whether or not the privacy level calculated in the privacy level calculation process is to be used as the determination value for evaluating the privacy level in the automatic shooting process (step S58). Specifically, for example, the display control unit 9a causes the display panel 9b to display a screen (not shown) for confirming the privacy level calculated in the privacy level calculation process. Then, after the user confirms the privacy level, the operation control unit 6 determines whether or not to use the calculated privacy level as the determination value, based on whether an instruction to that effect has been input by a predetermined operation of the operation input unit 11.
When it is determined that the calculated privacy level is not to be used as the determination value (step S58; no), the operation control unit 6 returns the process to step S56 and executes the processes after step S56. That is, in step S56, the first image obtaining section 6a obtains the image data of the new frame image, and performs the privacy level calculation process in step S57.
On the other hand, when it is determined in step S58 that the calculated privacy level is to be the determination value (step S58; yes), the operation control unit 6 sets the calculated privacy level to the determination value (step S59). Specifically, the first calculation unit 6c of the operation control unit 6 outputs the calculated privacy level to the memory 2, and the memory 2 temporarily stores the inputted privacy level as a determination value.
This ends the determination value setting process.
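The two phases of fig. 11, waiting for the reference state and then proposing privacy levels until the user accepts one, can be sketched as follows. The callables are hypothetical stand-ins for the detection processing unit 6b, the first calculation unit 6c, and a confirmation via the operation input unit 11.

```python
def set_determination_value(frames, in_reference_state, calc_privacy_level,
                            user_accepts):
    """Sketch of the determination value setting process (fig. 11)."""
    frame_iter = iter(frames)
    # Steps S53-S55: wait until a frame satisfies the reference state
    # (face detected, face and line of sight in the front direction).
    for frame in frame_iter:
        if in_reference_state(frame):
            break
    # Steps S56-S59: propose the privacy level of each subsequent frame
    # until the user confirms one as the determination value.
    for frame in frame_iter:
        level = calc_privacy_level(frame)
        if user_accepts(level):
            return level  # stored in the memory 2 as the determination value
    return None
```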
[ embodiment 2]
Hereinafter, an image processing apparatus 200 according to embodiment 2 will be described with reference to fig. 12.
Fig. 12 is a block diagram showing a schematic configuration of an image processing apparatus 200 according to embodiment 2 to which the present invention is applied.
As shown in fig. 12, the image processing apparatus 200 of the present embodiment includes: a central control unit 1, a memory 2, an operation control unit 206, an image processing unit 7, an image recording unit 8, a display unit 9, a communication control unit 10, and an operation input unit 11.
The central control unit 1, the memory 2, the operation control unit 206, the image processing unit 7, the image recording unit 8, the display unit 9, and the communication control unit 10 are connected via a bus 12.
The image processing apparatus 200 according to embodiment 2 has substantially the same configuration as the image processing apparatus 100 according to embodiment 1, except for the aspects described in detail below, and detailed description thereof is omitted.
The motion control unit 206 includes a second image acquisition unit 206a, a detection processing unit 6b, a second calculation unit 206c, a second determination unit 206d, and an acquisition control unit 206 f.
The second image acquiring unit 206a acquires a recording image from the image recording unit 8.
Specifically, the second image acquiring unit 206a acquires, for example, a recorded image recorded in the image recording unit 8 as the processing target of an image acquisition process (described later). When a plurality of recorded images are recorded in the image recording unit 8, for example, all of the recorded images may be made processing targets of the image acquisition process, or only the recorded images designated by a predetermined operation of the operation input unit 11 by the user may be made processing targets.
The second calculation section (calculation unit) 206c calculates the privacy level.
That is, the second calculation unit 206c sets, as a virtual reference state, a state in which it is possible to recognize that the face of the person included in the recorded image is the face of a specific person, for example, a state in which the face of the person can be detected and the face and the line of sight of the person are in the front direction (predetermined direction), based on the detection result of the detection processing unit 6b on the recorded image acquired by the second image acquisition unit 206 a. The second calculation unit 206c then calculates the privacy level by specifying the relative change of the entire face of the person and the constituent parts of the face with respect to the virtual reference state.
That is, the second calculation unit 206c assumes a state in which the face of the person is facing forward and the line of sight is facing forward with respect to the recorded image acquired by the second image acquisition unit 206a, and sets this state as a virtual reference state. The second calculation unit 206c determines a virtual change in the rotation angle of the entire face of the person detected from the recorded image and the constituent parts of the face about a predetermined axis (for example, the yaw axis and the pitch axis; see fig. 6) with respect to the set virtual reference state. The second calculation unit 206c determines a virtual change in the line of sight of the person detected from the recorded image with respect to the set virtual reference state.
The second calculation unit 206c calculates the privacy level based on a virtual change in the rotation angle of the face of the person in the virtual reference state around the predetermined axis and a virtual change in the line of sight of the person as the subject in the virtual reference state.
Further, the second calculation section 206c calculates the privacy level based on the rate at which the face of the person, or the parts constituting the face, are blocked.
That is, the second calculation unit 206c determines the number of constituent parts (for example, eyes, mouth, and the like) of the face of the person detected from the recorded image acquired by the second image acquisition unit 206a. The second calculation unit 206c then calculates the privacy level based on the ratio at which the constituent parts of the face are blocked, relative to the number of constituent parts that would be detectable (for example, "3" in the case of both eyes and the mouth) if the person as the subject did not block the entire face or its constituent parts with his or her own hand, hair, or an external blocking object (for example, sunglasses, a face mask, or the like).
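Using the example given above, with both eyes and the mouth as the detectable constituent parts, the blocked ratio and its score contribution might be computed as in the sketch below; the weighting is an assumption.

```python
EXPECTED_PART_COUNT = 3  # both eyes and the mouth, per the example above

def occlusion_ratio_score(detected_part_count, weight=40.0):
    """Score the ratio at which constituent parts of the face are blocked.

    For instance, a recorded image in which only one eye and the mouth are
    detected yields a blocked ratio of 1/3.
    """
    blocked_ratio = (EXPECTED_PART_COUNT - detected_part_count) / float(EXPECTED_PART_COUNT)
    return weight * max(0.0, blocked_ratio)
```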
Further, the second calculation section 206c calculates the privacy level based on the size of the face of the person included in the recorded image or the imaging distance to the face of the person.
That is, the second calculation unit 206c determines, for example, the number of pixels constituting a face area including the face of the person detected from the recorded image acquired by the second image acquisition unit 206a as the size of the face of the person included in the recorded image. The second calculation unit 206c calculates the privacy level based on the size of the face of the person included in the specified recorded image. The second calculation unit 206c acquires the focal length of the image pickup unit 3 from, for example, Exif information associated with the image data of the recorded image, converts the focal length into the image pickup distance to the face of the person, and calculates the privacy level.
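The conversion from the Exif focal length to an imaging distance is not spelled out; under a pinhole model it reduces to d = f * H / h, where H is an assumed physical face height and h is the face height on the sensor. The sketch below makes that assumption explicit; the default face height and the pixel pitch parameter are hypothetical.

```python
def imaging_distance_mm(focal_length_mm, face_height_px, pixel_pitch_mm,
                        real_face_height_mm=230.0):
    """Approximate the imaging distance to the face from the focal length
    recorded in the Exif information, using the pinhole relation
    d = f * H / h (all lengths in millimetres).
    """
    face_height_on_sensor_mm = face_height_px * pixel_pitch_mm
    return focal_length_mm * real_face_height_mm / face_height_on_sensor_mm
```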
The second calculation unit 206c calculates the privacy level based on a change in the external shade from the reference state.
That is, the second calculation unit 206c compares the type of the external mask detected from the reference face image F with the type of the external mask detected from the recorded image acquired by the second image acquisition unit 206a, and specifies the change in the external mask from the reference state. The second calculation unit 206c calculates the privacy level based on the change in the external blocking object from the reference state.
The specific method of calculating the privacy level by the second calculation unit 206c based on the change in the external shield from the reference state is substantially the same as the method used by the first calculation unit 6c in embodiment 1, and a detailed description thereof will be omitted.
Further, the second calculation section 206c calculates the privacy level based on the change in the color of the face of the person from the reference state.
That is, the second calculation unit 206c compares the color of the face of the person detected from the reference face image F with the color of the face of the person detected from the recorded image acquired by the second image acquisition unit 206a, and determines the change in the color of the face of the person from the reference state. The second calculation unit 206c calculates the privacy level based on the change in the color of the face of the person from the reference state.
The specific method of calculating the privacy level by the second calculation unit 206c based on the change in the color of the face of the person from the reference state is substantially the same as the method used by the first calculation unit 6c in embodiment 1, and a detailed description thereof will be omitted.
The second determination unit (determination means) 206d determines whether or not the privacy level calculated by the second calculation unit 206c is higher than a predetermined determination value.
That is, the second determination unit 206d determines whether or not the privacy level calculated for the recorded image by the second calculation unit 206c is higher than the determination value in the image acquisition process. Specifically, the second determination section 206d acquires the desired privacy level stored in the memory 2 as the determination value, and determines whether or not the privacy level calculated by the second calculation section 206c is higher than that determination value.
The acquisition control unit 206f acquires an image on which a predetermined process is performed.
That is, the acquisition control unit 206f acquires the recorded image determined by the second determination unit 206d to have the privacy level higher than the determination value as the privacy image P for executing the predetermined processing (for example, transmission processing).
< image acquisition processing >
Next, image acquisition processing performed by the image processing apparatus 200 will be described with reference to fig. 13.
Fig. 13 is a flowchart showing an example of the operation related to the image acquisition processing.
As shown in fig. 13, the second image acquiring unit 206a reads out image data of any one of the recorded images from the image recording unit 8 and acquires the image data as a processing target of the image acquiring process (step S61).
Then, the operation control unit 206 performs a privacy level calculation process (see fig. 14 and 15) of calculating the privacy level of the acquired recorded image (step S62; described in detail later).
Then, the second determination unit 206d of the motion control unit 206 determines whether or not the privacy level calculated in the privacy level calculation process of step S62 is higher than the determination value (step S63). Specifically, the second determination section 206d reads out and acquires the determination value from the memory 2, and determines whether or not the privacy level calculated in the privacy level calculation process is higher than the determination value.
Here, when it is determined that the privacy level is higher than the determination value (step S63; yes), the acquisition controller 206f acquires the recording image to be processed as the privacy image P transmitted to the recording server S (step S64).
Thereafter, the operation control unit 206 determines whether or not all the recorded images recorded in the image recording unit 8 have been processed as processing targets of the image acquisition processing (step S65).
If it is determined in step S63 that the privacy level is not higher than the determination value (no in step S63), the operation control unit 206 skips the processing in step S64 and similarly determines whether or not all the recorded images have been processed as the processing target of the image acquisition processing.
When it is determined in step S65 that all of the recorded images have not been processed as the processing subjects of the image acquisition processing (step S65; no), the second image acquisition section 206a reads out the image data of the new recorded image from the image recording section 8 to acquire as the processing subjects of the image acquisition processing (step S66), and then returns the processing to step S62. Then, in step S62, the operation control unit 206 performs the privacy level calculation process for calculating the privacy level of the acquired new recording image, substantially in the same manner as described above (step S62).
On the other hand, when it is determined in step S65 that all the recorded images have been processed as processing targets for the image acquisition processing (step S65; YES), the image acquisition processing is ended.
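Seen end to end, the image acquisition process of fig. 13 amounts to filtering the recorded images by their computed privacy level, as in the sketch below; the callables stand in for units 206a to 206f and are hypothetical.

```python
def collect_private_images(recorded_images, calc_privacy_level,
                           determination_value):
    """Collect the recorded images whose privacy level is higher than the
    determination value as privacy images P to be transmitted to the
    recording server S (steps S61-S66)."""
    return [image for image in recorded_images
            if calc_privacy_level(image) > determination_value]
```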
< privacy level calculation processing >
Next, the privacy level calculation process performed by the image processing apparatus 200 will be described with reference to fig. 14 and 15.
Fig. 14 and 15 are flowcharts showing an example of actions involved in the privacy level calculation processing.
As shown in fig. 14, first, the detection processing unit 6b performs face detection processing on the recorded image to be processed (for example, the recorded image acquired in step S61 of fig. 13 or the like) (step S71), and determines whether or not a face area including the face of a person is detected (step S72).
Here, when it is determined that a face area is detected (step S72; yes), the second calculation portion 206c detects a wearing article (external blocking object) such as sunglasses or a face mask in the face area detected from the recorded image to be processed (step S73), and determines whether or not such a wearing article is detected (step S74). When it is determined that a wearing article (external blocking object) is detected (step S74; yes), a score for calculating the privacy level is evaluated and determined according to the type of the wearing article, with reference to the blocking object table ST (fig. 9A) (step S75). On the other hand, when it is determined that no wearing article (external blocking object) is detected (step S74; no), the process of step S75 is skipped.
Next, the second calculation unit 206c performs skin color detection processing (step S76): it measures the color of the skin color region in the face region detected from the recorded image to be processed, acquires the reference face image F from the memory 2, measures the color of the skin color region in the face region of the reference face image F, and calculates the difference between the two measured skin colors. Then, it is determined whether or not a skin color different from the usual skin color is detected, that is, whether the calculated difference between the skin colors is equal to or larger than a predetermined value (step S77). Here, when it is determined that a skin color different from the usual one is detected (step S77; yes), a score for calculating the privacy level corresponding to the color of the skin (the calculated difference between the skin colors) is evaluated and determined (step S78). On the other hand, when it is determined that a skin color different from the usual one is not detected (step S77; no), the process of step S78 is skipped.
Next, the second calculation unit 206c determines the number of pixels constituting the face region detected from the recording image as the processing target as the size of the face (step S79). Then, the second calculation section 206c evaluates and decides a score for calculating the privacy level according to the size of the determined face (step S80).
On the other hand, when it is determined that no face area is detected (step S72; NO), the privacy level calculation process is ended.
Next, the second calculation unit 206c sets, as a virtual reference state, a state in which it is possible to recognize that the face of the person included in the recorded image is the face of the specific person, for example, a state in which the face of the person can be detected and the face and the line of sight of the person are in the front direction (predetermined direction) (step S81). Thereafter, the second calculation unit 206c calculates the central axis and the rotation angle of the rotation of the detected face of the person with respect to the set virtual reference state (step S82).
Then, the second calculation unit 206c determines whether or not the face of the person detected from the recorded image to be processed is directed to the front (step S83).
When it is determined in step S83 that the face is not directed to the front (step S83; no), the second calculation unit 206c determines the amount or rate of change of the rotation angle of the face of the person about the predetermined axis with respect to the virtual reference state, and evaluates and determines a score for calculating the privacy level from the determined virtual change in the rotation angle of the face (step S84).
On the other hand, when it is determined in step S83 that the face is directed to the front (step S83; yes), the second calculation portion 206c skips the process of step S84.
Moving to fig. 15, the detection processing unit 6b detects the constituent parts of the face, such as the eyes and mouth, from the face region detected in the face detection processing in the recorded image to be processed (step S85). Then, the second calculation unit 206c evaluates and determines a score for calculating the privacy level based on the number of detected constituent parts, that is, based on the ratio at which the constituent parts of the face are blocked, relative to the number of constituent parts that would be detectable (for example, "3" in the case of both eyes and the mouth) if the person did not block the entire face or its constituent parts (for example, eyes, mouth, and the like) (step S86).
Next, the detection processing unit 6b determines whether or not an eye is detected from the recording image to be processed (step S87).
When it is determined that the eyes are detected (step S87; yes), the detection processing unit 6b performs line-of-sight detection processing on the recorded image to be processed (step S88), and determines whether or not the line of sight of the person as the subject is directed to the front (step S89).
When it is determined in step S89 that the line of sight is not directed to the front (step S89; no), the second calculation portion 206c determines the change in the line of sight of the person from the virtual reference state, and evaluates and determines a score for calculating the privacy level based on the determined virtual change in the line of sight (step S90).
On the other hand, when it is determined in step S89 that the line of sight is directed to the front (step S89; yes), the second calculation portion 206c skips the process of step S90.
Further, in the case where it is determined in step S87 that no eye is detected (step S87; NO), the respective processes of steps S88 to S90 are skipped.
Then, the second calculation unit 206c calculates the privacy level based on the result of the score evaluation corresponding to the wearing article in step S75, the result of the score evaluation corresponding to the color of the skin in step S78, the result of the score evaluation corresponding to the size of the face in step S80, the result of the score evaluation corresponding to the change in the rotation angle of the face with respect to the virtual reference state in step S84, the result of the score evaluation corresponding to the ratio at which the constituent parts of the face are blocked in step S86, and the result of the score evaluation corresponding to the change in the line of sight of the person with respect to the virtual reference state in step S90 (step S91).
Thereby, the privacy level calculation processing is ended.
As described above, according to the image processing apparatus 200 of embodiment 2, the privacy level of the face of the person included in the recorded image is calculated, the privacy level indicating the degree of difficulty in recognizing the face as the face of a specific person, and it is determined whether or not this privacy level satisfies a predetermined condition. When the calculated privacy level is determined to be higher than a predetermined determination value, the image is acquired as an image to be transmitted to a predetermined external apparatus (an example of control for executing a predetermined process related to the image). Thus, as in embodiment 1, it is not necessary, for example, to perform image processing such as mosaic processing or blocking processing on an image to be published or recorded, and a more natural privacy image P corresponding to a desired privacy level can be acquired without the appearance being degraded by local mosaics or blocking.
The privacy level can be calculated based on a relative change in the face of the person, or in a part constituting the face, with respect to a virtual reference state in which the face of the person included in the recorded image can be recognized as the face of a specific person (for example, a state in which the face can be detected and the face and the line of sight are in a predetermined direction). Therefore, as in embodiment 1, usability can be improved by using the relative change of the face, or of the parts constituting the face, with respect to the virtual reference state, and the expression of the privacy image P acquired with the privacy level as a reference can be made more varied.
Further, the privacy level may be calculated based on the rate at which the face of the person, or a part constituting the face, is blocked in the recorded image; based on the size of the face or the imaging distance to the face; based on a change in an external blocking object that blocks the face or a part constituting the face; or based on a change in the color of the face. The criteria for calculating the privacy level can thereby be expanded, and the expression of the privacy image P acquired based on the privacy level can be made more varied.
Further, by recording the private images P in association with the privacy levels, the display panel 9b can display the private images P in a category or an order based on the corresponding privacy levels, and even when a plurality of private images P are recorded, the private images P desired by the user can be easily selected, thereby improving the usability.
The present invention is not limited to the above embodiments 1 and 2, and various improvements and design changes can be made without departing from the scope of the present invention.
For example, although embodiments 1 and 2 disclose the transmission of the private image P to the external recording server S, this is merely an example and is not limited thereto. For example, the image processing apparatus 100 according to embodiment 1 or the image processing apparatus 200 according to embodiment 2 may be provided with a server function, and may be accessed from an external terminal to the image processing apparatuses 100 and 200 to view the private images P. In this case, for example, whether or not to disclose the privacy image P may be automatically set for each privacy image P according to the privacy level to which the privacy image P corresponds.
In embodiments 1 and 2, the size of the face of the person included in the private image P may be recorded as Exif information in association with the image data of the private image P, and the display control unit 9a may cause the display panel 9b to display the private images P recorded in the image recording unit 8 in a classification or order based on the size of the face of the person corresponding to the Exif information.
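The display-order variation described above could be realized as below; face_size_of is a hypothetical accessor that reads the face size (the pixel count of the face area) from the Exif information associated with each private image P.

```python
def order_by_face_size(private_images, face_size_of):
    """Order private images P by the face size recorded as Exif information,
    largest face first, for display on the display panel 9b."""
    return sorted(private_images, key=face_size_of, reverse=True)
```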
The configurations of the image processing apparatuses 100 and 200 are merely examples illustrated in the above embodiments 1 and 2, and are not limited thereto. For example, although the image processing apparatus 100 is equipped with the imaging unit 3, the present invention is not limited to this, and may be connected to an external imaging unit without being equipped with an imaging unit so as to enable information communication and imaging control.
In addition, although the above embodiment 1 is configured to realize the functions as the determination means and the control means by driving the first determination unit 6d and the imaging control unit 6e under the control of the central control unit 1, the present invention is not limited to this, and may be configured to realize the functions by executing a predetermined program or the like by the central control unit 1.
That is, a program including the determination processing program and the control processing program is recorded in a program memory (not shown). The CPU of the central control unit 1 may be caused to function as means for determining whether or not a privacy level indicating a degree of difficulty in recognizing that the face of the person included in the image is the face of the specific person satisfies a predetermined condition by a determination processing program. Further, the CPU of the central control section 1 can be caused to function as means for controlling execution of predetermined processing related to the image (privacy image P) when it is determined that the privacy level satisfies the predetermined condition by the control processing program.
Similarly, although the second calculation unit 206c and the acquisition control unit 206f are driven under the control of the central control unit 1 to realize the functions as the calculation means and the control means in embodiment 2, the present invention is not limited to this, and may be realized by the central control unit 1 executing a predetermined program or the like.
That is, a program including a calculation processing program and a control processing program is recorded in a program memory (not shown). Further, the CPU of the central control section 1 may be caused to function as means for calculating a privacy level indicating a degree of difficulty in recognizing that the face of the person included in the image is the face of the specific person by a calculation processing program. Further, the CPU of the central control section 1 can be caused to function as means for controlling execution of predetermined processing related to the image (privacy image P) using the calculated privacy level by the control processing program.
Further, as a computer-readable medium storing a program for executing the above-described processes, a nonvolatile memory such as a flash memory or a portable recording medium such as a CD-ROM may be used in addition to a ROM, a hard disk, or the like. A carrier wave may also be applied as a medium for supplying the program data via a predetermined communication line.
Although the present invention has been described with reference to the embodiments, the scope of the present invention is not limited to the embodiments described above, and includes the scope of the invention described in the claims and the scope equivalent thereto.

Claims (18)

1. An image processing apparatus is provided with a processor,
the processor:
calculating a privacy level indicating a degree of difficulty in recognizing a face of a person included in an image as a face of a specific person based on a relative change in the face of the person or a part constituting the face of the person with respect to a predetermined reference state in which the face of the person is recognized as the face of the specific person;
determining whether the calculated privacy level is higher than a predetermined determination value;
and controlling execution of a predetermined process related to the image based on a result of the determination as to whether or not the privacy level satisfies a predetermined condition.
2. The image processing apparatus according to claim 1,
the processor calculates the privacy level based on a change amount or a change rate of a rotation angle of the face of the person around a prescribed axis.
3. The image processing apparatus according to claim 1,
the processor calculates the privacy level based on a relative change in the face of the person or a part constituting the face of the person with respect to the predetermined reference state, the predetermined reference state including a state in which the face of the person can be detected and the face and the line of sight of the person are in a predetermined direction.
4. The image processing apparatus according to claim 1,
the processor calculates the privacy level based on an amount of change or a rate of change in the direction of the face of the person with respect to the image processing apparatus.
5. The image processing apparatus according to claim 1,
the processor calculates the privacy level based on a change in a rate at which the face of the person or a part constituting the face of the person is blocked.
6. The image processing apparatus according to claim 1,
the processor calculates the privacy level based on a size of a face of a person included in the image or a change in imaging distance to the face of the person.
7. The image processing apparatus according to claim 1,
the processor calculates the privacy level based on a change in an external blocking object that blocks a face of a person or a constituent part of the face of the person included in the image.
8. The image processing apparatus according to claim 1,
the processor calculates the privacy level based on a change in color of a face of a person included in the image.
9. The image processing apparatus according to claim 1,
the processor:
calculating the privacy level of an image captured by a camera;
and controlling the image pickup unit to pick up an image for recording when the calculated privacy level is determined to be higher than a predetermined determination value.
10. The image processing apparatus according to claim 9,
the processor:
sequentially calculating the privacy level for each image sequentially captured by the imaging section;
sequentially determining whether or not the privacy levels sequentially calculated are higher than the predetermined determination value;
and performing control of capturing an image for recording based on the results of the sequential determinations, triggered by a change from a state in which the privacy level is equal to or less than the predetermined determination value to a state in which the privacy level is higher than the predetermined determination value.
11. The image processing apparatus according to claim 9,
the processor:
recording the captured image for recording in a recording unit in association with the calculated privacy level;
and performing control of displaying the images recorded in the recording unit on a display unit in a sorted or sequential manner based on the privacy level associated with the image.
12. The image processing apparatus according to claim 1,
the processor:
calculating the privacy level of the image recorded in the recording section;
and performing control of acquiring an image determined to have the calculated privacy level higher than a predetermined determination value as an image to be transmitted to a predetermined external device.
13. An image processing apparatus is provided with a processor,
the processor:
calculating a privacy level indicating a degree of difficulty in recognizing a face of a person included in an image as a face of a specific person based on a relative change in the face of the person or a part constituting the face of the person with respect to a predetermined reference state in which the face of the person is recognized as the face of the specific person;
the calculated privacy level is used to control execution of prescribed processing related to the image,
further, the image is an image captured by an imaging unit,
the processor further performs control of recording the calculated privacy level in a recording unit in association with an image captured by the imaging unit as an image for recording.
14. An image processing method using an image processing apparatus, the image processing method comprising:
a processing of calculating a privacy level indicating a degree of difficulty in recognizing a face of a person included in an image as a face of a specific person based on a relative change in the face of the person or a part constituting the face of the person with respect to a predetermined reference state in which the face of the person is recognized as the face of the specific person;
a process of determining whether or not the calculated privacy level is higher than a predetermined determination value; and
and a process of controlling execution of the predetermined process related to the image according to a determination result of whether or not the privacy level satisfies a predetermined condition.
15. An image processing method using an image processing apparatus, the image processing method comprising:
a processing of calculating a privacy level indicating a degree of difficulty in recognizing a face of a person included in an image as a face of a specific person based on a relative change in the face of the person or a part constituting the face of the person with respect to a predetermined reference state in which the face of the person is recognized as the face of the specific person; and
a process of controlling execution of a prescribed process related to the image using the calculated privacy level,
further, the image is an image captured by an imaging unit,
the image processing method further includes the processing of: and performing control of recording the calculated privacy level in a recording unit in association with an image captured by the imaging unit as an image for recording.
16. A computer-readable recording medium storing a program for causing a computer to execute:
a processing of calculating a privacy level indicating a degree of difficulty in recognizing a face of a person included in an image as a face of a specific person based on a relative change in the face of the person or a part constituting the face of the person with respect to a predetermined reference state in which the face of the person is recognized as the face of the specific person;
a process of determining whether or not the calculated privacy level is higher than a predetermined determination value; and
and a process of controlling execution of the predetermined process related to the image according to a determination result of whether or not the privacy level satisfies a predetermined condition.
17. A computer-readable recording medium storing a program for causing a computer to execute:
a processing of calculating a privacy level indicating a degree of difficulty in recognizing a face of a person included in an image as a face of a specific person based on a relative change in the face of the person or a part constituting the face of the person with respect to a predetermined reference state in which the face of the person is recognized as the face of the specific person; and
a process of controlling execution of a prescribed process related to the image using the calculated privacy level,
further, the image is an image captured by an imaging unit,
the program is for causing a computer to further execute: and performing control of recording the calculated privacy level in a recording unit in association with an image captured by the imaging unit as an image for recording.
18. An image processing apparatus is provided with a processor,
the processor:
determining whether or not a privacy level indicating a degree of difficulty in recognizing the face of the person included in the image as the face of the specific person satisfies a predetermined condition;
controlling execution of a predetermined process related to the image based on a result of the determination as to whether or not the privacy level satisfies a predetermined condition,
the processor:
also calculating a privacy level indicating a degree of difficulty in recognizing the face of the person included in the image as the face of the specific person;
determining whether the calculated privacy level is higher than a predetermined determination value,
the processor:
calculating the privacy level of an image captured by a camera;
when the calculated privacy level is determined to be higher than a predetermined determination value, performing control for causing the image pickup unit to pick up an image for recording,
the processor:
sequentially calculating the privacy level for each image sequentially captured by the imaging section;
sequentially determining whether or not the privacy levels sequentially calculated are higher than the predetermined determination value;
and performing control of capturing an image for recording based on the results of the sequential determinations, triggered by a change from a state in which the privacy level is equal to or less than the predetermined determination value to a state in which the privacy level is higher than the predetermined determination value.
CN201610908844.5A 2015-12-01 2016-10-18 Image processing apparatus, image processing method, and computer-readable recording medium Active CN107038362B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2015-234701 2015-12-01
JP2015234701 2015-12-01
JP2016-118545 2016-06-15
JP2016118545A JP6206542B2 (en) 2015-12-01 2016-06-15 Image processing apparatus, image processing method, and program

Publications (2)

Publication Number Publication Date
CN107038362A CN107038362A (en) 2017-08-11
CN107038362B true CN107038362B (en) 2020-11-17

Family

ID=59061085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610908844.5A Active CN107038362B (en) 2015-12-01 2016-10-18 Image processing apparatus, image processing method, and computer-readable recording medium

Country Status (2)

Country Link
JP (1) JP6206542B2 (en)
CN (1) CN107038362B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7030534B2 (en) 2018-01-16 2022-03-07 キヤノン株式会社 Image processing device and image processing method
JP7075237B2 (en) * 2018-02-23 2022-05-25 ラピスセミコンダクタ株式会社 Operation judgment device and operation judgment method
CN109035167B (en) * 2018-07-17 2021-05-18 北京新唐思创教育科技有限公司 Method, device, equipment and medium for processing multiple faces in image
CN109711297A (en) * 2018-12-14 2019-05-03 深圳壹账通智能科技有限公司 Risk Identification Method, device, computer equipment and storage medium based on facial picture
US11430088B2 (en) * 2019-12-23 2022-08-30 Samsung Electronics Co., Ltd. Method and apparatus for data anonymization
CN116206558B (en) * 2023-05-06 2023-08-04 惠科股份有限公司 Display panel control method and display device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140620A (en) * 2007-10-16 2008-03-12 上海博航信息科技有限公司 Human face recognition system
US8254647B1 (en) * 2012-04-16 2012-08-28 Google Inc. Facial image quality assessment
US9036875B2 (en) * 2013-02-06 2015-05-19 Kabushiki Kaisha Toshiba Traffic control apparatus, method thereof, and program therefor
CN104718742A (en) * 2013-10-16 2015-06-17 奥林巴斯映像株式会社 Display device, image generation device, display method and program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4036051B2 (en) * 2002-07-30 2008-01-23 オムロン株式会社 Face matching device and face matching method
BRPI1101789E2 (en) * 2011-02-14 2015-12-22 Neti Soluções Tecnologicas Ltda face access validation system for biometric face recognition
JP2014067131A (en) * 2012-09-25 2014-04-17 Zenrin Datacom Co Ltd Image processing apparatus, image processing system, image processing method, and computer program


Also Published As

Publication number Publication date
JP6206542B2 (en) 2017-10-04
CN107038362A (en) 2017-08-11
JP2017108374A (en) 2017-06-15

Similar Documents

Publication Publication Date Title
CN107038362B (en) Image processing apparatus, image processing method, and computer-readable recording medium
US8416312B2 (en) Image selection device and method for selecting image
JP5409189B2 (en) Imaging apparatus and control method thereof
US7826730B2 (en) Image capturing device having a hand shake correction function, hand shake correction method, and storage medium storing a hand shake correction process program
JP6230554B2 (en) Imaging device
US8830348B2 (en) Imaging device and imaging method
US10546185B2 (en) Image processing apparatus for performing image processing according to privacy level
US8139136B2 (en) Image pickup apparatus, control method of image pickup apparatus and image pickup apparatus having function to detect specific subject
JP5136669B2 (en) Image processing apparatus, image processing method, and program
JP4321287B2 (en) Imaging apparatus, imaging method, and program
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
US8106995B2 (en) Image-taking method and apparatus
US9113075B2 (en) Image processing method and apparatus and digital photographing apparatus using the same
JP2005130468A (en) Imaging apparatus and its control method
KR101728042B1 (en) Digital photographing apparatus and control method thereof
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
US9253406B2 (en) Image capture apparatus that can display review image, image capture method, and storage medium
KR20150078275A (en) Digital Photographing Apparatus And Method For Capturing a Moving Subject
KR20140031804A (en) Imaging apparatus, imaging method and storage medium
JP2008028747A (en) Imaging device and program thereof
JP5370555B2 (en) Imaging apparatus, imaging method, and program
JP6758950B2 (en) Imaging device, its control method and program
JP2013192184A (en) Subject tracking display controller, subject tracking display control method, and program
JP5951988B2 (en) Image composition apparatus and image composition method
JP5393189B2 (en) Imaging apparatus and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant