JP2008004061A - Object image detection system, matching decision apparatus and sorting apparatus for object image section, and control method thereof

Info

Publication number: JP2008004061A (granted as JP4769653B2)
Application number: JP2006201782A
Authority: JP (Japan)
Other languages: Japanese (ja)
Inventor: Daisuke Hayashi (林 大輔)
Original assignee: Fujifilm Corp (富士フイルム株式会社)
Priority: JP2006146030
Legal status: Granted; Expired - Fee Related
Prior art keywords: target image, means, position, image portion, image


Abstract

[Objective] To prevent user discomfort caused by changes in the display position of a frame surrounding a face.
[Configuration] Face image detection processing is performed on each of four subject images I1 to I4. Frames F11 to F13, F21, F22, F31, F32, and F41 to F43 are defined so as to surround the detected face images. Among these frames, the frame F41, which surrounds the face image of the latest subject image I4, is displayed at the average size and position of the frame F41 and the corresponding frames F11, F21, and F31. The displayed frame F41 therefore changes little from the corresponding frames F11, F21, and F31 of the previous subject images I1 to I3, and the user is not given a flickering impression due to frame fluctuation.
[Selection] Figure 2

Description

  The present invention relates to a target image detection system, a target image portion matching determination device, a target image portion sorting device, and a control method thereof.

  A digital still camera, a digital movie camera, or the like is provided with a display device, and the subject image obtained by imaging is displayed on it as a moving image. The user determines the camera angle while viewing the displayed subject image.

Recently, techniques have been proposed in which a face image portion is detected from a captured subject image and the detected face image portion is displayed surrounded by a frame (Patent Documents 1 to 4).
JP 2005-318515 A; JP 2005-311888 A; JP 2005-286940 A; JP 2005-165562 A

  However, even when the same subject is captured in successive images, as in a moving image, the subject shifts slightly from image to image. Consequently, if a frame is displayed on the face image portion of each image, the position of the displayed frame shifts from image to image, which may be uncomfortable for the user viewing the images.

  Further, when a face image is included in each of a plurality of consecutive frames, it may be difficult to determine whether or not they are the same face image.

  An object of the present invention is to suppress discomfort caused by the frame display as much as possible.

  Another object of the present invention is to make it possible to determine whether or not face images included in a plurality of frames are the same. A further object of the present invention is to make it possible to arrange target image portions in order when a single image includes a plurality of target image portions such as face images.

  A target image detection system according to a first aspect of the present invention comprises: target image portion detection means for detecting a target image portion from an image represented by given image data; position storage means for storing a position indicating the target image portion detected by the target image portion detection means; control means for controlling the target image portion detection means and the position storage means so as to repeat the detection processing in the target image portion detection means and the storage processing in the position storage means for image data of a plurality of frames that are successively given; display position determining means for determining the display position of a detection frame indicating the target image portion based on the positions stored in the position storage means; and display control means for controlling a display device so as to display the detection frame at the position determined by the display position determining means, together with the last image given to the target image portion detection means.

  The first invention also provides a control method suitable for the target image detection system. That is, in this method, the target image portion detection means detects a target image portion from an image represented by given image data; the position storage means stores a position indicating the target image portion detected by the target image portion detection means; the control means controls the target image portion detection means and the position storage means so as to repeat the detection processing in the target image portion detection means and the storage processing in the position storage means for image data of a plurality of frames given in succession; the display position determination means determines the display position of a detection frame indicating the target image portion based on the positions stored in the position storage means; and the display control means controls the display device so as to display the detection frame, at the position determined by the display position determination means, together with the image finally given to the target image portion detection means.

  According to the first invention, when image data is given, a target image portion is detected from the image represented by the image data, and a position indicating the detected target image portion is stored. Image data of a plurality of frames is given continuously (it may be given at a constant cycle as in a moving image, or image data representing still images may be given for a plurality of frames as in continuous shooting), and the target image portion detection processing and the position storage processing are repeated. Based on the stored positions, the display position of the detection frame indicating the target image portion is determined, and the detection frame is displayed at the determined position together with the image. According to the first invention, since the display position of the detection frame is determined based on the positions of the target image portion in the images represented by previously given image data, the detection frame can be displayed in the vicinity of the position of the target image portion in the previous frame images. Fluctuation of the detection frame can thus be suppressed, and the user viewing the detection frame can be prevented from feeling uncomfortable. The display position may be, for example, the average of previously detected target image portion positions.
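  As an illustration of this averaging idea (a sketch, not part of the patent text), the following Python fragment smooths the displayed detection frame by averaging the positions and sizes detected over the most recent frames; the class name, the history length, and the coordinate format are all assumptions.

```python
from collections import deque

class FrameSmoother:
    """Stabilizes a detection frame by averaging recently stored positions.

    Minimal sketch of the first aspect: the display position of the
    detection frame is the average of the positions stored for the last
    `history` frames, so frame-to-frame jitter is suppressed.
    """

    def __init__(self, history=4):
        self.positions = deque(maxlen=history)  # acts as the position storage means

    def add_detection(self, x, y, size):
        """Store the position and size of the target image portion for one frame."""
        self.positions.append((x, y, size))

    def display_position(self):
        """Determine the display position as the average of the stored positions."""
        n = len(self.positions)
        avg_x = sum(p[0] for p in self.positions) / n
        avg_y = sum(p[1] for p in self.positions) / n
        avg_size = sum(p[2] for p in self.positions) / n
        return avg_x, avg_y, avg_size

# Usage: feed one detection per frame and draw the frame at the averaged position.
smoother = FrameSmoother(history=4)
for x, y, size in [(100, 80, 40), (103, 79, 42), (98, 82, 39), (101, 80, 41)]:
    smoother.add_detection(x, y, size)
print(smoother.display_position())  # (100.5, 80.25, 40.5)
```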

  The display position determining means may be provided with rearranging means for rearranging, in descending order, the sizes of the target image portions determined based on the positions stored in the position storage means, or for rearranging the center positions of the target image portions in order of proximity to the center of the image. In this case, the display position of the detection frame indicating the target image portion is determined based on a plurality of sizes including an intermediate size among the sizes of the target image portions rearranged by the rearranging means, or based on a plurality of positions including an intermediate position among the center positions of the target image portions rearranged by the rearranging means. The above processing may be performed when the number of stored positions is greater than or equal to a predetermined number.

  A target image portion coincidence determination device according to a second aspect of the present invention comprises: first target image portion detection means for detecting a target image portion from a first image; second target image portion detection means for detecting a target image portion from a second image; and determination means for determining whether or not the target image portion detected by the first target image portion detection means and the target image portion detected by the second target image portion detection means are the same, based on at least one of a change in size, a change in center position, a change in inclination, and a change in orientation between the two target image portions.

  The second invention also provides a control method suitable for the above-described target image portion coincidence determination device. That is, in this method, the first target image portion detection means detects a target image portion from a first image, the second target image portion detection means detects a target image portion from a second image, and the determination means determines whether or not the target image portion detected by the first target image portion detection means and the target image portion detected by the second target image portion detection means are the same, based on at least one of the change in size, the change in center position, the change in inclination, and the change in orientation between the two target image portions.

  According to the second invention, a target image portion is detected from the first image, and a target image portion is detected from the second image. Whether or not the two target image portions are the same is determined based on at least one of the detected change in size, change in center position, change in inclination, and change in orientation. Different target images can thus be prevented in advance from being determined to be the same.

  A target image detection system according to a third aspect of the present invention comprises: target image portion detection means for detecting one or a plurality of target image portions from each of a plurality of frame images represented by successively given image data for a plurality of frames; representative target image portion determination means for determining a representative target image portion from the one or plurality of target image portions detected by the target image portion detection means; first determination means for determining whether or not position data indicating the representative image portion of the previous frame determined by the representative target image portion determination means is stored in position storage means; first storage control means for storing, in the position storage means, position data determined based on the image data of the new frame in response to the first determination means determining that position data indicating the representative image portion of the previous frame is not stored in the position storage means; second determination means for determining, in response to the first determination means determining that position data determined based on the image data of the previous frame is stored in the position storage means, whether or not the position indicated by the position data determined based on the image data of the new frame and the position indicated by the position data determined based on the image data of the previous frame already stored in the position storage means are separated by a predetermined distance or more; second storage control means for storing, in the position storage means, the position data determined based on the image data of the new frame when the second determination means determines that the positions are not separated by the predetermined distance or more, and for storing, in the position storage means, data indicating the position of a target image portion, among the target image portions detected by the target image portion detection means, that is not separated by the predetermined distance or more, as data indicating the position of the representative target image portion, in response to a determination that the positions are separated by the predetermined distance or more; and display control means for controlling a display device to display frames on the image represented by the given image data so that the frame for the representative target image portion represented by the position data stored in the position storage means differs from the frames for the other target image portions.

  The third invention also provides a control method suitable for the target image detection system. That is, in this method, the target image portion detection means detects one or a plurality of target image portions from each of a plurality of frame images represented by successively given image data for a plurality of frames; the representative target image portion determination means determines a representative target image portion from the one or plurality of target image portions detected by the target image portion detection means; the first determination means determines whether or not position data indicating the representative image portion of the previous frame determined by the representative target image portion determination means is stored in the position storage means; the first storage control means, when the first determination means determines that such position data is not stored in the position storage means, stores the position data determined based on the image data of the new frame in the position storage means; the second determination means, in response to the first determination means determining that position data determined based on the image data of the previous frame is stored in the position storage means, determines whether or not the position indicated by the position data determined based on the image data of the new frame and the position indicated by the position data determined based on the image data of the previous frame already stored in the position storage means are separated by a predetermined distance or more; the second storage control means, when the second determination means determines that the positions are not separated by the predetermined distance or more, stores the position data determined based on the image data of the new frame in the position storage means, and in response to a determination that the positions are separated by the predetermined distance or more, stores data indicating the position of a target image portion, among the target image portions detected by the target image portion detection means, that is not separated by the predetermined distance or more, in the position storage means as data indicating the position of the representative target image portion; and the display control means controls the display device to display frames on the image represented by the given image data so that the frame for the representative target image portion represented by the position data stored in the position storage means differs from the frames for the other target image portions.

  According to the third invention, when no representative target image portion is stored, data indicating the position of the representative target image portion is stored in the position storage means. One or more target image portions are detected from the image represented by the given image data, and a representative image portion is determined from the detected target image portions. If the distance between the newly determined position of the representative image portion and the position of the already stored representative image portion is not the predetermined distance or more, data indicating the position of the newly determined representative image portion is stored (updated) in the position storage means as data indicating the position of the representative image portion. If the distance is the predetermined distance or more, data indicating the position of a target image portion that is not separated by the predetermined distance or more is newly stored as data indicating the position of the representative target image portion. Frames are displayed so that the frame of the representative target image portion represented by the stored position data differs from the frames of the other target image portions (for example, in color). Since a target image portion whose distance from the stored position of the representative target image portion is not the predetermined distance or more is set as the representative image portion, the representative target image portion can be prevented from changing frequently. Because the representative target image portion does not change frequently, even though its frame is displayed differently from the frames of the other target image portions, the frame of the representative target image portion can be prevented from moving frequently, and the user can be prevented from feeling uncomfortable.

  Clearing means may further be provided for clearing the position data stored in the position storage means each time image data for a predetermined number of frames is given, or in response to the target image portion detection means not detecting a target image portion.

  The representative target image portion determination means may be provided with sorting means for arranging the plurality of target image portions detected by the target image portion detection means in order of candidacy for the representative target image portion, based on at least one of the sorting elements of detected position, size, and target-image-likeness of the target image portion. In this case, the first candidate among the plurality of target image portions arranged by the sorting means is determined as the representative target image portion.

  Further, designation means for designating a representative target image portion from among the plurality of target image portions arranged by the sorting means may be provided. In this case, the target image portion designated by the designation means is determined as the representative target image portion.

  A target image portion sorting device according to a fourth aspect of the present invention comprises: target image portion detection means for detecting a plurality of target image portions from a given image; and sorting means for arranging the plurality of target image portions detected by the target image portion detection means in order of candidacy for the representative target image portion, based on at least one of the sorting elements of detected position, size, and target-image-likeness of the target image portion.

  The fourth invention also provides a control method suitable for the target image portion sorting device. That is, in this method, the target image portion detection means detects a plurality of target image portions from a given image, and the sorting means arranges the detected plurality of target image portions in order of candidacy for the representative target image portion, based on at least one of the sorting elements of detected position, size, and target-image-likeness of the target image portions.

  According to the fourth invention, a plurality of target image portions are detected from a given image. A plurality of detected target image portions are arranged in the order of candidates for the representative target image portion based on the position (distance from the center of the image) of the detected target image portion, the size of the target image portion, and the like. When there are a plurality of target image portions, sorting can be performed and a representative target image portion can be determined.

  The sorting means may comprise calculation means for calculating a composite priority based on a combination of at least two of the sorting elements of position, size, and target-image-likeness. In this case, the plurality of target image portions detected by the target image portion detection means are arranged in order of candidacy for the representative target image portion based on the composite priority calculated by the calculation means.

  The sorting means may comprise designation means for designating which of the sorting elements used by the calculation means to calculate the composite priority is to be prioritized. In this case, among the plurality of target image portions detected by the target image portion detection means, those having the same composite priority calculated by the calculation means are arranged in order of candidacy for the representative target image portion based on the sorting element designated by the designation means, and then on the sorting elements not designated.

  FIG. 1 shows an embodiment of the present invention and is a block diagram showing an electrical configuration of a digital still camera.

  The overall operation of the digital still camera is controlled by the system control circuit 1.

  The digital still camera has a mode dial 7 for setting modes such as an imaging mode and a playback mode, a shutter button 8, an image display on/off switch 9 for controlling on/off of the liquid crystal display device 35, a color temperature setting switch 10, an operation switch 11, and a memory card attachment/detachment detection circuit 12. Signals output from these switches are input to the system control circuit 1. Connected to the system control circuit 1 are a memory 2 for temporarily storing position data of a face image, which will be described later, a display device 3 for displaying predetermined information such as a frame number, a nonvolatile memory 4, and a communication circuit 5. An antenna 6 is connected to the communication circuit 5. Further, the system control circuit 1 includes a power supply control circuit 13, and power supplied from the power supply 14 is given to each circuit.

  The digital still camera includes a CCD 23 controlled by a timing generation circuit 33. In front of the CCD 23 are provided a zoom lens 21, whose barrier is controlled to open and close by a barrier control circuit 15, whose zoom amount is controlled by a zoom control circuit 16, and which is positioned at an in-focus position by a distance measurement control circuit 17, and an aperture 22 whose aperture value is controlled by an exposure control circuit 18.

  When the imaging mode is set, the subject is imaged at a constant cycle, and a video signal representing the subject image is output from the CCD 23 at a constant cycle. The video signal output from the CCD 23 is given to the analog / digital conversion circuit 24 and converted into digital image data. The image data obtained by the conversion is given to the face extraction circuit 26. In the face extraction circuit 26, a face image is extracted from the subject image obtained by imaging. The face extraction circuit 26 and the system control circuit 1 can perform serial communication, and face extraction processing is performed based on a control signal provided from the system control circuit 1 by serial communication.

  The above-described nonvolatile memory 4 stores data for detecting a face image, for example, data on the positions and sizes at which eyebrows, eyes, nose, ears, mouth, cheeks, and so on should exist if a given image is a face image. A scan frame having a predetermined size is moved by a predetermined distance at a time over the subject image, and data representing the image included in the scan frame at each moved position is compared with the data stored in the nonvolatile memory 4 to determine whether the image in the scan frame is a face image. Alternatively, a sample image of a face may be stored, and whether or not the image in the scan frame is a face image may be determined by comparing the sample image with the image in the scan frame.
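  The scan-frame procedure just described can be sketched as follows (an illustrative reconstruction, not the actual logic of the face extraction circuit 26); the window size, step, threshold, and the simple pixel-difference score standing in for the comparison with the data in the nonvolatile memory 4 are all assumptions.

```python
def match_score(window, template):
    """Toy similarity measure: 1 minus the mean absolute pixel difference."""
    n = len(window) * len(window[0])
    diff = sum(abs(w - t) for wr, tr in zip(window, template)
               for w, t in zip(wr, tr))
    return 1.0 - diff / (255.0 * n)

def scan_for_faces(image, template, frame_size=24, step=4, threshold=0.8):
    """Slide a fixed-size scan frame over the image and compare each window
    with stored face data, analogous to the face extraction described above.

    image: 2D grayscale array (list of rows); template: stand-in for the
    face data held in the nonvolatile memory 4. Returns (left, top, size)
    tuples for windows whose score reaches the threshold.
    """
    height, width = len(image), len(image[0])
    detections = []
    for top in range(0, height - frame_size + 1, step):
        for left in range(0, width - frame_size + 1, step):
            window = [row[left:left + frame_size]
                      for row in image[top:top + frame_size]]
            if match_score(window, template) >= threshold:
                detections.append((left, top, frame_size))
    return detections
```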

  The image data output from the analog / digital conversion circuit 24 is input to the image processing circuit 25, and predetermined signal processing such as gamma correction is performed. The image data output from the image processing circuit 25 is input to the color temperature detection circuit 27. The color temperature detection circuit 27 detects the color temperature of the subject image, and the color balance is adjusted under the control of the system control circuit 1.

  The image data output from the analog/digital conversion circuit 24 is temporarily stored in the image display memory 29 under the control of the memory control circuit 28. The image data is read from the image display memory 29 and applied to the digital/analog conversion circuit 34, whereby the image data is converted back into an analog video signal. By applying the analog video signal to the liquid crystal display device 35, the subject image obtained by imaging is displayed on the display screen of the liquid crystal display device 35. The imaging of the subject is repeated at regular intervals, and a moving image of the subject is displayed.

  In the digital still camera according to this embodiment, a detection frame surrounding the portion of the face image extracted as described above is displayed on the subject image. By looking at the detection frame, the user can easily confirm the face image portion in the subject image.

  When the shutter button 8 is pressed, the image data output from the analog / digital conversion circuit 24 as described above is temporarily stored in the memory 30 by the memory control circuit 28. The image data is read from the memory 30 and compressed by the compression / decompression circuit 31. The compressed image data is given to the memory card 32 and recorded.

  When the playback mode is set, the compressed image data recorded on the memory card 32 is read and decompressed by the compression/decompression circuit 31. The decompressed image data is converted into a video signal and applied to the liquid crystal display device 35, whereby the subject image represented by the image data read from the memory card 32 is displayed on the display screen of the liquid crystal display device 35.

FIGS. 2A to 2D show examples of subject images given in succession.

  As described above, when the imaging mode is set, the subject is imaged and subject images are obtained in succession. In (A) to (D), four frames of images I1 to I4 are shown. The image I4 shown in (A) is the latest image (the image of the fourth frame, given last), the image I3 shown in (B) is the image one frame before, the image I2 shown in (C) is the image two frames before the latest image I4, and the image I1 shown in (D) is the image three frames before the latest image I4. These subject images I1 to I4 are obtained by imaging the same subject.

  In this embodiment, a process of extracting a face image (target image) in each of the subject images I1 to I4 (although not necessarily four frames) obtained by imaging is performed. A detection frame is displayed around the face image portion extracted in the face image extraction process.

  Referring to (D), the subject image I1 of the first frame includes person images 41, 42, and 43. By performing face image extraction processing on the subject image I1, face image portions i11, i12, and i13 of the person images 41, 42, and 43 are extracted. Detection frames F11, F12, and F13 surrounding the extracted face image portions i11, i12, and i13 are displayed on the subject image I1.

  Referring to (C), face image extraction processing is also performed on the subject image I2 of the second frame. By this processing, face image portions are extracted for the person images 41 and 43, but no face image portion is extracted for the person image 42 in the second subject image I2, and no frame is displayed on the face image portion of the person image 42. For the person images 41 and 43, face image portions i21 and i23 are extracted, and frames F21 and F23 surrounding the face image portions are displayed.

  Similarly, referring to (B), the face extraction processing is also performed on the subject image I3 of the third frame, whereby face image portions i31 and i32 are extracted from the person images 41 and 42, and frames F31 and F32 are displayed. For the person image 43, no face image portion is extracted and no frame is displayed.

  Referring to (A), face extraction processing is also performed on the latest subject image I4 of the fourth frame, and frames F41, F42 and F43 are displayed so as to surround the face image portions i41, i42 and i43.

  Due to camera shake and the like, the four subject images I1 to I4 are often not completely identical even though the subject is the same. For this reason, when face extraction processing is performed on each subject image and a detection frame is displayed on the face image portion of each subject image, the position, size, and the like of the frame may change from subject image to subject image even though the same face image portion is surrounded, which may cause discomfort to the user viewing the subject images. The digital still camera according to this embodiment suppresses fluctuations in the position and size of the frame and prevents the user from feeling uncomfortable due to frame fluctuation. Specifically, the detection frames F41, F42, and F43 to be displayed in the latest subject image I4 are displayed at the average size and position of the corresponding detection frames F11 to F13, F21, F23, F31, and F32 of the previous subject images I1, I2, and I3.

  FIG. 3 shows the relationship between the detection frames (correction target detection frames) displayed on the latest subject image I4 shown in FIG. 2A and the detection frames used for their correction.

  As described above, the display positions of the detection frames F41, F42, and F43 to be displayed in the latest subject image I4 are corrected using the positions of the corresponding detection frames (the positions of the face images) of the previous subject images I1, I2, and I3. The detection frames corresponding to the detection frame F41 are F11, F21, F31, and F41. The detection frames corresponding to the detection frame F42 are F12, F32, and F42. The detection frames corresponding to the detection frame F43 are F13, F23, and F43. The display positions of the detection frames F41, F42, and F43 are corrected using these corresponding detection frames.

FIGS. 4 and 5 are flowcharts showing the processing procedure for correcting the display position of a detection frame.

  As described above, it is assumed that image data representing a subject image is continuously given.

  When image data for one frame is input (YES in step 51), face image detection processing (face image extraction processing) of the subject image represented by the input image data is performed (step 52). If no face image is detected (NO in step 53), image data of the next frame is input and face detection processing is performed (steps 51 and 52).

  When a face image is detected (YES in step 53), the variable n is set to 0, the variable i is set to 1, and the variable j is set to 0 (step 54). The variable n indexes the face images present in the latest subject image, the variable i indexes the frame number of the subject image, and the variable j indexes the face images present in the subject image of the i-th frame. When the variables n, i, and j have been set, it is confirmed whether or not a face image is included in the subject image of the i-th frame (step 55). If not (NO in step 55), the variable i is incremented (step 62), and it is confirmed whether or not a face image is included in the subject image of the next frame (step 55).

  If a face image is included in the i-th subject image (YES in step 55), the n-th face image (the n-th of the face images included in the latest subject image) and the j-th face image (the j-th of the face images included in the subject image of the i-th frame) are compared (step 56).

  If there is a correlation between the compared face images (YES in step 57), the j-th face image is stored as a face image to be used for correction (step 58). For example, if the n-th face image is the face image of the person image 41 in the latest subject image I4 shown in FIG. 2A (the face image surrounded by the frame F41), and the j-th face image is the face image of the person image 41 in the subject image I1 of the first frame shown in FIG. 2D (the face image surrounded by the frame F11), it is determined that there is a correlation between the faces, and the face image surrounded by the frame F11 is stored as a face image (detection frame, position) to be used for correction.

  If there is no correlation between the compared face images (NO in step 57), it is confirmed whether the variable j (the variable indexing the face images of the i-th subject image) has reached the number of face images included in the i-th subject image (step 59). If the variable j has not reached the number of face images included in the subject image of the i-th frame (NO in step 59), the variable j is incremented (step 60), and the next face image among the face images included in the i-th frame subject image is compared (step 56). For example, when the face image of the person image 41 (surrounded by the frame F41) included in the latest subject image I4 shown in FIG. 2A is compared with the face image of the person image 42 (surrounded by the frame F12) included in the subject image I1 of the first frame shown in FIG. 2D, it is determined that there is no correlation between the face images, the variable j is incremented, and the next comparison processing is performed.

  When the storing of a face image (detection frame, position) to be used for correction (step 58) is done, or the comparison processing for all face images included in the subject image to be compared with the latest subject image is completed (YES in step 59), it is checked whether the variable i is equal to the number of previous frames (step 61). This confirmation processing confirms whether the face image comparison processing has been completed for all the subject images captured before the latest subject image. If the variable i is not equal to the number of previous frames (NO in step 61), the variable i is incremented and the variable j is set to 0 (step 62) in order to perform the face image comparison processing for the next subject image.

  If the variable i has reached the number of frames of subject images captured before the latest subject image (YES in step 61), the position of the face image portion of the latest subject image is corrected using the stored face image portions (step 63). For example, the average position of the face images in the previous subject images corresponding to the face image of the latest subject image is set as the position of that face image. A frame is displayed at the position corrected in this way. The fluctuation of the frame can thus be suppressed, and the user can be prevented from feeling uncomfortable.

  If the correction has not been completed for all the face images existing in the latest subject image (NO in step 64), the variable n is incremented so that the next face image is subjected to the comparison processing, the variable i is set to 1, and the variable j is set to 0 (step 65). When the correction has been completed for all the face images existing in the latest subject image (YES in step 64), the processing from step 51 is repeated if an end command for all the processing has not been given (step 66).

FIGS. 6 to 10 show another embodiment.

  In this embodiment, it is determined whether or not two face images (target images) are the same. Whether or not the face images are the same is determined based on a comparison of the size of the face images, a comparison of the center positions of the face images, a comparison of the inclinations of the face images, and a comparison of the orientation of the face images.

  FIG. 6 shows the extracted face image, the size of the face image, and the center position O. It is assumed that the face image is extracted as a square shape. It goes without saying that it need not be square. The size of the face image is represented by the width of the face image. Further, the center position O of the face image is represented by the x coordinate and y coordinate in the subject image including the face image.

FIGS. 7A, 7B, and 7C are examples of face images. (A) shows a face image tilted counterclockwise, (B) an upright face image, and (C) a face image tilted clockwise.

FIGS. 8A, 8B, and 8C are also examples of face images. (A) shows a face image facing left as viewed from the front, (B) a face image facing front, and (C) a face image facing right as viewed from the front.

  FIG. 9 shows the difference in size, center position, inclination, and orientation of the two face images.

  One face image has a size s1, a center position (x1, y1), an inclination i1, and an orientation d1; the other face image has a size s2, a center position (x2, y2), an inclination i2, and an orientation d2. By comparing these attributes, it is determined whether the two face images are the same.

  FIG. 10 is a flowchart showing the face comparison processing procedure.

  First, it is determined whether or not the amount of change in the size of the two face images is within a predetermined value (step 71). For example, if the size ratio s1/s2 is between a first threshold value and a second threshold value, the amount of change in size is determined to be within the predetermined value. If so (YES in step 71), it is determined whether the distance between the center position of one face image and the center position of the other face image (center distance) is within a predetermined value (step 72). For example, if the center distance is less than the value obtained by dividing the average of the two sizes by an arbitrary integer, it is determined to be within the predetermined value.

  Furthermore, if the change in inclination is within a predetermined value (YES in step 73) and the change in orientation is within a predetermined value (YES in step 74), it is determined that the two face images are the same (step 75).

  If any one of the change in size, the center distance, the change in inclination, or the change in orientation is not within its predetermined value (NO in any of steps 71 to 74), it is determined that the two face images are not the same face image (step 76).
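  The decision procedure of FIG. 10 can be written compactly as below (a sketch only); the concrete threshold values are placeholders, since the text leaves them as design parameters (the first and second size thresholds, the "arbitrary integer" used for the center distance, and the inclination and orientation limits).

```python
import math

def same_face(f1, f2,
              ratio_lo=0.8, ratio_hi=1.25,  # first/second size thresholds (assumed)
              center_divisor=2,             # "arbitrary integer" for center distance
              tilt_limit=20.0,              # inclination change limit, degrees (assumed)
              dir_limit=45.0):              # orientation change limit, degrees (assumed)
    """Return True if two detected faces are judged to be the same face.

    Each face is a dict with keys: size 's', center 'x'/'y', inclination
    'i', orientation 'd', mirroring FIG. 9. All limits are illustrative.
    """
    # Step 71: change in size within a predetermined range.
    ratio = f1["s"] / f2["s"]
    if not (ratio_lo <= ratio <= ratio_hi):
        return False
    # Step 72: center distance less than average size / integer divisor.
    avg_size = (f1["s"] + f2["s"]) / 2.0
    dist = math.hypot(f1["x"] - f2["x"], f1["y"] - f2["y"])
    if dist >= avg_size / center_divisor:
        return False
    # Step 73: change in inclination within a predetermined value.
    if abs(f1["i"] - f2["i"]) > tilt_limit:
        return False
    # Step 74: change in orientation within a predetermined value.
    if abs(f1["d"] - f2["d"]) > dir_limit:
        return False
    return True  # Step 75: the two faces are judged to be the same.
```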

  FIG. 11 shows another embodiment and is a flowchart showing a face position correction processing procedure.

  When the number of face images used for correction is large, the face position is corrected using face images of intermediate size. First, the face images used for correction are rearranged in descending order of detected face image size (step 81). If the number of rearranged face images is greater than 8 (YES in step 82), the two largest and the two smallest face images are excluded from the face images used for correction (step 83). If the number of rearranged face images is greater than 5 (YES in step 84), the largest face image and the smallest face image are excluded from the face images used for correction (step 85). The sizes of the remaining face images are averaged (step 86), and a frame is displayed so as to surround the face image at the averaged size.

  If the number of rearranged face images is 5 or less, the averaging is performed using the obtained face images without performing the exclusion processing (NO in steps 82 and 84; step 86). For example, in the subject images shown in FIGS. 2A to 2D, the face images corresponding to the face image surrounded by the frame F41 shown in FIG. 2A are the three face images surrounded by the frames F31, F21, and F11 shown in FIGS. 2B to 2D. Since these three face images together with the face image surrounded by the frame F41 make four face images, the exclusion processing described above is not performed; the average size of these four face images is used as the size of the face image, and the frame is displayed accordingly.

  In the above-described embodiment, the face images used for correction are selected based on size, but they may similarly be selected based on the center position of the face image. When the center position is used, the face images are rearranged in order of proximity to the center of the subject image, and the position obtained by averaging a plurality of face images, including those intermediate in the rearranged order or those closer to the center of the subject image, is used as the face image position at which the frame is displayed.
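  The exclusion-and-averaging procedure of FIG. 11 amounts to a trimmed mean. The following sketch assumes both count checks refer to the number of rearranged face images before trimming; that reading, like the function name, is an assumption.

```python
def corrected_size(sizes):
    """Average face size after trimming extremes, per the FIG. 11 procedure.

    sizes: detected sizes of the corresponding face images across frames.
    """
    ordered = sorted(sizes, reverse=True)  # step 81: descending order
    n = len(ordered)
    if n > 8:                              # steps 82-83
        ordered = ordered[2:-2]            # drop two largest and two smallest
    if n > 5:                              # steps 84-85 (count before trimming assumed)
        ordered = ordered[1:-1]            # drop largest and smallest
    return sum(ordered) / len(ordered)     # step 86: average the rest

print(corrected_size([40, 42, 39, 41]))  # four frames: no trimming, plain average
```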

  FIGS. 12A and 12B to FIG. 15 show still another embodiment.

  In this embodiment, when a plurality of face images exist in one frame of the subject image, the frame of the face image having the highest evaluation value (face-image-likeness), that is, the representative face image, is displayed differently from the frames of the other face images. For example, the frame of the face image with the highest evaluation value is drawn as a solid line, while the frames of the other face images are drawn as chain lines. It goes without saying that the color or shape of the frame may be varied according to the evaluation value instead of the line type of the frame.

  When the frame of a face image having a high evaluation value is thus distinguished from the frames of other face images, and frames are displayed in each of subject images given successively, as in a moving image, the distinguished frame switches whenever the evaluation values change, which may cause discomfort to the user viewing the subject images. This embodiment prevents such fluctuation of the frame of the representative face image.

FIGS. 12A and 12B are examples of subject images.

  (A) is the subject image I6 of the Lth frame, and (B) is the subject image I5 of the (L-1) th frame.

  Referring to (B), the subject image I5 of the (L-1)-th frame includes the person images 41, 42, and 43, like the subject images described above. By performing face image detection processing on the subject image I5, face images i51, i52, and i53 of the person images 41, 42, and 43 are detected, and frames F51, F52, and F53 are displayed. The evaluation value of the face image of the person image 41 is the highest, and the evaluation values of the face images of the person images 42 and 43 are lower. The frame F51 displayed on the person image 41 is therefore drawn as a solid line to indicate that the evaluation value of the face image i51 within it is the highest, while the frames F52 and F53 surrounding the face images i52 and i53 of the person images 42 and 43 are drawn as chain lines. The face image i51 in the frame F51 is the representative face image.

  Referring to (A), the subject image I6 of the L-th frame is substantially the same subject image as the subject image I5 and includes the person images 41, 42, and 43. By performing the face detection processing, frames F61, F62, and F63 indicating the face images i61, i62, and i63 are displayed (all shown as chain lines for convenience). Of the frames F61, F62, and F63 included in the subject image I6, the frame corresponding to the frame F51 displayed in the subject image I5 is the frame F61. As described below, even if the evaluation value of the face image i62 or i63 in the frame F62 or F63 is the highest, the face image i61 in the frame F61 is treated as the representative face image, and the frame F61 is displayed so as to indicate the representative face image.

  FIGS. 13A and 13B are representative face image candidate order tables showing the relationship among face images, evaluation values, representative face image candidate orders, and the like.

  (A) shows a table before update.

  As described above, it is assumed that the face image i51 is the representative face image among the face images included in the (L-1)-th frame subject image I5. The subject image I6 of the L-th frame includes face images i61, i62, and i63; assume that the face image i63 has the highest evaluation value, the face image i61 the next highest, and the face image i62 the lowest. Then, in the representative face image candidate order of these face images, the face image i63 is first, the face image i61 second, and the face image i62 third. The representative face image i63 determined according to this evaluation-value-based order differs from the representative face image i51 of the (L-1)-th frame, and as described above, the resulting fluctuation of the frame makes the user viewing the subject image uncomfortable. For this reason, in this embodiment, the representative face image candidate order is adjusted to match the registered representative face image i51.

  Referring to (B), the face image i61 of the subject image I6 of the L-th frame corresponds to the face image i51 of the subject image I5 of the (L-1)-th frame. For this reason, its representative face image candidate rank is set to first even though its evaluation value is not the highest, and the face image i63 having the highest evaluation value is set to the second candidate rank. The representative face image is determined in accordance with the changed candidate order, and the frame of the determined representative face image is drawn as a solid line.

FIGS. 14 and 15 are flowcharts showing the processing procedure for generating the representative face image candidate order table. The frame of the representative face image is displayed as a solid line according to the generated table.

  When image data representing a subject image for one frame is input (YES in step 91), the clear variable N is reset to 0 (step 92). The clear variable N serves to reset the representative face image every predetermined number of frames. Because the representative face image is periodically reset, it can be prevented from remaining fixed for a long time.

  Face detection processing is performed on the subject image represented by the input image data (step 93). When face images are detected, they are rearranged in descending order of evaluation value (step 94; see FIG. 13A). If the clear variable N is less than the predetermined clear threshold (YES in step 95), the clear variable N is incremented (step 96).

  When the clear variable N is equal to or greater than the predetermined clear threshold (NO in step 95), the representative face image is considered to have been fixed for a long time as described above, and is therefore cleared (step 98). If no face image is detected from the subject image (YES in step 97), the scene is considered to have changed, and the representative face image is likewise cleared (step 98). When the representative face image has been cleared, the face image having the highest evaluation value becomes the representative face image (step 107). Needless to say, if no face image exists, no representative face image is set.

  When a face image is detected (NO in step 97), it is confirmed whether the representative face image determined using the image data of the previous frame has already been registered (step 109). If not, a representative face image is registered (step 110), the image data of the next frame is input (step 91), and the processing from step 92 is repeated. If the representative face image determined using the image data of the previous frame has already been registered (YES in step 109), the distance (center distance) between the center position of the first representative face image candidate determined based on the evaluation values and the center position of the registered representative face image is calculated (step 99). If the calculated center distance is less than a predetermined distance threshold (YES in step 100), the first representative face image candidate is considered to correspond to the registered representative face image (to be the same face image), and the first candidate, which has the highest evaluation value, is set as the representative face image (step 107).

  If the center distance is equal to or greater than the predetermined distance threshold (NO in step 100), the candidate rank variable k is set to 2 in order to examine the face image with the next highest evaluation value (step 101). The distance between the center position of the k-th representative face image candidate and the center position of the registered representative face image is calculated (step 102). If the calculated center distance is less than the predetermined distance threshold (YES in step 103), the k-th representative face image candidate is considered to correspond to the registered representative face image. The k-th candidate is then registered as the first representative face image candidate, and the first to (k-1)-th candidates are registered as the second to k-th candidates (step 106; FIG. 13B).

  If the center distance is equal to or greater than the predetermined distance threshold (NO in step 103), the new k-th representative face image candidate is considered not to correspond to the already registered representative face image. If unexamined face images still remain in the subject image (k is less than the number of face images in the subject image), the variable k is incremented and the processing from step 102 is repeated.

  When the processing in steps 102 and 103 is performed for all face images existing in the subject image (YES in step 104), the first representative face image candidate is set as the representative face image (step 107).
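  The stabilization logic of steps 99 to 107 can be sketched as follows; the candidate list is assumed to be already sorted in descending order of evaluation value (step 94), and distance_threshold is a placeholder for the predetermined distance threshold.

```python
import math

def choose_representative(candidates, registered, distance_threshold=30.0):
    """Keep the representative face stable across frames.

    candidates: list of (x, y) face centers, sorted by descending
    evaluation value. registered: center of the representative face of
    the previous frame, or None. Returns the new representative center.
    """
    if registered is None:                  # steps 109-110: nothing registered yet
        return candidates[0]
    for center in candidates:              # steps 99-104: scan candidates in rank order
        dist = math.hypot(center[0] - registered[0], center[1] - registered[1])
        if dist < distance_threshold:
            # Steps 100/103/106: this candidate matches the registered face,
            # is promoted to first candidate, and becomes the representative
            # (step 107), even if its evaluation value is not the highest.
            return center
    # No candidate is near the registered face: fall back to the candidate
    # with the highest evaluation value, as in step 107 after clearing.
    return candidates[0]
```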

  If an end command for all the processing is not given (NO in step 108), it is determined whether or not the user will set a representative face (step 121). When the representative face is to be set by the user (YES in step 121), the representative face is changed in response to presses of the cursor keys included in the operation switch 11 of the digital still camera (for example, pressable up, down, left, and right arrow keys). The face selected by the user is set as the representative face (step 122), and the frame of the face selected by the user is displayed differently from the frames of the other faces. The user's desired face can thus be used as the representative face so that, for example, focusing can be performed on it. Besides designating the representative face with the cursor keys, when a touch panel is formed on the display screen, the representative face can be designated by touching the image portion of the desired face. Thereafter, the processing from step 91 is performed again.

  For the representative face image set in this way, the frame is displayed differently from those of the other face images. Since the representative face image is prevented from changing from frame to frame, its frame does not flicker, and the user can be prevented from feeling uncomfortable.

  FIG. 16 is a flowchart showing a processing procedure for rearranging a plurality of face image portions detected in one frame image. In the processing of step 94 in FIG. 14, detected face images are rearranged in descending order of evaluation value; in this embodiment, not only the evaluation value but also other elements are used for the rearrangement.

  When a plurality of face image portions are detected in one frame image (YES in step 131), a composite priority is calculated from Equation 1 (step 132). In Equation 1, k1 and k2 are weighting coefficients, and the centrality is a value related to the distance from the detected face image portion to the center of the image, being larger the closer the portion is to the center.

Composite priority = k1 × (centrality) + k2 × (size of face image portion) … Equation 1

  A plurality of face image portions existing in one frame are sorted in descending order of composite priority (step 133).

  When it is set that a plurality of face image portions are to be sorted in descending order of face image portion size (this setting is made in advance using, for example, the setup menu of the digital still camera) (YES in step 134), face image portions having the same composite priority are sorted in order of size (step 135). Further, when two or more face image portions have the same size, they are sorted in order of proximity to the center of the image (step 136).

  When it is not set that the face image portions are to be sorted in descending order of size (NO in step 134), face image portions having the same composite priority are sorted in order of proximity to the center of the image (step 137). Furthermore, if two or more face image portions are at the same distance from the center of the image, they are sorted in order of size (step 138).

  If two or more face image portions have the same composite priority, the same size, and the same distance from the center of the image, they are sorted in descending order of the evaluation value of the face image portion (step 139).

  In the processing described above, the priority of the sorting elements is composite priority > centrality > size of face image portion > evaluation value, or composite priority > size of face image portion > centrality > evaluation value, but it goes without saying that other priorities may be followed. In this way, even when a plurality of face image portions are detected in one frame image, they can be sorted in a predetermined order, and a representative face can be determined.
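  The composite priority of Equation 1 combined with the tie-breaking cascade of FIG. 16 corresponds to an ordinary multi-key sort; the weights k1 and k2, the field names, and the sample values below are illustrative assumptions.

```python
def sort_faces(faces, k1=1.0, k2=1.0, prefer_size=True):
    """Sort detected face portions into representative-candidate order.

    faces: list of dicts with keys 'centrality' (larger nearer the image
    center), 'size', and 'evaluation'. The composite priority follows
    Equation 1; ties are broken by size then centrality when prefer_size
    is set (steps 135-136), otherwise by centrality then size
    (steps 137-138), and finally by evaluation value (step 139).
    """
    def key(f):
        composite = k1 * f["centrality"] + k2 * f["size"]  # Equation 1
        tiebreak = ((f["size"], f["centrality"]) if prefer_size
                    else (f["centrality"], f["size"]))
        return (composite, *tiebreak, f["evaluation"])
    return sorted(faces, key=key, reverse=True)

faces = [
    {"centrality": 0.9, "size": 40, "evaluation": 0.7},
    {"centrality": 0.5, "size": 60, "evaluation": 0.9},
]
print(sort_faces(faces)[0])  # first representative face candidate
```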

FIG. 1 is a block diagram showing the electrical configuration of a digital still camera. FIGS. 2A to 2D are examples of subject images. FIG. 3 is a table showing the relationship between the detection frames to be corrected and the detection frames used for their correction. FIGS. 4 and 5 are flowcharts showing a face position correction processing procedure. FIG. 6 shows the size and center position of a face image. FIGS. 7A and 7C show tilted face images, and FIG. 7B shows an erect face image. FIG. 8A shows a face image facing left, FIG. 8B a face image facing front, and FIG. 8C a face image facing right. FIG. 9 is a table showing the sizes and other attributes of two face images. FIG. 10 is a flowchart showing a face comparison processing procedure. FIG. 11 is a flowchart showing a face position correction processing procedure. FIGS. 12A and 12B are examples of subject images. FIGS. 13A and 13B are tables showing representative face image candidate orders and the like. FIGS. 14 and 15 are flowcharts showing a processing procedure for setting a representative face image. FIG. 16 is a flowchart showing a processing procedure for sorting a plurality of face image portions.

Explanation of symbols

1 System control circuit
2 Memory
26 Face extraction circuit

Claims (14)

  1. Target image portion detection means for detecting a target image portion from an image represented by given image data;
    position storage means for storing a position indicating the target image portion detected by the target image portion detection means;
    control means for controlling the target image portion detection means and the position storage means so as to repeat the detection processing in the target image portion detection means and the storage processing in the position storage means for image data of a plurality of frames given in succession;
    display position determining means for determining the display position of a detection frame indicating the target image portion based on the positions stored in the position storage means; and
    display control means for controlling a display device so as to display the detection frame at the position determined by the display position determining means together with the image most recently given to the target image portion detection means;
    A target image detection system.
  2. The display position determining means comprises
    rearranging means for rearranging, in descending order, the sizes of the target image portions determined based on the positions stored in the position storage means, or for rearranging the center positions of the target image portions in order of proximity to the center of the image,
    and determines the display position of the detection frame indicating the target image portion based on a plurality of sizes including an intermediate size among the sizes rearranged by the rearranging means, or based on a plurality of positions including an intermediate position among the center positions rearranged by the rearranging means,
    The target image detection system according to claim 1.
  3. First target image portion detection means for detecting a target image portion from a first image;
    second target image portion detection means for detecting a target image portion from a second image; and
    determination means for determining whether or not the target image portion detected by the first target image portion detection means and the target image portion detected by the second target image portion detection means are the same, based on at least one of a change in size, a change in center position, a change in inclination, and a change in direction between the two target image portions;
    A target image portion matching determination device comprising:
  4. Target image portion detection means for detecting one or a plurality of target image portions from each of a plurality of frame images represented by image data for a plurality of frames given in succession;
    representative target image portion determining means for determining a representative target image portion from the one or plurality of target image portions detected by the target image portion detection means;
    first determination means for determining whether or not position data indicating the representative image portion of the previous frame determined by the representative target image portion determining means is stored in position storage means;
    first storage control means for storing, in the position storage means, the position data determined based on the image data of a new frame when the first determination means determines that position data indicating the representative image portion of the previous frame is not stored in the position storage means;
    second determination means for determining, when the first determination means determines that position data determined based on the image data of the previous frame is stored in the position storage means, whether or not the position indicated by the position data determined based on the image data of the new frame and the position indicated by the position data already stored in the position storage means are separated by a predetermined distance or more;
    second storage control means for storing, in the position storage means, the position data determined based on the image data of the new frame when the second determination means determines that the positions are not separated by the predetermined distance or more, and for storing, in the position storage means, as data indicating the position of the representative target image portion, data indicating the position of a target image portion that is not separated by the predetermined distance or more among the target image portions detected by the target image portion detection means when the second determination means determines that the positions are separated by the predetermined distance or more; and
    display control means for controlling a display device so that the frame for the representative target image portion represented by the position data stored in the position storage means is displayed, on the image represented by the given image data, in a manner different from the frames for the other target image portions;
    A target image detection system.
  5.   The target image detection system according to claim 4, further comprising clearing means for clearing the position data stored in the position storage means each time image data for a predetermined number of frames is given, or in response to the target image portion detection means no longer detecting a target image portion.
  6. The representative target image portion determining means comprises
    sorting means for arranging the plurality of target image portions detected by the target image portion detection means in order as candidates for the representative target image portion, based on at least one of the sorting elements of position, size, and target image quality of the detected target image portions,
    and determines, as the representative target image portion, the first candidate among the plurality of target image portions arranged by the sorting means,
    The target image detection system according to claim 4.
  7. Further comprising designation means for designating a representative target image portion from among the plurality of target image portions arranged by the sorting means,
    wherein the target image portion designated by the designation means is determined as the representative target image portion,
    The target image detection system according to claim 6.
  8. Target image portion detection means for detecting a plurality of target image portions from a given image; and sorting means for arranging the plurality of target image portions detected by the target image portion detection means in order as candidates for a representative target image portion, based on at least one of the sorting elements of position, size, and target image quality of the detected target image portions;
    A target image portion sorting apparatus comprising:
  9. The sorting means comprises
    calculation means for calculating a composite priority based on a combination of at least two of the sorting elements of position, size, and target image quality,
    and arranges the plurality of target image portions detected by the target image portion detection means in order as candidates for the representative target image portion, based on the composite priority calculated by the calculation means,
    The target image sorting apparatus according to claim 8.
  10. The sorting means comprises
    designation means for designating a sorting element to be given priority among the sorting elements used by the calculation means to calculate the composite priority,
    and arranges those of the plurality of target image portions detected by the target image portion detection means that have the same composite priority calculated by the calculation means in order as candidates for the representative target image portion, based on the sorting element designated by the designation means or on a sorting element not designated by the designation means,
    The target image sorting apparatus according to claim 9.
  11. A target image portion detecting means detects a target image portion from an image represented by given image data;
    position storage means stores a position indicating the target image portion detected by the target image portion detection means;
    The control means controls the target image portion detection means and the position storage means so as to repeat the detection processing in the target image portion detection means and the storage processing in the position storage means for image data of a plurality of frames that are successively given. ,
    Display position determining means determines the display position of the detection frame indicating the target image portion based on the position stored in the position storage means;
    Display control means controls the display device to display the detection frame at the position determined by the display position determining means together with the image last given to the target image portion detecting means;
    A control method of a target image detection system.
  12. First target image portion detection means detects a target image portion from a first image;
    second target image portion detection means detects a target image portion from a second image; and
    determination means determines whether or not the target image portion detected by the first target image portion detection means and the target image portion detected by the second target image portion detection means are the same, based on at least one of a change in size, a change in center position, a change in inclination, and a change in direction between the two target image portions;
    A control method of a target image portion matching determination device.
  13. A target image portion detection unit detects one or a plurality of target image portions from each of a plurality of frame images represented by image data for a plurality of frames given in succession;
    a representative target image portion determination unit determines a representative target image portion from the one or plurality of target image portions detected by the target image portion detection unit;
    a first determination unit determines whether or not position data indicating the representative image portion of the previous frame determined by the representative target image portion determination unit is stored in a position storage unit;
    a first storage control unit stores, in the position storage unit, the position data determined based on the image data of a new frame when the first determination unit determines that position data indicating the representative image portion of the previous frame is not stored in the position storage unit;
    a second determination unit determines, when the first determination unit determines that position data determined based on the image data of the previous frame is stored in the position storage unit, whether or not the position indicated by the position data determined based on the image data of the new frame and the position indicated by the position data already stored in the position storage unit are separated by a predetermined distance or more;
    a second storage control unit stores, in the position storage unit, the position data determined based on the image data of the new frame when the second determination unit determines that the positions are not separated by the predetermined distance or more, and stores, in the position storage unit, as data indicating the position of the representative target image portion, data indicating the position of a target image portion that is not separated by the predetermined distance or more among the target image portions detected by the target image portion detection unit when the second determination unit determines that the positions are separated by the predetermined distance or more; and
    a display control unit controls the display device so that the frame for the representative target image portion represented by the position data stored in the position storage unit is displayed, on the image represented by the given image data, in a manner different from the frames for the other target image portions;
    Control method of target image detection system.
  14. Target image portion detection means detects a plurality of target image portions from a given image; and
    sorting means arranges the plurality of detected target image portions in order as candidates for a representative target image portion, based on at least one of the sorting elements of position, size, and target image quality of the detected target image portions;
    A method for controlling a sorting apparatus for a target image portion.
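  As a hedged illustration of claims 1 and 2, the sketch below keeps a short history of detected frame sizes and center positions, rearranges them, and displays a value averaged around the median, which is what suppresses frame flicker between successive frames. The history length and method names are assumptions, not part of the claims.

```python
# Sketch of the display-position smoothing of claims 1-2: store positions
# over several frames, rearrange them, and build the displayed detection
# frame from values around the middle of the rearranged list.
from collections import deque

class FrameSmoother:
    def __init__(self, history=4):
        self.sizes = deque(maxlen=history)    # plays the role of the
        self.centers = deque(maxlen=history)  # position storage means

    def update(self, size, center):
        """Store the size and center of the newest detected face frame."""
        self.sizes.append(size)
        self.centers.append(center)

    @staticmethod
    def _middle_mean(values, span=3):
        """Average up to `span` values centered on the median."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        lo = max(0, mid - span // 2)
        window = ordered[lo:lo + span]
        return sum(window) / len(window)

    def display_size(self):
        return self._middle_mean(self.sizes)

    def display_center(self):
        xs = [c[0] for c in self.centers]
        ys = [c[1] for c in self.centers]
        return (self._middle_mean(xs), self._middle_mean(ys))
```

  In the same spirit, the representative-face handling of claims 4 and 5 can be approximated with a simple Euclidean distance threshold; the threshold value and class layout below are illustrative only.

```python
# Sketch of the representative-face handling of claims 4-5: accept the new
# representative position when it stays near the stored one, otherwise fall
# back to a detected face close to the stored position.
import math

class RepresentativeFaceKeeper:
    def __init__(self, threshold=40.0):
        self.threshold = threshold
        self.stored = None                    # position storage means

    def update(self, representative_pos, all_detected):
        # First determination: nothing stored from the previous frame yet.
        if self.stored is None:
            self.stored = representative_pos
            return self.stored
        # Second determination: is the new position a predetermined
        # distance or more away from the stored one?
        if math.dist(representative_pos, self.stored) < self.threshold:
            self.stored = representative_pos  # not far apart: accept it
        else:
            # Far apart: keep a detected face near the stored position so
            # the highlighted frame does not jump between subjects.
            near = [p for p in all_detected
                    if math.dist(p, self.stored) < self.threshold]
            if near:
                self.stored = near[0]
        return self.stored

    def clear(self):
        """Clearing means (claim 5): call every N frames or when no
        target image portion is detected."""
        self.stored = None
```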

JP2006201782A 2006-05-26 2006-07-25 Target image detection system, target image portion matching determination device, target image portion sorting device, and control method therefor Expired - Fee Related JP4769653B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2006146030 2006-05-26
JP2006146030 2006-05-26
JP2006201782A JP4769653B2 (en) 2006-05-26 2006-07-25 Target image detection system, target image portion matching determination device, target image portion sorting device, and control method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006201782A JP4769653B2 (en) 2006-05-26 2006-07-25 Target image detection system, target image portion matching determination device, target image portion sorting device, and control method therefor

Publications (2)

Publication Number Publication Date
JP2008004061A true JP2008004061A (en) 2008-01-10
JP4769653B2 JP4769653B2 (en) 2011-09-07

Family

ID=39008359

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006201782A Expired - Fee Related JP4769653B2 (en) 2006-05-26 2006-07-25 Target image detection system, target image portion matching determination device, target image portion sorting device, and control method therefor

Country Status (1)

Country Link
JP (1) JP4769653B2 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005229444A (en) * 2004-02-13 2005-08-25 Toshiba Corp Vehicle tracking device and program
JP2006005662A (en) * 2004-06-17 2006-01-05 Nikon Corp Electronic camera and electronic camera system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011254151A (en) * 2010-05-31 2011-12-15 Casio Comput Co Ltd Moving image reproduction device, moving image reproduction method, and program
JP2014096817A (en) * 2013-12-20 2014-05-22 Nikon Corp Focus adjustment device and camera

Also Published As

Publication number Publication date
JP4769653B2 (en) 2011-09-07

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20090623

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110315

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110502

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20110524

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20110620

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140624

Year of fee payment: 3

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees