CN101277394A - Information processing method, information processing apparatus and program - Google Patents

Information processing method, information processing apparatus and program

Info

Publication number
CN101277394A
CN101277394A (application CN200810095156A / CNA2008100951567A)
Authority
CN
China
Prior art keywords
scene
image
identification
data
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008100951567A
Other languages
Chinese (zh)
Inventor
河西庸雄
锹田直树
笠原广和
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seiko Epson Corp
Original Assignee
Seiko Epson Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seiko Epson Corp filed Critical Seiko Epson Corp
Publication of CN101277394A publication Critical patent/CN101277394A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

An information processing method of the present invention includes acquiring scene information of image data from supplemental data appended to the image data, identifying a scene of an image represented by the image data based on the image data, and storing the identified scene in the supplemental data when there is a mismatch between a scene indicated by the scene information and the identified scene.

Description

Information processing method, information processing apparatus and program
Cross-Reference to Related Applications
This application claims priority from Japanese Patent Application No. 2007-038369, filed on February 19, 2007, and Japanese Patent Application No. 2007-315245, filed on December 5, 2007, which are hereby incorporated by reference.
Technical field
The present invention relates to an information processing method, an information processing apparatus, and a program.
Background Art
Some digital still cameras have a mode dial for setting the shooting mode. When the user sets a shooting mode with the dial, the digital still camera determines shooting conditions (such as the exposure time) according to that mode and takes the picture. When a picture has been taken, the digital still camera generates an image file. This image file contains image data of the captured image and supplemental data appended to the image data, for example the shooting conditions at the time the image was captured.
Performing image processing on image data according to such supplemental data is also practiced. For example, when a printer prints from such an image file, it corrects the image data according to the shooting conditions indicated by the supplemental data and prints based on the corrected image data. JP-A-2001-238177 describes an example of this background art.
In some cases, the user forgets to set the shooting mode and takes a picture while a mode unsuited to the shooting conditions is still set. For example, a daytime scene may be shot with the night scene mode set. The result is that an indication of the night scene mode is stored in the supplemental data even though the image data in the image file represents a daytime scene. In this case, when the image data is corrected according to the night scene mode indicated by the supplemental data, it may not be corrected appropriately. This problem is caused not only by an inappropriate dial setting but by any mismatch between the content of the image data and the content of the supplemental data.
Summary of the invention
The present invention was devised in view of these circumstances, and an advantage thereof is to eliminate the problem caused by a mismatch between the content of the image data and the content of the supplemental data.
To achieve the above advantage, a primary aspect of the invention is an information processing method that includes: acquiring scene information of image data from supplemental data appended to the image data; identifying, based on the image data, the scene of the image represented by the image data; and, when there is a mismatch between the scene indicated by the scene information and the identified scene, storing the identified scene in the supplemental data.
Other features of the invention will become clear from the description in this specification and the accompanying drawings.
Description of the Drawings
For a more complete understanding of the invention and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is an explanatory diagram of an image processing system;
Fig. 2 is an explanatory diagram of the configuration of a printer;
Fig. 3 is an explanatory diagram of the structure of an image file;
Fig. 4A is an explanatory diagram of the tags used in IFD0; Fig. 4B is an explanatory diagram of the tags used in the Exif SubIFD;
Fig. 5 shows a correspondence table between the settings of the mode dial and the set data;
Fig. 6 is an explanatory diagram of the automatic correction function of the printer;
Fig. 7 is an explanatory diagram of the relationship between image scenes and correction details;
Fig. 8 is a flowchart of scene identification processing performed by a scene identifying section;
Fig. 9 is an explanatory diagram of the functions of the scene identifying section;
Fig. 10 is a flowchart of overall identification processing;
Fig. 11 is an explanatory diagram of an identification target table;
Fig. 12 is an explanatory diagram of an affirmative threshold in the overall identification processing;
Fig. 13 is an explanatory diagram of Recall and Precision;
Fig. 14 is an explanatory diagram of a first negative threshold;
Fig. 15 is an explanatory diagram of a second negative threshold;
Fig. 16A is an explanatory diagram of thresholds in a landscape identifying section; Fig. 16B is an explanatory diagram of an outline of the processing performed by the landscape identifying section;
Fig. 17 is a flowchart of partial identification processing;
Fig. 18 is an explanatory diagram of the order in which partial images are selected by a sunset-scene partial identifying section;
Fig. 19 is a graph showing Recall and Precision when sunset-scene images are identified using only the top ten partial images;
Fig. 20A is an explanatory diagram of discrimination using a linear support vector machine; Fig. 20B is an explanatory diagram of discrimination using a kernel function;
Fig. 21 is a flowchart of integrated identification processing;
Fig. 22 is a flowchart of scene information correction processing of an embodiment; and
Fig. 23 is an explanatory diagram of the configuration of the APP1 segment when an identification result is added to the supplemental data.
Embodiment
At least the following matters will be made clear through the description of this specification and the accompanying drawings.
An information processing method will become clear, the method comprising: acquiring scene information of image data from supplemental data appended to the image data; identifying, based on the image data, the scene of the image represented by the image data; and, when there is a mismatch between the scene indicated by the scene information and the identified scene, storing the identified scene in the supplemental data.
According to this information processing method, the problem caused by a mismatch between the content of the image data and the content of the supplemental data can be eliminated.
Furthermore, it is preferable that storing the identified scene in the supplemental data includes overwriting the scene indicated by the scene information with the identified scene. With this configuration, the problem caused by a mismatch between the content of the image data and the content of the supplemental data can be eliminated.
Furthermore, it is preferable that storing the identified scene in the supplemental data includes storing the identified scene in the supplemental data while keeping the scene information unchanged. With this configuration, the original data is preserved.
Furthermore, it is preferable that storing the identified scene in the supplemental data includes storing, together with the identified scene, an evaluation result of the accuracy of the identification result. With this configuration, the image file carries data that can reduce the influence of misidentification.
Furthermore, it is preferable that identifying the scene of the image represented by the image data includes: feature amount acquisition for obtaining feature amounts indicating features of the image; and scene identification for identifying the scene of the image based on the feature amounts. With this configuration, identification accuracy is improved.
Furthermore, it is preferable that the feature amount acquisition includes obtaining overall feature amounts indicating overall features of the image and partial feature amounts indicating features of partial images contained in the image, and that the scene identification includes overall identification for identifying the scene of the image based on the overall feature amounts and partial identification for identifying the scene of the image based on the partial feature amounts. When the scene of the image represented by the image data cannot be identified in the overall identification, the partial identification is performed; when the scene can be identified in the overall identification, the partial identification is not performed. With this configuration, processing speed is improved.
Furthermore, it is preferable that the overall identification includes: calculating, based on the overall feature amounts, an evaluation value according to the probability that the image is a specific scene; and identifying the image as the specific scene when the evaluation value is greater than a first threshold. The partial identification includes identifying the image as the specific scene based on the partial feature amounts. When the evaluation value in the overall identification is less than a second threshold, the partial identification is not performed. With this configuration, processing speed is improved.
Furthermore, it is preferable that the scene identification includes first scene identification for identifying, based on the feature amounts, whether the image is a first scene, and second scene identification for identifying, based on the feature amounts, whether the image is a second scene different from the first scene, the first scene identification including: calculating, based on the feature amounts, an evaluation value according to the probability that the image is the first scene; and identifying the image as the first scene when the evaluation value is greater than a first threshold. Furthermore, in the scene identification, the second scene identification is not performed when the evaluation value in the first scene identification is greater than a third threshold. With this configuration, processing speed is improved.
Furthermore, an information processing apparatus will become clear, comprising: a scene information acquiring section that acquires, from supplemental data appended to image data, scene information indicating the scene of the image data; a scene identifying section that identifies, based on the image data, the scene of the image represented by the image data; and a supplemental data storing section that stores the identified scene in the supplemental data when there is a mismatch between the scene indicated by the scene information and the scene identified by the scene identifying section.
Furthermore, a program will become clear that causes an information processing apparatus to: acquire, from supplemental data appended to image data, scene information indicating the scene of the image data; identify, based on the image data, the scene of the image represented by the image data; and store the identified scene in the supplemental data when there is a mismatch between the scene indicated by the scene information and the identified scene.
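As an illustration only, the claimed method — acquire the recorded scene information, identify the scene from the image data, and store the identified scene when there is a mismatch — can be sketched as follows. The function names and the dict standing in for the supplemental data are hypothetical, not the patent's actual data format.

```python
# Hypothetical sketch of the claimed method. The dict layout and the
# function names are illustrative assumptions only.

def correct_scene_info(supplemental: dict, identify_scene) -> dict:
    recorded = supplemental.get("scene_capture_type")    # scene information
    identified = identify_scene(supplemental["image_data"])
    if identified != recorded:                           # mismatch detected
        # Store the identified scene; the original scene information is
        # kept unchanged here (one of the preferable variants above).
        supplemental["identified_scene"] = identified
    return supplemental

def toy_identifier(image_data):
    # Stand-in for the scene identifying section.
    return "landscape"

file_data = {"scene_capture_type": "night scene", "image_data": b"..."}
result = correct_scene_info(file_data, toy_identifier)
```

In this sketch the mismatch between the recorded "night scene" and the identified "landscape" causes the identified scene to be stored alongside, rather than over, the original scene information.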
Overall Configuration
Fig. 1 is an explanatory diagram of an image processing system. This image processing system includes a digital still camera 2 and a printer 4.
The digital still camera 2 captures digital images by forming an image of a subject on a digital device (such as a CCD). The digital still camera 2 has a mode dial 2A for setting the shooting mode, and the user can use the dial 2A to set a shooting mode suited to the shooting conditions. For example, when the "night scene" mode is set with the dial 2A, the digital still camera 2 slows the shutter speed or increases the ISO sensitivity so as to take pictures under shooting conditions suitable for capturing a night scene.
The digital still camera 2 saves the image file generated by shooting on a memory card 6, in accordance with a file format standard. The image file contains not only digital data of the captured image (image data) but also supplemental data, for example the shooting conditions at the time the image was captured (shooting data).
The printer 4 is a printing apparatus for printing, on paper, the image represented by the image data. The printer 4 has a slot 21 into which the memory card 6 can be inserted. After shooting with the digital still camera 2, the user can remove the memory card 6 from the digital still camera 2 and insert it into the slot 21.
Fig. 2 is an explanatory diagram of the configuration of the printer 4. The printer 4 includes a printing mechanism 10 and a printer-side controller 20 for controlling the printing mechanism 10. The printing mechanism 10 has a head 11 for ejecting ink, a head control section 12 for controlling the head 11, motors 13 for, for example, transporting paper, and sensors 14. The printer-side controller 20 has the slot 21 for sending and receiving data to and from the memory card 6, a CPU 22, a memory 23, a control unit 24 for controlling the motors 13, and a drive signal generating section 25 for generating drive signals (drive waveforms).
When the memory card 6 is inserted into the slot 21, the printer-side controller 20 reads out the image file saved on the memory card 6 and stores it in the memory 23. The printer-side controller 20 then converts the image data in the image file into print data to be printed by the printing mechanism 10, and controls the printing mechanism 10 based on the print data to print the image on paper. This sequence of operations is called "direct printing".
Note that "direct printing" can be performed not only by inserting the memory card 6 into the slot 21 but also by connecting the digital still camera 2 to the printer 4 via a cable (not shown).
Structure of the Image File
An image file is made up of image data and supplemental data. The image data is made up of many units of pixel data, where pixel data indicates the color information (tone values) of an individual pixel. An image is composed of pixels arranged in a matrix, so the image data is data representing an image. The supplemental data includes data indicating attributes of the image data, shooting data, thumbnail image data, and the like.
The specific structure of an image file is described below.
Fig. 3 is an explanatory diagram of the structure of an image file. The overall configuration of the image file is shown on the left side of the figure, and the configuration of the APP1 segment is shown on the right side.
The image file starts with a marker indicating SOI (start of image) and ends with a marker indicating EOI (end of image). The SOI marker is followed by an APP1 marker, which indicates the start of the APP1 data area. The APP1 data area after the APP1 marker contains supplemental data such as shooting data and a thumbnail image. The image data follows a marker indicating SOS (start of stream).
After the APP1 marker comes information indicating the size of the APP1 data area, followed by the Exif header, the TIFF header, and then the IFD areas.
Each IFD area has a plurality of directory entries, a link indicating the position of the next IFD area, and a data area. For example, the first IFD, IFD0 (the IFD of the main image), links to the position of the next IFD, IFD1 (the IFD of the thumbnail image). There is no IFD after IFD1, so IFD1 does not link to any other IFD. Each directory entry contains a tag and a data section. When a small amount of data is to be stored, the data section stores the actual data as is; when a large amount of data is to be stored, the actual data is stored in the IFD0 data area and the data section stores a pointer indicating the storage location of that data. Note that IFD0 contains a directory entry storing a tag representing the Exif SubIFD (the Exif IFD pointer) together with a pointer (offset value) indicating the storage location of the Exif SubIFD.
The Exif SubIFD area likewise has a plurality of directory entries, each containing a tag and a data section. When a small amount of data is to be stored, the data section stores the actual data as is; when a large amount of data is to be stored, the actual data is stored in the Exif SubIFD data area and the data section stores a pointer indicating its storage location. Note that the Exif SubIFD stores a tag representing the Makernote IFD and a pointer indicating the storage location of the Makernote IFD.
The Makernote IFD area also has a plurality of directory entries with the same tag and data-section structure, the actual data for large entries being stored in the Makernote IFD data area. For the Makernote IFD area, however, the data storage format can be freely defined, so data is not necessarily stored in this format. In the following description, the data stored in the Makernote IFD area is called "MakerNote data".
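The IFD walk described above — directory entries of a tag and a data section, with small values stored inline and large values reached through a pointer, plus a link to the next IFD — follows the standard TIFF layout of twelve-byte entries. A minimal parsing sketch, written against that standard layout rather than any code from the patent, might look like this:

```python
import struct

# Minimal sketch of walking one IFD: a 2-byte entry count, twelve-byte
# directory entries (tag, type, count, value-or-offset), and a 4-byte link
# to the next IFD. Standard TIFF layout; an illustration only.

def read_ifd(buf: bytes, offset: int, little_endian: bool = True):
    e = "<" if little_endian else ">"
    (count,) = struct.unpack_from(e + "H", buf, offset)
    entries = []
    for i in range(count):
        tag, typ, n, value = struct.unpack_from(e + "HHII", buf, offset + 2 + 12 * i)
        # Small data fits in `value` as is; large data lives elsewhere and
        # `value` is a pointer (offset) to it, as the text describes.
        entries.append({"tag": tag, "type": typ, "count": n, "value_or_offset": value})
    (next_ifd,) = struct.unpack_from(e + "I", buf, offset + 2 + 12 * count)
    return entries, next_ifd
```

A next-IFD link of zero plays the role of IFD1 above: the chain simply ends there.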
Fig. 4A is an explanatory diagram of the tags used in IFD0. As shown in the figure, IFD0 stores general data (data indicating the attributes of the image data) and does not contain detailed shooting data.
Fig. 4B is an explanatory diagram of the tags used in the Exif SubIFD. As shown in the figure, the Exif SubIFD stores detailed shooting data. Most of the shooting data extracted during scene identification processing is shooting data stored in the Exif SubIFD. The scene capture type tag is a tag indicating the type of scene that was captured, and the Makernote tag is a tag indicating the storage location of the Makernote IFD.
When the data section corresponding to the scene capture type tag in the Exif SubIFD (the scene capture type data) is "0", it means "standard"; "1" means "landscape"; "2" means "portrait"; and "3" means "night scene". Because the data stored in the Exif SubIFD is standardized, anyone can read the content of the scene capture type data.
In the present embodiment, the MakerNote data includes shooting mode data. The shooting mode data takes different values corresponding to the different modes set with the mode dial 2A. However, because the format of MakerNote data differs from manufacturer to manufacturer, the content of the shooting mode data cannot be known unless the format of the MakerNote data is known.
Fig. 5 shows a correspondence table between the settings of the mode dial 2A and the set data. The scene capture type tag used in the Exif SubIFD conforms to the file format standard, which limits the scenes that can be specified; data specifying a scene such as "sunset scene" therefore cannot be stored in its data section. The MakerNote data, on the other hand, can be freely defined, so the shooting mode set with the mode dial 2A can be stored in a data section using a shooting mode tag included in the MakerNote data.
After taking a picture under the shooting conditions corresponding to the setting of the mode dial 2A, the digital still camera 2 described above creates an image file as described above and stores it on the memory card 6. This image file contains the scene capture type data and the shooting mode data corresponding to the mode dial 2A, stored in the Exif SubIFD and the Makernote IFD respectively, as scene information appended to the image data.
Overview of the Automatic Correction Function
When a "portrait" picture is printed, beautiful skin tones are desired. When a "landscape" picture is printed, the blue of the sky and the green of plants should be emphasized. The printer 4 of the present embodiment therefore has an automatic correction function that analyzes the image file and automatically performs appropriate correction processing.
Fig. 6 is an explanatory diagram of the automatic correction function of the printer 4. The components of the printer-side controller 20 in the figure are realized with software and hardware.
The storage section 31 is realized with a certain area of the memory 23 and the CPU 22. All or part of the image file read from the memory card 6 is expanded in the image storage section 31A of the storage section 31, and the results of operations performed by the components of the printer-side controller 20 are stored in the result storage section 31B of the storage section 31.
The face identifying section 32 is realized with the CPU 22 and a face identification program stored in the memory 23. The face identifying section 32 analyzes the image data stored in the image storage section 31A and identifies whether a human face is present. When the face identifying section 32 identifies a human face, the image to be identified is identified as belonging to the "portrait" scene, and in this case the scene identifying section 33 does not perform scene identification processing. Because the face identification processing performed by the face identifying section 32 is similar to processing already in wide use, a detailed description is omitted here.
The scene identifying section 33 analyzes the image file stored in the image storage section 31A and identifies the scene of the image represented by the image data. The scene identifying section 33 performs scene identification processing when the face identifying section 32 identifies no human face. As described later, the scene identifying section 33 identifies which of "landscape", "sunset scene", "night scene", "flower", "autumn foliage", and "other" the image to be identified belongs to.
Fig. 7 is an explanatory diagram of the relationship between image scenes and correction details.
The image enhancing section 34 is realized with the CPU 22 and an image correction program stored in the memory 23. The image enhancing section 34 corrects the image data in the image storage section 31A according to the identification result (from the face identifying section 32 or the scene identifying section 33) stored in the result storage section 31B of the storage section 31. For example, when the identification result of the scene identifying section 33 is "landscape", the image data is corrected so as to emphasize blue and green. Note that the image enhancing section 34 can correct the image data not only according to the scene identification result but also in a way that reflects the content of the shooting data in the image file. For example, when negative exposure compensation was applied, the image data can be corrected so that the intentionally dark image is not brightened.
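The scene-to-correction relationship of Fig. 7 can be pictured as a simple lookup table. Only the "landscape" entry (emphasize blue and green) is stated in the text; the other entries below are invented placeholders, not the patent's actual correction details.

```python
# Sketch of a scene-to-correction lookup in the spirit of Fig. 7.
CORRECTION_TABLE = {
    "landscape": {"emphasize": ["blue", "green"]},   # stated in the text
    "portrait": {"emphasize": ["skin tone"]},        # assumed placeholder
    "night scene": {"emphasize": ["dark tones"]},    # assumed placeholder
}

def corrections_for(scene: str) -> dict:
    # Scenes without an entry (e.g. "other") get no scene-specific correction.
    return CORRECTION_TABLE.get(scene, {})
```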
The printer control section 35 is realized with the CPU 22, the drive signal generating section 25, the control unit 24, and a printer control program stored in the memory 23. The printer control section 35 converts the corrected image data into print data and causes the printing mechanism 10 to print the image.
Scene Identification Processing
Fig. 8 is a flowchart of the scene identification processing performed by the scene identifying section 33. Fig. 9 is an explanatory diagram of the functions of the scene identifying section 33. The components of the scene identifying section 33 shown in the figure are realized with software and hardware.
First, the feature amount acquiring section 40 analyzes the image data expanded in the image storage section 31A of the storage section 31 and obtains partial feature amounts (S101). Specifically, the feature amount acquiring section 40 divides the image data into 8×8 = 64 blocks, calculates the color means and variances of each block, and obtains the calculated color means and variances as partial feature amounts. Here each pixel has tone-value data in the YCC color space, and for each block the means of Y, Cb, and Cr and the variances of Y, Cb, and Cr are calculated. In other words, three color means and three variances are calculated as the partial feature amounts of each block. The calculated color means and variances indicate the features of the partial image in each block. Note that the means and variances may also be calculated in the RGB color space.
Because the color means and variances are calculated block by block, the feature amount acquiring section 40 does not expand all of the image data in the image storage section 31A at once, but expands the portion of the image data corresponding to each block in turn, block by block. Accordingly, the image storage section 31A does not need a capacity large enough to expand all of the image data.
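A sketch of the partial feature amounts obtained in S101 — per-block color means and variances over an 8×8 grid — is shown below in plain Python. The array layout and channel handling are assumptions for illustration; as described above, the actual firmware processes the data block by block rather than holding the whole image in memory.

```python
# Illustrative computation of the S101 partial feature amounts: split the
# image into an 8x8 grid and compute per-block means and variances of the
# three channels (Y, Cb, Cr in the text; any 3-channel layout works here).

def block_features(pixels, grid=8):
    """pixels: H x W list of (c0, c1, c2) tuples; returns 64 feature dicts."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // grid, w // grid
    features = []
    for by in range(grid):
        for bx in range(grid):
            block = [pixels[y][x]
                     for y in range(by * bh, (by + 1) * bh)
                     for x in range(bx * bw, (bx + 1) * bw)]
            n = len(block)
            means = [sum(p[c] for p in block) / n for c in range(3)]
            variances = [sum((p[c] - means[c]) ** 2 for p in block) / n
                         for c in range(3)]
            # three means + three variances per block, as in the text
            features.append({"mean": means, "var": variances})
    return features
```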
Next, the feature amount acquiring section 40 obtains overall feature amounts (S102). Specifically, the feature amount acquiring section 40 obtains the color means and variances of the entire image data, the centroid, and shooting information as overall feature amounts. The color means and variances indicate features of the entire image. The color means, variances, and centroid of the entire image data are calculated using the partial feature amounts obtained beforehand, so the image data does not have to be expanded again when the overall feature amounts are calculated; this speeds up the calculation of the overall feature amounts. Computation is faster in this way because the overall feature amounts are obtained after the partial feature amounts, even though the overall identification processing (described later) is performed before the partial identification processing (described later). The shooting information is extracted from the shooting data in the image file. Specifically, information such as the aperture value, the shutter speed, and whether the flash fired is used as overall feature amounts. However, not all of the shooting data in the image file is used as overall feature amounts.
Next, the overall identifying section 50 performs the overall identification processing (S103). The overall identification processing identifies (estimates) the scene of the image represented by the image data based on the overall feature amounts. A detailed description of the overall identification processing is given later.
When the scene can be identified by the overall identification processing ("Yes" in S104), the scene identifying section 33 determines the scene by storing the identification result in the result storage section 31B of the storage section 31 (S109) and ends the scene identification processing. In other words, when the scene can be identified by the overall identification processing ("Yes" in S104), the partial identification processing and the integrated identification processing are omitted. This speeds up the scene identification processing.
When the scene cannot be identified by the overall identification processing ("No" in S104), the partial identifying section 60 performs the partial identification processing (S105). The partial identification processing identifies the scene of the entire image represented by the image data based on the partial feature amounts. A detailed description of the partial identification processing is given later.
When the scene can be identified by the partial identification processing ("Yes" in S106), the scene identifying section 33 determines the scene by storing the identification result in the result storage section 31B of the storage section 31 (S109) and ends the scene identification processing. In other words, when the scene can be identified by the partial identification processing ("Yes" in S106), the integrated identification processing is omitted. This speeds up the scene identification processing.
When the scene cannot be identified by the partial identification processing ("No" in S106), the integrated identifying section 70 performs the integrated identification processing (S107). A detailed description of the integrated identification processing is given later.
When the scene can be identified by the integrated identification processing ("Yes" in S108), the scene identifying section 33 determines the scene by storing the identification result in the result storage section 31B of the storage section 31 (S109) and ends the scene identification processing. On the other hand, when the scene cannot be identified by the integrated identification processing ("No" in S108), an identification result indicating that the image represented by the image data is an "other" scene (a scene other than "landscape", "sunset scene", "night scene", "flower", and "autumn foliage") is stored in the result storage section 31B (S110).
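The cascade of Fig. 8 — overall, then partial, then integrated identification, with later stages skipped once a scene is determined — can be sketched as a simple control-flow function. The three callables are hypothetical stand-ins for the identifying sections 50, 60, and 70, each returning a scene name or None when it cannot identify the scene.

```python
# Control-flow sketch of the Fig. 8 cascade. Illustrative only.
def identify_scene(image, overall, partial, integrated):
    for stage in (overall, partial, integrated):   # S103, S105, S107
        scene = stage(image)
        if scene is not None:                      # scene determined:
            return scene                           # later stages skipped (S109)
    return "other"                                 # S110
```

Skipping the remaining stages as soon as one succeeds is what gives the speed-up described in the text.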
Overall Identification Processing
Figure 10 is the flow chart that whole identification is handled.Here, also describing whole identification with reference to figure 9 handles.
At first, a sub-identification part 51 (S201) is selected in whole identification part 50 from a plurality of sub-identification parts 51.Whole identification part 50 has five sub-identification parts 51, and whether the image (image to be identified) that is used to discern as recognition objective belongs to special scenes.Five sub-identification parts 51 discern respectively landscape, sunset scape, night scene, flower and autumn days scene.Here, whole identification part 50 is according to the sub-identification part 51 of selective sequential on landscape → sunset scape → night scene → flower → autumn days.Thus, during beginning, select to be used to discern the sub-identification part 51 (landscape identification part 51L) whether image to be identified belongs to the landscape scene.
Next, the overall identification section 50 refers to the recognition target table and determines whether to perform scene identification using the selected sub-identification section 51 (S202).
Figure 11 is an explanatory diagram of the recognition target table. The recognition target table is stored in the result storage section 31B of the memory 31. In the initial stage, all fields in the recognition target table are set to zero. In step S202, the "negative" field is referred to: when the field is zero, the determination is "YES", and when the field is 1, the determination is "NO". Here, the overall identification section 50 refers to the "negative" field under the "landscape" column, finds that the field is zero, and therefore determines "YES".
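A minimal model of the recognition target table of Figure 11 can clarify how step S202 uses it. The dictionary layout and function names below are assumptions for illustration; the patent describes the table only as per-scene "positive" and "negative" fields initialized to zero.

```python
# Hypothetical model of the recognition target table (Figure 11):
# one "positive" and one "negative" field per scene, all starting at zero.

SCENES = ["landscape", "sunset", "night", "flower", "autumn"]

def new_table():
    return {s: {"positive": 0, "negative": 0} for s in SCENES}

def should_run(table, scene):
    """Step S202: run a sub-identification section only when the scene
    has not already been ruled out (its "negative" field is still zero)."""
    return table[scene]["negative"] == 0
```

Setting a scene's "negative" field to 1 in an earlier stage thus automatically skips that scene in later stages, which is how the overall and partial identification processes share work.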
Next, the sub-identification section 51 calculates, based on the overall feature amounts, a value (evaluation value) according to the probability that the image to be identified belongs to the specific scene (S203). The sub-identification sections 51 of the present embodiment adopt a recognition method using a support vector machine (SVM). A description of support vector machines is given later. When the image to be identified belongs to the specific scene, the discriminant of the sub-identification section 51 is likely to take a positive value; when the image does not belong to the specific scene, the discriminant is likely to take a negative value. Moreover, the higher the probability that the image to be identified belongs to the specific scene, the larger the value of the discriminant. Therefore, a large discriminant value indicates a high probability that the image to be identified belongs to the specific scene, and a small discriminant value indicates a low probability.
Accordingly, the value of the discriminant (the evaluation value) indicates the degree of certainty, that is, the degree to which the image to be identified is likely to belong to the specific scene. It should be noted that, in the following description, the term "degree of certainty" may refer either to the discriminant value itself or to the precision (described later) obtained from the discriminant value. Both the discriminant value itself and the precision obtained from it are "evaluation values" (evaluation results) according to the probability that the image to be identified belongs to the specific scene.
Next, the sub-identification section 51 determines whether the value of the discriminant (the degree of certainty) is greater than a positive threshold (S204). When the discriminant value is greater than the positive threshold, the sub-identification section 51 determines that the image to be identified belongs to the specific scene.
Figure 12 is an explanatory diagram of the positive threshold in the overall identification process. In this figure, the horizontal axis represents the positive threshold, and the vertical axis represents the probability of recall or precision. Figure 13 is an explanatory diagram of recall and precision. When the discriminant value is equal to or greater than the positive threshold, the recognition result is positive; when the discriminant value is less than the positive threshold, the recognition result is negative.
Recall indicates the recall rate. Recall is the ratio of the number of images identified as belonging to the specific scene to the total number of images of that scene. In other words, recall indicates the probability that, when the sub-identification section 51 processes an image of the specific scene, the image is recognized as positive (i.e., the probability that an image of the specific scene is recognized as belonging to that scene). For example, recall indicates the probability that, when the landscape sub-identification section 51L processes a landscape image, the image is recognized as belonging to the landscape scene.
Precision indicates the accuracy rate. Precision is the ratio of the number of images of the specific scene to the total number of images recognized as positive. In other words, precision indicates the probability that, when the sub-identification section 51 for a specific scene recognizes the image to be identified as positive, the image actually is of that scene. For example, precision indicates the probability that an image recognized by the landscape sub-identification section 51L as belonging to the landscape scene is actually a landscape image.
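The two definitions above reduce to simple ratios; the following sketch states them directly (function names are illustrative, not from the patent).

```python
# Recall and precision as defined in the text.

def recall(true_positives, total_scene_images):
    # fraction of the scene's images that are recognized as positive
    return true_positives / total_scene_images

def precision(true_positives, total_positive_results):
    # fraction of positive results that really belong to the scene
    return true_positives / total_positive_results
```

Raising the positive threshold shrinks the set of positive results, which tends to raise precision and lower recall, exactly the trade-off the text discusses next.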
As can be seen from Figure 12, the larger the positive threshold, the higher the precision. Thus, the larger the positive threshold, the higher the probability that an image identified as belonging to, for example, the landscape scene is actually a landscape image. That is, the larger the positive threshold, the lower the probability of misidentification.
On the other hand, the larger the positive threshold, the lower the recall. As a result, even when the landscape sub-identification section 51L processes a landscape image, it becomes difficult to correctly recognize that the image belongs to the landscape scene. When the image to be identified can be recognized as belonging to the landscape scene ("YES" in S204), identification with respect to the other scenes (such as the sunset scene) is no longer performed, which increases the speed of the overall identification process. Therefore, the larger the positive threshold, the lower the speed of the overall identification process. In addition, since the speed of the scene recognition process is improved by omitting the partial identification process when scene identification can be achieved by the overall identification process (S104), the larger the positive threshold, the lower the speed of the scene recognition process.
That is, if the positive threshold is too small, the probability of misidentification becomes higher; if the positive threshold is too large, the processing speed decreases. In the present embodiment, the positive threshold for the landscape scene is set to 1.72 so that the precision (accuracy rate) is 97.5%.
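One plausible way to derive a positive threshold that meets a precision target such as 97.5% is to sweep candidate thresholds over evaluation-sample scores. This procedure is an assumption for illustration; the patent states only the resulting values, not how they were chosen.

```python
# Hypothetical threshold selection: the smallest discriminant value whose
# precision on evaluation samples reaches the target (e.g. 0.975).

def pick_positive_threshold(scores, labels, target_precision):
    """scores: discriminant values of evaluation samples.
    labels: True when the sample really belongs to the scene.
    Returns the smallest usable threshold, or None if unreachable."""
    for t in sorted(set(scores)):
        positives = [label for s, label in zip(scores, labels) if s >= t]
        if positives and sum(positives) / len(positives) >= target_precision:
            return t
    return None
```

Smaller thresholds that pass the precision target are preferred here because, as the text notes, a larger threshold lowers recall and therefore processing speed.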
When the discriminant value is greater than the positive threshold ("YES" in S204), the sub-identification section 51 determines that the image to be identified belongs to the specific scene, and sets the positive flag (S205). "Setting the positive flag" means setting the "positive" field in Figure 11 to 1. In this case, the overall identification section 50 terminates the overall identification process without performing identification by the subsequent sub-identification sections 51. For example, when the image can be identified as a landscape image, the overall identification section 50 terminates the overall identification process without performing identification with respect to the sunset scene and so on. In this case, the speed of the overall identification process can be improved, because identification by the subsequent sub-identification sections 51 is omitted.
When the discriminant value is not greater than the positive threshold ("NO" in S204), the sub-identification section 51 cannot determine that the image to be identified belongs to the specific scene, and proceeds to the subsequent step S206.
Then, the sub-identification section 51 compares the discriminant value with negative thresholds (S206). Based on this comparison, the sub-identification section 51 determines whether the image to be identified does not belong to a predetermined scene. This determination is made in two ways. First, when the discriminant value of a sub-identification section 51 for a certain specific scene is less than a first negative threshold, it is determined that the image to be identified does not belong to that specific scene. For example, when the discriminant value of the landscape sub-identification section 51L is less than the first negative threshold, it is determined that the image to be identified does not belong to the landscape scene. Second, when the discriminant value of a sub-identification section 51 for a certain specific scene is greater than a second negative threshold, it is determined that the image to be identified does not belong to a scene different from that specific scene. For example, when the discriminant value of the landscape sub-identification section 51L is greater than the second negative threshold, it is determined that the image to be identified does not belong to the night scene.
Figure 14 is an explanatory diagram of the first negative threshold. In this figure, the horizontal axis represents the first negative threshold, and the vertical axis represents probability. The curve drawn with a thick line represents the true-negative recall, indicating the probability that an image that is not a landscape image is correctly identified as not being a landscape image. The curve drawn with a thin line represents the false-negative recall, indicating the probability that a landscape image is wrongly identified as not being a landscape image.
As can be seen from Figure 14, the smaller the first negative threshold, the smaller the false-negative recall. Thus, the smaller the first negative threshold, the lower the probability that an image identified as not belonging to, for example, the landscape scene is actually a landscape image. In other words, the probability of misidentification decreases.
On the other hand, the smaller the first negative threshold, the smaller the true-negative recall as well. As a result, an image that is not a landscape image becomes less likely to be identified as not being a landscape image. Meanwhile, when the image to be identified can be identified as not being a specific scene, processing by the partial sub-identification section 61 for that scene is omitted in the partial identification process, which increases the speed of the scene recognition process (see S302 in Figure 17, described later). Therefore, the smaller the first negative threshold, the lower the speed of the scene recognition process.
That is, if the first negative threshold is too large, the probability of misidentification becomes higher; if it is too small, the processing speed decreases. In the present embodiment, the first negative threshold is set to -1.01 so that the false-negative recall is 2.5%.
When the probability that a certain image belongs to the landscape scene is high, the probability that the image belongs to the night scene is inevitably very low. Thus, when the discriminant value of the landscape sub-identification section 51L is large, it may be possible to identify the image as not being a night scene. The second negative threshold is provided in order to perform this kind of identification.
Figure 15 is an explanatory diagram of the second negative threshold. In this figure, the horizontal axis represents the discriminant value with respect to the landscape scene, and the vertical axis represents probability. In addition to the recall and precision curves shown in Figure 12, this figure also shows, drawn with a dotted line, the recall curve with respect to the night scene. Examining this dotted curve, it is found that when the discriminant value with respect to the landscape scene is greater than -0.44, the probability that the image to be identified is a night-scene image is 2.5%. In other words, even if an image whose discriminant value with respect to the landscape scene is greater than -0.44 is identified as not being a night-scene image, the probability of misidentification is only 2.5%. In the present embodiment, the second negative threshold is therefore set to -0.44.
When the discriminant value is less than the first negative threshold, or when the discriminant value is greater than the second negative threshold ("YES" in S206), the sub-identification section 51 determines that the image to be identified does not belong to the predetermined scene, and sets the negative flag (S207). "Setting the negative flag" means setting the "negative" field in Figure 11 to 1. For example, when it is determined based on the first negative threshold that the image to be identified does not belong to the landscape scene, the "negative" field under the "landscape" column is set to 1. Likewise, when it is determined based on the second negative threshold that the image to be identified does not belong to the night scene, the "negative" field under the "night" column is set to 1.
Figure 16A is an explanatory diagram of the thresholds in the landscape sub-identification section 51L described above. In the landscape sub-identification section 51L, a positive threshold and negative thresholds are set in advance. The positive threshold is set to 1.72. The negative thresholds comprise the first negative threshold and the second negative thresholds. The first negative threshold is set to -1.01. For each scene other than landscape, the second negative threshold is set to a corresponding value.
Figure 16B is an explanatory diagram outlining the processing of the landscape sub-identification section 51L described above. Here, for simplicity of description, only the second negative threshold with respect to the night scene is described. When the discriminant value is greater than 1.72 ("YES" in S204), the landscape sub-identification section 51L determines that the image to be identified belongs to the landscape scene. When the discriminant value is not greater than 1.72 ("NO" in S204) but is greater than -0.44 ("YES" in S206), the landscape sub-identification section 51L determines that the image to be identified does not belong to the night scene. When the discriminant value is less than -1.01 ("YES" in S206), the landscape sub-identification section 51L determines that the image to be identified does not belong to the landscape scene. It should be noted that the landscape sub-identification section 51L also determines, based on the second negative thresholds with respect to the sunset and autumn-foliage scenes, whether the image to be identified does not belong to those scenes. However, since the second negative threshold with respect to the flower scene is greater than the positive threshold, the landscape sub-identification section 51L cannot determine that the image to be identified does not belong to the flower scene.
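The decision summarized in Fig. 16B can be sketched as a small function using the example thresholds given in the text (positive 1.72, first negative -1.01, and the second negative threshold -0.44 for the night scene). The function name and return strings are illustrative assumptions, and the sketch is simplified to the night-scene case only.

```python
# Sketch of the landscape sub-identification decision (Fig. 16B),
# using the example thresholds from the text.

def landscape_decision(value):
    if value > 1.72:
        return "landscape"       # S204: positive flag for landscape
    if value < -1.01:
        return "not landscape"   # S206: first negative threshold
    if value > -0.44:
        return "not night"       # S206: second negative threshold (night)
    return "undecided"
```

Note the gap between -1.01 and -0.44: a discriminant value there supports no conclusion at all, which is why the process moves on to the next sub-identification section.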
" deny " time, when in S206 being " denys when in S202 being " time or when the processing of S207 finished, whole identification part 50 determined whether to exist follow-up sub-identification part 51 (S208).Here, the processing of being undertaken by landscape identification part 51L finishes, thereby follow-up sub-identification part 51 (sunset scape identification part 51S) is determined to exist in whole identification part 50 in S208.
Then, when the processing of S205 finishes (when definite image to be identified belongs to special scenes), or when determining not have follow-up sub-identification part 51 in S208 (when can not determine that image to be identified belongs to special scenes), whole identification part 50 stops whole identification and handles.
As described above, when the overall identification process terminates, the scene recognition section 33 determines whether scene identification could be achieved by the overall identification process (S104 in Fig. 8). At this time, the scene recognition section 33 refers to the recognition target table shown in Figure 11 and determines whether a 1 is present in any "positive" field.
When scene identification can be achieved by the overall identification process ("YES" in S104), the partial identification process and the integrated identification process are omitted. The speed of the scene recognition process is improved accordingly.
Partial Identification Process
Figure 17 is a flowchart of the partial identification process. The partial identification process is performed when scene identification cannot be achieved by the overall identification process ("NO" in S104 in Fig. 8). As described below, the partial identification process identifies the scene of the entire image by individually identifying the scenes of the partial images into which the image to be identified is divided. The partial identification process is described here with reference to Fig. 9 as well.
First, the partial identification section 60 selects one partial sub-identification section 61 from among a plurality of partial sub-identification sections 61 (S301). The partial identification section 60 has three partial sub-identification sections 61. Each partial sub-identification section 61 identifies whether the partial images of the 8x8 = 64 blocks into which the image to be identified is divided belong to a specific scene. The three partial sub-identification sections 61 identify the sunset, flower, and autumn-foliage scenes, respectively. The partial identification section 60 selects the partial sub-identification sections 61 in the order sunset → flower → autumn foliage. Thus, at the start, the partial sub-identification section 61 that identifies whether partial images belong to the sunset scene (the sunset partial sub-identification section 61S) is selected.
Next, the partial identification section 60 refers to the recognition target table (Figure 11) and determines whether to perform scene identification using the selected partial sub-identification section 61 (S302). Here, the partial identification section 60 refers to the "negative" field under the "sunset" column in the recognition target table, determines "YES" when the field is zero, and determines "NO" when the field is 1. It should be noted that the determination in S302 is "NO" when, during the overall identification process, the sunset sub-identification section 51S has set the negative flag based on the first negative threshold, or another sub-identification section 51 has set the negative flag based on a second negative threshold. If "NO" is determined, the partial identification process with respect to the sunset scene is omitted, which improves the speed of the partial identification process. However, for convenience of description, it is assumed here that the determination is "YES".
Then, the partial sub-identification section 61 selects one partial image from among the partial images of the 8x8 = 64 blocks into which the image to be identified is divided (S303).
Figure 18 is an explanatory diagram of the order in which the sunset partial sub-identification section 61S selects partial images. When the scene of the entire image is identified based on partial images, the partial images used for identification are preferably those in which the object is present. For this reason, in the present embodiment, thousands of sample sunset images were prepared, each sample image was divided into 8x8 = 64 blocks, the blocks containing sunset partial images (partial images of the sunset sun and sky) were extracted, and, based on the positions of the extracted blocks, the existence probability of sunset partial images in each block was calculated. In the present embodiment, partial images are selected in descending order of the existence probability of their blocks. It should be noted that information on the selection order shown in the figure is stored in the memory 23 as a part of the program.
It should be noted that, in the case of sunset images, the sunset sky often extends from around the center of the image into its upper half, so the existence probability of the blocks in the region from around the center to the upper half increases. Moreover, in the case of sunset images, the bottom third of the image often darkens due to backlight, and it cannot be determined from a single partial image whether the image is a sunset scene or a night scene, so the existence probability of the blocks in the bottom third decreases. In the case of flower images, the flower is often located around the center of the image, so the existence probability of flower partial images in the central portion increases.
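The per-block existence probabilities described above amount to counting, over the sample images, how often each block contains the target part and sorting the blocks by that count. The following sketch assumes sample annotations are available as 64-element 0/1 lists; the function name and data format are illustrative, not from the patent.

```python
# Hypothetical computation of the block selection order (Figure 18):
# count how often each of the 64 blocks contains sunset-scene parts
# across the sample images, then sort blocks by descending frequency.

def block_order(samples, top_n=10):
    """samples: list of 64-element 0/1 lists marking blocks that contain
    the target part. Returns the indices of the top_n most likely blocks,
    in descending order of existence probability."""
    counts = [0] * 64
    for sample in samples:
        for i, has_part in enumerate(sample):
            counts[i] += has_part
    return sorted(range(64), key=lambda i: counts[i], reverse=True)[:top_n]
```

Precomputing this order offline and storing it with the program, as the text describes for memory 23, keeps the runtime cost of the partial identification process to a simple table lookup.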
Then, the partial sub-identification section 61 determines, based on the partial feature amounts of the selected partial image, whether the selected partial image belongs to the specific scene (S304). The partial sub-identification sections 61, like the sub-identification sections 51 of the overall identification section 50, adopt a discrimination method using a support vector machine (SVM). A description of support vector machines is given later. When the discriminant value is positive, it is determined that the partial image belongs to the specific scene, and the partial sub-identification section 61 increments a positive count value. When the discriminant value is negative, it is determined that the partial image does not belong to the specific scene, and the partial sub-identification section 61 increments a negative count value.
Next, the partial sub-identification section 61 determines whether the positive count value is greater than a positive threshold (S305). The positive count value indicates the number of partial images determined to belong to the specific scene. When the positive count value is greater than the positive threshold ("YES" in S305), the partial sub-identification section 61 determines that the image to be identified belongs to the specific scene, and sets the positive flag (S306). In this case, the partial identification section 60 terminates the partial identification process without performing identification by the subsequent partial sub-identification sections 61. For example, when the image to be identified can be identified as a sunset image, the partial identification section 60 terminates the partial identification process without performing identification with respect to the flower and autumn-foliage scenes. In this case, the speed of the partial identification process can be improved, because identification by the subsequent partial sub-identification sections 61 is omitted.
When the positive count value is not greater than the positive threshold ("NO" in S305), the partial sub-identification section 61 cannot determine that the image to be identified belongs to the specific scene, and performs the processing of the subsequent step S307.
When the sum of the positive count value and the number of remaining partial images is less than the positive threshold ("YES" in S307), the partial sub-identification section 61 proceeds to the processing of S309. When this sum is less than the positive threshold, the positive count value cannot exceed the positive threshold even if every remaining partial image were to increment it, so the process proceeds to S309 and the SVM-based identification of the remaining partial images is omitted. The speed of the partial identification process can thereby be improved.
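The counting loop of steps S303 to S307, including the early-exit check, can be sketched as follows. This is a simplified illustration under stated assumptions: `classify` stands in for the per-block SVM and returns a discriminant value, and the negative-count step (S309) is collapsed into a plain `False` result.

```python
# Sketch of the partial-image counting loop (S303-S307).

def partial_identify(blocks, classify, positive_threshold):
    """blocks: partial images in selection order; classify: per-block
    discriminant (positive value means the block matches the scene)."""
    positive = 0
    for i, block in enumerate(blocks):
        if classify(block) > 0:
            positive += 1                  # S304: increment positive count
        if positive > positive_threshold:
            return True                    # S305/S306: scene identified
        remaining = len(blocks) - (i + 1)
        if positive + remaining < positive_threshold:
            return False                   # S307: threshold now unreachable
    return False                           # S308: no partial images left
```

The S307 check matters because, with only ten blocks examined, a run of early negatives quickly makes the threshold unreachable and the remaining SVM evaluations can be skipped.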
When " denying " determined in the local identification part 61 of S307 neutron, sub local identification part 61 determined whether to exist follow-up topography (S308).In the present embodiment, not 64 all topographies of sequentially selecting image division to be identified to become.Only preceding ten topographies of sketching the contours by thick line among Figure 18 are sequentially selected.For this cause, when the end of identification of the tenth topography, sub local identification part 61 determines there is not follow-up topography in S308.(consider this point, " number of residue topography " also is determined.)
Figure 19 show only based on preceding ten topographies carry out sunset scape image identification the time return the figure of existing degree and accuracy.When sure threshold value is set to when as shown in the drawing, it is about 80% that accuracy (accuracy) can be set to, and Hui Xiandu leads (Hui Xiandu) and can be set to approximately 90%, therefore can carry out the identification of pinpoint accuracy.
In the present embodiment, only carry out the identification of sunset scape image based on ten topographies.Therefore, in the present embodiment, the speed that local identification is handled can be higher than the situation of using all 64 topographies to carry out the identification of sunset scape image.
In addition, in the present embodiment, use preceding ten the higher topographies of probability that exist of sunset scape parts of images to carry out the identification of sunset scape image.Therefore, in the present embodiment, can be arranged on than using ten topographies that do not consider to have probability and be extracted to carry out the high rank of situation of the identification of sunset scape image returning existing degree and accuracy.
In addition, in the present embodiment, the descending order that has probability of scape parts of images is selected topography according to sunset.Thereby, may the stage early in S305 determine " being " more.Therefore, the speed handled of local identification may be higher than to select the situation of topography with the irrelevant order of the degree that has probability.
When "YES" is determined in S307, or when it is determined in S308 that there is no subsequent partial image, the partial sub-identification section 61 determines whether the negative count value is greater than a negative threshold (S309). This negative threshold has much the same function as the negative thresholds in the overall identification process described above (S206 in Figure 10), so a detailed description is omitted. When "YES" is determined in S309, the negative flag is set (S310) in the same way as in S207 of Figure 10.
When the determination in S302 is "NO", when the determination in S309 is "NO", or when the processing of S310 has finished, the partial identification section 60 determines whether there is a subsequent partial sub-identification section 61 (S311). When the processing by the sunset partial sub-identification section 61S has finished, the remaining partial sub-identification sections 61, namely the flower partial sub-identification section 61F and the autumn-foliage partial sub-identification section 61R, still exist, so the partial identification section 60 determines in S311 that there is a subsequent partial sub-identification section 61.
Then, when the processing of S306 has finished (when it is determined that the image to be identified belongs to a specific scene), or when it is determined in S311 that there is no subsequent partial sub-identification section 61 (when it cannot be determined that the image to be identified belongs to a specific scene), the partial identification section 60 terminates the partial identification process.
As described above, when the partial identification process terminates, the scene recognition section 33 determines whether scene identification could be achieved by the partial identification process (S106 in Fig. 8). At this time, the scene recognition section 33 refers to the recognition target table shown in Figure 11 and determines whether a 1 is present in any "positive" field.
When scene identification can be achieved by the partial identification process ("YES" in S106), the integrated identification process is omitted. The speed of the scene recognition process is improved accordingly.
Support Vector Machines
Before describing the integrated identification process, the support vector machine (SVM) used by the sub-identification sections 51 in the overall identification process and by the partial sub-identification sections 61 in the partial identification process is described.
Figure 20A is an explanatory diagram of discrimination by a linear support vector machine. Here, learning samples are shown in a two-dimensional space defined by two features x1 and x2. The learning samples are divided into two classes, A and B. In the figure, the samples belonging to class A are represented by circles, and the samples belonging to class B are represented by squares.
As a result of learning using the learning samples, a boundary that divides the two-dimensional space into two parts is defined. The boundary is defined as <w · x> + b = 0 (where x = (x1, x2), w denotes a weight vector, and <w · x> denotes the inner product of w and x). However, the boundary is defined, as a result of learning using the learning samples, so as to maximize the margin. That is, in the figure, the boundary is not the bold dashed line but the bold solid line.
Discrimination is performed using the discriminant f(x) = <w · x> + b. When a certain input x (an input x separate from the learning samples) satisfies f(x) > 0, the input x is determined to belong to class A, and when f(x) < 0, the input x is determined to belong to class B.
Here, discrimination has been described using a two-dimensional space. However, this is not a limitation (that is, more than two features may be used). In that case, the boundary is defined as a hyperplane.
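The linear discriminant above translates directly into code; the sketch below works for any number of features, with the weights and bias being arbitrary example values rather than learned ones.

```python
# Linear SVM discriminant f(x) = <w, x> + b and the sign-based class rule.

def linear_discriminant(w, x, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify_linear(w, x, b):
    # f(x) > 0 -> class A; otherwise class B (f(x) = 0 lies on the boundary)
    return "A" if linear_discriminant(w, x, b) > 0 else "B"
```

In a trained SVM, `w` and `b` are the result of margin maximization over the learning samples; here they are simply inputs.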
There are cases in which the separation between the two classes cannot be achieved using a linear function. If discrimination by a linear support vector machine is performed in such a case, the accuracy of the discrimination result decreases. To address this problem, the features in the input space are subjected to a nonlinear transformation (in other words, a nonlinear mapping from the input space to some feature space), so that the separation can be achieved in the feature space using a linear function. A nonlinear support vector machine uses this method.
Figure 20B is an explanatory diagram of discrimination using a kernel function. Here, learning samples are shown in a two-dimensional space defined by the two features x1 and x2. When the input space shown in Figure 20B is nonlinearly mapped to the feature space shown in Figure 20A, the separation between the two classes can be achieved using a linear function. When the boundary is defined so as to maximize the margin in this feature space, the inverse mapping of the boundary in the feature space is the boundary shown in Figure 20B. As a result, the boundary is nonlinear, as shown in Figure 20B.
Since a Gaussian kernel is used in the present embodiment, the discriminant f(x) is expressed by the following equation:
Equation 1
\[
f(x) = \sum_{i=1}^{N} w_i \exp\!\left( -\frac{\sum_{j=1}^{M} \bigl(x_j - y_j^{(i)}\bigr)^2}{2\sigma^2} \right)
\]
where M denotes the number of features, N denotes the number of learning samples (or the number of learning samples contributing to the boundary), w_i denotes a weight factor, y_j^{(i)} denotes the j-th feature of the i-th learning sample, and x_j denotes the j-th feature of the input x.
When a certain input x (an input x separate from the learning samples) satisfies f(x) > 0, the input x is determined to belong to class A, and when f(x) < 0, the input x is determined to belong to class B. Moreover, the larger the value of the discriminant f(x), the higher the probability that the input x (an input x separate from the learning samples) belongs to class A. Conversely, the smaller the value of the discriminant f(x), the lower the probability that the input x belongs to class A. As described above, the sub-identification sections 51 in the overall identification process and the partial sub-identification sections 61 in the partial identification process employ the value of this discriminant f(x) of the support vector machine.
It should be noted that evaluation samples are prepared separately from the learning samples. The recall and precision curves described above are based on the recognition results with respect to the evaluation samples.
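Equation 1 can be sketched directly in code. The sample vectors and weights below are invented for illustration; in the embodiment they would come from SVM training.

```python
# Gaussian-kernel discriminant (Equation 1): a weighted sum of Gaussian
# kernel evaluations between the input x and N learning samples.

import math

def gaussian_discriminant(x, samples, weights, sigma):
    """x: input features (length M); samples: N learning-sample feature
    vectors y_i; weights: N weight factors w_i."""
    total = 0.0
    for w_i, y in zip(weights, samples):
        sq_dist = sum((xj - yj) ** 2 for xj, yj in zip(x, y))
        total += w_i * math.exp(-sq_dist / (2 * sigma ** 2))
    return total
```

Each term peaks when x coincides with a learning sample, so inputs near positively weighted samples push f(x) positive, matching the sign-based class rule above.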
Integrated Identification Process
Handle and during local identification handles in whole identification described above, sub-identification part 51 is set to relative higher value with sure threshold value in the sub local identification part 61, so that accuracy (accuracy rate) is set to quite high rank.This be because, when the accuracy rate of the landscape identification part of for example whole identification part 51L is set to low level, the problem that occurs is that landscape identification part 51L wrong identification image on autumn days is a landscape image, and stops whole identification processing before being discerned by identification part 51R execution on autumn days.In the present embodiment, accuracy (accuracy rate) is set to quite high rank, and the image that therefore belongs to special scenes is by with respect to sub-identification part 51 (or the sub local identification part 61) identification of this special scenes (for example, autumn days, image was by identification part 51R on autumn days (or autumn days local identification part 61R) identification).
However, when the precision (accuracy rate) is set to a fairly high level in the overall identification process and the partial identification process, the likelihood increases that scene recognition cannot be achieved by those two processes. To address this problem, in the present embodiment, when scene recognition cannot be achieved by the overall identification process and the partial identification process, the integrated identification process described below is performed.
FIG. 21 is a flowchart of the integrated identification process. As described below, the integrated identification process is a process of selecting the scene with the highest degree of certainty based on the values of the discriminants of the sub-identification sections 51 in the overall identification process.
First, based on the values of the discriminants of the five sub-identification sections 51, the integrated identification section 70 extracts the scenes for which the value of the discriminant is positive (S401). At this time, the values of the discriminants calculated by the sub-identification sections 51 during the overall identification process are used.
Next, the integrated identification section 70 determines whether there is a scene for which the value of the discriminant is positive (S402).
When there is a scene for which the value of the discriminant is positive ("YES" in S402), an affirmative flag is set in the column of the scene with the maximum value (S403), and the integrated identification process terminates. The image to be identified is thereby determined to belong to the scene with the maximum value.
On the other hand, when there is no scene for which the value of the discriminant is positive ("NO" in S402), the integrated identification process terminates without an affirmative flag being set. As a result, no scene has a 1 in the "affirmative" field of the identification target table shown in FIG. 11; that is, the scene to which the image to be identified belongs cannot be identified.
As described above, when the integrated identification process terminates, the scene recognition section 33 determines whether scene recognition could be achieved by the integrated identification process (S108 in FIG. 8). At this time, the scene recognition section 33 refers to the identification target table shown in FIG. 11 and determines whether any "affirmative" field contains a 1. When "NO" is determined in S402, "NO" is also determined in S108.
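The steps S401 to S403 above can be sketched as follows; the scene names and discriminant values are illustrative only.

```python
# Sketch of the integrated identification process: among the discriminant
# values already computed by the sub-identification sections during the
# overall identification process, affirm the scene with the largest positive
# value; if no value is positive, no scene is affirmed (None).

def integrated_identification(discriminants):
    positives = {scene: v for scene, v in discriminants.items() if v > 0}  # S401
    if not positives:                                                       # S402 "NO"
        return None
    return max(positives, key=positives.get)                                # S403

values = {"landscape": -0.8, "sunset": 0.3, "night": -1.2,
          "flower": 0.7, "autumn": -0.1}
print(integrated_identification(values))                      # flower
print(integrated_identification({"landscape": -0.5, "night": -0.2}))  # None
```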
Overview of Scene Information Correction
As described above, the user can set the shooting mode using the mode setting dial 2A. The digital still camera 2 then determines the shooting conditions (exposure time, ISO sensitivity, and so on) based on, for example, the set shooting mode and photometry, and photographs the subject under the determined shooting conditions. After shooting, the digital still camera 2 stores shooting data indicating the shooting conditions at the time of shooting in the memory card 6 together with the image data, as an image file.
In some cases, the user forgets to set the shooting mode and shoots while a shooting mode unsuited to the shooting conditions is still set. For example, a daytime scene may be shot while the night scene mode is still set. In such a case, although the image data in the image file is an image of a daytime scene, information indicating the night scene mode is stored in the shooting data (for example, the scene capture type data shown in FIG. 5 is set to "3").
On the other hand, some printers do not have the scene recognition processing function described above and instead automatically correct the image data in the image file based on the shooting data. If an image file of a picture shot in an unsuitable shooting mode is printed by such a printer, the image data may be corrected based on the incorrect shooting data.
To address this problem, in the present embodiment, when the scene of the scene recognition result does not match the scene indicated by the scene information in the image file (the scene capture type data and the shooting mode data), the scene of the scene recognition result is stored in the supplemental data in the image file. As methods for storing the scene of the scene recognition result in the image file, a method of changing the original scene information and a method of adding the scene of the scene recognition result while keeping the original scene information unchanged can be used.
Consequently, when the user performs printing with another printer, the image data is corrected appropriately even when the printer used does not have the scene recognition processing function but performs automatic correction processing.
FIG. 22 is a flowchart of the scene information correction process of the present embodiment. This scene information correction process is realized by the CPU 22 executing a scene information correction program stored in the memory 23.
The scene information correction process is performed after the scene recognition process described above. However, the scene information correction process may also be performed before, during, or after printing by the printer 4.
First, the printer-side controller 20 acquires the shooting data in the image file (S501). Specifically, the printer-side controller 20 acquires the scene capture type data (Exif SubIFD area) and the shooting mode data (MakerNote IFD area), both of which are supplemental data in the image file. The printer-side controller 20 can thereby analyze the scene indicated by the supplemental data in the image file.
Next, the printer-side controller 20 acquires the recognition results (S502). The recognition results include the face recognition result obtained by the face recognition section 32 described above and the scene recognition result obtained by the scene recognition section 33 described above. The printer-side controller 20 can thereby estimate to which of the scenes "portrait", "landscape", "sunset scene", "night scene", "flower", "autumn" and "other" the image data in the image file belongs.
The printer-side controller 20 then compares the scene indicated by the supplemental data with the estimated scene (S503). When there is no mismatch between the two scenes ("NO" in S503), the scene information correction process terminates.
When there is a mismatch between the two scenes ("YES" in S503), the printer-side controller 20 corrects the shooting data of the image file in the memory card 6 (S504). Thus, when the user removes the memory card 6 from the printer 4 of the present embodiment and inserts it into another printer, the image data is corrected appropriately even if that printer does not have the scene recognition processing function but performs automatic correction processing.
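The flow of S501 to S504 can be condensed into the following sketch; the dict-based "image file" stands in for real Exif parsing and is purely illustrative.

```python
# Sketch of the scene information correction flow of FIG. 22: read the scene
# from the supplemental data (S501), compare it with the recognized scene
# (S503), and overwrite the stored scene on a mismatch (S504).

def correct_scene_info(image_file, recognized_scene):
    indicated = image_file.get("scene_capture_type")     # S501
    if indicated is None or recognized_scene is None:    # nothing to compare
        return False
    if indicated == recognized_scene:                    # S503 "NO"
        return False
    image_file["scene_capture_type"] = recognized_scene  # S504
    return True

f = {"scene_capture_type": "night scene"}
changed = correct_scene_info(f, "landscape")
print(changed, f["scene_capture_type"])  # True landscape
```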
Various forms of the processes of S503 and S504 described above are possible. Processing examples of S503 and S504 are described below.
Example 1: Changing the Scene Capture Type Data
In the following description, the printer-side controller 20 changes the scene capture type data in the image file.
In S503 above, the printer-side controller 20 compares the scene capture type data (which is supplemental data in the image file) with the scene recognition result. When the scene capture type data acquired in S501 indicates "portrait", "landscape" or "night scene", and the recognition result obtained in S502 is "portrait", "landscape" or "night scene", it can be determined whether there is a mismatch between the two scenes.
When the scene capture type data acquired in S501 is not any of "portrait", "landscape" and "night scene", for example when the scene capture type data is "0" (see FIG. 5), a scene cannot be specified based on the scene capture type data. It therefore cannot be determined whether there is a mismatch between the two scenes, and "NO" is determined in S503. Because the scene capture type data is standardized data, the scenes that can be specified are limited, and the scene capture type data thus tends not to indicate any of "portrait", "landscape" and "night scene".
Likewise, when the recognition result obtained in S502 is not any of "portrait", "landscape" and "night scene", there is no scene capture type data corresponding to the recognition result, so it cannot be determined whether there is a mismatch between the two scenes, and "NO" is determined in S503. For example, when the recognition result is "sunset scene", there is no corresponding scene capture type data, so it cannot be determined whether there is a mismatch, and "NO" is determined in S503. Moreover, in this case (for example, when the recognition result is "sunset scene"), there is no need to determine whether there is a mismatch, because the scene capture type data cannot be changed in accordance with the recognition result.
When the scene capture type data acquired in S501 indicates "portrait", "landscape" or "night scene", and the recognition result obtained in S502 is "portrait", "landscape" or "night scene", the printer-side controller 20 determines whether the two scenes match. When the two scenes match ("NO" in S503), the scene information correction process terminates. On the other hand, when the two scenes do not match, the printer-side controller 20 changes the scene capture type data in the image file. For example, when the scene capture type data indicates "landscape" and the recognition result is "night scene", the printer-side controller 20 changes the scene capture type data from "landscape" to "night scene" (that is, changes the scene capture type data from "1" to "3").
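The comparison in this example can be sketched as follows. The code values 1 = landscape and 3 = night scene follow the text above; 2 = portrait is assumed per the standardized Exif convention referenced by FIG. 5.

```python
# Sketch of Example 1: correction applies only when both the stored scene
# capture type code and the recognition result fall inside the small set of
# standardized scenes; otherwise "NO" is determined in S503.

CODE_TO_SCENE = {1: "landscape", 2: "portrait", 3: "night scene"}
SCENE_TO_CODE = {v: k for k, v in CODE_TO_SCENE.items()}

def correct_capture_type(code, recognized):
    """Return the (possibly updated) code and whether it was changed."""
    stored = CODE_TO_SCENE.get(code)
    if stored is None or recognized not in SCENE_TO_CODE:  # "NO" in S503
        return code, False
    if stored == recognized:                               # scenes match
        return code, False
    return SCENE_TO_CODE[recognized], True                 # S504: overwrite

print(correct_capture_type(1, "night scene"))   # (3, True)
print(correct_capture_type(0, "night scene"))   # (0, False): code 0 unspecifiable
print(correct_capture_type(1, "sunset scene"))  # (1, False): no matching code
```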
According to this example, the mismatch between the two scenes is determined based on the scene capture type data. Because the scene capture type data is standardized data, the printer 4 can determine the content of the scene capture type data regardless of the manufacturer of the digital still camera 2 used for shooting. This example therefore has versatility. However, because the scenes that can be represented by the scene capture type data are limited, the extent of possible correction is also limited.
Example 2: Changing the Shooting Mode Data
A mismatch between the two scenes can also be determined based on the shooting mode data, which is MakerNote data. In this case, the printer-side controller 20 changes the shooting mode data.
In S503 above, the printer-side controller 20 compares the shooting mode data (which is supplemental data in the image file) with the scene recognition result. When the shooting mode data acquired in S501 indicates "portrait", "landscape", "sunset scene" or "night scene", and the recognition result obtained in S502 is "portrait", "landscape", "sunset scene" or "night scene", it can be determined whether there is a mismatch between the two scenes.
It should be noted that when the shooting mode data acquired in S501 does not indicate any of "portrait", "landscape", "sunset scene" and "night scene", for example when the shooting mode data is "3 (close-up)" (see FIG. 5), a comparison with the recognition result cannot be performed. It therefore cannot be determined whether there is a mismatch between the two scenes, and "NO" is determined in S503.
Likewise, when the recognition result obtained in S502 is not any of "portrait", "landscape", "sunset scene" and "night scene", there is no shooting mode data corresponding to the recognition result, so it cannot be determined whether there is a mismatch between the two scenes, and "NO" is determined in S503. For example, when the recognition result is "flower", there is no corresponding shooting mode data, so it cannot be determined whether there is a mismatch, and "NO" is determined in S503. Moreover, when the recognition result is "flower" or "autumn", there is no need to determine whether there is a mismatch, because the shooting mode data cannot be changed to "flower" or "autumn".
When the shooting mode data acquired in S501 indicates "portrait", "landscape", "sunset scene" or "night scene", and the recognition result obtained in S502 is "portrait", "landscape", "sunset scene" or "night scene", the printer-side controller 20 determines whether the two scenes match. When the two scenes match ("NO" in S503), the scene information correction process terminates. On the other hand, when the two scenes do not match, the printer-side controller 20 changes the shooting mode data in the image file. For example, when the recognition result is "sunset scene" although the shooting mode data indicates "landscape", the printer-side controller 20 changes the shooting mode data from "landscape" to "sunset scene".
According to this example, the mismatch between the two scenes is determined based on the shooting mode data. Because the shooting mode data is MakerNote data, its data type can be freely defined by the manufacturer, so there are many scene types that can be specified. For this reason, in this example, comparison and correction can also be performed for "sunset scene", whereas comparison and correction could not be performed for "sunset scene" in the example described above. However, because the shooting mode data is MakerNote data, the printer-side controller 20 needs an analysis program for analyzing the data storage format of the MakerNote IFD area. Moreover, the data storage format of the MakerNote IFD area differs from manufacturer to manufacturer, so multiple analysis programs need to be prepared to support the various storage formats.
Example 3: Changing Scene Information in Consideration of the Degree of Certainty
A comparison of the foregoing shows that the cases where scene recognition is achieved by the overall identification process or by the partial identification process yield a high degree of certainty, whereas the remaining case yields a low degree of certainty. Specifically, the case where an image is identified as "landscape" by the overall identification process gives a lower misidentification probability than the case where the image is identified as "landscape" by the integrated identification process. The reason is that in the overall identification process the precision (accuracy rate) is set to a fairly high level, while the integrated identification process is performed only in cases where scene recognition could not be achieved by the overall identification process and the partial identification process.
That is to say, even when the recognition results are the same, namely "landscape", the degrees of certainty may differ from one another.
When there is a mismatch between the scene indicated by the supplemental data in the image file and the scene of the recognition result, if the supplemental data is changed without regard to the degree of certainty, then a misidentification, should one occur, will have a large influence in the case of a low degree of certainty.
To address this problem, in S503 of FIG. 22 described above, "YES" may be determined only when the degree of certainty is higher than a predetermined threshold.
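This certainty-gated variant of S503 can be sketched as follows; the threshold value is an assumption for illustration, not one from the embodiment.

```python
# Sketch of Example 3: treat a mismatch as correctable only when the degree
# of certainty clears a predetermined threshold.

CERTAINTY_THRESHOLD = 0.8  # illustrative value

def should_correct(indicated, recognized, certainty):
    """Return True only for a mismatch backed by sufficient certainty."""
    return indicated != recognized and certainty > CERTAINTY_THRESHOLD

print(should_correct("landscape", "night scene", 0.9))  # True
print(should_correct("landscape", "night scene", 0.5))  # False: low certainty
```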
Example 4: Adding Scene Information
In the two examples above, the scene capture type data or shooting mode data already stored in the image file is changed (overwritten). However, instead of changing the original data, the scene information may be added to the image file while keeping the original data unchanged. That is, when "YES" is determined in S503, the printer-side controller 20 may add the recognition result to the supplemental data in the image file.
FIG. 23 is an explanatory diagram of the configuration of the APP1 segment when the recognition result is added to the supplemental data. In FIG. 23, the parts that differ from the image file shown in FIG. 3 are indicated by thick lines.
Compared with the image file shown in FIG. 3, the image file shown in FIG. 23 includes an added second MakerNote IFD. The information on the recognition result is stored in the second MakerNote IFD.
In addition, a new directory entry is added to the Exif SubIFD. The added directory entry consists of a tag indicating the second MakerNote IFD and a pointer indicating the storage location of the second MakerNote IFD.
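For illustration, such a directory entry follows the ordinary 12-byte TIFF/Exif layout: a 2-byte tag, a 2-byte type, a 4-byte count, and a 4-byte value-or-offset field. The tag number and offset below are hypothetical, not values defined by the Exif specification.

```python
# Sketch of packing a 12-byte IFD directory entry (tag, type, count,
# value/offset), as a new entry pointing at a second MakerNote IFD might be.
import struct

def pack_ifd_entry(tag, type_id, count, value_or_offset, little_endian=True):
    fmt = ("<" if little_endian else ">") + "HHII"  # 2 + 2 + 4 + 4 bytes
    return struct.pack(fmt, tag, type_id, count, value_or_offset)

# Hypothetical private tag 0x9F00, type 4 (LONG), pointing at offset 0x2A0.
entry = pack_ifd_entry(tag=0x9F00, type_id=4, count=1, value_or_offset=0x2A0)
print(len(entry))  # 12: every IFD directory entry is exactly 12 bytes
```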
Furthermore, because the new directory entry is added to the Exif SubIFD, the storage location of the Exif SubIFD data area is displaced, so the pointer indicating the storage location of the Exif SubIFD data area changes.
Furthermore, because the second MakerNote IFD is added, the IFD1 area is displaced, so the link located in IFD0 that indicates the position of IFD1 also changes. In addition, because the second MakerNote IFD is added, the size of the APP1 data area changes, so the size information of the APP1 data area also changes.
According to this example, the original shooting data does not need to be erased. In addition, information on the "flower" and "autumn" scenes can also be stored in the supplemental data in the image file.
Example 5: Adding Certainty Data
Because data can be stored in the MakerNote IFD area in any format, information on the degree of certainty can also be stored there in addition to the information on the scene. Thus, when the printer 4 corrects image data based on the supplemental data, the printer 4 can correct the image data in consideration of the degree of certainty.
When correcting "landscape" image data, it is preferable to correct the image data so as to emphasize blue and green. On the other hand, when correcting "autumn" image data, it is preferable to correct the image data so as to emphasize red and yellow. Here, if an autumn image is misidentified as "landscape", the complementary colors of the colors that should actually be emphasized are emphasized, so this correction may result in an image of very poor quality. For this reason, it is preferable to reduce the degree of correction when the degree of certainty is low.
Therefore, when data on the degree of certainty (certainty data) is added to the image file, the printer can adjust the degree of correction of the colors to be emphasized according to this degree of certainty. It is thus possible to prevent output of an image of very poor quality when a misidentification occurs.
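The idea of scaling the correction by the stored certainty can be sketched as follows; the channels and gain values are made-up illustration numbers, not values from the embodiment.

```python
# Sketch of Example 5: interpolate each scene-specific color gain between
# "no correction" (1.0) and "full emphasis" according to the certainty, so a
# possible misidentification (e.g. autumn taken for landscape) does less harm.

SCENE_GAIN = {"landscape": {"blue": 1.2, "green": 1.2},
              "autumn": {"red": 1.2, "yellow": 1.2}}

def correction_gain(scene, channel, certainty):
    """Blend between 1.0 (no emphasis) and the full gain by certainty."""
    full = SCENE_GAIN.get(scene, {}).get(channel, 1.0)
    return 1.0 + (full - 1.0) * certainty

print(correction_gain("landscape", "blue", 1.0))   # 1.2: full emphasis
print(correction_gain("landscape", "blue", 0.25))  # 1.05: weak emphasis
```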
It should be noted that the value of the discriminant may be used unchanged as the certainty data, or a precision value corresponding to the value of the discriminant may be used as the certainty data. In the latter case, a table giving the relation between the value of the discriminant and the precision value needs to be prepared.
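The latter option, a prepared table mapping discriminant values to precision values, can be sketched as follows; the breakpoints and precision values are made-up numbers for illustration.

```python
# Sketch of converting a discriminant value into a precision value through a
# prepared lookup table of value ranges.
import bisect

BREAKPOINTS = [0.0, 0.5, 1.0, 2.0]            # discriminant thresholds
PRECISIONS = [0.50, 0.70, 0.85, 0.95, 0.99]   # one more entry than breakpoints

def precision_for(discriminant_value):
    """Return the precision associated with the value's range."""
    return PRECISIONS[bisect.bisect_right(BREAKPOINTS, discriminant_value)]

print(precision_for(-0.3))  # 0.5
print(precision_for(0.7))   # 0.85
print(precision_for(3.0))   # 0.99
```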
Other Embodiments
In the foregoing, an embodiment was described using a printer as an example. However, the above embodiment is for the purpose of illustrating the present invention and is not to be construed as limiting the present invention. It goes without saying that the present invention can be changed and improved without departing from the gist thereof, and that it includes functional equivalents. In particular, the present invention also includes the embodiments described below.
Regarding the Printer
In the embodiment described above, the printer 4 performs the scene recognition process, the scene information correction process, and so on. However, the digital still camera 2 may also perform the scene recognition process, the scene information correction process, and so on. Furthermore, the information processing apparatus that performs the above scene recognition process and scene information correction process is not limited to the printer 4 and the digital still camera 2. For example, an information processing apparatus such as a photo storage device for holding a large number of image files may perform the above scene recognition process and scene information correction process. Naturally, a personal computer or a server located on the Internet may also perform them.
Regarding the Image File
The image file described above is an Exif format file. However, the image file format is not limited to this format. Also, the image file described above is a still image file; however, the image file may also be a moving image file. In fact, as long as the image file contains image data and supplemental data, the scene information correction process described above can be performed.
Regarding the Support Vector Machine
The sub-identification sections 51 and partial sub-identification sections 61 described above employ a recognition method using a support vector machine (SVM). However, the method for identifying whether an image to be identified belongs to a specific scene is not limited to methods using a support vector machine. For example, pattern recognition techniques such as neural networks may also be employed.
Summary
(1) In the above embodiment, the printer-side controller 20 acquires the scene capture type data and the shooting mode data (which are scene information) from the supplemental data appended to the image data (S501). In addition, the printer-side controller 20 acquires the recognition result of the scene recognition process (see FIG. 8) (S502).
The scene indicated by the scene capture type data and the shooting mode data may not match the scene of the recognition result of the scene recognition process. This situation is likely to occur, for example, when the user takes a picture with the digital still camera 2 and forgets to set the shooting mode. In such a case, when direct printing is performed by a printer that does not have the scene recognition processing function but performs automatic correction processing of the image data, the image data is corrected based on the incorrect shooting data.
To address this problem, in the above embodiment, when there is a mismatch between the two scenes, the printer-side controller 20 stores the scene of the scene recognition result in the image file as supplemental data.
(2) In Examples 1 and 2 above, when the scene indicated by the scene capture type data or the shooting mode data does not match the scene of the recognition result of the scene recognition process, the scene capture type data or the shooting mode data is changed (overwritten). Therefore, when the user performs printing with another printer, the image data is corrected appropriately even when the printer used does not have the scene recognition processing function but performs automatic correction processing.
(3) It should be noted that, as described in Example 4 above, instead of the method of changing the original data, the scene of the scene recognition result may be added while keeping the original scene information unchanged. With this method, the original data need not be erased.
(4) In Example 5 above, when the scene of the scene recognition result is stored in the image file as supplemental data, the certainty data (evaluation result) is also stored there. The image file therefore carries data that can be used to prevent output of an image of very poor quality when a misidentification occurs.
(5) In the scene recognition process described above, feature quantities indicating the features of the image represented by the image data are acquired in S101 and S102 (see FIG. 8). It should be noted that the feature quantities include color averages, variances, and so on. Then, in the scene recognition process described above, scene recognition is performed in S103 to S108 based on the feature quantities.
(6) In the scene recognition process described above, when scene recognition cannot be achieved by the overall identification process ("NO" in S105), the partial identification process is performed (S106). On the other hand, when scene recognition can be achieved by the overall identification process ("YES" in S105), the partial identification process is not performed. The speed of the scene recognition process is thereby improved.
(7) In the overall identification process described above, a sub-identification section 51 calculates the value of the discriminant (corresponding to an evaluation value), and when this value is greater than the affirmative threshold (corresponding to a first threshold) ("YES" in S204), the image to be identified is identified as the specific scene (S205). On the other hand, when the value of the discriminant is less than the first negative threshold (corresponding to a second threshold) ("YES" in S206), a negative flag is set (S207), and in the partial identification process the partial identification for that specific scene is omitted (S302).
For example, during the overall identification process, when the value of the discriminant of the sunset scene identification section 51S is less than the first negative threshold ("YES" in S206), the probability that the image to be identified is a sunset image is very low, so there is no point in using the sunset partial identification section 61S during the partial identification process. Thus, during the overall identification process, when the value of the discriminant of the sunset scene identification section 51S is less than the first negative threshold ("YES" in S206), the "negative" field below the "sunset scene" column in FIG. 11 is set to 1 (S207), and the processing by the sunset partial identification section 61S is omitted during the partial identification process ("NO" in S302). The speed of the scene recognition process is thereby improved (see also FIG. 16A and FIG. 16B).
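The pruning described in (7) can be sketched as follows; the threshold and discriminant values are illustrative only.

```python
# Sketch of the negative-flag pruning: scenes whose overall-process
# discriminant falls below the first negative threshold are flagged (S207)
# and skipped during the partial identification process (S302).

NEGATIVE_THRESHOLD = -1.0  # illustrative first negative threshold

def scenes_for_partial_identification(overall_discriminants):
    negative_flags = {s for s, v in overall_discriminants.items()
                      if v < NEGATIVE_THRESHOLD}                   # S206/S207
    return [s for s in overall_discriminants if s not in negative_flags]

values = {"sunset": -1.7, "flower": 0.2, "autumn": -0.4}
print(scenes_for_partial_identification(values))  # sunset is skipped
```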
(8) In the overall identification process described above, an identification process using the landscape identification section 51L (corresponding to a first scene recognition step) and an identification process using the night scene identification section 51N (corresponding to a second scene recognition step) are performed.
A high probability that an image belongs to the landscape scene necessarily means that the probability that the image belongs to the night scene is low. Therefore, when the value (corresponding to the evaluation value) of the discriminant of the landscape identification section 51L is large, it may be possible to recognize that the image is not a night scene.
Thus, in the above embodiment, a second negative threshold (corresponding to a third threshold) is provided (see FIG. 16B). When the value of the discriminant of the landscape identification section 51L is greater than the negative threshold (0.44) for the night scene ("YES" in S206), the "negative" field below the "night scene" column in FIG. 11 is set to 1 (S207), and the processing by the night scene identification section 51N is omitted during the overall identification process ("NO" in S202). The speed of the scene recognition process is thereby improved.

(9) The printer 4 described above (corresponding to an information processing apparatus) includes the printer-side controller 20 (see FIG. 2). The printer-side controller 20 acquires the scene capture type data and the shooting mode data (which are scene information) from the supplemental data appended to the image data (S501). In addition, the printer-side controller 20 acquires the recognition result of the scene recognition process (see FIG. 8) (S502). When the scene indicated by the scene capture type data and the shooting mode data does not match the scene of the recognition result of the scene recognition process, the printer-side controller 20 stores the scene of the scene recognition result in the image file as supplemental data.
Therefore, when the user performs printing with another printer, the image data is corrected appropriately even when the printer used does not have the scene recognition processing function but performs automatic correction processing.
(10) The memory 23 described above stores a program that causes the printer 4 to perform the process shown in FIG. 8. That is, this program has code for acquiring, from the supplemental data appended to the image data, scene information indicating the scene of the image data; code for identifying, based on the image data, the scene of the image represented by the image data; and code for storing the identified scene in the supplemental data when there is a mismatch between the scene indicated by the scene information and the identified scene.
Although preferred embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions and alterations can be made thereto without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (10)

1. An information processing method comprising:
acquiring scene information of image data from supplemental data appended to the image data;
identifying, based on the image data, a scene of an image represented by the image data; and
storing the identified scene in the supplemental data when there is a mismatch between a scene indicated by the scene information and the identified scene.
2. The information processing method according to claim 1,
wherein storing the identified scene in the supplemental data includes overwriting the scene indicated by the scene information with the identified scene.
3. The information processing method according to claim 1,
wherein storing the identified scene in the supplemental data includes storing the identified scene in the supplemental data while keeping the scene information unchanged.
4. The information processing method according to any one of claims 1 to 3,
wherein storing the identified scene in the supplemental data includes storing, in the supplemental data, an evaluation result of an accuracy rate of the recognition result together with the identified scene.
5. The information processing method according to claim 1,
wherein identifying the scene of the image represented by the image data includes:
feature quantity acquisition for acquiring a feature quantity indicating a feature of the image; and
scene recognition for identifying the scene of the image based on the feature quantity.
6. The information processing method according to claim 5,
wherein the characteristic amount acquisition includes:
acquiring an overall characteristic amount indicating a feature of the entire image, and
acquiring a partial characteristic amount indicating a feature of a partial image contained in the image; and
the scene identification includes:
overall identification for identifying the scene of the image based on the overall characteristic amount, and
partial identification for identifying the scene of the image based on the partial characteristic amount; and
wherein the partial identification is performed when the scene of the image represented by the image data cannot be identified by the overall identification, and
the partial identification is not performed when the scene of the image can be identified by the overall identification.
7. The information processing method according to claim 6,
wherein the overall identification includes:
calculating, based on the overall characteristic amount, an evaluation value according to the probability that the image is a specific scene, and
identifying the image as the specific scene when the evaluation value is greater than a first threshold; and
the partial identification includes identifying the image as the specific scene based on the partial characteristic amount; and
wherein the partial identification is not performed when the evaluation value in the overall identification is less than a second threshold.
8. The information processing method according to claim 5,
wherein the scene identification includes:
first scene identification for identifying, based on the characteristic amount, the image as a first scene, and
second scene identification for identifying, based on the characteristic amount, the image as a second scene different from the first scene; and
the first scene identification includes:
calculating, based on the characteristic amount, an evaluation value according to the probability that the image is the first scene, and
identifying the image as the first scene when the evaluation value is greater than a first threshold; and
wherein, in the scene identification, the second scene identification is not performed when the evaluation value in the first scene identification is greater than a third threshold.
9. An information processing apparatus comprising:
a scene information acquisition section for acquiring scene information indicating a scene of image data from supplemental data appended to the image data;
a scene identification section for identifying, based on the image data, a scene of an image represented by the image data; and
a supplemental data storage section for storing the identified scene in the supplemental data when there is a mismatch between the scene indicated by the scene information and the scene identified by the scene identification section.
10. A program comprising:
a first program code for causing an information processing apparatus to acquire scene information indicating a scene of image data from supplemental data appended to the image data;
a second program code for causing the information processing apparatus to identify, based on the image data, a scene of an image represented by the image data; and
a third program code for causing the information processing apparatus to store the identified scene in the supplemental data when there is a mismatch between the scene indicated by the scene information and the identified scene.
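The flow claimed above — identify the scene from the pixel data itself, compare it with the scene recorded in the supplemental data, and store the identified scene only on a mismatch, with the two-stage overall/partial identification gated by thresholds — can be sketched as a minimal, self-contained program. All names, thresholds, scene labels, and the toy "classifiers" below are illustrative assumptions for exposition, not the patent's actual implementation.

```python
FIRST_THRESHOLD = 0.8   # claim 7: evaluation value above this => scene identified
SECOND_THRESHOLD = 0.3  # claim 7: below this, partial identification is skipped

def overall_identification(image):
    """Whole-image pass: return (scene_or_None, evaluation_value).
    The toy evaluation value is the fraction of red-dominant pixels,
    treated as the probability of a hypothetical "sunset" scene."""
    reds = sum(1 for (r, g, b) in image if r > g and r > b)
    value = reds / len(image)
    return ("sunset", value) if value > FIRST_THRESHOLD else (None, value)

def partial_identification(image):
    """Partial-image pass: examine a local block of the image. Only run
    when the overall pass neither identified the scene nor ruled it out."""
    block = image[: max(1, len(image) // 4)]  # a crude "partial image"
    reds = sum(1 for (r, g, b) in block if r > g and r > b)
    return "sunset" if reds / len(block) > FIRST_THRESHOLD else "landscape"

def identify_scene(image):
    scene, value = overall_identification(image)
    if scene is not None:          # claim 6: overall pass succeeded, skip partial
        return scene
    if value < SECOND_THRESHOLD:   # claim 7: evaluation value too low, skip partial
        return "landscape"
    return partial_identification(image)

def process(image, supplemental_data):
    """Claim 1: store the identified scene only on a mismatch. Following the
    claim 3 variant, the original scene information is kept unchanged and the
    identified scene is stored in a separate field."""
    identified = identify_scene(image)
    if supplemental_data.get("scene") != identified:
        supplemental_data["identified_scene"] = identified
    return supplemental_data

# Example: the supplemental data says "night", but the pixels suggest "sunset".
image = [(200, 40, 30)] * 90 + [(20, 30, 200)] * 10   # 90% reddish pixels
data = process(image, {"scene": "night"})
print(data)   # {'scene': 'night', 'identified_scene': 'sunset'}
```

In practice the supplemental data would be an Exif-style block (e.g. a scene-capture tag written by the camera), and the identification functions would be real classifiers over extracted characteristic amounts; the sketch only shows the claimed control flow, including the early exits that avoid running the partial identification when the overall evaluation value is decisive in either direction.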
CNA2008100951567A 2007-02-19 2008-02-19 Information processing method, information processing apparatus and program Pending CN101277394A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2007-038369 2007-02-19
JP2007038369 2007-02-19
JP2007-315245 2007-12-05

Publications (1)

Publication Number Publication Date
CN101277394A true CN101277394A (en) 2008-10-01

Family

ID=39907280

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008100951567A Pending CN101277394A (en) 2007-02-19 2008-02-19 Information processing method, information processing apparatus and program

Country Status (2)

Country Link
JP (1) JP5040624B2 (en)
CN (1) CN101277394A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103069790A (en) * 2010-08-18 2013-04-24 Nec卡西欧移动通信株式会社 Image capturing device, method for correcting image and sound, recording medium
CN103279189A (en) * 2013-06-05 2013-09-04 合肥华恒电子科技有限责任公司 Interacting device and interacting method for portable electronic equipment
CN103617432A (en) * 2013-11-12 2014-03-05 华为技术有限公司 Method and device for recognizing scenes
WO2014040559A1 (en) * 2012-09-14 2014-03-20 华为技术有限公司 Scene recognition method and device
CN103942523A (en) * 2013-01-18 2014-07-23 华为终端有限公司 Sunshine scene recognition method and device
CN110166711A (en) * 2019-06-13 2019-08-23 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN113728328A (en) * 2020-03-26 2021-11-30 艾思益信息应用技术股份公司 Information processing apparatus, information processing method, and computer program

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009033459A (en) * 2007-07-26 2009-02-12 Seiko Epson Corp Image identification method, image identifying device and program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4006590B2 (en) * 2003-01-06 2007-11-14 富士ゼロックス株式会社 Image processing apparatus, scene determination apparatus, image processing method, scene determination method, and program
JP4611069B2 (en) * 2004-03-24 2011-01-12 富士フイルム株式会社 Device for selecting an image of a specific scene, program, and recording medium recording the program

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103069790B (en) * 2010-08-18 2016-03-16 日本电气株式会社 Image capture device, image and sound bearing calibration
CN103069790A (en) * 2010-08-18 2013-04-24 Nec卡西欧移动通信株式会社 Image capturing device, method for correcting image and sound, recording medium
US9465992B2 (en) 2012-09-14 2016-10-11 Huawei Technologies Co., Ltd. Scene recognition method and apparatus
WO2014040559A1 (en) * 2012-09-14 2014-03-20 华为技术有限公司 Scene recognition method and device
CN103942523A (en) * 2013-01-18 2014-07-23 华为终端有限公司 Sunshine scene recognition method and device
CN103942523B (en) * 2013-01-18 2017-11-03 华为终端有限公司 A kind of sunshine scene recognition method and device
CN103279189A (en) * 2013-06-05 2013-09-04 合肥华恒电子科技有限责任公司 Interacting device and interacting method for portable electronic equipment
CN103279189B (en) * 2013-06-05 2017-02-08 合肥华恒电子科技有限责任公司 Interacting device and interacting method for portable electronic equipment
CN103617432A (en) * 2013-11-12 2014-03-05 华为技术有限公司 Method and device for recognizing scenes
CN103617432B (en) * 2013-11-12 2017-10-03 华为技术有限公司 A kind of scene recognition method and device
CN110166711A (en) * 2019-06-13 2019-08-23 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN110166711B (en) * 2019-06-13 2021-07-13 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN113728328A (en) * 2020-03-26 2021-11-30 艾思益信息应用技术股份公司 Information processing apparatus, information processing method, and computer program
CN113728328B (en) * 2020-03-26 2024-04-12 艾思益信息应用技术股份公司 Information processing apparatus and information processing method

Also Published As

Publication number Publication date
JP2008234625A (en) 2008-10-02
JP5040624B2 (en) 2012-10-03

Similar Documents

Publication Publication Date Title
EP2549438B1 (en) Apparatus and program for selecting photographic images
CN101277394A (en) Information processing method, information processing apparatus and program
US8836817B2 (en) Data processing apparatus, imaging apparatus, and medium storing data processing program
CN101321223B (en) Information processing method, information processing apparatus
US20040145602A1 (en) Organizing and displaying photographs based on time
US20060044416A1 (en) Image file management apparatus and method, program, and storage medium
JP2007189428A (en) Apparatus and program for index image output
US20150169944A1 (en) Image evaluation apparatus, image evaluation method, and non-transitory computer readable medium
JP2001331781A (en) Picture data retaining method, picture processing method and computer-readable storage medium
CN102422628A (en) Image processing method and image processing apparatus
US20030169343A1 (en) Method, apparatus, and program for processing images
US8466929B2 (en) Image processor
CN100418376C (en) Canera equipment and image process method
CN101335811B (en) Printing method, and printing apparatus
US20160253357A1 (en) Information terminal, image server, image search system, and image search method
EP1959668A2 (en) Information processing method, information processing apparatus, and program
JP2009044249A (en) Image identification method, image identification device, and program
JP4569659B2 (en) Image processing device
JP2008228086A (en) Information processing method, information processor, and program
CN109977247A (en) Image processing method and image processing apparatus
JP2005303396A (en) Printer
CN114724074B (en) Method and device for detecting risk video
JP2008228087A (en) Information processing method, information processor, and program
US20080199098A1 (en) Information processing method, information processing apparatus, and storage medium having program stored thereon
JP2007288409A (en) Imaging apparatus with image data classifying function and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20081001