US20010045948A1 - Image processing apparatus and method of controlling same - Google Patents
- Publication number
- US20010045948A1 (application US09/095,545)
- Authority
- US
- United States
- Prior art keywords
- image data
- image
- sought
- retrieval
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
- FIG. 1 is a block diagram illustrating the construction of an image processing apparatus according to an embodiment of the present invention.
- FIG. 2 is a diagram showing an example of the detailed composition of an equivalent/synonym/concept dictionary according to the embodiment of the invention.
- FIG. 3 is a flowchart illustrating the flow of image registration processing executed according to the embodiment of the invention.
- FIG. 4 is a flowchart illustrating an overview of retrieval processing executed according to the embodiment of the invention.
- FIG. 5 is a flowchart illustrating the details of retrieval processing executed according to the embodiment of the present invention.
- FIG. 6 is a diagram showing an example of a user interface according to the embodiment of the invention.
- FIG. 7 is a diagram showing an example of a user interface according to the embodiment of the invention.
- FIG. 8 is a diagram showing an example of updating of an equivalent/synonym/concept dictionary according to the embodiment of the invention.
- FIG. 9 is a flowchart illustrating the details of processing of a step S 305 according to this embodiment.
- FIG. 10 is a diagram showing an example of a user interface according to the embodiment of the invention.
- FIG. 11 is a diagram showing an example of a user interface according to the embodiment of the invention.
- FIG. 12 is a diagram showing an example of a user interface according to the embodiment of the invention.
- FIG. 13 is a diagram showing an example of a user interface according to the embodiment of the invention.
- FIG. 14 is a diagram showing the structure of the memory map of a storage medium storing program code for implementing the embodiment of the present invention.
- FIG. 1 is a block diagram illustrating the construction of an image processing apparatus according to an embodiment of the present invention.
- the apparatus of FIG. 1 includes a user interface 1 comprising a keyboard and mouse, an image input unit 2 for inputting image data, a language processor 3 , and an image storage unit 4 for accumulating image data stored temporarily in an image memory 5 .
- the image input unit 2 is an image photography input unit having a line-of-sight sensor incorporated in a finder or an image photography input unit having a control panel by which an image displayed on a built-in or externally attached display unit can be subjected to manipulation such as processing and editing.
- the image input unit 2 enters image data based upon a captured image as well as the coordinates of image data (referred to as “sought image data” below) corresponding to an object of interest contained in the first-mentioned image data.
- Consider first the image photography input unit having the line-of-sight sensor incorporated in the finder.
- In a case where this input unit is employed, the coordinates of sought image data are acquired from coordinate information, expressed in the coordinate system of the image data, that represents the user's line of sight as obtained from the line-of-sight sensor.
- the coordinates of sought image data are obtained by designating the sought object from the image displayed on the display unit and acquiring the coordinates from coordinate information on the display unit, the coordinate information representing the sought object that has been designated.
- the coordinates of the sought image data in a case where such an input unit is employed are acquired from coordinates for which there is a statistically high probability of a prior existence of the sought object or from coordinates of a sought object for which information such as lightness, hue and saturation is decided from a psychological standpoint.
- a method of acquiring coordinates of sought image data from a psychological standpoint is to adopt the position of the center of the captured image as the coordinates of sought image data or to calculate prominence through a calculation which weights the height of saturation and the height of luminance in input image data and adopt image data having a high degree of prominence as the coordinates of the sought image data.
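The psychological seed-selection heuristic above can be sketched as follows. This is a hedged illustration, not the patented method itself: the pixel format, the HSV-style saturation and luminance formulas, and the equal weights `w_sat`/`w_lum` are all assumptions.

```python
def prominence_seed(pixels, w_sat=0.5, w_lum=0.5):
    """Return (x, y) of the most prominent pixel, weighting the
    height of saturation and the height of luminance.  `pixels` is
    a 2-D list of (r, g, b) tuples in 0-255; the HSV-style formulas
    and the equal weights are illustrative assumptions."""
    best_score, best_xy = -1.0, (0, 0)
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            mx = max(r, g, b) / 255.0
            mn = min(r, g, b) / 255.0
            luminance = mx                       # HSV "value"
            saturation = (mx - mn) / mx if mx > 0 else 0.0
            score = w_sat * saturation + w_lum * luminance
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy

# A saturated red patch on a mid-gray background is the most prominent:
gray, red = (128, 128, 128), (255, 0, 0)
img = [[red if (x, y) == (5, 2) else gray for x in range(8)] for y in range(8)]
print(prominence_seed(img))  # -> (5, 2)
```

In practice the center-of-image fallback mentioned above could be returned whenever no pixel scores clearly above the rest.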
- the image memory 5 temporarily stores the image data and the coordinates of the sought image data entered from the image input unit 2 .
- a sought-object extraction unit 50 extracts the sought image data from the entered image data based upon the entered coordinates of the sought image data.
- An image feature extraction unit 7 extracts an image feature for retrieving image data that resembles the sought image data extracted by the sought-object extraction unit 50 .
- Extraction of the sought image data and of the image feature thereof is carried out through the following procedure: First, while taking the color and edge of image data into consideration, a particular area is increased in size from the coordinates of the sought image data toward the periphery while ranges over which identical image data can be recognized are obtained. Further, in concurrence with this growth of the area, an image feature on the boundary of each range is extracted. Each image feature obtained until the area eventually grows to the shape, position and size of the image data is extracted as the image feature of the sought image data.
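The area-growing step can be illustrated with a minimal flood-fill sketch. It assumes a simple color-difference threshold `tol` in place of the combined color-and-edge criteria described above, and omits the per-boundary feature extraction at each growth stage:

```python
from collections import deque

def grow_region(pixels, seed, tol=30):
    """Grow an area outward from the seed coordinates of the sought
    image data, adding 4-connected neighbors whose color stays within
    `tol` of the seed color.  The threshold is a stand-in for the
    color/edge criteria of the text; `tol` is an assumed parameter."""
    h, w = len(pixels), len(pixels[0])
    sx, sy = seed
    sr, sg, sb = pixels[sy][sx]
    region, frontier = {seed}, deque([seed])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in region:
                r, g, b = pixels[ny][nx]
                if abs(r - sr) + abs(g - sg) + abs(b - sb) <= tol:
                    region.add((nx, ny))
                    frontier.append((nx, ny))
    return region

WHITE, RED = (255, 255, 255), (200, 30, 30)
img = [[RED, RED, WHITE, WHITE],
       [RED, RED, WHITE, WHITE],
       [WHITE, WHITE, WHITE, WHITE],
       [WHITE, WHITE, WHITE, WHITE]]
print(sorted(grow_region(img, (0, 0))))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The set of positions returned corresponds to the shape, position and size to which the area eventually grows.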
- An image feature index 9 registers the image feature of image data extracted by the image feature extraction unit 7 by mapping this feature to the image data.
- the language processor 3 outputs a retrieval word, which has been entered from the user interface 1 , to an image retrieval unit 10 .
- Numeral 6 denotes a full-text retrieval registration unit & keyword registration unit for registering a word, which is entered from the language processor 3 , by mapping the word to the entered image data.
- An equivalent/synonym/concept dictionary 8 is a dictionary for managing equivalents and synonyms according to each concept corresponding thereto.
- a word managed by the equivalent/synonym/concept dictionary 8 has an appended image feature weighting that indicates its own effectiveness with respect to a retrieval word. The details of the equivalent/synonym/concept dictionary 8 will be described later.
- the retrieval unit 10 has an image retrieval section 10 a , a language-to-image feature concept converter 10 b and a word retrieval section 10 c .
- the word retrieval section 10 c retrieves a word pertaining to a retrieval word entered from the language processor 3 .
- the language-to-image feature concept converter 10 b obtains the image feature weighting corresponding to a retrieval word by referring to the equivalent/synonym/concept dictionary 8 and calculates an image feature for retrieving the image data.
- the image retrieval section 10 a refers to the image feature index 9 to retrieve the image data.
- A retrieved result notification unit 12 displays image data obtained from the image retrieval unit 10 as the results of retrieval. Further, in regard to an entered retrieval word, the retrieved result notification unit 12 displays a dialog screen for obtaining, from the user, information that is useful in performing retrieval.
- FIG. 2 is a diagram showing an example of the detailed construction of the equivalent/synonym/concept dictionary according to the embodiment of the invention.
- Equivalents and synonyms are registered in the equivalent/synonym/concept dictionary 8 in dependence upon the degrees of abstraction of the concepts corresponding to the equivalents and synonyms. Equivalents and synonyms have different degrees of abstraction in terms of the concepts they represent. A concept distance which indicates the difference between degrees of abstraction is defined between these equivalents and synonyms. For example, the concept distance between “vehicle” and “wheeled vehicle” is 10 in FIG. 2.
- n-dimensional vectors are defined as image feature weightings for subjecting the n image features of an entered retrieval word to weighting that reflects the effectiveness of each image feature with respect to the retrieval word.
- the n-dimensional vectors are normalized so that the magnitudes thereof are made 100.
- a motor vehicle is an artificial object and can be of various colors. Accordingly, the weighting applied to an image feature relating to color is defined as being 0. This means that this image feature should not be referred to in the retrieval operation. As a result, the system executes retrieval in which weight is placed upon image features other than color, these image features being efficacious in regard to the retrieval word. However, if the retrieval word is “red car”, then information relating to the color “red” is taken into account and the system performs retrieval in which the color red is taken into consideration as an image feature.
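As a sketch of how such weightings might be represented, assuming a simple dictionary-of-floats form for the n-dimensional vector (the feature names and values are invented for illustration):

```python
import math

def normalize_to_100(weights):
    """Scale an n-dimensional image-feature weighting so that its
    magnitude (Euclidean norm) becomes 100, as the dictionary requires."""
    norm = math.sqrt(sum(w * w for w in weights.values()))
    return {k: 100.0 * w / norm for k, w in weights.items()}

# Hypothetical entry for "motor vehicle": the color weighting is 0
# because a car can be of various colors, so shape/edge features
# carry the weight in retrieval.
car = normalize_to_100({"color": 0.0, "shape": 3.0, "edge": 4.0})
print(car)  # -> {'color': 0.0, 'shape': 60.0, 'edge': 80.0}

# For "red car", the modifier "red" would reinstate the color feature:
red_car = normalize_to_100({"color": 5.0, "shape": 3.0, "edge": 4.0})
```

A zero component thus encodes "do not refer to this image feature in the retrieval operation", while the normalization keeps weightings comparable across retrieval words.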
- FIG. 3 is a flowchart illustrating the flow of image registration processing executed according to the embodiment of the invention.
- At step S 101 in FIG. 3, image data and coordinates of sought image data are input by the image input unit 2.
- the input image data is stored temporarily in the image memory 5 . It is determined at step S 102 whether the coordinates of the sought image data have been entered or not. If the coordinates of the sought image data have been entered (“YES” at step S 102 ), then control proceeds to step S 103 , at which the sought image data is extracted from the entered image data using these coordinates.
- If the coordinates of the sought image data have not been entered (“NO” at step S 102), then control proceeds to step S 107, at which it is determined whether coordinates have been entered by the user. If coordinates have been entered by the user (“YES” at step S 107), then control proceeds to step S 108, at which the sought image data is extracted from the entered image data using these coordinates.
- If coordinates have not been entered by the user (“NO” at step S 107), then control proceeds to step S 109, at which the coordinates of the sought image data are decided from statistical and psychological standpoints.
- Control then proceeds to step S 110, at which the sought image data is extracted from the entered image data using the coordinates that have been decided.
- An image feature of the sought image data that has been extracted is extracted at step S 104 .
- the image feature of the sought image data is registered in the image feature index 9 by being mapped to the entered image data.
- the entered image data is stored in the image storage unit 4 at step S 106 .
- FIG. 4 is a flowchart illustrating an overview of retrieval processing executed according to the embodiment of the invention.
- All image information stored in the image storage unit 4 is set at step S 201 as the image data to be retrieved.
- This is followed by step S 202, at which “natural language” is input from the user interface 1 as a retrieval condition, and by step S 203, at which retrieval processing for retrieving image data is executed based upon the entered “natural language”.
- Control then proceeds to step S 206, at which the apparatus accepts a command entered by the user in regard to the results of retrieval.
- If a new retrieval condition is entered, the apparatus retains the currently obtained retrieval results and narrows them down by taking the logical product between the results of retrieval obtained by retrieval processing based upon the newly entered retrieval condition and the results of retrieval that have been retained.
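Treating retrieval results as sets of image identifiers, the narrowing-down step is a plain set intersection; a minimal sketch (the identifiers are hypothetical):

```python
def narrow_results(retained, new_results):
    """Narrow retrieval down by taking the logical product (set
    intersection) of the retained results and the results obtained
    for the newly entered retrieval condition."""
    return retained & new_results

retained = {"img001", "img004", "img007"}   # previously retained results
new = {"img004", "img007", "img009"}        # results for the new condition
print(sorted(narrow_results(retained, new)))  # -> ['img004', 'img007']
```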
- If the user enters a command indicating that retrieval should be performed again, then the results of retrieval are cleared at step S 209 and control returns to step S 201.
- If the user enters a command selecting desired image data from among the image data being displayed (in reduced size) as the results of retrieval, then the details (the image data in its original size) of the selected image data are displayed at step S 207.
- The details of the retrieval processing of step S 203 will now be described with reference to FIG. 5, which is a flowchart illustrating the details of retrieval processing executed according to the embodiment of the invention.
- At step S 301, the “natural language” serving as the entered retrieval condition is subjected to morpheme analysis and modifier analysis by referring to the equivalent/synonym/concept dictionary 8, whereby a “retrieval object name” and an external-appearance feature indicating the feature of the “retrieval object name” are extracted.
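A toy stand-in for this analysis step, assuming English input, a whitespace split, and a hard-coded modifier list in place of true morpheme/modifier analysis against the dictionary:

```python
def analyze_query(query, modifiers=("red", "blue", "large", "small")):
    """Toy stand-in for the morpheme/modifier analysis of step S 301:
    split a retrieval phrase into external-appearance features and a
    retrieval object name.  A real implementation would consult the
    equivalent/synonym/concept dictionary rather than a fixed list."""
    words = query.lower().split()
    features = [w for w in words[:-1] if w in modifiers]
    return words[-1], features

print(analyze_query("red car"))  # -> ('car', ['red'])
```

The object name is then looked up in the dictionary, and each recognized appearance feature contributes an additional image feature weighting.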
- It is determined at step S 302 whether the extracted “retrieval object name” exists in the equivalent/synonym/concept dictionary 8. If the “retrieval object name” exists in the dictionary (“YES” at step S 302), then control proceeds to step S 303.
- If the “retrieval object name” does not exist in the equivalent/synonym/concept dictionary 8 (“NO” at step S 302), then control proceeds to step S 309.
- The user is prompted at step S 309 to supply the “object name” of a concept that most closely approximates the “retrieval object name”. The “object name” thus acquired is set as the “retrieval object name”.
- the processing of step S 309 makes it possible to extract the word of a concept nearest to the entered “retrieval object name”. This is followed by step S 310 , at which the “retrieval object name” set at step S 309 is registered as a new “retrieval object name” in the equivalent/synonym/concept dictionary 8 .
- A specific example of the processing of step S 309 will be described with reference to FIGS. 6 through 8.
- FIG. 6 is a diagram showing an example of a user interface according to the embodiment of the invention.
- When the processing of step S 309 is executed, a word entry screen of the kind shown in FIG. 6 is displayed on the retrieved result notification unit 12. If the user enters a word of a concept close to the “retrieval object name” using this word entry screen, the words of concepts closest to the entered word will be displayed.
- Assume that “passenger car” was entered as the “retrieval object name”.
- FIG. 6 illustrates a case where the concept “motor vehicle” that most closely approximates “passenger car” has been entered.
- Five buttons, namely “PREVIOUS”, “NEXT”, “FIND”, “CANCEL” and “OK”, are provided on the right side of the word entry screen.
- When one of these buttons is pressed, the processing described below is executed.
- the “PREVIOUS” button causes the currently displayed word entry screen to return to the preceding word entry screen.
- the “NEXT” button causes a new word entry screen to be displayed.
- the “FIND” button retrieves the word of a concept nearest to the entered word and causes the results to be displayed as shown in FIG. 7, by way of example.
- the “CANCEL” button cancels the entered word.
- the “OK” button selects the word of a concept nearest to the “retrieval object name” that has not been registered in the equivalent/synonym/concept dictionary 8 .
- the “retrieval object name” that has not been registered in the equivalent/synonym/concept dictionary 8 is registered at a concept level between the level of the concept of the selected word and the level of the concept of the neighboring word in the equivalent/synonym/concept dictionary 8.
- The image feature weighting of the “retrieval object name” is acquired from the equivalent/synonym/concept dictionary 8 at step S 303.
- This is followed by step S 304 where, if the external-appearance feature of the “retrieval object name” has been extracted, the image feature weighting relating to this external-appearance feature is appended to the image feature weighting acquired at step S 303.
- the user is prompted at step S 305 to obtain an effective image feature weighting for the image feature.
- the details of this processing will be described with reference to the flowchart of FIG. 9.
- FIG. 9 is a flowchart illustrating the details of processing of step S 305 according to this embodiment.
- It should be noted that the processing of FIG. 9 is executed for a case where further information (image feature weighting) useful in retrieval is desired or for a case where the image features available for retrieval are too few.
- At step S 403 it is determined, based upon the image feature weighting of the acquired “retrieval object name”, whether the color of the retrieval object indicated by the “retrieval object name” is unique. If the color of the retrieval object is unique (“YES” at step S 403), then control proceeds to step S 404. If the color of the retrieval object is not unique (“NO” at step S 403), then control proceeds to step S 407, at which the user is prompted to “SPECIFY A COLOR APPROXIMATING THAT OF THE RETRIEVAL OBJECT” using a dialog screen of the kind shown in FIG. 10, and the apparatus accepts the designation made by the user.
- The color is specified using language or a color sample in the manner depicted in FIG. 10. Note that the colorimetric system of the color sample is not limited; CIE 1976 L*a*b*, YCbCr, and the like may be used.
- the color that has been specified is stored as image feature weighting at step S 408 .
- The user is questioned “DOES RETRIEVAL OBJECT APPEAR LARGE IN SIZE?” at step S 404 using a dialog screen of the kind shown in FIG. 11. This is followed by step S 405, at which the user employs this dialog screen to give an answer in regard to the size of the retrieval object indicated by the “retrieval object name”. If the retrieval object appears large in size (“YES” at step S 405), then control proceeds to step S 406, at which it is judged that the color of the retrieval object indicated by the “retrieval object name” is an extremely important item of retrieval information, and the image feature weighting relating to the color of the retrieval object is increased to set the image feature of the “retrieval object name”. If the retrieval object does not appear large in size (“NO” at step S 405), then control proceeds to step S 409.
- The user is questioned “WHAT IS THE BACKGROUND?” at step S 409 using a dialog screen of the kind shown in FIG. 12, and a command from the user is accepted.
- the background is specified by an object name in the manner shown in FIG. 12. It is determined at step S 410 whether the color of the background object is unique. If the color of the background object is unique (“YES” at step S 410 ), then control proceeds to step S 411 . If the color of the background object is not unique (“NO” at step S 410 ), then control proceeds to step S 412 .
- The user is prompted to “SPECIFY A COLOR APPROXIMATING THAT OF THE BACKGROUND” at step S 412 using a dialog screen of the kind shown in FIG. 13, and the apparatus accepts the designation made by the user.
- the specified color is stored as image feature weighting at step S 413 .
- It is judged at step S 411 that the color of the background is an extremely important item of retrieval information and the image feature weighting relating to the color of the background is increased to set the image feature of the “retrieval object name”.
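The weighting increases of steps S 406 and S 411 can be sketched as boosting one component of the weighting vector and renormalizing it to magnitude 100, per the dictionary convention described earlier; the boost factor and feature names are illustrative assumptions:

```python
import math

def emphasize_feature(weighting, feature, factor=2.0):
    """When a feature is judged extremely important (the object's color
    at step S 406, or the background's color at step S 411), scale that
    feature's weighting up and renormalize the vector so its magnitude
    is again 100.  The boost factor is an assumed parameter."""
    boosted = dict(weighting)
    boosted[feature] *= factor
    norm = math.sqrt(sum(w * w for w in boosted.values()))
    return {k: 100.0 * w / norm for k, w in boosted.items()}

w = emphasize_feature({"color": 30.0, "shape": 40.0, "edge": 30.0}, "color")
# color's share of the vector rises while the overall magnitude stays 100
```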
- image data is retrieved at step S 306 by referring to the image feature index 9 based upon the image feature.
- It is determined at step S 307 whether any retrieved image data exists. If retrieved image data exists (“YES” at step S 307), then control proceeds to step S 308, at which the image data that has been retrieved is displayed on the retrieved result notification unit 12. If no retrieved image data exists (“NO” at step S 307), then control proceeds to step S 312.
- It is determined at step S 312 whether the user wishes to re-specify an image feature. If the user desires to re-specify (“YES” at step S 312), then control proceeds to step S 311, at which the user is prompted in order to obtain an effective image feature weighting for the image feature; the details of this processing are as described above with reference to the flowchart of FIG. 9. If the user does not desire to re-specify (“NO” at step S 312), then control proceeds to step S 313, at which the fact that no image data has been retrieved is displayed on the retrieved result notification unit 12.
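The retrieval of step S 306 can be sketched as ranking indexed images by a weighted distance between feature vectors, so that a zero weighting (e.g. color for “car”) removes that feature from consideration; the index contents and feature names are invented for illustration:

```python
def weighted_distance(query_feat, image_feat, weighting):
    """Distance between the query's image feature and an indexed image's
    feature, each dimension scaled by the weighting derived from the
    retrieval word (a weighting of 0 disables that feature entirely)."""
    return sum(weighting.get(k, 0.0) * (query_feat[k] - image_feat[k]) ** 2
               for k in query_feat)

def retrieve(query_feat, weighting, index, top=3):
    """Rank the images of a (hypothetical) image feature index by
    weighted distance and return the closest matches."""
    ranked = sorted(index, key=lambda name: weighted_distance(
        query_feat, index[name], weighting))
    return ranked[:top]

index = {"img_a": {"color": 0.9, "shape": 0.2},   # invented index entries
         "img_b": {"color": 0.1, "shape": 0.8}}
query = {"color": 0.1, "shape": 0.9}
print(retrieve(query, {"color": 1.0, "shape": 1.0}, index))  # -> ['img_b', 'img_a']
```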
- entered image data is managed by being mapped to the image feature of sought image data contained in this image data.
- the conventional operation of appending explanatory text and keywords to image data is no longer required and it is possible to retrieve desired image data from managed image data in an efficient manner.
- The gist of the present invention is a technique that can be applied to the retrieval of all forms of information media.
- the present invention can be applied to a system constituted by a plurality of devices (e.g., a host computer, interface, reader, printer, etc.) or to an apparatus comprising a single device (e.g., a copier or facsimile machine, etc.).
- the object of the present invention can also be achieved by providing a storage medium storing the program codes of the software for performing the aforesaid functions of the foregoing embodiment to a system or an apparatus, reading the program codes with a computer (e.g., a CPU or MPU) of the system or apparatus from the storage medium, and then executing the program.
- the program codes read from the storage medium implement the novel functions of the invention, and the storage medium storing the program codes constitutes the invention.
- the storage medium such as a floppy disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, magnetic tape, non-volatile type memory card or ROM can be used to provide the program codes.
- the present invention covers a case where an operating system or the like working on the computer performs a part of or the entire process in accordance with the designation of program codes and implements the functions according to the embodiment.
- the present invention further covers a case where, after the program codes read from the storage medium are written in a function extension board inserted into the computer or in a memory provided in a function extension unit connected to the computer, a CPU or the like contained in the function extension board or function extension unit performs a part of or the entire process in accordance with the designation of program codes and implements the function of the above embodiment.
- the program code stored on the storage medium would be that for at least an “input module”, “calculation module” and “management module”.
- the “input module” is for entering image data as well as the coordinates of sought image data contained in the image data.
- the “calculation module” is for calculating an image feature of the sought image data based upon the entered coordinates of the sought image data.
- the “management module” is for mapping and managing calculated image features and entered image data.
Abstract
Image data and coordinates of sought image data contained in this image data are entered from an image input unit. Next, the sought image data is extracted by a sought-object extraction unit based upon the coordinates of the entered sought image data. An image feature of the extracted sought image data is then calculated by an image-feature extraction unit. The calculated image feature and the entered image data are mapped in an image memory unit and managed.
Description
- This invention relates to an image processing apparatus for managing image data and to a method of controlling this apparatus.
- Using a photographically captured image as a retrieval condition when retrieving image data managed by an image processing apparatus does not assure good retrieval precision because it is difficult to realize accurate recognition of the captured image.
- The general practice, therefore, is to append explanatory text and a keyword indicative of image data when the image data is managed and subsequently retrieve the image data using the explanatory text and keyword that were appended to the image data.
- The following method has been proposed as a method of retrieving image data to which explanatory text and a keyword have not been appended: First, the image features of the overall image data are extracted and the extracted image features are managed by being mapped to the image data. Then the image feature of image data entered as a retrieval condition is extracted and this extracted image feature is compared with the image features of the managed image data to thereby retrieve the desired image data.
- With an image processing apparatus that manages image data by appending explanatory text and keywords to image data, the task of appending the explanatory text and keywords to the image data places a considerable burden upon the user, especially when the image data managed is large in quantity.
- In the case of the image processing apparatus that manages image data by mapping image data to the image features thereof, image data is merely retrieved based upon the image features of the overall image data, and image data inclusive of image data sought by the user that is contained in the overall image data cannot be retrieved with a high degree of precision. Further, in view of the fact that it may generally be surmised that image data entered by the user as a retrieval condition is decided based upon the sought image data, the fact that the sought image data can be retrieved as a retrieval condition is of great significance.
- Accordingly, an object of the present invention is to provide an image processing apparatus and a method of controlling the same whereby the burden placed upon the user for the purpose of managing image data is alleviated and image data in accordance with user preference can be retrieved from the managed image data.
- According to the present invention, the foregoing object is attained by providing an image processing apparatus for managing image data, comprising input means for inputting image data and coordinates of sought image data contained in this image data, calculating means for calculating an image feature of the sought image data based upon the coordinates of the sought image data input by the input means, and management means for performing management by mapping the image feature calculated by the calculating means and the input image data.
- Further, according to the present invention, the foregoing object is attained by providing a method of controlling an image processing apparatus for managing image data, comprising an input step of inputting image data and coordinates of sought image data contained in this image data, a calculating step of calculating an image feature of the sought image data based upon the coordinates of the sought image data input at the input step, and a management step of performing management in a memory by mapping the image feature calculated at the calculating step and the input image data.
- Further, according to the present invention, the foregoing object is attained by providing a computer readable memory storing program codes for controlling an image processing apparatus for managing image data, comprising program code of an input step of inputting image data and coordinates of sought image data contained in this image data, program code of a calculating step of calculating an image feature of the sought image data based upon the coordinates of the sought image data input at the input step, and program code of a management step of performing management in a memory by mapping the image feature calculated at the calculating step and the input image data.
- In accordance with the present invention described above, it is possible to provide an image processing apparatus and a method of controlling the same whereby the burden placed upon the user for the purpose of managing image data is alleviated and image data in accordance with user preference can be retrieved from the managed image data.
- Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
- The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
- FIG. 1 is a block diagram illustrating the construction of an image processing apparatus according to an embodiment of the present invention;
- FIG. 2 is a diagram showing an example of the detailed composition of an equivalent/synonym/concept dictionary according to the embodiment of the invention;
- FIG. 3 is a flowchart illustrating the flow of image registration processing executed according to the embodiment of the invention;
- FIG. 4 is a flowchart illustrating an overview of retrieval processing executed according to the embodiment of the invention;
- FIG. 5 is a flowchart illustrating the details of retrieval processing executed according to the embodiment of the present invention;
- FIG. 6 is a diagram showing an example of a user interface according to the embodiment of the invention;
- FIG. 7 is a diagram showing an example of a user interface according to the embodiment of the invention;
- FIG. 8 is a diagram showing an example of updating of an equivalent/synonym/concept dictionary according to the embodiment of the invention;
- FIG. 9 is a flowchart illustrating the details of processing of step S305 according to this embodiment;
- FIG. 10 is a diagram showing an example of a user interface according to the embodiment of the invention;
- FIG. 11 is a diagram showing an example of a user interface according to the embodiment of the invention;
- FIG. 12 is a diagram showing an example of a user interface according to the embodiment of the invention;
- FIG. 13 is a diagram showing an example of a user interface according to the embodiment of the invention; and
- FIG. 14 is a diagram showing the structure of the memory map of a storage medium storing program code for implementing the embodiment of the present invention.
- A preferred embodiment of the present invention will now be described in detail with reference to the drawings.
- FIG. 1 is a block diagram illustrating the construction of an image processing apparatus according to an embodiment of the present invention.
- The apparatus of FIG. 1 includes a
user interface 1 comprising a keyboard and mouse, an image input unit 2 for inputting image data, a language processor 3, and an image storage unit 4 for accumulating image data stored temporarily in an image memory 5. According to this embodiment, the image input unit 2 is an image photography input unit having a line-of-sight sensor incorporated in a finder or an image photography input unit having a control panel by which an image displayed on a built-in or externally attached display unit can be subjected to manipulation such as processing and editing. In accordance with a command from the user interface 1, the image input unit 2 enters image data based upon a captured image as well as the coordinates of image data (referred to as “sought image data” below) corresponding to an object of interest contained in the first-mentioned image data. - For example, consider the image photography input unit having the line-of-sight sensor incorporated in the finder. When this input unit is employed, the coordinates of the sought image data are acquired from coordinate information on the user's line of sight, represented in the coordinate system of the image data, as obtained from the line-of-sight sensor.
- In the case of the image photography input unit having the control panel by which an image displayed on a built-in or externally attached display unit can be subjected to manipulation such as processing and editing, the coordinates of the sought image data are obtained by designating the sought object in the image displayed on the display unit and acquiring the coordinates from the coordinate information, on the display unit, that represents the designated sought object.
- Further, consider an image photography input unit not having the functions of the above-described image photography input units. The coordinates of the sought image data in a case where such an input unit is employed are acquired from coordinates at which there is a statistically high probability that the sought object exists, or from coordinates of a sought object decided from a psychological standpoint based upon information such as lightness, hue and saturation. One method of acquiring the coordinates of sought image data from a psychological standpoint is to adopt the position of the center of the captured image as the coordinates of the sought image data; another is to calculate prominence through a calculation that weights the height of saturation and the height of luminance in the input image data and adopt the position of the image data having the highest degree of prominence as the coordinates of the sought image data.
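As a sketch of the psychological method just described, prominence can be computed by weighting saturation and luminance and taking the most prominent position; the weight values below are illustrative assumptions, since the embodiment does not fix them:

```python
import numpy as np

def saliency_seed(image_hsv, w_sat=0.6, w_val=0.4):
    """Pick the most prominent pixel as the sought-object coordinates.
    image_hsv: H x W x 3 array, channels (hue, saturation, value) in [0, 1].
    w_sat and w_val are assumed weights on saturation and luminance."""
    prominence = w_sat * image_hsv[..., 1] + w_val * image_hsv[..., 2]
    y, x = np.unravel_index(np.argmax(prominence), prominence.shape)
    return int(x), int(y)

def center_seed(image_hsv):
    """Fallback: adopt the center of the captured image as the coordinates."""
    h, w = image_hsv.shape[:2]
    return w // 2, h // 2
```

Either function yields a single coordinate pair that the later extraction stage can use as its starting point.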
- The
image memory 5 temporarily stores the image data and the coordinates of the sought image data entered from the image input unit 2. A sought-object extraction unit 50 extracts the sought image data from the entered image data based upon the entered coordinates of the sought image data. An image feature extraction unit 7 extracts an image feature for retrieving image data that resembles the sought image data extracted by the sought-object extraction unit 50. - Extraction of the sought image data and of the image feature thereof is carried out through the following procedure: First, while taking the color and edges of the image data into consideration, a particular area is increased in size from the coordinates of the sought image data toward the periphery while ranges over which identical image data can be recognized are obtained. Further, in concurrence with this growth of the area, an image feature on the boundary of each range is extracted. Each image feature obtained until the area eventually grows to the shape, position and size of the image data is extracted as the image feature of the sought image data.
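The area-growing procedure can be sketched as follows. A single scalar value per pixel and a fixed tolerance stand in for the color-and-edge criterion, which the embodiment leaves unspecified, and the running mean inside the region stands in for the boundary feature:

```python
from collections import deque

def grow_region(image, seed, tolerance, snapshot_sizes=(2, 4)):
    """Grow an area from the sought-image coordinates toward the periphery,
    admitting 4-connected neighbors whose value stays within `tolerance`
    of the seed value, and record an image feature (here the running mean)
    each time the region reaches one of `snapshot_sizes` pixels."""
    h, w = len(image), len(image[0])
    sx, sy = seed
    seed_val = image[sy][sx]
    seen = {(sx, sy)}
    queue = deque([(sx, sy)])
    region, features = [], []
    while queue:
        x, y = queue.popleft()
        region.append(image[y][x])
        if len(region) in snapshot_sizes:  # boundary reached a new range
            features.append(sum(region) / len(region))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen \
                    and abs(image[ny][nx] - seed_val) <= tolerance:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return region, features
```

The list of snapshot features corresponds to the image features collected at successive range boundaries as the area grows.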
- An
image feature index 9 registers the image feature of image data extracted by the image feature extraction unit 7 by mapping this feature to the image data. - The
language processor 3 outputs a retrieval word, which has been entered from the user interface 1, to an image retrieval unit 10. -
Numeral 6 denotes a full-text retrieval registration unit & keyword registration unit for registering a word, which is entered from the language processor 3, by mapping the word to the entered image data. An equivalent/synonym/concept dictionary 8 is a dictionary for managing equivalents and synonyms according to each concept corresponding thereto. A word managed by the equivalent/synonym/concept dictionary 8 has an appended image feature weighting that indicates its own effectiveness with respect to a retrieval word. The details of the equivalent/synonym/concept dictionary 8 will be described later. - The
retrieval unit 10 has an image retrieval section 10a, a language-to-image-feature concept converter 10b and a word retrieval section 10c. The word retrieval section 10c retrieves a word pertaining to a retrieval word entered from the language processor 3. The language-to-image-feature concept converter 10b obtains the image feature weighting corresponding to a retrieval word by referring to the equivalent/synonym/concept dictionary 8 and calculates an image feature for retrieving the image data. On the basis of the image feature calculated by the language-to-image-feature concept converter 10b, the image retrieval section 10a refers to the image feature index 9 to retrieve the image data. - A retrieved
result notification unit 12 displays image data obtained from the image retrieval unit 10 as the results of retrieval. Further, in regard to an entered retrieval word, the retrieved result notification unit 12 displays a dialog screen for obtaining, from the user, information that is useful in performing retrieval. - An example of the detailed construction of the equivalent/synonym/
concept dictionary 8 according to this embodiment will now be described with reference to FIG. 2. - FIG. 2 is a diagram showing an example of the detailed construction of the equivalent/synonym/concept dictionary according to the embodiment of the invention.
- Equivalents and synonyms are registered in the equivalent/synonym/
concept dictionary 8 in dependence upon the degrees of abstraction corresponding to the equivalents and synonyms. Equivalents and synonyms have different degrees of abstraction in terms of the concepts they represent. A concept distance, which indicates the difference between degrees of abstraction, is defined between these equivalents and synonyms. For example, the concept distance between “vehicle” and “wheeled vehicle” is 10 in FIG. 2. - Further, for equivalents and synonyms, n-dimensional vectors are defined as image feature weightings for subjecting the n image features of an entered retrieval word to weighting that reflects the effectiveness of each image feature in regard to the retrieval word. The n-dimensional vectors are normalized so that their magnitudes are 100.
- For example, a motor vehicle is an artificial object and can be of various colors. Accordingly, the weighting applied to an image feature relating to color is defined as being 0. This means that this image feature should not be referred to in the retrieval operation. As a result, the system executes retrieval in which weight is placed upon image features other than color, these image features being efficacious in regard to the retrieval word. However, if the retrieval word is “red car”, then information relating to the color “red” is taken into account and the system performs retrieval in which the color red is taken into consideration as an image feature.
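A minimal sketch of such weighted matching follows. Normalizing the weighting to magnitude 100 is taken from the text; comparing features with a weighted Euclidean distance is an assumption, since the embodiment does not name a specific measure:

```python
import math

def normalize_weighting(weights):
    """Scale an n-dimensional image feature weighting so its magnitude is 100."""
    mag = math.sqrt(sum(w * w for w in weights))
    return [w * 100.0 / mag for w in weights]

def weighted_distance(query, candidate, weights):
    """Compare two feature vectors; a weight of 0 (e.g. color for
    'motor vehicle') means that feature is not referred to in retrieval."""
    return math.sqrt(sum(w * (q - c) ** 2
                         for w, q, c in zip(weights, query, candidate)))
```

For a plain “motor vehicle” the color dimension would carry weight 0, so differently colored cars compare as equal; for “red car” the converter would raise that weight so redness influences the ranking.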
- Image registration processing executed by the image processing apparatus of this embodiment to register image data will be described with reference to FIG. 3.
- FIG. 3 is a flowchart illustrating the flow of image registration processing executed according to the embodiment of the invention.
- First, at step S101 in FIG. 3, entered image data and coordinates of sought image data are input by the
image input unit 2. The input image data is stored temporarily in the image memory 5. It is determined at step S102 whether the coordinates of the sought image data have been entered or not. If the coordinates of the sought image data have been entered (“YES” at step S102), then control proceeds to step S103, at which the sought image data is extracted from the entered image data using these coordinates. - If the coordinates of the sought image data have not been entered (“NO” at step S102), then control proceeds to step S107, at which it is determined whether coordinates have been entered by the user. If coordinates have been entered by the user (“YES” at step S107), then control proceeds to step S108, at which the sought image data is extracted from the entered image data using these coordinates.
- If coordinates have not been entered by the user (“NO” at step S107), then control proceeds to step S109, at which the coordinates of the sought image data are decided from statistical and psychological standpoints. This is followed by step S110, at which the sought image data is extracted from the entered image data using the coordinates that have been decided.
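The decision cascade of steps S102, S107 and S109 can be sketched as follows, with the image center standing in for the statistical/psychological fallback the text names:

```python
def choose_seed(device_coords, user_coords, image):
    """Prefer coordinates supplied by the image input unit (step S102),
    then coordinates entered by the user (step S107); otherwise decide
    them from a statistical/psychological standpoint (step S109),
    here simply the center of the captured image."""
    if device_coords is not None:
        return device_coords
    if user_coords is not None:
        return user_coords
    h, w = len(image), len(image[0])
    return (w // 2, h // 2)
```

Whichever branch fires, the returned coordinates feed the same sought-object extraction step.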
- An image feature of the sought image data that has been extracted is extracted at step S104. Next, at step S105, the image feature of the sought image data is registered in the
image feature index 9 by being mapped to the entered image data. The entered image data is stored in the image storage unit 4 at step S106. - An overview of retrieval processing executed by the image processing apparatus of this embodiment to retrieve image data will now be described with reference to FIG. 4.
- FIG. 4 is a flowchart illustrating an overview of retrieval processing executed according to the embodiment of the invention.
- All image information stored in the
image storage unit 4 is set at step S201 as the image data to be retrieved. This is followed by step S202, at which “natural language” is input from the user interface 1 as a retrieval condition, and by step S203, at which retrieval processing for retrieving image data is executed based upon the entered “natural language”. When retrieval processing is executed, the results of retrieval are displayed by the retrieved result notification unit 12 at step S204. This is followed by step S206, at which the apparatus accepts a command entered by the user in regard to the results of retrieval. - If upon viewing the results of retrieval the user decides upon a further search word to narrow down the results of retrieval, an indication to the effect that retrieval is to be performed while narrowing down the current results of retrieval is made from the
user interface 1 at step S208, whereupon control returns to step S202 and the user enters a new retrieval condition. In this case the apparatus retains the currently obtained retrieval results and narrows them down by taking the logical product between the results of retrieval obtained by retrieval processing based upon the newly entered retrieval condition and the results of retrieval that have been retained. - If the user enters a command indicating that retrieval processing should be terminated, then processing is terminated.
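Narrowing by logical product amounts to intersecting the retained results with the new ones. A sketch, assuming results are lists of image identifiers:

```python
def narrow_down(retained, new_results):
    """Logical product (AND) of the retained retrieval results and the
    results obtained for the newly entered retrieval condition, keeping
    the retained display order."""
    new_ids = set(new_results)
    return [image_id for image_id in retained if image_id in new_ids]
```

Repeated narrowing simply applies this intersection once per newly entered condition.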
- If the user enters a command indicating that retrieval should be performed again, then the results of retrieval are cleared at step S209 and control returns to step S201.
- If the user enters a command indicating that image data is to be selected in order to display the details of desired image data taken from the image data being displayed (in reduced size) as the results of retrieval, then the details (the image data in the original size) of the selected image data (being displayed in reduced size) are displayed at step S207.
- The details of the retrieval processing of step S203 will be described with reference to FIG. 5, which is a flowchart illustrating the details of retrieval processing executed according to the embodiment of the present invention.
- First, at step S301, the “natural language” serving as the entered retrieval condition is subjected to morpheme analysis and modifier analysis by referring to the equivalent/synonym/
concept dictionary 8, whereby there are extracted a retrieval object name and an external-appearance feature which indicates the feature of the “retrieval object name”. Next, it is determined at step S302 whether the extracted “retrieval object name” exists in the equivalent/synonym/concept dictionary 8. If the “retrieval object name” exists in the equivalent/synonym/concept dictionary 8 (“YES” at step S302), then control proceeds to step S303. On the other hand, if the “retrieval object name” does not exist in the equivalent/synonym/concept dictionary 8 (“NO” at step S302), then control proceeds to step S309. The user is prompted at step S309 to acquire the “object name” of a concept that most closely approximates the “retrieval object name”. Further, the “object name” acquired is set as the “retrieval object name”. The processing of step S309 makes it possible to extract the word of a concept nearest to the entered “retrieval object name”. This is followed by step S310, at which the “retrieval object name” set at step S309 is registered as a new “retrieval object name” in the equivalent/synonym/concept dictionary 8. - A specific example of the processing of step S309 will be described with reference to FIGS. 6 through 8.
- FIG. 6 is a diagram showing an example of a user interface according to the embodiment of the invention.
- When the processing of step S309 is executed, a word entry screen of the kind shown in FIG. 6 is displayed on the retrieved
result notification unit 12. If the user enters words of a concept closest to the “retrieval object name” using this word entry screen, the words of a concept closest to these entered words will be displayed. In the example of FIG. 6, “passenger car” was entered as the “retrieval object name”. However, since this has not been registered in the equivalent/synonym/concept dictionary 8, FIG. 6 illustrates a case where the concept “motor vehicle” that most closely approximates “passenger car” has been entered. - Five buttons, namely “PREVIOUS”, “NEXT”, “FIND”, “CANCEL” and “OK” are provided on the right side of the word entry screen. When these buttons are clicked using a cursor displayed on the retrieved
result notification unit 12, the processing described below is executed. - The “PREVIOUS” button causes the currently displayed word entry screen to return to the preceding word entry screen. The “NEXT” button causes a new word entry screen to be displayed. The “FIND” button retrieves the word of a concept nearest to the entered word and causes the results to be displayed as shown in FIG. 7, by way of example. The “CANCEL” button cancels the entered work. The “OK” button selects the word of a concept nearest to the “retrieval object name” that has not been registered in the equivalent/synonym/
concept dictionary 8. On the basis of the selected word, the “retrieval object name” that has not been registered in the equivalent/synonym/concept dictionary 8 is registered at the level of a concept between level of the concept of the selected word and the level of the concept of the neighboring word in the equivalent/synonym/concept dictionary 8. - For example, if “motor vehicle” is selected as the words of a concept most closely approximating “passenger car”, which has not been registered in the equivalent/synonym/
concept dictionary 8, then “passenger car” is registered as the words having the level of a concept between the level of the concept of “motor vehicle” and the level of the concept of “car”, as shown in FIG. 8. Further, the position at which “passenger car” is registered is such that the concept distance to “motor vehicle” is the same as the concept distance to “car”. Furthermore, the image feature weighting of “passenger car” is created and registered based upon the image feature weighting of “motor vehicle” and the image feature weighting of “car”. - With reference again to the flowchart of FIG. 5, the image feature weighting of “retrieval object name” is acquired from the equivalent/synonym/
concept dictionary 8 at step S303. This is followed by step S304 where, if the external-appearance feature of the “retrieval object name” has been extracted, the image feature weighting relating to this external-appearance feature is appended to the image feature weighting acquired at step S303. The user is prompted at step S305 to obtain an effective image feature weighting for the image feature. The details of this processing will be described with reference to the flowchart of FIG. 9. - FIG. 9 is a flowchart illustrating the details of processing of step S305 according to this embodiment.
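The dictionary update described above (placing “passenger car” at equal concept distance between “motor vehicle” and “car”) can be sketched as follows. Taking the element-wise mean of the neighbors' weightings is an assumption, since the text only says the new weighting is created from both:

```python
def register_between(weightings, distances, new_word, upper, lower):
    """Insert `new_word` between `upper` and `lower` in the concept chain:
    the concept distance to each neighbor becomes half the old distance,
    and the image feature weighting is the mean of the neighbors' (assumed).
    `weightings` maps word -> feature weighting; `distances` maps
    (word, word) pairs -> concept distance."""
    half = distances.pop((upper, lower)) / 2
    distances[(upper, new_word)] = half
    distances[(new_word, lower)] = half
    weightings[new_word] = [(a + b) / 2
                            for a, b in zip(weightings[upper], weightings[lower])]
```

After the update, later retrievals of the coined word use its interpolated weighting exactly like any registered word.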
- It should be noted that the processing of FIG. 9 is executed to make up for a case where further information (image feature weighting) useful in retrieval is desired to be obtained or for a case where image features in retrieval are too few.
- First, at step S403, it is determined, based upon the image feature weighting of the acquired “retrieval object name”, whether the color of the retrieval object indicated by the “retrieval object name” is unique. If the color of the retrieval object is unique (“YES” at step S403), then control proceeds to step S404. If the color of the retrieval object is not unique (“NO” at step S403), on the other hand, then control proceeds to step S407. The user is prompted to “SPECIFY A COLOR APPROXIMATING THAT OF THE RETRIEVAL OBJECT” at step S407 using a dialog screen of the kind shown in FIG. 10, and the apparatus accepts the designation made by the user. The color is specified using language or a color sample in the manner depicted in FIG. 10. Note that the colorimetric system of the color sample is not limited; CIE 1976 L*a*b*, YCbCr, and the like may be used. The color that has been specified is stored as an image feature weighting at step S408.
- The user is questioned “DOES RETRIEVAL OBJECT APPEAR LARGE IN SIZE?” at step S404 using a dialog screen of the kind shown in FIG. 11. This is followed by step S405, at which the user employs this dialog screen to make an answer in regard to the size of the retrieval object indicated by the “retrieval object name”. If the retrieval object appears large in size (“YES” at step S405), then control proceeds to step S406. Here it is judged that the color of the retrieval object indicated by the “retrieval object name” is an extremely important item of retrieval information, and the image feature weighting relating to the color of the retrieval object is increased to set the image feature of the “retrieval object name”. If the retrieval object does not appear large in size (“NO” at step S405), then control proceeds to step S409.
- The user is questioned “WHAT IS THE BACKGROUND?” at step S409 using a dialog screen of the kind shown in FIG. 12. In addition, a command from the user is accepted. The background is specified by an object name in the manner shown in FIG. 12. It is determined at step S410 whether the color of the background object is unique. If the color of the background object is unique (“YES” at step S410), then control proceeds to step S411. If the color of the background object is not unique (“NO” at step S410), then control proceeds to step S412. The user is prompted to “SPECIFY A COLOR APPROXIMATING THAT OF THE BACKGROUND” at step S412 using a dialog screen of the kind shown in FIG. 13, and the apparatus accepts the designation made by the user. The specified color is stored as an image feature weighting at step S413.
- It is judged at step S411 that the color of the background is an extremely important item of retrieval information and the image feature weighting relating to the color of the background is increased to set the image feature of the “retrieval object name”.
- Here a case has been described in which information relating to the color of the retrieval object and the color of the background is specified by the user. However, it goes without saying that an arrangement can be adopted in which information relating to the features of the surface of a retrieval object, and information relating to other features, may be specified by the user. In the arrangement set forth above, an image feature weighting useful in retrieval based upon the entered retrieval conditions can be created, and retrieval can be performed based upon an image feature that takes this weighting into account. This makes it possible to perform more precise retrieval.
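The color-related branch of this dialog flow (FIG. 9) can be sketched as follows. Doubling the color weighting is an assumed boost factor, since the text says only that the weighting is increased:

```python
def refine_color_weighting(weights, color_index, color_unique, appears_large,
                           specified_color=None):
    """If the object's color is not unique, store the user-specified color
    (steps S407/S408); if it is unique and the object appears large, color
    is judged extremely important and its weighting is raised (step S406)."""
    weights = list(weights)
    if not color_unique:
        return weights, specified_color   # color stored as a feature weighting
    if appears_large:
        weights[color_index] *= 2.0       # assumed boost factor
    return weights, None
```

The same pattern would apply to the background-color questions of steps S409 through S413.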
- With reference again to the flowchart of FIG. 5, image data is retrieved at step S306 by referring to the
image feature index 9 based upon the image feature. Next, it is determined at step S307 whether retrieved image data exists. If retrieved image data exists (“YES” at step S307), then control proceeds to step S308. Here the image data that has been retrieved is displayed on the retrieved result notification unit 12. If no image data has been retrieved (“NO” at step S307), then control proceeds to step S312. - It is determined at step S312 whether the user wishes to re-specify an image feature. If the user desires to re-specify (“YES” at step S312), then control proceeds to step S311. Here the user is prompted to acquire an effective image feature weighting for the image feature. The details of this processing are as described above in the flowchart of FIG. 9. If the user does not desire to re-specify (“NO” at step S312), then control proceeds to step S313. Here the fact that no image data has been retrieved is displayed on the retrieved
result notification unit 12. - In accordance with this embodiment as described above, entered image data is managed by being mapped to the image feature of sought image data contained in this image data. As a result, the conventional operation of appending explanatory text and keywords to image data is no longer required and it is possible to retrieve desired image data from managed image data in an efficient manner.
- Further, it is possible to enter, in dependence upon an entered retrieval condition, an external-appearance feature of a retrieval object that constitutes the retrieval condition, and detailed retrieval conditions desired by the user can be entered. Even if natural language that has not been registered in the equivalent/synonym/
concept dictionary 8 is entered, a word of the concept that approximates this natural language can be entered and retrieved. The user need not append a keyword but need only enter a retrieval word to make possible the retrieval of image data desired by the user. In the event of an unknown word such as a word newly coined, the equivalent/synonym/concept dictionary 8 can be updated by a learning function through an interactive interface with the user. This makes it possible to realize an automatic learning function for retrieval in line with user preference and allows broader searches in conformity with changing times. - In the embodiment set forth above, an example is described in which natural image data is retrieved. However, the gist of the present invention is a technique that can be applied to the retrieval of all forms of information media.
- Further, though not set forth in this embodiment, it is possible to execute processing in parallel with processing for appending explanatory text and keywords to images and performing retrieval based upon the same, combine the results of this processing and give notification of the results of retrieval.
- The present invention can be applied to a system constituted by a plurality of devices (e.g., a host computer, interface, reader, printer, etc.) or to an apparatus comprising a single device (e.g., a copier or facsimile machine, etc.).
- Further, it goes without saying that the object of the present invention can also be achieved by providing a storage medium storing the program codes of the software for performing the aforesaid functions of the foregoing embodiment to a system or an apparatus, reading the program codes with a computer (e.g., a CPU or MPU) of the system or apparatus from the storage medium, and then executing the program.
- In this case, the program codes read from the storage medium implement the novel functions of the invention, and the storage medium storing the program codes constitutes the invention.
- Further, the storage medium, such as a floppy disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, magnetic tape, non-volatile type memory card or ROM can be used to provide the program codes.
- Furthermore, besides the case where the aforesaid functions according to the embodiment are implemented by executing the program codes read by a computer, the present invention covers a case where an operating system or the like working on the computer performs a part of or the entire process in accordance with the designation of program codes and implements the functions according to the embodiment.
- The present invention further covers a case where, after the program codes read from the storage medium are written in a function extension board inserted into the computer or in a memory provided in a function extension unit connected to the computer, a CPU or the like contained in the function extension board or function extension unit performs a part of or the entire process in accordance with the designation of program codes and implements the function of the above embodiment.
- In a case where the present invention is applied to the above-mentioned storage medium, program code corresponding to the above-described flowcharts is stored on the storage medium. That is, the modules shown in the example of the memory map of FIG. 14 would be stored on the storage medium.
- Specifically, the program code stored on the storage medium would be that for at least an “input module”, “calculation module” and “management module”.
- The “input module” is for entering image data as well as the coordinates of sought image data contained in the image data. The “calculation module” is for calculating an image feature of the sought image data based upon the entered coordinates of the sought image data. The “management module” is for mapping and managing calculated image features and entered image data.
- The present invention is not limited to the above embodiments and various changes and modifications can be made within the spirit and scope of the present invention. Therefore, to apprise the public of the scope of the present invention, the following claims are made.
Claims (13)
1. An image processing apparatus for managing image data, comprising:
input means for inputting image data and coordinates of sought image data contained in this image data;
calculating means for calculating an image feature of the sought image data based upon the coordinates of the sought image data input by said input means; and
management means for performing management by mapping the image feature calculated by said calculating means and the input image data.
2. The apparatus according to claim 1, wherein said input means includes:
photography means for capturing an image by photography;
sensing means for sensing line of sight of a user with respect to the image captured by said photography means; and
extracting means for extracting coordinates of the sought image data based upon results of sensing performed by said sensing means.
3. The apparatus according to claim 1, wherein said input means includes:
photography means for capturing an image by photography;
display means for displaying the image captured by said photography means;
designating means for designating a sought image contained in the image displayed by said display means; and
extracting means for extracting coordinates of the sought image data, which is based upon the sought image, based upon results of designation by said designating means.
4. The apparatus according to claim 1, wherein said input means includes:
photography means for capturing an image by photography;
deciding means for deciding coordinates of the sought image data, which is based upon a sought image contained in the image, based upon a photographic characteristic of said photography means.
5. The apparatus according to claim 1, further comprising retrieval-condition input means for inputting a retrieval condition for retrieving desired image data from the image data managed by said management means.
6. The apparatus according to claim 5, further comprising:
analyzing means for analyzing the retrieval conditions entered from said retrieval-condition input means; and
output means which, on the basis of results of analysis by said analyzing means, outputs a prompt which prompts the user to enter a retrieval condition that is different from the above-mentioned retrieval condition.
7. A method of controlling an image processing apparatus for managing image data, comprising:
an input step of inputting image data and coordinates of sought image data contained in this image data;
a calculating step of calculating an image feature of the sought image data based upon the coordinates of the sought image data input at the input step; and
a management step of performing management in a memory by mapping the image feature calculated at said calculating step and the input image data.
8. The method according to claim 7, wherein said input step includes:
a sensing step of sensing line of sight of a user with respect to a captured image; and
an extracting step of extracting coordinates of the sought image data based upon results of sensing performed at said sensing step.
9. The method according to claim 7, wherein said input step includes:
a display step of displaying a captured image; and
an extracting step of extracting coordinates of the sought image data, which is based upon a designated sought image contained in the image displayed at said display step.
10. The method according to claim 7, wherein said input step includes a deciding step of deciding coordinates of the sought image data, which is based upon a sought image contained in the image, based upon a photographic characteristic of a captured image.
11. The method according to claim 7, further comprising a retrieval-condition input step of inputting a retrieval condition for retrieving desired image data from the image data in the memory managed at said management step.
12. The method according to claim 11, further comprising:
an analyzing step of analyzing the retrieval conditions entered at said retrieval-condition input step; and
an output step which, on the basis of results of analysis at said analyzing step, outputs a prompt which prompts the user to enter a retrieval condition that is different from the above-mentioned retrieval condition.
13. A computer readable memory storing program codes for controlling an image processing apparatus for managing image data, comprising:
program code of an input step of inputting image data and coordinates of sought image data contained in this image data;
program code of a calculating step of calculating an image feature of the sought image data based upon the coordinates of the sought image data input at said input step; and
program code of a management step of performing management in a memory by mapping the image feature calculated at the calculating step and the input image data.
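The method of claims 7 and 11 can be read as a three-step pipeline: input an image together with the coordinates of the sought region, calculate an image feature from only that region, and manage a mapping from the feature to the image so that later retrieval compares a query feature against the stored ones. The claims deliberately leave the feature itself unspecified; the sketch below assumes a mean-RGB color feature over a rectangular region, and all class, function, and image names are illustrative, not taken from the patent.

```python
# Hedged sketch of the claimed input / calculate / manage / retrieve flow.
# The feature (mean RGB over the designated region) and every identifier
# here are assumptions for illustration only.

def region_feature(image, box):
    """Mean RGB over the rectangle (x0, y0, x1, y1), end-exclusive."""
    x0, y0, x1, y1 = box
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

class ImageManager:
    """Management step: maps each computed region feature to its image."""

    def __init__(self):
        self._entries = []  # list of (feature, image_id) pairs

    def register(self, image_id, image, box):
        # Input + calculating steps: coordinates select the sought region.
        self._entries.append((region_feature(image, box), image_id))

    def retrieve(self, query_feature):
        """Return the image whose stored feature is nearest the query."""
        def dist(feature):
            return sum((a - b) ** 2 for a, b in zip(feature, query_feature))
        return min(self._entries, key=lambda e: dist(e[0]))[1]

# Two tiny 4x4 test images: a red patch and a blue patch.
red = [[(200, 10, 10)] * 4 for _ in range(4)]
blue = [[(10, 10, 200)] * 4 for _ in range(4)]

mgr = ImageManager()
mgr.register("photo_red", red, (0, 0, 4, 4))
mgr.register("photo_blue", blue, (0, 0, 4, 4))

print(mgr.retrieve((255, 0, 0)))  # nearest stored feature -> "photo_red"
```

Because the feature is computed from the designated sub-region rather than the whole image, retrieval matches the object the user actually sought, which is the distinction the claims draw against whole-image feature extraction.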
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP9-163033 | 1997-06-19 | ||
JP16303397A JP3673615B2 (en) | 1997-06-19 | 1997-06-19 | Image processing apparatus and control method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
US20010045948A1 true US20010045948A1 (en) | 2001-11-29 |
US6411291B2 US6411291B2 (en) | 2002-06-25 |
Family
ID=15765918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/095,545 Expired - Lifetime US6411291B2 (en) | 1997-06-19 | 1998-06-11 | Image processing apparatus and method of controlling same |
Country Status (2)
Country | Link |
---|---|
US (1) | US6411291B2 (en) |
JP (1) | JP3673615B2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040220962A1 (en) * | 2003-04-30 | 2004-11-04 | Canon Kabushiki Kaisha | Image processing apparatus, method, storage medium and program |
CN107251045A (en) * | 2015-03-05 | 2017-10-13 | 欧姆龙株式会社 | Object detector, object identification method and program |
US10586127B1 (en) | 2011-11-14 | 2020-03-10 | Google Llc | Extracting audiovisual features from content elements on online documents |
US10972530B2 (en) | 2016-12-30 | 2021-04-06 | Google Llc | Audio-based data structure generation |
US11087424B1 (en) | 2011-06-24 | 2021-08-10 | Google Llc | Image recognition-based content item selection |
US11093692B2 (en) | 2011-11-14 | 2021-08-17 | Google Llc | Extracting audiovisual features from digital components |
US11100538B1 (en) | 2011-06-24 | 2021-08-24 | Google Llc | Image recognition based content item selection |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE60044924D1 (en) * | 1999-01-29 | 2010-10-21 | Lg Electronics Inc | PROCESSES FOR SEARCHING AND BROWSING MULTIMEDIA DATA AND DATA STRUCTURE |
US7016916B1 (en) * | 1999-02-01 | 2006-03-21 | Lg Electronics Inc. | Method of searching multimedia data |
JP2001282846A (en) * | 2000-03-29 | 2001-10-12 | Canon Inc | Image retrieval method and device |
JP4266695B2 (en) * | 2003-04-30 | 2009-05-20 | キヤノン株式会社 | Image processing apparatus and image processing method |
CN108829764B (en) * | 2018-05-28 | 2021-11-09 | 腾讯科技(深圳)有限公司 | Recommendation information acquisition method, device, system, server and storage medium |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05242160A (en) * | 1992-02-27 | 1993-09-21 | Matsushita Electric Ind Co Ltd | Image feature extraction device, image collation device, and image retrieval device |
JPH05242161A (en) * | 1992-02-28 | 1993-09-21 | Nippon Telegr & Teleph Corp <Ntt> | Image retrieval device |
JP3178483B2 (en) * | 1992-06-09 | 2001-06-18 | 富士ゼロックス株式会社 | Document processing device |
JPH06295318A (en) * | 1993-04-09 | 1994-10-21 | Omron Corp | Method and device for allocating keyword |
JP3334949B2 (en) * | 1993-06-22 | 2002-10-15 | キヤノン株式会社 | Image processing apparatus and method |
JPH07160725A (en) * | 1993-12-03 | 1995-06-23 | Toshiba Corp | Picture retrieval device |
JP3163216B2 (en) * | 1994-03-31 | 2001-05-08 | シャープ株式会社 | Representative feature value extraction method and representative feature value extraction device |
JP3350223B2 (en) * | 1994-07-13 | 2002-11-25 | 富士通株式会社 | Automatic graph layout method and apparatus |
US5801687A (en) * | 1994-09-30 | 1998-09-01 | Apple Computer, Inc. | Authoring tool comprising nested state machines for use in a computer system |
JP3727967B2 (en) * | 1995-01-31 | 2005-12-21 | キヤノン株式会社 | Image search method and apparatus |
US5708767A (en) * | 1995-02-03 | 1998-01-13 | The Trustees Of Princeton University | Method and apparatus for video browsing based on content and structure |
JP3302855B2 (en) * | 1995-03-16 | 2002-07-15 | アルプス電気株式会社 | Region extraction method and apparatus |
JPH09153134A (en) * | 1995-11-29 | 1997-06-10 | Hitachi Ltd | Picture area extracting method |
US5896139A (en) * | 1996-08-01 | 1999-04-20 | Platinum Technology Ip, Inc. | System and method for optimizing a scene graph for optimizing rendering performance |
JPH10301943A (en) * | 1997-04-24 | 1998-11-13 | Canon Inc | Image processor and its controlling method |
JPH10289245A (en) * | 1997-04-15 | 1998-10-27 | Canon Inc | Image processor and its control method |
JPH10289241A (en) * | 1997-04-14 | 1998-10-27 | Canon Inc | Image processor and its control method |
JPH10289240A (en) * | 1997-04-14 | 1998-10-27 | Canon Inc | Image processor and its control method |
- 1997
  - 1997-06-19 JP JP16303397A patent/JP3673615B2/en not_active Expired - Fee Related
- 1998
  - 1998-06-11 US US09/095,545 patent/US6411291B2/en not_active Expired - Lifetime
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040220962A1 (en) * | 2003-04-30 | 2004-11-04 | Canon Kabushiki Kaisha | Image processing apparatus, method, storage medium and program |
US7548916B2 (en) * | 2003-04-30 | 2009-06-16 | Canon Kabushiki Kaisha | Calculating image similarity using extracted data |
US11087424B1 (en) | 2011-06-24 | 2021-08-10 | Google Llc | Image recognition-based content item selection |
US11100538B1 (en) | 2011-06-24 | 2021-08-24 | Google Llc | Image recognition based content item selection |
US11593906B2 (en) | 2011-06-24 | 2023-02-28 | Google Llc | Image recognition based content item selection |
US10586127B1 (en) | 2011-11-14 | 2020-03-10 | Google Llc | Extracting audiovisual features from content elements on online documents |
US11093692B2 (en) | 2011-11-14 | 2021-08-17 | Google Llc | Extracting audiovisual features from digital components |
CN107251045A (en) * | 2015-03-05 | 2017-10-13 | 欧姆龙株式会社 | Object detector, object identification method and program |
US10599709B2 (en) | 2015-03-05 | 2020-03-24 | Omron Corporation | Object recognition device, object recognition method, and program for recognizing an object in an image based on tag information |
US10972530B2 (en) | 2016-12-30 | 2021-04-06 | Google Llc | Audio-based data structure generation |
US11949733B2 (en) | 2016-12-30 | 2024-04-02 | Google Llc | Audio-based data structure generation |
Also Published As
Publication number | Publication date |
---|---|
JP3673615B2 (en) | 2005-07-20 |
US6411291B2 (en) | 2002-06-25 |
JPH1115834A (en) | 1999-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6711287B1 (en) | Image-feature extraction method and computer-readable record medium with a program for making a computer execute steps of the method recorded therein | |
JP4073156B2 (en) | Image search device | |
JP3469345B2 (en) | Image filing apparatus and filing method | |
US7698332B2 (en) | Projecting queries and images into a similarity space | |
JP3026712B2 (en) | Image search method and apparatus | |
US20040215660A1 (en) | Image search method and apparatus | |
US6480841B1 (en) | Information processing apparatus capable of automatically setting degree of relevance between keywords, keyword attaching method and keyword auto-attaching apparatus | |
US6411291B2 (en) | Image processing apparatus and method of controlling same | |
US20010003182A1 (en) | Method and devices for indexing and seeking digital images taking into account the definition of regions of interest | |
US9558212B2 (en) | Apparatus, image processing method and computer-readable storage medium for object identification based on dictionary information | |
US20040177069A1 (en) | Method for fuzzy logic rule based multimedia information retrival with text and perceptual features | |
JP2004334334A (en) | Document retrieval system, document retrieval method, and storage medium | |
US6567551B2 (en) | Image search apparatus and method, and computer readable memory | |
US7117226B2 (en) | Method and device for seeking images based on the content taking into account the content of regions of interest | |
US7308119B2 (en) | Image retrieval apparatus and method, and image display apparatus and method thereof | |
US6430566B1 (en) | Image processing apparatus and control method therefor | |
JPH0460770A (en) | Method and device for retrieving picture using schematic picture | |
JPH10289240A (en) | Image processor and its control method | |
JPH10289245A (en) | Image processor and its control method | |
JPH07219959A (en) | Information retrieval method from database | |
JPH08263522A (en) | Image retrieving method | |
JPH05242160A (en) | Image feature extraction device, image collation device, and image retrieval device | |
JPH10289241A (en) | Image processor and its control method | |
JP2858576B2 (en) | Image processing method | |
JPH07239856A (en) | Method and device for retrieving image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CANON KABUSHIKI KAISHA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHIIYAMA, HIROTAKA;REEL/FRAME:009243/0228 Effective date: 19980603 |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
FPAY | Fee payment |
Year of fee payment: 4 |
FPAY | Fee payment |
Year of fee payment: 8 |
FPAY | Fee payment |
Year of fee payment: 12 |