GB2461641A - Object search and navigation - Google Patents

Object search and navigation

Info

Publication number
GB2461641A
GB2461641A GB0911855A GB0911855A GB2461641A GB 2461641 A GB2461641 A GB 2461641A GB 0911855 A GB0911855 A GB 0911855A GB 0911855 A GB0911855 A GB 0911855A GB 2461641 A GB2461641 A GB 2461641A
Authority
GB
United Kingdom
Prior art keywords
visual content
content items
computer implemented
implemented method
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0911855A
Other versions
GB0911855D0 (en)
Inventor
Dan Atsmon
Alon Atsmon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB0911855A priority Critical patent/GB2461641A/en
Publication of GB0911855D0 publication Critical patent/GB0911855D0/en
Publication of GB2461641A publication Critical patent/GB2461641A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/358Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/9038Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/904Browsing; Visualisation therefor

Abstract

A computer implemented method of presenting visual content items comprises grouping 220 the visual content items according to predefined similarity rules relating to visual characteristics of the visual content items, such that each group has a range for the number of its members; selecting 225 a representative visual content item (e.g. a thumbnail) for each group; presenting 230 the representative visual content item of each group that has a minimal number of members above a predefined threshold; and optionally presenting 235 the visual content items alongside the representative visual content items. The grouping may be carried out by using predefined color groups, by using predefined shape groups, or by using keypoints of the visual content items, and may relate to groups of human faces, product images and landscape images. The visual content items may be product offerings presented on an online market place.

Description

OBJECT SEARCH AND NAVIGATION METHOD AND SYSTEM
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application No. 61/078,789, filed on July 8, 2008, which is incorporated herein by reference.
BACKGROUND
1. TECHNICAL FIELD
[0002] The present invention relates to searching, presenting and navigating within a list of objects, and more particularly, to navigating using content items.
2. DISCUSSION OF RELATED ART
[0003] Electronic shopping is becoming evermore elaborate and versatile, yet users are confronted with an ever growing range of products that they may wish to choose from. Current problems with electronic commerce are: Overload - a shopper needs to go through hundreds or even thousands of pages in order to get some orientation on the product selection; Requires familiarity - there are many criteria for narrowing down the selection, yet some of them require prior familiarity with the category on the shopper's part, which could be different had a photo been presented; No subdivision - there is no way to really divide those thousands of deals into major subgroups; No Pareto - there is no way for the shopper to focus his efforts on the major products rather than on ancillary products that may find their way to the higher parts of the pages; Redundancy - in many cases tens of deals with the same offering are presented. These disadvantages may make the shopping experience tedious.
BRIEF SUMMARY
[0004] Embodiments of the present invention provide a computer implemented method of presenting a plurality of visual content items, comprising: grouping the visual content items according to predefined similarity rules relating to visual characteristics of the visual content items such that each group has a range for the number of its members; selecting a representative visual content item for each group; and presenting the representative visual content item of each group that has a minimal number of members above a predefined threshold.
[0005] Accordingly, according to an aspect of the present invention, there is provided a computer implemented method, further comprising presenting the plurality of visual content items alongside the representative visual content items.
[0006] Accordingly, according to an aspect of the present invention, the grouping may be carried out by using predefined color groups; by using predefined shape groups; or by using at least one keypoint of the visual content items, and may relate to groups of human faces, product images, or landscape images.
[0007] Embodiments of the present invention provide a data processing system for analyzing and presenting a plurality of visual content items, comprising: a mediator server comprising a graphical user interface, the mediator server connected via a communication link with a user and with a plurality of sources holding the visual content items, and arranged to group the visual content items according to predefined similarity rules relating to visual characteristics of the visual content items such that each group has a range for the number of its members; and to select a representative visual content item for each group, wherein the graphical user interface is arranged to present the representative visual content items of each group that has a minimal number of members above a predefined threshold.
[0008] Accordingly, according to an aspect of the present invention, there is provided a data processing system, wherein the graphical user interface is arranged to present the representative visual content items alongside the visual content items.
[0009] These, additional, and/or other aspects and/or advantages of the present invention are: set forth in the detailed description which follows; possibly inferable from the detailed description; and/or learnable by practice of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The present invention will now be described in the following detailed description of exemplary embodiments of the invention and with reference to the attached drawings, in which dimensions of components and features shown are chosen for convenience and clarity of presentation and are not necessarily shown to scale. Generally, only structures, elements or parts that are germane to the discussion are shown in the figures.
Fig. 1 is a schematic illustration of a system setup in accordance with an exemplary embodiment of the invention;
Fig. 2 is a flowchart of acts performed in querying an object, in accordance with an exemplary embodiment of the invention;
Fig. 3 is a flowchart of acts performed after an object has been selected, in accordance with an exemplary embodiment of the invention;
Fig. 4 shows screen shots of the results of an exemplary embodiment of the invention;
Fig. 5 shows screen shots of the results of the process defined in Fig. 6, in accordance with an exemplary embodiment of the invention;
Fig. 6 is a flowchart of acts performed after a query has been submitted, in accordance with an exemplary embodiment of the invention;
Fig. 7 shows screen shots of the menu items and search results in accordance with an exemplary embodiment of the invention;
Fig. 8 is a flowchart of acts performed in classifying an image into shapes, in accordance with an exemplary embodiment of the invention;
Fig. 9 is a flowchart of acts performed on each contour collected, in accordance with an exemplary embodiment of the invention;
Fig. 10 shows screen shots with MVP results in accordance with an exemplary embodiment of the invention;
Figs. 11 and 12 are high level flowcharts illustrating a computer implemented method of running a query item on a plurality of visual content items, according to some embodiments of the invention;
Fig. 13 is a scheme describing the system and process in accordance with an exemplary embodiment of the invention;
Fig. 14 is a scheme describing the system in accordance with an exemplary embodiment of the invention;
Fig. 15 is a high level flowchart illustrating a computer implemented method of presenting a plurality of visual content items, according to some embodiments of the invention; and
Fig. 16 is a high level block diagram illustrating a data processing system for analyzing and presenting a plurality of visual content items, according to some embodiments of the invention.
DETAILED DESCRIPTION
[0011] Provided herein is a detailed description of this invention. It is to be understood, however, that this invention may be embodied in various forms, and that the suggested (or proposed) embodiments are only possible implementations (or examples of feasible embodiments, or materializations) of this invention. Therefore, specific details disclosed herein are not to be interpreted as limiting, but rather as a basis and/or principle for the claims, and/or as a representative basis for teaching one skilled in the art to employ this invention in virtually any appropriately detailed system, structure or manner.
[0012] To facilitate understanding of the present invention, the following glossary of terms is provided. It is to be noted that terms used in the specification but not included in this glossary are considered as defined according to the normal usage of the computer science art, or alternatively according to normal dictionary usage.
[0013] The term "GPU" as used herein in this application is defined as an apparatus adapted to reduce the time it takes to produce images on the computer screen by incorporating its own processor and memory, having more than 4 CPUs, such as the GeForce 8800.
[0014] The term "API" as used herein in this application is defined as an application program interface, that is, the software interface to system services or software libraries. An API may be a third party's source code interface that supports requests for service, such as the Yahoo image service API.
[0015] The term "URL" as used herein in this application is defined as a Universal Resource Locator, an Internet World Wide Web address.
[0016] The term "Keypoint" as used herein in this application is defined as an interest point in an object. For example, in the SIFT framework, the image is convolved with Gaussian filters at different scales, and then the differences of successive Gaussian-blurred images are taken. Keypoints are then taken as maxima/minima of the Difference of Gaussians. Such keypoints can be calculated for the original image or for a transformation of the original image, such as an affine transform of the original image.
[0017] The term "Keypoint descriptor" as used herein in this application is defined as a descriptor of a keypoint. For example, in the SIFT framework the feature descriptor is computed as a set of orientation histograms on neighborhoods. The orientation histograms are relative to the keypoint orientation, and the orientation data comes from the Gaussian image closest in scale to the keypoint's scale. Just like before, the contribution of each pixel is weighted by the gradient magnitude, and by a Gaussian with a scale 1.5 times the scale of the keypoint. Histograms contain 8 bins each, and each descriptor contains a 4 x 4 array of histograms around the keypoint. This leads to a SIFT feature vector with 4 x 4 x 8 = 128 elements.
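By way of illustration only (this code is not part of the original disclosure), the following Python sketch extracts SIFT keypoints and their 128-element descriptors using OpenCV; the file name is a placeholder.

```python
# Illustrative only: extracting SIFT keypoints and 128-element descriptors
# with OpenCV. The image path is a placeholder.
import cv2

image = cv2.imread("object_thumbnail.jpg", cv2.IMREAD_GRAYSCALE)
assert image is not None, "placeholder path; supply a real image"

sift = cv2.SIFT_create()                  # DoG keypoint detector + descriptor
keypoints, descriptors = sift.detectAndCompute(image, None)
# Each row of `descriptors` is one 4 x 4 x 8 = 128-element orientation histogram.
print(len(keypoints), descriptors.shape)
```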
[0018] The term "RGB" as used herein in this application is defined as an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors.
[0019] The term "thumbnail" as used herein in this application is defined as a reduced version of an image, which is commonly included in the image file itself.
[0020] The term "Visual content item" as used herein in this application is defined as an object with visual characteristics, such as an image file like BMP, JPG, JPEG, GIF or PNG files; a screenshot; a video file like AVI, MPG, MPEG, MOV, WMV or FLV files; or one or more frames of a video.
[0021] The term "Visual analysis" as used herein in this application is defined as the analysis of the characteristics of visual objects, such as visual similarity, coherence, hierarchical organization, concept load or density, feature extraction and noise removal.
[0022] The term "Text similarity" as used herein in this application is defined as a measure of the pair-wise similarity of strings. Text similarity can score the overlaps found between two strings based on text matching. Identical strings will have a score of 100%, while "car" and "dogs" will have a score close to zero. "Nike Air max blue" and "Nike Air max red" will have a score which is between the two. Other string similarity metrics may also be used.
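A minimal sketch of such a pair-wise score, here using Python's standard difflib as one possible string-similarity metric (the patent does not prescribe a specific metric):

```python
# Illustrative pair-wise text similarity score in [0, 100]
from difflib import SequenceMatcher

def text_similarity(a: str, b: str) -> float:
    return 100.0 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(text_similarity("Nike Air max blue", "Nike Air max blue"))  # 100.0
print(text_similarity("Nike Air max blue", "Nike Air max red"))   # high, but below 100
print(text_similarity("car", "dogs"))                             # close to 0
```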
[0023] The term "Labeling" as used herein in this application is defined as creating a name for a group of items. For instance, in case we are labeling a product, the label will describe several things about the product - who made it, when it was made, where it was made, its content, how it is to be used and how to use it safely.
[0024] Fig. 1 is a schematic illustration of a system setup 110, which may benefit from methods in accordance with an exemplary embodiment of the invention. In setup 110, a server 100 is running an object search service. An object can be one of the following: an image file such as BMP, JPG, JPEG, GIF or PNG files; a video file such as AVI, MPG, MPEG, MOV, WMV or FLV files; an audio file such as MP3, WMA, WAV or OGG; or a document such as DOC, DOCX, XLS, XML, HTML or PDF files. Optionally, servers such as server 100 use one or more GPUs such as GPU 102 to accelerate their computations.
[0025] Server 100 is connected over a computer network 104 to a second server 108.
Optionally the communication is done through the API 106 of the second server 108.
The API receives the query parameters, and sends back to server 100 the query results.
Query results include a list of objects (for example the object files themselves or a list of links to each of them). Preferably, such a list is significantly longer than the number of objects that are normally displayed on one page of a typical display device; for example 300 or 500 objects are returned, as the later reordering would usually present a subset of that list.
[0026] Optionally, server 100 has or is connected to GPU 102. Such units usually have two advantages: multiple processors - at present, commercially available GPUs have 256 or even 320 stream processors, while current commercially available Intel processors have 4 cores, hence GPUs have an advantage in massively parallel processes; and a built-in ability to accelerate vector operations such as vector additions and subtractions.
[0027] Fig. 2 is a flowchart of acts performed in querying an object, in accordance with an exemplary embodiment of the invention.
[0028] The user inputs a query 202 (such as a keyword query) to the server 100. The server uses API 106 to submit a query request; such a request contains fields such as: the application ID; the query to search for; the kind of search to submit; the number of results to return; the starting result position to return; the finishing position; format (such as bmp, gif, jpeg, png); whether the service should filter out adult content by default; coloration (such as color, black and white); site (a domain to restrict searches to); the format for the output; and the name of the callback function to wrap around the data if needed. The API 106 responds 204 with fields such as: Result Set, which contains all of the query responses and has attributes such as the number of query matches in the database, the number of query matches returned, and the position of the first result in the overall search; Result, which contains each individual response; Title, the title of the image file; Summary, summary text associated with the image file; Url, the URL for the image file; Click Url, the URL for linking to the image file; Referer Url, the URL of the web page hosting the content; File Size, the size of the file in bytes; File Format, one of bmp, gif, jpg, or png; Height, the height of the image in pixels; Width, the width of the image in pixels; Thumbnail, the URL of the thumbnail file and its height and width in pixels; and Publisher, the creator of the image file.
[0029] Restrictions: provides any restrictions for this media object. Restrictions include noframe and noinline. Noframe means that you should not display it within a framed page on your site. Noinline means that you should not inline the object in the frame up top.
Copyright: the copyright owner. Alternatively or additionally, some or all of the query results can be received from the memory or storage device of server 100. For example, they can be retrieved from cached query results saved in stage 206 of a previous query.
There are at least two possible ways to decide whether to look for the query results in the memory or storage: the query is identical or similar to a previous query, or the links are identical to links stored in the memory or storage. The server then presents the query results, preferably in the order received, on the user's screen, as shown in screen shot 402 of figure 4. Optionally, query results are then saved 206.
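Purely as an illustration of stages 202-206 (not part of the original disclosure), the sketch below submits a keyword query to an image search service and caches the returned list so that an identical later query can be answered from memory. The endpoint URL, parameter names and response field names are hypothetical placeholders, not any real service's interface.

```python
# Illustrative sketch of stages 202-206: submit a keyword query to an image
# search API and cache the returned result list for identical later queries.
import requests

_query_cache = {}   # maps query string -> list of result records (stage 206)

def search_images(query, count=300):
    if query in _query_cache:                      # reuse cached results
        return _query_cache[query]
    response = requests.get(
        "https://example.com/image-search",        # placeholder endpoint
        params={"query": query, "results": count, "output": "json"},
        timeout=10,
    )
    response.raise_for_status()
    results = response.json().get("ResultSet", {}).get("Result", [])
    _query_cache[query] = results
    return results
```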
[0030] Following that, two processes run in parallel: 208, in which the object thumbnails are downloaded from their respective URLs (this process usually starts first); and 210, in which the thumbnails of the objects are displayed on the user's display together with further information such as the link to the original object and summary data. Alternatively, these processes can happen one after the other and not in parallel.
[0031] Fig. 3 is a flowchart of acts performed after an object has been selected, in accordance with an exemplary embodiment of the invention. An object is selected 302 by means of a mouse click, a mouse double click or any other way that enables a user to select it; an example can be seen in the selection of object 410 in figure 4. Consequently, the objects received in step 204, such as the objects presented in screen shot 402 of figure 4, are sorted 304 according to their similarity to the selected object.
[0032] There are several ways to perform such sorting. One is using methods such as the Scale-invariant feature transform (SIFT) or similar methods such as GLOH (Gradient Location and Orientation Histogram), PCA-SIFT and MSR. Such methods usually use a keypoint localization step, and later on compare many keypoint descriptors in one object to a plurality of keypoint descriptors in another object, and hence require quick computation in order to compare an object to a plurality of objects within a response time an ordinary user would expect. The higher the number or the percentage of keypoint descriptors in a first object that match (exactly or approximately) keypoint descriptors in a second object, the higher is the similarity between the two objects. Other ways are using methods such as the Haar wavelet transform; comparing the color histograms of the object to other color histograms; and categorizing, for example dividing images into images with human faces, or human skin, vs. other images, using a face detection or human skin detection software program. The methods can be used separately, one after another or in parallel. In case a heavily computational method such as the keypoint-based one is used, it is advisable to use a GPU such as 102 to attain a reasonable response time.
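As an illustration only, the following sketch ranks candidate objects by the fraction of the selected object's SIFT descriptors that find an approximate match in each candidate, using Lowe's ratio test; the function names, the ratio threshold and the use of OpenCV's brute-force matcher are assumptions, not the patent's prescribed implementation.

```python
# Illustrative sketch of stage 304: rank candidates by the fraction of the
# selected object's SIFT descriptors that approximately match descriptors
# of each candidate (Lowe's ratio test). Names and thresholds are illustrative.
import cv2

def match_fraction(desc_selected, desc_candidate, ratio=0.75):
    if desc_selected is None or desc_candidate is None or len(desc_candidate) < 2:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_selected, desc_candidate, k=2)
    good = [p for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(desc_selected), 1)

def sort_by_similarity(desc_selected, candidates):
    # candidates: list of (object_id, descriptors) pairs
    return sorted(candidates,
                  key=lambda c: match_fraction(desc_selected, c[1]),
                  reverse=True)
```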
[0033] Once sorting 304 is done, or in parallel to it, the objects are presented 306 to the user in descending order of similarity. This way the user can focus on the objects that are most similar to the selected object. Optionally, objects that have a similarity lower than a certain threshold are not presented. The objects compared during process 304 can be either the thumbnails or the object files themselves.
[0034] Fig. 4 shows screen shots of the results of stages 210 and 306. Screen shot 402 shows the results of step 210. It can be seen that the objects are not necessarily sorted according to their similarity to object 410. Once object 410 has been selected by the user, the sort process 304 is performed. Consequently, in step 306 the sorted objects are presented as shown in screen shot 404. Object 410 is shown first and the other objects are presented to its right and below it in descending order of similarity. The end result is that once the user has selected object 410, he receives back only object 410 and its similar objects. This way the process of paging back and forth to collect similar objects in an object search engine is saved and higher productivity is achieved.
[0035] It is noted that some of the above described embodiments may describe the best mode contemplated by the inventors and therefore may include structure, acts or details of structures and acts that may not be essential to the invention and which are described as examples. Structure and acts described herein are replaceable by equivalents which perform the same function, even if the structure or acts are different, as known in the art.
Variations of the embodiments described will occur to persons of the art. Therefore, the scope of the invention is limited only by the elements and limitations as used in the claims, wherein the terms "comprise," "include," "have" and their conjugates shall mean, when used in the claims, "including but not necessarily limited to."
[0036] Fig. 5 shows screen shots of the results of the process defined in figure 6, in accordance with an exemplary embodiment of the invention. Screen shot 510 shows a search results page in which the text item "rokefeller" was searched, as shown in step 602 of figure 6. Since the Faces filter 516 has been selected in the category selection area 514, the system filters in items that are most probably faces according to its parameters, as described in steps 612 to 616 of figure 6. Hence, the user can narrow the search to faces.
[0037] Screen shot 520 shows a search results page in which the text item "rokefeller" was searched, as shown in step 602 of figure 6. Since the landscapes filter 522 has been selected, the system filters in items that are most probably landscapes according to its parameters, as described in step 622 of figure 6. Hence, the user can narrow the search to landscapes.
[0038] Screen shot 530 shows a search results page in which the text item "phones" was searched, as shown in step 602 of figure 6. Since the products filter 532 has been selected, the system filters in items that are most probably products according to its parameters, as described in step 632 of figure 6. Hence, the user can narrow the search to products. In a similar manner the process described in the screen shot can apply to documents.
[0039] Screen shot 540 shows a search results page in which the text item "rockefeller" was searched, as shown in step 602 of figure 6. Since the color filter 542 has been selected, the system lets the user select, as described in step 642 of figure 6, the dominant color 544 to look for. The system filters in items that have this dominant color, as described in step 644 of figure 6. Hence, the user can narrow the search to objects with the dominant color 544.
[0040] Fig. 6 is a flowchart of acts performed after a query has been submitted, in accordance with an exemplary embodiment of the invention. The user queries the system with an object query, such as a text query or such as the process defined in figure 2. The query results are displayed 604. The user can then choose 606 to filter the results according to several categories such as: Faces 514; Landscapes 522; Products 532; and Color 542. Alternatively, the user can choose the filters prior to performing the query.
[0041] The processes described below assume that a plurality of objects has at least one image (such as a thumbnail, the image file itself, or a video frame) that represents one or more of the object query results, and that the filters are applied to one or more of those images.
[0042] In case a face filter has been chosen 610, human skin filtering is performed 612 over the image, which filters in pixels suspected to be of human skin.
[0043] A texture and pixel distribution analysis is further performed 614 (before, after or in parallel to 612) to filter in images that most probably include a human face. For example, the relative area of the human skin divided by the total image area should be above a certain predefined percentage. Any object that is suspected to include a human face image is filtered in 616 to be displayed in step 650.
[0044] In case a landscapes filter has been chosen 620, landscape filtering is performed 622 over the images. For example, in an RGB representation of each pixel in each image, the Blue (=B), Green (=G) and Red (=R) intensities are taken or calculated. In case B is above threshold b1, and B/G is above threshold bg1, and B/R is above threshold br1; or in case B is above threshold b2, and B/G is above threshold bg2, and B/R is above threshold br2; then the pixel is considered a landscape pixel.
[0045] In the case that in a predefined area of the image the ratio of landscape pixels divided by the total number of pixels in that area exceeds a certain threshold, the image is considered a landscape image.
[0046] Alternatively, a similar process calculates the ratio between black pixels and the total number of pixels in a predefined area to filter in night time landscape images. In that case a black pixel is defined, for example, as a pixel in which R < tbr1, G < tbg1 and B < tbb1. Any object that is suspected to be a landscape image is filtered in 622 to be displayed in step 650.
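A minimal sketch of the landscape test of step 622, assuming only the first threshold set; the numeric thresholds and the choice of the examined area are placeholders, since the patent does not fix their values.

```python
# Illustrative sketch of the landscape filter (step 622). The thresholds
# b1, bg1, br1 (and the alternative set b2, bg2, br2, not shown) and the
# area ratio are placeholders.
import numpy as np

def is_landscape(rgb, b1=120, bg1=1.2, br1=1.2, area_ratio=0.3):
    """rgb: H x W x 3 array in R, G, B order, e.g. a predefined area such as
    the top strip of the image."""
    r = rgb[..., 0].astype(float) + 1e-6
    g = rgb[..., 1].astype(float) + 1e-6
    b = rgb[..., 2].astype(float)
    landscape_px = (b > b1) & (b / g > bg1) & (b / r > br1)
    return landscape_px.mean() > area_ratio
```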
[0047] In case a product/document filter has been chosen 630, product/document filtering is performed 632 over the images. For example, in an RGB representation of each pixel in each image, the Blue (=B), Green (=G) and Red (=R) intensities are taken or calculated. In case B is above threshold bw1, and G is above threshold gw, and R is above threshold rw1, the pixel is considered "white".
[0048] In the case that in a predefined area of the image the ratio of white pixels divided by the total number of pixels in that area exceeds a certain threshold, the image is considered a product or document image. Any object that is suspected to be a product/document image is filtered in 632 to be displayed in step 650.
[0049] In case a color filter has been chosen 640, the system allows the user to choose 642 between a set of predefined dominant colors, as shown in item 544 of figure 5.
[0050] Dominant color filtering 644 is then performed. For example, in an RGB representation of each pixel in each image, the Blue (=B), Green (=G) and Red (=R) intensities are taken or calculated. In case the "orange" color has been selected, then if R/G is above threshold o1, and G/B is above threshold o2, and R is above threshold o3, the pixel is considered "orange".
[0051] In the case that in a predefined area of the image the ratio of "orange" pixels divided by the total number of pixels in that area exceeds a certain threshold, the image is considered an image with a dominant orange color. Any object that is suspected to be an image with a dominant orange color is filtered in 644 to be displayed in step 650.
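The same pattern, sketched for the dominant-color case of steps 642-644; the thresholds o1, o2, o3 and the area ratio below are placeholders.

```python
# Illustrative sketch of the dominant-color filter (step 644) for "orange".
# Thresholds o1, o2, o3 and the area ratio are placeholders.
import numpy as np

def has_dominant_orange(rgb, o1=1.5, o2=1.5, o3=120, area_ratio=0.25):
    """rgb: H x W x 3 array in R, G, B order (a predefined area of the image)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float) + 1e-6
    b = rgb[..., 2].astype(float) + 1e-6
    orange_px = (r / g > o1) & (g / b > o2) & (r > o3)
    return orange_px.mean() > area_ratio
```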
[0052] Fig. 7 shows screen shots of the menu items and search results in accordance with an exemplary embodiment of the invention. Search screen 700 is comprised of: thumbnails of search results 710; thumbnails of search history 711, such as reduced thumbnails of the images selected in previous searches; breadcrumbs, a form of text navigation 712 showing a hierarchical structure of the search history, where the current location within the search is indicated by a list of the searches before the current search in a hierarchy, leading up to the home page; and the following menu items: means 722 to select a category such as Faces, Products, Landscapes or "MVP" (most valuable pictures, further explained in Fig. 10); means 724 to further search for a particular shape as described in Figs. 8-9; means 726 to choose an image format such as portrait, landscape or panoramic, where choosing a format will refine the search to images with a certain range of height/width ratio; means 728 to conduct the search in a certain image database such as Yahoo images, Flickr images or Picasa; and means 730 to limit the search to a certain license such as a creative commons license.
[0053] Fig. 8 is a flowchart of acts performed in classifying an image into shapes, in accordance with an exemplary embodiment of the invention. In step 802 an image is loaded. Usually the process described in Fig. 8 is performed the first time that image is downloaded, but it could be done in other contexts as well. Subsequent to that, a color histogram is computed 804, and then a check is performed 806 for colorfulness; for example, if a certain color range in the color space exceeds a predefined percentage of the image pixels and the rest of the color ranges are below another predefined threshold, then the image is considered colorful. If the image is colorful, the flow is passed to step 808, in which color segmentation is done, searching the borders of each color cluster, and all the significant contours in the image are collected. If the image is not colorful, the flow is passed to step 810, in which an edge detection such as "Sobel" is performed, and all connected components above a certain size are collected into the contour collection. Then, in step 812, for each of the contours collected by either step 808 or 810 the process defined in Fig. 9 is performed.
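Purely as an illustration of steps 804-812, the sketch below computes a hue histogram, applies the colorfulness rule described above, and then collects contours either from the dominant colour cluster or from Sobel edges. The use of the HSV hue channel, the bin count and all thresholds are assumptions rather than values given in the patent.

```python
# Illustrative sketch of steps 804-812: colour histogram, colorfulness check,
# and contour collection via colour segmentation or Sobel edge detection.
import cv2
import numpy as np

def collect_contours(bgr, dominant_frac=0.4, rest_frac=0.1, min_area=100):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[..., 0]
    hist = cv2.calcHist([hsv], [0], None, [16], [0, 180]).ravel()    # step 804
    hist /= hist.sum()
    # step 806: one colour range dominant and the rest small -> "colorful"
    colorful = hist.max() > dominant_frac and np.sort(hist)[-2] < rest_frac
    if colorful:
        # step 808: contours around the dominant colour cluster
        k = int(hist.argmax())
        lo = np.array([k * 180 // 16], np.uint8)
        hi = np.array([(k + 1) * 180 // 16], np.uint8)
        mask = cv2.inRange(hue, lo, hi)
    else:
        # step 810: Sobel edge magnitude, thresholded to a binary edge map
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        mask = (cv2.magnitude(gx, gy) > 80).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # keep only contours above a certain size (input to step 812)
    return [c for c in contours if cv2.contourArea(c) > min_area]
```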
[0054] Fig. 9 is a flowchart of acts performed on each contour collected in step 808 or 810, in accordance with an exemplary embodiment of the invention. For the contour: a center of the contour is found 902, such as its center of mass; the contour is transformed 904 into polar coordinates (r, θ); and one or more of the contour coordinates are smoothed 906 using methods such as a "moving average", applying various filters, or other smoothing methods. Extremum points (points where a function reaches a maximum or a minimum) of the contour in polar coordinates are calculated 908. Further properties of the contour are calculated 910, such as its area, the ratio between its height and width, and its symmetry. All the calculated parameters are used to classify 912 the contour into shapes; for example, a contour with no extremum points, a height to width ratio of 1 and total symmetry is a circle shape. Optionally, shape information such as number, size, location, rotation, texture and color is stored 914. Optionally, the stored information is later used for indexing and retrieval 916 of the visual content items.
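A sketch of that per-contour analysis is given below; the smoothing window, tolerances and the simple circle rule are illustrative placeholders.

```python
# Illustrative sketch of Fig. 9: centre of mass, polar transform,
# moving-average smoothing, extremum count and a simple circle test.
import numpy as np

def classify_contour(points, tol=0.1, window=5):
    """points: N x 2 array of (x, y) contour coordinates."""
    cx, cy = points.mean(axis=0)                      # centre of the contour (902)
    dx, dy = points[:, 0] - cx, points[:, 1] - cy
    theta = np.arctan2(dy, dx)                        # polar coordinates (904)
    r = np.hypot(dx, dy)[np.argsort(theta)]
    r = np.convolve(r, np.ones(window) / window, mode="same")    # smoothing (906)
    diff = np.diff(r)
    # extremum points of r(theta) (908): sign changes of the first difference
    extrema = int(np.sum(np.sign(diff[:-1]) != np.sign(diff[1:])))
    width = points[:, 0].max() - points[:, 0].min()   # further properties (910)
    height = points[:, 1].max() - points[:, 1].min()
    aspect = height / max(width, 1e-6)
    # classification (912): nearly constant radius and aspect ratio ~1 -> circle
    if r.std() / max(r.mean(), 1e-6) < tol and abs(aspect - 1.0) < tol:
        return "circle", extrema
    return "other", extrema
```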
[0055] Fig. 10 shows screen shots with MVP results in accordance with an exemplary embodiment of the invention. Screen shot 1000 is comprised of: a collection of image thumbnails 1006; a collection 1002 of "MVP" images, the dominant images in the current search, such as a row of images; and an indicator 1004 of the number of times each of the MVP objects appears in the set.
[0056] When the MVP control 722 is pressed, or after the image set is downloaded, or at any other stage, the images in the image set are compared to each other and similar images (for example, images with close to identical color histograms and/or with above a certain two dimensional correlation coefficient to each other) are collected into clusters. Clusters that contain more than a certain number of images are then presented in descending order of the number of images.
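A minimal sketch of such clustering, assuming colour-histogram correlation as the similarity measure (the two dimensional correlation alternative is not shown); the histogram size, correlation threshold and minimum cluster size are placeholders.

```python
# Illustrative sketch of MVP grouping: cluster images with nearly identical
# colour histograms, keep clusters above a minimum size, largest first.
import cv2

def colour_histogram(bgr):
    h = cv2.calcHist([bgr], [0, 1, 2], None, [8, 8, 8],
                     [0, 256, 0, 256, 0, 256])
    return cv2.normalize(h, h).flatten()

def mvp_clusters(images, corr_threshold=0.95, min_size=3):
    hists = [colour_histogram(im) for im in images]
    clusters = []                                  # each cluster: list of image indices
    for i, h in enumerate(hists):
        for cluster in clusters:
            if cv2.compareHist(hists[cluster[0]], h, cv2.HISTCMP_CORREL) > corr_threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    big = [c for c in clusters if len(c) >= min_size]
    return sorted(big, key=len, reverse=True)      # descending order of cluster size
```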
[0057] In certain cases in which it is clear that a significant part of the image set to be presented is comprised of images of a certain category, such as faces, products, landscapes, images with a dominant red color, cross shapes, or black and white images, this subset will be presented first in its own row, such as in 1002.
[0058] Figs. 11 and 12 are high level flowcharts illustrating a computer implemented method of running a query item on a plurality of visual content items, according to some embodiments of the invention. A query item may comprise text, images or generally multimedia files, and may comprise various shapes. The computer implemented method comprises the following stages: analyzing the plurality of visual content items according to predefined analysis rules relating to visual characteristics of the visual content items (stage 120); receiving the query from a user and identifying at least one query item (stage 125), e.g., using a third party's source code interface that supports requests for services; searching the visual content items for a plurality of suggested visual content items relating to the query items by predefined comparison rules (stage 130), where suggested visual content items may comprise thumbnails of selected visual content items; allowing the user to select at least one of the suggested visual content items (stage 135); and reordering the visual content items according to their similarity to the selected visual content item and to the analysis and visual characteristics of the visual content items (stage 140).
[0059] According to some embodiments of the invention, the computer implemented method may further comprise applying at least one filter to the visual content items (stage 145). The filter may be a landscape filter, a face filter, a shape filter, a product/document filter, or a color filter. The filter application (stage 145) may be carried out by using a keypoint descriptors comparison or by using a dedicated graphics rendering device.
[0060] According to some embodiments of the invention, analyzing the visual content items (stage 120) may comprise generating a color histogram, color segmentation for colorful items, and edge detection and contour editing for non-colorful items (stage 150), wherein items are identified as colorful or non-colorful according to the color histogram and at least one predefined threshold. Analyzing the visual content items (stage 120) may additionally or alternatively comprise applying a two dimensional correlation analysis (stage 155). Analyzing the visual content items (stage 120) may comprise analyzing the content items according to shapes occurring in the query items.
[0061] According to some embodiments of the invention, the computer implemented method may further comprise removing content items according to predefined removal criteria (stage 160), such as an absence of the at least one shape from the query items.
[0062] According to some embodiments of the invention, the computer implemented method may further comprise transforming the shapes to polar coordinates (stage 165), finding extremum points in contours of the shapes, and using the properties of the extremum points to classify the contour into shape categories (stage 170).
[0063] According to some embodiments of the invention, the computer implemented method may further comprise counting and presenting the number of the reordered visual content items (stage 175).
[0064] According to some embodiments of the invention, the computer implemented method may further comprise applying at least one operator on the visual content items to receive modified visual content items (stage 180). Reordering the visual content items (stage 140) may be carried out in further relation to the modified visual content items. Reordering the visual content items (stage 140) may be carried out by classifying the visual content items into predefined categories relating to the query terms and the analysis of the visual content items.
[0065] Fig. 13 is a scheme describing the system and process in accordance with an exemplary embodiment of the invention. System 1300 performs the process described hereinafter: a person 1302 captures, using a capturing means 1301, a visual object of a tangible object 1304. The visual object is sent over a network 1306, such as the internet, to a processing system 1308. The processing system comprises multiple processing units configured to allow larger scale processing. A preferably multi-core processor system 1308 runs the grouping algorithm described in Figure 3. Partial or full results are sent over a network such as 1306 to a user terminal 1310. User terminal 1310 displays the results to end user 1312.
[0066] Fig. 14 is a scheme describing the system in accordance with an exemplary embodiment of the invention. System 1400 is a display showing: a first subpart 1402 showing several content items, in this case product photos with their prices and titles; a second subpart showing 10 clusters simultaneously, where each of these clusters was calculated using cluster analysis of the content items - a major part of the objects were compared to each other and similar objects were collected into the same cluster, and the top 10 clusters are presented by their number of members in descending order, with their respective number of elements presented next to each of them; a third subpart 1406 showing the number 1407 of objects containing a significant area of each of the predefined 19 colors; and a fourth subpart 1408 showing the number 1409 of objects containing a shape area for each of the predefined set of shapes.
[0067] Fig. 14 is a scheme describing the system in accordance with an exemplary embodiment of the invention. System 1400 is a display showing: a first subpart 1402 showing several content items, in this case product photos with their prices and titles; and a second subpart showing 10 clusters simultaneously, where each of these clusters was calculated using cluster analysis of the content items - a major part of the objects were compared to each other and similar objects were collected into the same cluster. The top 10 clusters 1404 are presented by their number of members in descending order, with their respective number of elements presented next to each of them. Selecting one of them, such as clicking on 1401, will result in presenting only the content items that belong to its group. Each of the groups can have a label 1403, as described in stage 225. A third subpart 1406 shows the number 1407 of objects containing a significant area of each of the predefined 19 colors. A fourth subpart 1408 shows the number 1409 of objects containing a shape area for each of the predefined set of shapes. The method further comprises selecting a representative visual content item for each group (stage 225); such selection uses the visual items' parameters, such as each object's visual match to the other items in the group, its resolution, symmetry and background uniformity, and optionally the group is labeled using the text fields of its items; and presenting the representative visual content item of each group that has a minimal number of members above a predefined threshold (stage 230). The computer implemented method may further comprise presenting the plurality of visual content items alongside the representative visual content items (stage 235). The computer implemented method may further comprise storing grouping information for later caching, for a certain period of time (fixed or calculated by the relative change in the item set), of said results (stage 237). The computer implemented method may further allow for manual change of the groups, such as deletion, insertion, subdivision or merger of groups (stage 239).
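As an illustration of stage 225 only, the sketch below picks as a group's representative the member with the highest average pair-wise similarity to the other members; the additional criteria mentioned above (resolution, symmetry, background uniformity) are omitted, and similarity() stands for any pair-wise score such as those sketched earlier.

```python
# Illustrative sketch of stage 225: choose as a group's representative the
# item with the best average visual match to the other members.
def select_representative(group, similarity):
    def mean_match(item):
        others = [other for other in group if other is not item]
        if not others:
            return 0.0
        return sum(similarity(item, other) for other in others) / len(others)
    return max(group, key=mean_match)
```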
[0068] According to some embodiments of the invention, the range may be a fixed number. According to some embodiments of the invention, grouping (stage 220) may be carried out by using predefined color groups, or predefined shape groups, or any other categorization. Grouping (stage 220) may be carried out using at least one keypoint of the visual content items. At least one of the groups may comprise human faces, product images, landscape images, or any other item category. The visual content items may be product offerings, for example such as are presented on an online market place.
[0069] According to some embodiments of the invention, the representative visual content items of the groups may be presented in descending order of their range for the number of their members. The representative visual content items may be reduced images, such as thumbnails, and may be presented on a predefined sub part of a user's display, or in a separate window.
[0070] Fig. 16 is a high level block diagram illustrating a data processing system 280 for analyzing and presenting a plurality of visual content items 262, according to some embodiments of the invention. Data processing system 280 comprises a mediator server 250 comprising a graphical user interface 260. Mediator server 250 is connected via a communication link 271 with users 270 and via a communication link 241 with a plurality of sources holding the visual content items 240. Mediator server 250 is arranged to group visual content items 262 according to predefined similarity rules relating to visual characteristics of visual content items 262 such that each group has a range for the number of its members, and to select a representative visual content item 266 for each group. Graphical user interface 260 is arranged to present the representative visual content item 266 of each group that has a minimal number of members above a predefined threshold. Graphical user interface 260 may be arranged to present representative visual content items 266 in a predefined subarea 264 of the display, alongside visual content items 262 presented in a different subarea 261. A further subarea 267 may be allocated for user selectable categories 269.
[0071] According to some embodiments of the invention, the range may be a fixed number. According to some embodiments of the invention, grouping may be carried out by using predefined color groups, or predefined shape groups, or any other categorization, whether relating to user selectable categories 269 or unrelated thereto. Grouping may be carried out in advance, responsive to user selections and queries, or dynamically on the fly on the inventory of visual content items 262. Grouping may be carried out by mediator server 250 using at least one keypoint of the visual content items. At least one of the groups may comprise human faces, product images, landscape images, or any other item category. Visual content items 262 may be product offerings, for example such as are presented on an online market place. According to some embodiments of the invention, representative visual content items 266 of each group may be presented in descending order of their range for the number of their members. Representative visual content items 266 may be reduced images, such as thumbnails.
[0072] According to some embodiments of the invention, a data processing system for running a query item on a plurality of visual content items is presented. The data processing system comprises a mediator server (hosting API 106) connected via a communication link (e.g., internet 104) with a user and with a plurality of sources holding the visual content items (through web server 100 with GPU 102), and arranged to analyze the plurality of visual content items according to predefined analysis rules relating to visual characteristics of the visual content items; to receive the query from the user and to identify at least one query item therein; to search the visual content items for a plurality of suggested visual content items relating to the query items by predefined comparison rules; to allow the user to select at least one of the suggested visual content items; and to reorder the visual content items according to their similarity to the selected visual content item and to the analysis and visual characteristics of the visual content items.
[0073] According to some embodiments of the invention, the systems and methods group similar images into the same group, and then present the major groups in descending order, larger groups first. According to some embodiments of the invention, the systems and methods download the system feeds on a daily basis, decide on a list of main views, analyze the feeds to calculate the product clusters and prepare output files for the vendors. According to some embodiments of the invention, the systems and methods have the following advantages. No overload - instead of reading the whole book, a "table of contents" window is presented on the left side that shows the major product groups; apart from its functionality, it reduces the visual load of the amount of text in the current layout. Non-linear navigation - clicking on a product group will take the shopper to that group, with no need to page down. Visual interface - no need for prior familiarity with intricate product categories, as everything has a picture; thus, one click will select the desired product group and what you see is what you get. Pareto - the product groups are presented in descending order of importance, according to their relative "market share", which is the number of products in each group. Additional advantages are: user experience - an improved user experience; faster navigation to the desired product, which increases conversion rates and decreases the load on the system; and positioning - positioning the system as using best of breed shopping technology.
[0074] Regarding color, according to some embodiments of the invention, the systems and methods use an economical palette of 19 natural colors rather than an artificial color palette, which may better cater to shoppers than an RGB palette. Not using natural colors creates another problem: though one color is chosen, other colors appear. Separating a product from its background is also addressed, since in many product photos the product appears with a background. Context - the systems and methods are context sensitive; only the colors that appear in the offering are shown, and a number shows the number of deals which contain the relevant color. This way the shopper can see the color distribution of the relevant offering and focus on the existing colors. The systems and methods may work online or offline. Offline, they may analyze content items from providers in relation to their form, color and content and convert the results to a standardized file. Actual purchases may then be related or included in the files. Online, users may search the offers using queries that may be likewise analyzed.
[0075] In the above description, an embodiment is an example or implementation of the inventions. The various appearances of "one embodiment," "an embodiment" or "some embodiments" do not necessarily all refer to the same embodiments.
[0076] Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment.
[0077] Reference in the specification to "some embodiments", "an embodiment", "one embodiment" or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions.
[0078] It is to be understood that the phraseology and terminology employed herein are not to be construed as limiting and are for descriptive purposes only.
[0079] The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures and examples.
[0080] It is to be understood that the details set forth herein do not construe a limitation to an application of the invention.
[0081] Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above.
[0082] It is to be understood that the terms "including", "comprising", "consisting" and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers or groups thereof and that the terms are to be construed as specifying components, features, steps or integers.
[0083] If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional element.
[0084] It is to be understood that where the claims or specification refer to "a" or "an" element, such reference is not to be construed as meaning that there is only one of that element.
[0085] It is to be understood that where the specification states that a component, feature, structure, or characteristic "may", "might", "can" or "could" be included, that particular component, feature, structure, or characteristic is not required to be included.
[0086] Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described.
[0087] Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks.
[0088] The term "method" may refer to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques and procedures by, practitioners of the art to which the invention belongs.
[0089] The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.
[0090] Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined.
[0091] The present invention may be implemented in the testing or practice with methods and materials equivalent or similar to those described herein.
[0092] Any publications, including patents, patent applications and articles, referenced or mentioned in this specification are herein incorporated in their entirety into the specification, to the same extent as if each individual publication was specifically and individually indicated to be incorporated herein. In addition, citation or identification of any reference in the description of some embodiments of the invention shall not be construed as an admission that such reference is available as prior art to the present invention.
[0093] While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.

Claims (20)

CLAIMS
What is claimed is:
  1. A computer implemented method of presenting a plurality of visual content items, comprising: grouping the visual content items according to predefined similarity rules relating to visual characteristics of the visual content items such that each group has a range for the number of its members; selecting a representative visual content item for each group; and presenting the representative visual content item of each group that has a minimal number of members above a predefined threshold.
  2. The computer implemented method of claim 1, further comprising presenting the plurality of visual content items alongside the representative visual content items.
  3. The computer implemented method of claims 1 or 2, wherein the range is a fixed number.
  4. The computer implemented method of claims 1 to 3, wherein the grouping is carried out by using predefined color groups.
  5. The computer implemented method of claims 1 to 4, wherein the grouping is carried out using predefined shape groups.
  6. The computer implemented method of claims 1 to 5, wherein at least one of the groups comprises human faces.
  7. The computer implemented method of claims 1 to 6, wherein at least one of the groups comprises product images.
  8. The computer implemented method of claims 1 to 7, wherein at least one of the groups comprises landscape images.
  9. The computer implemented method of claims 1 to 8, wherein the representative visual content items of each group are presented in descending order of their range for the number of its members.
  10. The computer implemented method of claims 1 to 9, wherein the representative visual content items of each group are reduced images.
  11. The computer implemented method of claims 1 to 10, wherein the representative visual content items of each group are thumbnails.
  12. The computer implemented method of claims 1 to 11, wherein the representative visual content items are presented on a predefined sub part of a user's display.
  13. The computer implemented method of claims 1 to 12, wherein the representative visual content items are presented on a separate window.
  14. The computer implemented method of claims 1 to 13, wherein the grouping is carried out using at least one keypoint of the visual content items.
  15. The computer implemented method of claims 1 to 14, wherein the visual content items are product offerings.
  16. The computer implemented method of claims 1 to 15, wherein the visual content items are product offerings presented on an online market place.
  17. A computer implemented method as hereinbefore described with reference to the accompanying drawings.
  18. A data processing system for analyzing and presenting a plurality of visual content items, comprising: a mediator server comprising a graphical user interface, the mediator server connected via a communication link with a user and with a plurality of sources holding the visual content items, and arranged to group the visual content items according to predefined similarity rules relating to visual characteristics of the visual content items such that each group has a range for the number of its members; and to select a representative visual content item for each group, wherein the graphical user interface is arranged to present the representative visual content items of each group that has a minimal number of members above a predefined threshold.
  19. The data processing system of claim 18, wherein the graphical user interface is arranged to present the representative visual content items alongside the visual content items.
  20. A data processing system as hereinbefore described with reference to the accompanying drawings.
GB0911855A 2009-07-08 2009-07-08 Object search and navigation Withdrawn GB2461641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0911855A GB2461641A (en) 2009-07-08 2009-07-08 Object search and navigation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0911855A GB2461641A (en) 2009-07-08 2009-07-08 Object search and navigation

Publications (2)

Publication Number Publication Date
GB0911855D0 GB0911855D0 (en) 2009-08-19
GB2461641A true GB2461641A (en) 2010-01-13

Family

ID=41022338

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0911855A Withdrawn GB2461641A (en) 2009-07-08 2009-07-08 Object search and navigation

Country Status (1)

Country Link
GB (1) GB2461641A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040215657A1 (en) * 2003-04-22 2004-10-28 Drucker Steven M. Relationship view
US20080063267A1 (en) * 2003-07-04 2008-03-13 Leszek Cieplinski Method and apparatus for representing a group of images
US20060224993A1 (en) * 2005-03-31 2006-10-05 Microsoft Corporation Digital image browser
US20060253491A1 (en) * 2005-05-09 2006-11-09 Gokturk Salih B System and method for enabling search and retrieval from image files based on recognized information
US20080077569A1 (en) * 2006-09-27 2008-03-27 Yahoo! Inc., A Delaware Corporation Integrated Search Service System and Method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750132A (en) * 2012-06-13 2012-10-24 深圳中微电科技有限公司 Thread control and call method for multithreading virtual assembly line processor, and processor
CN102750132B (en) * 2012-06-13 2015-02-11 深圳中微电科技有限公司 Thread control and call method for multithreading virtual assembly line processor, and processor

Also Published As

Publication number Publication date
GB0911855D0 (en) 2009-08-19

Similar Documents

Publication Publication Date Title
US9607327B2 (en) Object search and navigation method and system
JP5596792B2 (en) Content-based image search
US8433140B2 (en) Image metadata propagation
US8788529B2 (en) Information sharing between images
US9092458B1 (en) System and method for managing search results including graphics
US20170024384A1 (en) System and method for analyzing and searching imagery
US20100250539A1 (en) Shape based picture search
US20200265491A1 (en) Dynamic determination of data facets
Chatzichristofis et al. Img (rummager): An interactive content based image retrieval system
US20130326338A1 (en) Methods and systems for organizing content using tags and for laying out images
Adrakatti et al. Search by image: a novel approach to content based image retrieval system
US9613059B2 (en) System and method for using an image to provide search results
Khokher et al. Content-based image retrieval: state-of-the-art and challenges
EP1973046A1 (en) Indexing presentation slides
KR101901645B1 (en) Method, apparatus, system and computer program for image retrieval
GB2461641A (en) Object search and navigation
Deniziak et al. World wide web CBIR searching using query by approximate shapes
Kalaiarasi et al. Visual content based clustering of near duplicate web search images
Hezel et al. ImageX-explore and search local/private images
Brindha et al. Certain Investigations on Content Based Video Indexing and Retrieval Using Heuristic Approaches
Khobragade et al. Content Based Image Retrieval System Use for Similarity Analysis of Images
Balan et al. Design and Development of Image Retrieval in Documents Using Journal Logo Matching
Gupta Utilization of Hierarchical and flat clustering in Content Based Image Retrieval.
Endo et al. MIRACLES: Multimedia Information RetrievAl, CLassification, and Exploration System
Gao et al. Multimedia Information Technology Application in Image Retrieval

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)