US20170193532A1 - System and method for identifying a marketplace status


Info

Publication number
US20170193532A1
Authority
US
United States
Prior art keywords
system
shelf space
analysis
data
visual representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/462,865
Inventor
Alon Atsmon
Nimrod Shkedy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Atsmon Alon
Original Assignee
Alon Atsmon
Nimrod Shkedy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US30514710P
Priority to US30651010P
Priority to US32532810P
Priority to PCT/IB2011/050661 (WO2011101800A1)
Priority to US13/586,912 (US8908927B2)
Priority to US14/556,163 (US9600829B2)
Application filed by Alon Atsmon and Nimrod Shkedy
Priority to US15/462,865 (US20170193532A1)
Publication of US20170193532A1
Application status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce, e.g. shopping or e-commerce
    • G06Q30/02Marketing, e.g. market research and analysis, surveying, promotions, advertising, buyer profiling, customer management or rewards; Price estimation or determination
    • G06Q30/0201Market data gathering, market analysis or market modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62Methods or arrangements for recognition using electronic means
    • G06K9/6201Matching; Proximity measures
    • G06K9/6215Proximity measures, i.e. similarity or distance measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading, distribution or shipping; Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G06Q10/087Inventory or stock management, e.g. order filling, procurement, balancing against orders
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce, e.g. shopping or e-commerce
    • G06Q30/02Marketing, e.g. market research and analysis, surveying, promotions, advertising, buyer profiling, customer management or rewards; Price estimation or determination
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining

Abstract

A system for diagnosing an object that includes a preset of the object, a visual representation associated with one or more sub-objects of at least part of the object, and a module to analyze the applicable visual representation. The visual representation is captured by a device and provides a specified status of the object. Further, the analysis provides a measure of fit between the preset and the specified status of the object as captured, and is carried out based on keypoint descriptors used to separate sub-objects in the applicable visual representation. The device has a reproduction capturing device and data connectivity.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 14/556,163 filed on Nov. 30, 2014, which is a continuation of U.S. patent application Ser. No. 13/586,912 filed on Aug. 16, 2012, now U.S. Pat. No. 8,908,927, which is a continuation of PCT Patent Application No. PCT/IB2011/050661 having International Filing Date of Feb. 17, 2011, which claims the benefit of priority of U.S. Provisional Patent Application Nos. 61/325,328 filed on Apr. 18, 2010, 61/306,510 filed on Feb. 21, 2010 and 61/305,147 filed on Feb. 17, 2010. The contents of the above applications are all incorporated by reference as if fully set forth herein in their entirety.
  • FIELD AND BACKGROUND OF THE INVENTION
  • Technical Field
  • The present invention relates to the field of visual data analysis and, more particularly, to visual representation analysis that provides a diagnosis based on the difference between a specified status of an object and a desired status of the object, in real time.
  • Discussion of Related Art
  • There are many algorithms in use and known in the art that calculate a level of visual similarity between two images. None, however, analyzes that similarity data to provide information in real time.
  • The information provided may benefit businesses as well as customers in decision making and problem solving.
  • SUMMARY OF THE INVENTION
  • Embodiments of the present invention provide a system for diagnosing an object. According to an aspect of the invention, the system comprises: (i) a preset of an object; (ii) a visual representation associated with one or more sub-objects of at least part of the object; and (iii) a module to analyze an applicable visual representation. The analysis is based on keypoint descriptors that are used to separate sub-objects in the applicable visual representation. Moreover, the analysis provides a level of visual similarity between the preset and the specified status of the object as captured in real time.
  • According to some embodiments of the invention, the visual representation is captured by a device. The device may be a mobile device that is associated with a reproduction capturing device and a wireless connection.
  • According to some embodiments of the invention, the object may be at least one of: (i) an arrangement of products on a marketplace; (ii) a plant; and (iii) a vertebrate.
  • According to some embodiments of the invention, the analysis may provide prognosis and remedy to diseases of plants or vertebrates.
  • According to some embodiments of the invention, the object may be an arrangement of products in storage and the preset may be full inventory as determined for the storage.
  • According to yet another embodiment of the invention, the object may be an arrangement of products for display for sale which is compared with a preset provided by a supplier. An analysis of the comparison may provide buying patterns of the products.
  • These, additional, and/or other aspects and/or advantages of the present invention are set forth in the detailed description which follows; possibly inferable from the detailed description; and/or learnable by practice of the present invention.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • The present invention will be more readily understood from the detailed description of embodiments thereof made in conjunction with the accompanying drawings of which:
  • FIG. 1 shows visual representation capturing and analysis, according to some embodiments of the invention;
  • FIG. 2 is a flowchart illustrating a method of capturing and matching a visual representation, according to some embodiments of the invention; and
  • FIG. 3 illustrates an exemplary method of capturing and matching of a visual representation, according to some embodiments of the invention.
  • DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION
  • Prior to setting forth the detailed description, it may be helpful to set forth definitions of certain terms that will be used hereinafter.
  • As used herein, the term “visual representation” encompasses content that includes visual information, such as images (in any wavelength, including: (i) visible light; (ii) infrared; and (iii) ultraviolet), photos, videos, infrared images, magnified images, image sequences, three-dimensional images or videos, and TV broadcasts.
  • As used herein, the phrase “visual similarity” refers to the measure of resemblance between two visual representations, which may comprise: (i) the fit between their color distributions, such as the correlation between their HSV color histograms; (ii) the fit between their textures; (iii) the fit between their shapes; (iv) the correlation between their edge histograms; (v) face similarity; and (vi) methods that include local descriptors.
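As a minimal, non-authoritative sketch of one such measure, the snippet below scores the correlation between two images' HSV color histograms and applies a threshold to obtain the logical “match” value defined further below. It assumes the OpenCV Python bindings; the bin counts, file names, and threshold are illustrative.

```python
# A minimal sketch, assuming OpenCV (cv2); bin counts and the match
# threshold are illustrative, not values from the specification.
import cv2

def hsv_histogram_similarity(path_a, path_b, bins=(50, 60)):
    """Correlation in [-1, 1] between two images' hue/saturation histograms."""
    hists = []
    for path in (path_a, path_b):
        hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        hists.append(hist)
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

def is_match(score, threshold=0.8):
    """Logical match: true when the similarity exceeds a chosen threshold."""
    return score > threshold

print(is_match(hsv_histogram_similarity("shelf_photo_1.jpg", "shelf_photo_2.jpg")))
```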
  • As used herein, the term “visual analysis” refers to the analysis of the characteristics of visual representations such as: (i) visual similarity; (ii) coherence; (iii) hierarchical organization; (iv) concept load or density; (v) feature extraction; and (vi) noise removal.
  • As used herein, the term “applicable visual representation” refers to a visual representation that is sufficient for analysis.
  • As used herein, the term “text similarity” refers to the measure of pair-wise similarity of strings. Text similarity may score the overlaps found between a pair of strings based on text matching. Identical strings may have a score of 100%, while the pair “car” and “dogs” will have a score close to zero. The pair “Nike Air max blue” and “Nike Air max red” may have a score between zero and 100%.
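To illustrate such pair-wise scoring, here is a minimal sketch using Python's standard-library sequence matcher; the exact scores depend on the similarity measure chosen, so the values only mirror the examples above.

```python
# A minimal sketch using the standard library; these scores are one
# possible realization of the text similarity measure described above.
from difflib import SequenceMatcher

def text_similarity(a, b):
    """Score the overlap between two strings as a percentage."""
    return 100.0 * SequenceMatcher(None, a, b).ratio()

print(text_similarity("car", "car"))                             # 100.0
print(text_similarity("car", "dogs"))                            # 0.0 (close to zero)
print(text_similarity("Nike Air max blue", "Nike Air max red"))  # between zero and 100
```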
  • As used herein, the term “regular expression” refers to a string that provides a concise and flexible means for identifying strings of text of interest, such as: (i) particular characters; (ii) words; and (iii) patterns of characters.
  • As used herein, the term “text analysis” refers to the analysis of the structural characteristics of text, such as: (i) text similarity; (ii) coherence; (iii) hierarchical organization; and (iv) concept load or density. The text analysis may use regular expressions.
  • As used herein, the term “symbol analysis” refers to the analysis of symbolic data such as: (i) optical character recognition; (ii) handwriting recognition; (iii) barcode recognition; and (iv) QR code recognition.
  • As used herein, the term “capturing data analysis” refers to the analysis of capturing data such as: (i) X-Y-Z coordinates; (ii) 3 angles; (iii) manufacturer; (iv) model; (v) orientation (rotation) top-left; (vi) software; (vii) date and time; (viii) YCbCr Positioning centered; (ix) Compression; (x) x-Resolution; (xi) y-Resolution; (xii) Resolution Unit; (xiii) Exposure Time; (xiv) FNumber; (xv) Exposure Program; (xvi) Exif Version; (xvii) date and time (original); (xviii) date and time (digitized); (xix) components configuration Y Cb Cr; (xx) Compressed Bits per Pixel; (xxi) Exposure Bias; (xxii) MaxAperture Value; (xxiii) Metering Mode Pattern; (xxiv) Flash fired or not; (xxv) Focal Length; (xxvi) Maker Note; (xxvii) Flash Pix Version; (xxviii) Color Space; (xxix) Pixel X Dimension; (xxx) Pixel Y Dimension; (xxxi) File Source; (xxxii) Interoperability Index; (xxxiii) Interoperability Version; and (xxxiv) derivatives of the above, such as acceleration in the X-axis.
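Many of the fields above are carried in an image's Exif metadata. The following is a minimal sketch of reading a few of them, assuming the Pillow library; the file name is illustrative, and which tags are present depends on the capturing device.

```python
# A minimal sketch, assuming Pillow; the file name is illustrative.
from PIL import Image, ExifTags

def capturing_data(path):
    exif = Image.open(path).getexif()
    tags = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}
    # Exposure-related fields live in the Exif sub-IFD (pointer tag 0x8769)
    for k, v in exif.get_ifd(0x8769).items():
        tags[ExifTags.TAGS.get(k, k)] = v
    return tags

data = capturing_data("shelf.jpg")
for field in ("DateTime", "ExposureTime", "FNumber", "FocalLength", "Flash"):
    print(field, data.get(field))
```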
  • As used herein, the term “location based analysis” refers to the analysis of local data such as: (i) Global Positioning System (GPS) location; (ii) triangulation data, such as GSM network or Wi-Fi network triangulation data; (iii) Radio Frequency Identification data; and (iv) street address. For example, location data may identify a marketplace or even the specific part of the marketplace in which the visual representation was captured.
  • As used herein, the term “content analysis” refers to the combination of: (i) text analysis; (ii) visual analysis; (iii) symbol analysis; (iv) location based analysis; (v) capturing data analysis, and (vi) analysis of other data. The other data may be: (i) numerical fields (e.g. price range); (ii) date fields; (iii) logical fields (e.g. female/male); (iv) arrays and structures; and (v) analysis of historical data.
  • As used herein, the term “match” refers to a numerical value that describes the result of a content analysis measuring the match between two items, for example, the correlation between two or more visual representations. The term “match” may also refer to a logical value that is true in case the similarity is above a certain threshold.
  • As used herein, the term “marketplace” refers to a physical place where objects may be purchased. For example: (i) a supermarket; (ii) a convenience store; and (iii) a grocery store.
  • As used herein, the term “keypoint descriptor” refers to a vector describing the area of a specific point in an image, used to distinguish between different objects, such as the keypoint descriptors used in the Scale-Invariant Feature Transform (SIFT). For example, in the SIFT framework the feature descriptor is computed as a set of orientation histograms on neighborhoods. The orientation histograms are relative to the keypoint orientation, and the orientation data comes from the Gaussian image closest in scale to the keypoint's scale. As in keypoint detection, the contribution of each pixel is weighted by the gradient magnitude and by a Gaussian with σ equal to 1.5 times the scale of the keypoint. Histograms contain 8 bins each, and each descriptor contains a 4×4 array of histograms around the keypoint. This leads to a SIFT feature vector with 4×4×8=128 elements.
  • As used herein, the term “planogram” refers to a diagram of fixtures and products that illustrates how and where products may be arranged, commonly on a store shelf.
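As a minimal sketch of computing the keypoint descriptors defined above, the snippet below uses the SIFT implementation in the opencv-python package (available in the main cv2 module from OpenCV 4.4 onward); the file name is illustrative.

```python
# A minimal sketch, assuming opencv-python >= 4.4; file name illustrative.
import cv2

image = cv2.imread("cereal_box.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)
# One row per keypoint: a 4x4 array of 8-bin orientation histograms,
# flattened to 4 * 4 * 8 = 128 elements
print(descriptors.shape)  # (number_of_keypoints, 128)
```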
  • As used herein, the term “mobile device” refers to a computing device that uses a cellular network and commonly weighs less than 300 grams.
  • As used herein, the term “crowdsourcing” refers to a task contributed to by a large, diffused group of people. Commonly, the group is not employed by the entity receiving the contribution and is not scheduled to contribute at a specific time.
  • As used herein, the term “parallel algorithm” refers to an algorithm that can be executed a piece at a time on many different processing devices and then put back together at the end to obtain the correct result, for example, parallel SIFT.
  • As used herein, the term “Graphics Processing Unit (GPU)” refers to an apparatus adapted to reduce the time it takes to produce images on the computer screen by incorporating its own processor and memory and having more than 16 cores, such as the GeForce 8800. A GPU is a good means of executing parallel algorithms.
  • Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is applicable to other embodiments and may be practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
  • Reference is now made to FIG. 1, which illustrates a system and process in accordance with an exemplary embodiment of the invention. System 100 performs the process described hereinafter: capturing device 101, in a non-limiting example a mobile phone with a reproduction capturing device 102, or a standalone reproduction capturing device, captures a visual representation, namely a photo, of scene 120. Scene 120 may be a shelf space in a marketplace, in which 130 is the shelf body, 122 and 124 are products of type A, 128 may be a product of type B, and 126 is the space between them; alternatively, scene 120 may be a stand in a marketplace. The mobile phone may weigh less than 300 grams.
  • Alternatively, the scene may be 104, a plant comprising sub-objects such as 152. Further, 104 may be a scene of: (i) a plant such as a crop; (ii) a non-crop or an unwanted herb; (iii) an ill part of a leaf, namely an infected leaf of a plant; and (iv) a leaf infected with aphids or an unwanted herb.
  • Yet, in another non-limiting example, the scene may be 160, a body part with a mole 162 as a sub-object.
  • In a non-limiting example, the capturing is performed by a device attached to a part of a human body, such as an ear 170 having an earpiece 172 equipped with a camera 174 operating in the same manner as 102. The device may also be implanted into a human body, such as a camera implanted to replace a human retina. The capturing may be crowdsourced, for example, by a crowd of people having devices such as 174. Furthermore, the capturing may be performed in a passive manner while user 112 is shopping with no premeditation of capturing photos.
  • Yet, in another non-limiting example, the capturing is performed by a crowd of people incentivized to capture shelf spaces. The crowd may be incentivized by monetary compensation or simply by being given credit. The crowd may not be in a work relationship with the entity collecting the data. Further, majority voting and outlier removal may be applied to improve data quality. Lastly, the crowd may identify the presence of new products on the shelf.
  • The capturing may be performed in several ways: (i) taking a photograph; (ii) taking a video; and (iii) continuously capturing images while local or remote processing provides real-time feedback, in a non-limiting example: “verified” or “a problem was found”. The continuous capturing process may be performed while moving the reproduction capturing device, such as in the directions shown in element 103.
  • The visual representation may be captured by a static reproduction capturing device placed in the marketplace or by a reproduction capturing device held by a person. For example, a static camera in a supermarket may take continuous photos of a shelf space. An analysis as in steps 204 or 207 of FIG. 2 may present the following: (i) product 120 is in shortage on the shelf; (ii) red products are bought on Valentine's Day more than the average; (iii) striped shirts are bought more than single-color shirts; (iv) low-stature people such as kids tend to stare at a specific shelf; and (v) a search for the optimal planogram that yields more purchases than other planograms.
  • Person 112 may be an employee or a member of a crowd of people that were pledged an incentive for capturing the visual representation.
  • The visual representation may be processed locally using 101, or be sent (as shown in step 206 of FIG. 2) to a local or a remote server 108 over data connectivity 106 such as the Internet. The result of the processing may be a match to a disease of the plant, after which it may be presented back on device 101.
  • Alternatively, in a non-limiting example, server 108 or device 101 may generate a shelf space report 140 that is sent over the Internet and displayed on device 101. Report 140 may display at least one of: (i) the absolute and relative shelf space of each product or product category; and (ii) the share of empty space 126. Report 140 may also be presented as a planogram.
  • In a non-limiting example, a person may capture a visual representation of shelf space 126 with a mobile device 101. The visual representation may be sent to a remote server 108 that matches the products photographed against a product database. Then, a shelf space report 140 may be generated. Further, one of device 101 and server 108 optionally comprises a parallel computing device such as a GPU.
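To make the report contents concrete, here is a minimal sketch of computing the absolute and relative shelf space figures a report such as 140 might contain, with the remainder attributed to empty space. The (x, y, width, height) box format, the product names, and the assumption that recognized product boxes do not overlap are all illustrative.

```python
# A minimal sketch; box format, names, and non-overlap are assumptions.
def shelf_space_report(shelf_width, shelf_height, product_boxes):
    """Absolute and relative shelf space per product, plus empty space."""
    shelf_area = shelf_width * shelf_height
    report, used = {}, 0
    for product, boxes in product_boxes.items():
        area = sum(w * h for (x, y, w, h) in boxes)
        report[product] = {"absolute": area, "relative": area / shelf_area}
        used += area
    report["empty space"] = {"absolute": shelf_area - used,
                             "relative": (shelf_area - used) / shelf_area}
    return report

print(shelf_space_report(2000, 400, {
    "product A": [(0, 0, 300, 400), (300, 0, 300, 400)],  # e.g. items 122, 124
    "product B": [(900, 0, 300, 400)],                    # e.g. item 128
}))
```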
  • In yet another non-limiting example, a farmer 112 may capture a visual representation of an aphid on a leaf 152 with mobile device 101. The visual representation may be sent to a remote server 108 that utilizes an aphid database to match the visual representation to a specific aphid or an aphid category. Then, an aphid information web site may display it on a mobile device, and a specific insecticide is suggested for purchase. The insecticide may be purchased in bulk, in a non-limiting example at a discount, by the provider of system 100, and may be sold at retail price to the farmer 112. The purchase may be completed using device 101.
  • FIG. 2 is a flowchart of a method of capturing and matching a visual representation, according to some embodiments of the invention. The flowchart illustrates process 200 to capture and match visual representations. In a non-limiting example, a planogram and a product database, including photos of a plurality of products from one or more sides, are first loaded (201). Then, a visual representation of 104 may be captured (202). The captured visual representation, namely an image, may then be analyzed locally (204) to provide a match using content analysis, or to reduce the size of the data to be sent to the servers in step 206.
  • Alternatively, in a non-limiting example, the image itself or a processed part of it may be sent to a remote server (206). The server may perform server content analysis (207) for one of: (i) generating report 140; and (ii) calculating a match between the planogram loaded in step 201 and the actual shelf plan calculated from the object captured in step 202. In a non-limiting example, the analysis may use the visual representation as well as other data such as: (i) Global Positioning System (GPS) data; and (ii) the history of the sender. In case a specified criterion is met, such as a match to a predefined planogram being found (208), report 140 is generated (210). Then, report 140 may be displayed (212) on device 101. Further, ads and offers may be displayed (214).
  • In case the system recognizes from the capturing data that a flash was used, it may automatically correct the image to minimize the distortion caused by the flash.
  • In case no match is found, a check may be performed as to whether another capture should be performed (211). The mismatch may be displayed on device 101. The check may be performed with the user of device 101 using its reproduction capturing device, or against a timer that allows images to be captured for up to 10 seconds. In case the check result is positive, step 202 is performed again; if not, the process ends.
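A minimal sketch of this check-and-recapture loop follows; capture() and find_match() are hypothetical helpers standing in for steps 202 through 208, and the 10-second window mirrors the timer described above.

```python
# A minimal sketch; capture() and find_match() are hypothetical helpers.
import time

def capture_until_match(capture, find_match, window_seconds=10):
    deadline = time.monotonic() + window_seconds
    while time.monotonic() < deadline:
        image = capture()          # step 202: capture a visual representation
        match = find_match(image)  # steps 204-208: analyze and check criterion
        if match is not None:
            return match           # report 140 may now be generated (210)
        # a mismatch may be displayed on device 101 before recapturing
    return None                    # timer expired: the process ends
```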
  • In a non-limiting example, the analysis may advantageously provide the amount of products of each different color or texture having the same barcode.
  • In yet another non-limiting example, the analysis may advantageously verify a preset designed and paid for by a supplier, namely a planogram, compared with the specified status of products as positioned on a shelf in a store.
  • In a non-limiting example, various factors provide a quick response. For example, a database is stored locally on device 101, so device 101 may use a parallel matching algorithm on a multicore processor such as a GPU. These factors may reduce the time for user 112 to receive feedback, enabling a real-time reaction and giving user 112 the choice to make a quick decision on the spot. For example, the user may decide in real time whether the planogram he requested was properly executed by the store. The feedback may be provided within 1, 3, or 5 seconds of the capturing.
  • FIG. 3 illustrates an exemplary method of capturing and matching a visual representation, according to some embodiments of the invention.
  • The flowchart describes a sub-process and sub-system 300 to capture and match visual representations, comprising the following steps: (i) loading a preset (302), such as a planogram or a model of a healthy leaf; the preset may include precalculated keypoints, such as keypoints of a product, for example a cereal box 122; (ii) capturing a scene representation, such as taking a photo of 120 (304); (iii) comparing scene representation keypoints to preset keypoints (306), for example, calculating SIFT keypoints for the scene representation and trying to match them to the preset keypoints of the cereal box; and (iv) in case a sufficient percentage or absolute number of matches is found (308), marking the sub-object (310), for example, marking cereal box 122 as present in scene 120. If not, step 304 is performed again.
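The following is a minimal sketch of steps (iii) and (iv), assuming OpenCV: preset descriptors (for example, of cereal box 122) are matched against descriptors of the captured scene, and the sub-object is marked present when enough good matches are found. The ratio-test constant, file names, and minimum-match threshold are illustrative assumptions, not values from the specification.

```python
# A minimal sketch, assuming opencv-python; thresholds are illustrative.
import cv2

def sub_object_present(preset_desc, scene_desc, min_matches=10):
    """Mark a sub-object present when enough descriptor matches survive."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)  # L2 norm suits SIFT descriptors
    pairs = matcher.knnMatch(preset_desc, scene_desc, k=2)
    # Lowe's ratio test: keep matches clearly better than their runner-up
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_matches

sift = cv2.SIFT_create()
_, preset_desc = sift.detectAndCompute(cv2.imread("preset_122.jpg", 0), None)
_, scene_desc = sift.detectAndCompute(cv2.imread("scene_120.jpg", 0), None)
print("marked present" if sub_object_present(preset_desc, scene_desc)
      else "capture again (304)")
```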
  • A fit may be calculated (312) for scene 120 against a preset. For example, in case the preset comprises a required planogram defining the presence of products 122, 124, 128 and space 126, and their relative positions in space, a fit may be calculated between the scene and the planogram.
  • For example, the fit may validate specified rules, such as their relative distances and angles, and mark the planogram as “Pass” or “Fail” according to the percent of rules passed. The fit may take into account other data as defined in content analysis. For example, the location based data may select a desired planogram for a specific store from a variety of stores.
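As a minimal sketch of the fit calculation (312), the rules below check the presence and relative positions of detected sub-objects and mark the planogram “Pass” or “Fail” by the percentage of rules passed; the positions, rules, and pass threshold are illustrative assumptions.

```python
# A minimal sketch; positions, rules, and the pass threshold are assumptions.
def planogram_fit(positions, rules, pass_ratio=0.8):
    """positions maps sub-object ids to (x, y); each rule is a predicate."""
    passed = sum(1 for rule in rules if rule(positions))
    return "Pass" if passed / len(rules) >= pass_ratio else "Fail"

positions = {"122": (0, 0), "124": (300, 0), "128": (900, 0)}
rules = [
    lambda p: "122" in p and "124" in p,  # both type-A products present
    lambda p: p.get("124", (0, 0))[0] - p.get("122", (0, 0))[0] >= 250,
                                          # space 126 kept between them
    lambda p: p.get("128", (0, 0))[0] > p.get("124", (0, 0))[0],
                                          # product B placed to the right
]
print(planogram_fit(positions, rules))  # Pass
```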
  • In a non-limiting example, the analysis may advantageously provide time-based analysis of moles such as 162, since a mole may be a melanoma, i.e., skin cancer. However, it is hard to map all the moles on a human body, including their positions on the body. The preset may maintain the keypoints of a photo of a mole that was taken in a given year Y. In year Y+1, the various moles on the same body may be captured again. Sub-system 300 may match photos of the moles captured in year Y to photos captured in year Y+1. Then the moles' (i) area; (ii) borders; and (iii) color and diameter may be compared, using keypoint descriptors, to find suspected symptoms of skin cancer. In yet another non-limiting example, inflamed tonsils can be detected using a baseline and current photos.
  • In the above description, an embodiment is an example or implementation of the inventions. The various appearances of “one embodiment”, “an embodiment”, or “some embodiments”, do not necessarily all refer to the same embodiments.
  • Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment.
  • Reference in the specification to “some embodiments”, “an embodiment”, “one embodiment”, or “other embodiments”, means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions.
  • It is to be understood that the phraseology and terminology employed herein are not to be construed as limiting, and are for descriptive purposes only.
  • The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, Figures, and examples.
  • It is to be understood that the details set forth herein are not to be construed as limiting the application of the invention.
  • Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above.
  • It is to be understood that the terms “including”, “comprising”, “consisting”, and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers; or groups thereof, and that the terms are to be construed as specifying components, features, steps or integers.
  • If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element.
  • It is to be understood that where the claims or specification refer to “a” or “an” element, such reference is not to be construed as there being only one of that element.
  • It is to be understood that where the specification states that a component, feature, structure, or characteristic “may”, “might”, “can” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included.
  • Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described.
  • Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks.
  • The term “method” may refer to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques, and procedures by practitioners of the art to which the invention belongs.
  • The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.
  • Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined.
  • The present invention may be implemented in the testing or practice with methods and materials equivalent or similar to those described herein.
  • Any publications, including patents, patent applications and articles, referenced or mentioned in this specification are herein incorporated in their entirety into the specification, to the same extent as if each individual publication was specifically and individually indicated to be incorporated herein. In addition, citation or identification of any reference in the description of some embodiments of the invention shall not be construed as an admission that such reference is available as prior art to the present invention.
  • While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.

Claims (20)

What is claimed is:
1. A system for generating a marketplace status report, comprising:
a mobile device having an image sensor adapted to capture an image sequence of at least shelf space in a marketplace;
a processing unit adapted to:
perform an image processing analysis during which a visual representation of said at least shelf space is extracted from said image sequence,
automatically identify at least one storage management data related to said at least shelf space, and
analyze said at least one identified storage management data to identify buying patterns of at least one product presented on said at least shelf space;
a presentation unit adapted to present an indication of said identified buying patterns.
2. The system of claim 1, wherein said image processing analysis is performed by a GPU comprising a plurality of cores using an algorithm executed a piece at a time on a plurality of different cores in parallel and put back together at the end, providing the outcome in real time.
3. The system of claim 1, wherein said image processing analysis comprises identifying a visual similarity between said visual representation and at least one planogram.
4. The system of claim 3, wherein said at least one planogram includes precalculated keypoint descriptors, said visual similarity is identified by matching scene representation keypoint descriptors extracted from said visual representation to said precalculated keypoint descriptors of said at least one planogram, wherein said scene representation keypoint descriptors describe an area of a specific point in an image of said visual representation.
5. The system of claim 3, wherein said at least one planogram is selected according to a location based data analysis of localization data acquired by said mobile device.
6. The system of claim 3, wherein said at least one storage management data comprises new products in said at least shelf space which do not appear in said at least one planogram.
7. The system of claim 1, wherein said image sequence is captured while moving said mobile device.
8. The system of claim 1, wherein said at least one storage management data comprises an indication of a share of empty space in said at least shelf space.
9. The system of claim 1, wherein said at least one storage management data comprises supplier payment data of each of a plurality of products in said at least shelf space.
10. The system of claim 1, wherein said at least one storage management data comprises status of each of a plurality of products in said at least shelf space.
11. The system of claim 1, wherein said at least one storage management data comprises a full inventory of a plurality of products in said at least shelf space.
12. The system of claim 1, wherein said image processing analysis is based on a comparison between different products in said at least shelf space.
13. The system of claim 1, wherein said at least one storage management data comprises an indication of a presence or absence of products in positions in at least one planogram.
14. The system of claim 1, wherein said processing unit performs said image processing analysis in light of a location based analysis of localization data acquired by said mobile device.
15. The system of claim 14, wherein said localization data is selected from a group consisting of: global positioning system data, triangulation data, Global system for mobile communications (GSM) network data, Wi-Fi network data, and a radio frequency identification data.
16. The system of claim 1, wherein said processing unit performs an analysis of said visual representation in light of an analysis of historical data.
17. The system of claim 1, wherein said image sequence is part of video data captured using said image sensor.
18. The system of claim 1, wherein said image processing analysis is performed while further image capturing is performed.
19. A computerized method for generating a marketplace status report, comprising:
capturing an image sequence of at least shelf space in a marketplace;
performing an image processing analysis during which a visual representation of said at least shelf space is extracted from said image sequence,
automatically identifying at least one storage management data related to said at least shelf space, and
analyzing said at least one identified storage management data to identify buying patterns of at least one product presented on said at least shelf space; and
presenting an indication of said identified buying patterns.
20. The method according to claim 19, wherein said image processing analysis comprises identifying keypoint descriptors in images of said image sequence, said keypoint descriptors describing an area of a specific point in an image of said visual representation.
US15/462,865 2010-02-17 2017-03-19 System and method for identifying a marketplace status Abandoned US20170193532A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US30514710P 2010-02-17 2010-02-17
US30651010P 2010-02-21 2010-02-21
US32532810P 2010-04-18 2010-04-18
PCT/IB2011/050661 WO2011101800A1 (en) 2010-02-17 2011-02-17 Automatic method and system for visual analysis of object against preset
US13/586,912 US8908927B2 (en) 2010-02-17 2012-08-16 Automatic method and system for identifying healthiness of a plant
US14/556,163 US9600829B2 (en) 2010-02-17 2014-11-30 System and method for identifying a marketplace status
US15/462,865 US20170193532A1 (en) 2010-02-17 2017-03-19 System and method for identifying a marketplace status

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/462,865 US20170193532A1 (en) 2010-02-17 2017-03-19 System and method for identifying a marketplace status

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/556,163 Continuation US9600829B2 (en) 2010-02-17 2014-11-30 System and method for identifying a marketplace status

Publications (1)

Publication Number Publication Date
US20170193532A1 true US20170193532A1 (en) 2017-07-06

Family

ID=44482496

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/586,912 Active 2031-04-14 US8908927B2 (en) 2010-02-17 2012-08-16 Automatic method and system for identifying healthiness of a plant
US14/556,163 Active US9600829B2 (en) 2010-02-17 2014-11-30 System and method for identifying a marketplace status
US15/462,865 Abandoned US20170193532A1 (en) 2010-02-17 2017-03-19 System and method for identifying a marketplace status

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/586,912 Active 2031-04-14 US8908927B2 (en) 2010-02-17 2012-08-16 Automatic method and system for identifying healthiness of a plant
US14/556,163 Active US9600829B2 (en) 2010-02-17 2014-11-30 System and method for identifying a marketplace status

Country Status (3)

Country Link
US (3) US8908927B2 (en)
IL (1) IL221526A (en)
WO (1) WO2011101800A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011101800A1 (en) * 2010-02-17 2011-08-25 Alon Atsmon Automatic method and system for visual analysis of object against preset
US9089227B2 2012-05-01 2015-07-28 Hussmann Corporation Portable device and method for product lighting control, product display lighting method and system, method for controlling product lighting, and method for setting product display location lighting
US10121254B2 (en) 2013-08-29 2018-11-06 Disney Enterprises, Inc. Methods and systems of detecting object boundaries
US9547838B2 (en) 2013-11-06 2017-01-17 Oracle International Corporation Automated generation of a three-dimensional space representation and planogram verification
US10430855B2 (en) 2014-06-10 2019-10-01 Hussmann Corporation System, and methods for interaction with a retail environment
EP3307142A4 (en) 2015-06-10 2019-02-20 Tyto Care Ltd. Apparatus and method for inspecting skin lesions
US9864969B2 (en) * 2015-06-26 2018-01-09 Toshiba Tec Kabushiki Kaisha Image processing apparatus for generating map of differences between an image and a layout plan
CA3016615A1 (en) 2016-03-16 2017-09-21 Walmart Apollo, Llc System for verifying physical object absences from assigned regions using video analytics
CN106682704B (en) * 2017-01-20 2019-08-23 中国科学院合肥物质科学研究院 A kind of disease geo-radar image recognition methods of integrating context information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050021561A1 (en) * 2003-07-22 2005-01-27 International Business Machines Corporation System & method of updating planogram information using RFID tags and personal shopping device
US20080306787A1 (en) * 2005-04-13 2008-12-11 Craig Hamilton Method and System for Automatically Measuring Retail Store Display Compliance
US20090059270A1 (en) * 2007-08-31 2009-03-05 Agata Opalach Planogram Extraction Based On Image Processing
US20120308086A1 (en) * 2010-02-17 2012-12-06 Alon Atsmon Automatic method and system for visual analysis of object against preset
US20150049902A1 (en) * 2013-08-14 2015-02-19 Ricoh Co., Ltd. Recognition Procedure for Identifying Multiple Items in Images

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3468877B2 (en) * 1994-10-27 2003-11-17 矢崎総業株式会社 Plant automatic diagnosis method and apparatus
US6573063B2 (en) * 1995-10-04 2003-06-03 Cytoscan Sciences, Llc Methods and systems for assessing biological materials using optical and spectroscopic detection techniques
US20030036855A1 (en) * 1998-03-16 2003-02-20 Praelux Incorporated, A Corporation Of New Jersey Method and apparatus for screening chemical compounds
US7112806B2 (en) * 2001-09-27 2006-09-26 Robert Lussier Bio-imaging and information system for scanning, detecting, diagnosing and optimizing plant health
AU2002950805A0 (en) * 2002-08-15 2002-09-12 Momentum Technologies Group Improvements relating to video transmission systems
NL1021800C2 (en) * 2002-10-31 2004-05-06 Plant Res Int Bv Method and device for taking images of the quantum efficiency of the photosynthetic system with the aim of determining the quality of plant material, and method and apparatus for measuring, classifying and sorting of plant material.
JP3885058B2 (en) * 2004-02-17 2007-02-21 株式会社日立製作所 Plant growth analysis system and analysis method
US7415143B2 (en) * 2004-04-13 2008-08-19 Duke University Methods and systems for the detection of malignant melanoma
US8320996B2 (en) * 2004-11-29 2012-11-27 Hypermed Imaging, Inc. Medical hyperspectral imaging for evaluation of tissue and tumor
US7457441B2 (en) * 2005-02-25 2008-11-25 Aptina Imaging Corporation System and method for detecting thermal anomalies
US7796815B2 (en) * 2005-06-10 2010-09-14 The Cleveland Clinic Foundation Image analysis of biological objects
DE102007003448A1 (en) * 2007-01-19 2008-07-24 Perner, Petra, Dr.-Ing. Method and data processing system for the automatic detection, processing, interpretation and conclusion of objects present as digital data sets
US8938218B2 (en) * 2007-06-06 2015-01-20 Tata Consultancy Servics Ltd. Mobile based advisory system and a method thereof
WO2009005828A1 (en) * 2007-07-03 2009-01-08 Tenera Technology, Llc Imaging method for determining meat tenderness
NZ562316A (en) * 2007-10-09 2009-03-31 New Zealand Inst For Crop And Method and system of managing performance of a tuber crop
PT2291640T (en) * 2008-05-20 2019-02-26 Univ Health Network Device and method for fluorescence-based imaging and monitoring
CN201298866Y (en) * 2008-11-04 2009-08-26 北京中科嘉和科技发展有限公司 Plant diseases and pests remote diagnosis terminal based on PSTN
NL1036677C2 (en) * 2009-03-06 2010-09-07 Praktijkonderzoek Plant & Omgeving B V Method and apparatus for making images containing information on the quantum efficiency and time response of the photosynthesis system for objective of determining the quality of vegetable material and method and measuring material size classification.

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050021561A1 (en) * 2003-07-22 2005-01-27 International Business Machines Corporation System & method of updating planogram information using RFID tags and personal shopping device
US20080306787A1 (en) * 2005-04-13 2008-12-11 Craig Hamilton Method and System for Automatically Measuring Retail Store Display Compliance
US20090059270A1 (en) * 2007-08-31 2009-03-05 Agata Opalach Planogram Extraction Based On Image Processing
US20120308086A1 (en) * 2010-02-17 2012-12-06 Alon Atsmon Automatic method and system for visual analysis of object against preset
US20150049902A1 (en) * 2013-08-14 2015-02-19 Ricoh Co., Ltd. Recognition Procedure for Identifying Multiple Items in Images

Also Published As

Publication number Publication date
US20120308086A1 (en) 2012-12-06
WO2011101800A1 (en) 2011-08-25
WO2011101800A4 (en) 2011-10-27
US20150088605A1 (en) 2015-03-26
US8908927B2 (en) 2014-12-09
US9600829B2 (en) 2017-03-21
IL221526A (en) 2016-10-31

Similar Documents

Publication Publication Date Title
US8755837B2 (en) Methods and systems for content processing
US9477955B2 (en) Automatic learning in a merchandise checkout system with visual recognition
EP2751748B1 (en) Methods and arrangements for identifying objects
JP5578077B2 (en) Search support system, search support method, and search support program
US9600982B2 (en) Methods and arrangements for identifying objects
CN101933048B (en) Product modeling system and method
US20130215116A1 (en) System and Method for Collaborative Shopping, Business and Entertainment
JP5427859B2 (en) System for image capture and identification
US20110011936A1 (en) Digital point-of-sale analyzer
US20130063561A1 (en) Virtual advertising platform
JP2011525664A (en) Capture images for purchase
US20130066750A1 (en) System and method for collaborative shopping, business and entertainment
JP5829662B2 (en) Processing method, computer program, and processing apparatus
US20130141586A1 (en) System and method for associating an order with an object in a multiple lane environment
JP5205562B2 (en) Method, terminal, and computer-readable recording medium for acquiring information about product worn by person in image data
US9436704B2 (en) System for normalizing, codifying and categorizing color-based product and data based on a universal digital color language
US20160203361A1 (en) Method and apparatus for estimating body shape
US9785898B2 (en) System and method for identifying retail products and determining retail product arrangements
CN103493068B (en) Personalized advertisement selects system and method
US8548203B2 (en) Sequential event detection from video
JP2009048229A (en) Person action analysis device, method and program
US8478048B2 (en) Optimization of human activity determination from video
US20130286048A1 (en) Method and system for managing data in terminal-server environments
US10290031B2 (en) Method and system for automated retail checkout using context recognition
US20100023400A1 (en) Image Recognition Authentication and Advertising System

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION