WO2022204296A1 - Mobile client based image analysis - Google Patents


Info

Publication number
WO2022204296A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
tags
logic
images
review
Application number
PCT/US2022/021570
Other languages
French (fr)
Inventor
Bradford A. Folkens
Dominik K. Mazur
Mujtaba Asif
Original Assignee
Cloudsight, Inc.
Priority claimed from US17/399,024 (published as US20220207274A1)
Application filed by Cloudsight, Inc.
Publication of WO2022204296A1
Priority to US18/372,675 (published as US20240013237A1)


Classifications

    • G06F 16/58: Information retrieval of still image data; retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06Q 10/0635: Risk analysis of enterprise or organisation activities
    • G06Q 10/06395: Quality analysis or management (performance analysis of employees or of enterprise or organisation operations)
    • G06Q 10/087: Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G06Q 30/0201: Market modelling; market analysis; collecting market data
    • G06Q 30/0251: Targeted advertisements
    • G06Q 30/0623: Item investigation (electronic shopping)
    • G06Q 50/265: Personal security, identity or safety (government or public services)
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 20/20: Scenes or scene-specific elements in augmented reality scenes
    • G06V 20/56: Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle
    • G06V 20/625: License plates (text in images, e.g. of license plates, overlay texts or captions)

Definitions

  • The invention is in the field of image processing, and more particularly in the field of characterizing content of images.
  • Tagging refers to the identification of tags (words) that characterize the content of an image. For example, an image of a car may be tagged with the words “car,” “Ford Granada,” or “White 1976 Ford Granada with broken headlight.” These tags include varying amounts of information and, as such, may vary in usefulness.
  • Analysis of images can include identification of objects and/or actions within an image or sequence of images. Typically, such analysis is performed on a suitably powerful computer such as a server.
  • Applications executable on portable devices, such as smartphones or tablets, are known to communicate images and video to remote servers for analysis.
  • The remote servers process the visual content and return information derived from visual media, such as tags, captions, full image descriptors, and other types of metadata (hereinafter referred to as image tags), to the portable devices. See, for example, the systems described in US Patent Ser. Nos.
  • Embodiments of the invention include a two-pronged approach to tagging of images.
  • The first prong is to perform automated image recognition on an image.
  • The automated image recognition results in a review of the image.
  • The image review includes one or more tags identifying contents of the image and optionally also a measure of confidence representative of the reliability of the automated image recognition.
  • The second prong in the approach to tagging of images includes a manual tagging of the image.
  • Manual tagging includes a person viewing each image, considering the content of the image, and manually providing tags representative of the content of the image.
  • Automated image recognition has an advantage in that the cost, in time or money, of analyzing each image can be relatively low.
  • Manual tagging of images has an advantage of higher accuracy and reliability.
  • Embodiments of the invention combine both automated image recognition and manual image review.
  • Automated image recognition is performed first.
  • The resulting image review typically includes both one or more tags characterizing the image and a measure of confidence in the accuracy of these tags. If the confidence is above a predetermined threshold, then these tags are associated with the image and provided as an output of the tagging process. If the confidence is below the predetermined threshold, then a manual review of the image is performed.
  • The manual review results in additional and/or different tags that characterize the contents of the image.
  • Alternatively, the automated image recognition and the manual review of the image are performed in parallel. The manual review is optionally cancelled or aborted if the automated image recognition results in one or more tags having a confidence above the predetermined threshold.
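  • As a rough illustration of the threshold logic described above, the following Python sketch routes an image to manual review only when the automated confidence falls below the predetermined threshold (the parallel variant would simply start both reviews and cancel the manual one on a high-confidence automated result). The function bodies, the Review structure, and the 0.85 cutoff are illustrative assumptions, not part of the disclosure.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Review:
        tags: List[str]
        confidence: float  # 0.0 .. 1.0

    CONFIDENCE_THRESHOLD = 0.85  # assumed value; the text only requires "a predetermined threshold"

    def automated_recognition(image_bytes: bytes) -> Review:
        """Stub standing in for the automated image recognition system."""
        return Review(tags=["white car"], confidence=0.62)

    def manual_review(image_bytes: bytes, hint_tags: List[str]) -> Review:
        """Stub standing in for posting the image to a human reviewer."""
        return Review(tags=["white 1976 Ford Granada"], confidence=1.0)

    def tag_image(image_bytes: bytes) -> List[str]:
        review = automated_recognition(image_bytes)
        if review.confidence >= CONFIDENCE_THRESHOLD:
            # Automated tags are confident enough; no manual review is needed.
            return review.tags
        # Below the threshold: fall back to a manual review, passing the
        # automated tags along as a hint for the human reviewer.
        return manual_review(image_bytes, hint_tags=review.tags).tags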
  • The image processing systems described herein are optionally configured to provide an image sequence as output.
  • Images received from a first and second remote client may be included in an image sequence provided to a third remote client.
  • The image sequence can include images received from at least 10, 100 or 1000 remote clients.
  • The image sequence can be presented in a variety of ways including, for example, as a video, a mosaic, a set of still images, and/or derivatives of images.
  • The selection of images is optionally dependent on the various information discussed elsewhere herein.
  • The inclusion of an image in an image sequence is optionally dependent on image tags such as those generated using the systems and methods disclosed herein.
  • An image sequence can be provided to a plurality of remote clients, including clients from which one or more of the images within the sequence were received.
  • Recognition of an image can be upgraded. Upgrading of the image recognition process includes a request for further or improved tags representative of the content of the image. For example, if automated image recognition results in the tags “white car,” an upgrade of this recognition may result in the tags “white Ford Granada.”
  • An upgraded review makes use of an expert human reviewer. For example, the above example may include the use of a human reviewer with expert knowledge of automobiles. Other examples of reviewer expertise are discussed elsewhere herein.
  • Some embodiments include a compact and highly accurate visual analysis application that is capable of performing visual analysis on a device such as a security camera, a smartphone, a tablet computer, or another type of handheld device.
  • The visual analysis includes identification of objects, actions, or a series of actions comprising a story within visual content, such as in an image or video. This identification results in metadata, such as image tags and optionally full image descriptors, which characterize the identity of objects or actions within the presented visual content.
  • The visual analysis application, including a trained neural network, requires less than 250MB of device storage.
  • The image data or video data stream, descriptive metadata, and/or resulting image tags may be provided to a remote server for a variety of purposes such as selection of an advertisement, quality control, inventory management, security monitoring and surveillance, safety assessment, and/or training.
  • Image tags generated on a device, or the corresponding user feedback and behavioral data, may be communicated (with or without the corresponding images) for the purposes of performing a search, quality control, inventory management, activating alerts, selecting advertisements, surveillance, media archive retrieval and cataloging, building user behavior or demographic profiles, relating similar cohorts and marketing groups, and/or the like.
  • Various embodiments of the invention include features directed toward improving the accuracy of image recognition while also minimizing cost.
  • These features include efficient use of human reviewers, real-time delivery of image tags, and/or seamless upgrades of image recognition.
  • The approaches to image recognition disclosed herein are optionally used to generate image tags suitable for performing internet searches and/or selecting advertisements.
  • Image tags are automatically used to perform a Google search and/or sell advertising based on Google’s AdWords.
  • Various embodiments of the invention include an image processing system comprising an I/O configured to communicate an image and image tags over a communication network; an automatic identification interface configured to communicate the image to an automatic identification system and to receive a computer generated review of the image from the automatic identification system, the computer generated review including one or more image tags identifying contents of the image; destination logic configured to determine a first destination to send the image to, for a first manual review of the image by a first human reviewer; image posting logic configured to post the image to the destination; review logic configured to receive a manual review of the image from the destination and to receive the computer generated review, the manual review including one or more image tags identifying contents of the image; response logic configured to provide the image tags of the computer generated review and the image tags of the manual review to the communication network; memory configured to store the image; and a microprocessor configured to execute at least the destination logic.
  • Various embodiments of the invention include a method of processing an image, the method comprising receiving an image from an image source; distributing the image to an automated image identification system; receiving a computer generated review from the automated image identification system, the computer generated review including one or more image tags assigned to the image by the automated image identification system and a measure of confidence, the measure of confidence being a measure of confidence that the image tags assigned to the image correctly characterize contents of the image; placing the image in an image queue; determining a destination; posting the image for manual review to a first destination, the first destination including a display device of a human image reviewer; and receiving a manual image review of the image from the destination, the image review including one or more image tags assigned to the image by the human image reviewer, the one or more image tags characterizing contents of the image.
  • Various embodiments of the invention include an image source comprising a camera configured to capture an image; a display configured to present the image to a user; eye tracking logic configured to detect an action of one or more eyes of the user; optional image marking logic configured to place a mark on the image, the mark being configured to indicate a particular subset of the image and being responsive to the detected action; display logic configured to display the mark on the image in real time; an I/O configured to provide the image to a computer network; and a processor configured to execute at least the display logic.
  • Various embodiments of the invention include an image source comprising a camera configured to capture an image; a display configured to present the image to a user; eye tracking logic configured to detect an action of one or more eyes of the user; image marking logic configured for a user to indicate a particular subset of the image and to highlight an object within the subset, the indication being responsive to the detected action; display logic configured to display the highlighted object on the image in real time; an I/O configured to provide the image and the indication of the particular subset to a computer network; and a processor configured to execute at least the display logic.
  • Various embodiments of the invention include an image source comprising a camera configured to capture an image; a display configured to present the image to a user; selection logic configured to detect a finger of the user; image marking logic configured for a user to indicate a particular subset of the image and to highlight an object within the subset, the indication being responsive to the detected finger; an I/O configured to provide the image and the indication of the particular subset to a computer network; display logic configured to display the image in real time and to display image tags received from the computer network in response to the image, the image tags characterizing contents of the image; and a processor configured to execute at least the display logic.
  • Various embodiments of the invention include an image processing system comprising an I/O configured to communicate an image sequence and image tags over a communication network; an optional automatic identification interface configured to communicate the image sequence to an automatic identification system and to receive a computer generated review of the image from the automatic identification system, the computer generated review including one or more image tags identifying contents of the image; destination logic configured to determine a first destination to send the image sequence to, for a first manual review of the image sequence by a first human reviewer; image posting logic configured to post the image sequence to the destination; review logic configured to receive a manual review of the image sequence from the destination and optionally to receive the computer generated review, the manual review including one or more image tags identifying an action within the image sequence; response logic configured to provide the image tags of the manual review to the communication network; memory configured to store the image sequence; and a microprocessor configured to execute at least the destination logic.
  • Various embodiments of the invention include a method of processing an image, the method comprising: receiving one or more first descriptors of an image at an image processing server, from a remote client via a communication network; comparing the received first descriptors to second descriptors stored locally to the image processing server, to determine if the first descriptors match a set of the second descriptors; responsive to the first descriptors matching the set of second descriptors, retrieving one or more image tags stored in association with the set of second descriptors; and providing the one or more image tags to the client.
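  • A minimal sketch of this descriptor-matching step, assuming descriptors are fixed-length numeric vectors compared by cosine similarity against a server-side store; the store contents, the similarity measure, and the 0.95 cutoff are hypothetical choices made only for illustration.

    import math
    from typing import List, Optional, Sequence, Tuple

    # Hypothetical server-side store of previously seen descriptors and their tags.
    DESCRIPTOR_STORE: List[Tuple[Sequence[float], List[str]]] = [
        ((0.12, 0.88, 0.05), ["ring", "gold ring"]),
        ((0.91, 0.03, 0.44), ["white car"]),
    ]

    MATCH_THRESHOLD = 0.95  # assumed similarity cutoff

    def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def lookup_tags(first_descriptor: Sequence[float]) -> Optional[List[str]]:
        """Return stored tags when the received descriptor matches a stored set."""
        for stored_descriptor, tags in DESCRIPTOR_STORE:
            if cosine_similarity(first_descriptor, stored_descriptor) >= MATCH_THRESHOLD:
                return tags
        return None  # no match; the client may then be asked for the image itself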
  • Various embodiments of the invention include a method of processing an image at an image processing server, the method comprising: receiving an image and data characterizing the image from a remote client; determining a destination for the image, the destination being associated with a human image reviewer, the determination of the destination being based on a match between the data characterizing the image and a specialty of the human reviewer; posting the image to the determined destination; receiving one or more image tags characterizing the image, from the destination; and providing the one or more image tags to the client.
  • Various embodiments of the invention include a method of processing an image, the method comprising: receiving data characterizing the image from a mobile device, the data characterizing the image including identified features of an image or descriptors of an image; generating image tags based on the data characterizing the image; providing the image tags to the mobile device.
  • Various embodiments of the invention include a method of processing an image, the method comprising: receiving an image using a portable device; identifying features of the image using a processor of the portable device; providing the features to a remote image processing server via a communication network; receiving image tags based on the features from the image processing server; and displaying the image tags on a display of the portable device.
  • Various embodiments of the invention include a method of processing an image, the method comprising: receiving an image using a portable device; identifying features of the image using a processor of the portable device; deriving image descriptors based on the identified features; providing the descriptors to a remote image processing server via a communication network; receiving image tags based on the descriptors from the image processing server; and displaying the image tags on a display of the portable device.
  • Various embodiments of the invention include a method of processing an image, the method comprising: receiving an image using a portable device; identifying features of the image using a processor of the portable device; deriving image descriptors based on the identified features; comparing the image descriptors with a set of image descriptors previously stored on the portable device to determine if there is a match between the image descriptors and the stored set of image descriptors; if there is a match between the image descriptors and the stored set of image descriptors, retrieving one or more image tags associated with the set of image descriptors from memory of the portable device; and displaying the retrieved one or more image tags on a display of the portable device.
  • Various embodiments of the invention include a method of processing an image, the method comprising: receiving an image using a portable device; identifying features of the image using a processor of the portable device; deriving image descriptors based on the identified features; comparing the image descriptors with a set of image descriptors previously stored on the portable device to determine if there is a match between the image descriptors and the stored set of image descriptors; classifying the image based on the match between the image descriptors and the stored set of image descriptors; sending the image and the classification of the image to a remote image processing server; receiving one or more image tags based on the image; and displaying the one or more image tags on a display of the portable device.
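  • The client-side variants above can be summarized by the following sketch, in which the portable device first checks a local descriptor-to-tags cache and only contacts the remote server on a miss. The quantization scheme, the cache structure, and the stubbed server call are assumptions made for illustration.

    from typing import Dict, List, Tuple

    # Hypothetical on-device cache: quantized descriptor -> previously received tags.
    LOCAL_TAG_CACHE: Dict[Tuple[int, ...], List[str]] = {}

    def derive_descriptor(features: List[float]) -> Tuple[int, ...]:
        """Illustrative only: quantize detected features into a hashable descriptor."""
        return tuple(int(round(f * 100)) for f in features)

    def send_descriptor_to_server(descriptor: Tuple[int, ...]) -> List[str]:
        """Stub standing in for the request to the remote image processing server."""
        return ["example tag"]

    def tag_on_device(features: List[float]) -> List[str]:
        descriptor = derive_descriptor(features)
        cached = LOCAL_TAG_CACHE.get(descriptor)
        if cached is not None:
            return cached                              # match: no network round trip
        tags = send_descriptor_to_server(descriptor)   # miss: defer to the server
        LOCAL_TAG_CACHE[descriptor] = tags             # cache for future captures
        return tags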
  • Various embodiments of the invention include an image processing system comprising an I/O configured to communicate an image and image tags over a communication network; an image ranker configured to determine a priority for tagging the image; destination logic configured to determine a first destination to send the image to, for a first manual review of the image by a first human reviewer; image posting logic configured to post the image to the destination; review logic configured to receive a manual review of the image from the destination, the manual review including one or more image tags identifying contents of the image; memory configured to store the one or more image tags in a data structure; and a microprocessor configured to execute at least the image ranker.
  • Various embodiments of the invention include an image processing system comprising an I/O configured to receive an image over a communication network; an image ranker configured to determine a priority of the image and to determine, based on the priority, whether or not to tag the image and/or how to tag the image; manual or automatic means for tagging the image to produce one or more image tags characterizing the image; memory configured to store the image and the one or more image tags characterizing the image, in a data structure; and a microprocessor configured to execute at least the image ranker.
  • Various embodiments of the invention include an image processing system comprising an I/O configured to receive an image over a communication network; an image ranker configured to determine a priority of the image and to select a process of tagging the image based on the priority; means for tagging the image to produce one or more image tags characterizing the image; memory configured to store the image and the one or more image tags characterizing the image, in a data structure; and a microprocessor configured to execute at least the image ranker.
  • Various embodiments of the invention include an image processing system comprising an I/O configured to communicate an image and image tags over a communication network; an image ranker configured to determine a priority for tagging the image based on how many times a video including the image is viewed; destination logic configured to determine a destination to send the image to, for a manual review of the image by a human reviewer; image posting logic configured to post the image to the destination; review logic configured to receive the manual review of the image from the destination, the manual review including one or more image tags identifying contents of the image; memory configured to store the one or more image tags in a data structure; and a microprocessor configured to execute at least the image ranker.
  • Various embodiments of the invention include a method of processing an image, the method comprising receiving an image from an image source; distributing the image to an automated image identification system; receiving a computer generated review from the automated image identification system, the computer generated review including one or more image tags assigned to the image by the automated image identification system and a measure of confidence, the measure of confidence being a measure of confidence that the image tags assigned to the image correctly characterize contents of the image; assigning a priority to the image based on the measure of confidence; determining that the image should be manually tagged based on the priority; posting the image for manual review to a first destination, the first destination including a display device of a human image reviewer; and receiving a manual image review of the image from the destination, the image review including one or more image tags assigned to the image by the human image reviewer, the one or more image tags assigned by the human image reviewer characterizing contents of the image.
  • Various embodiments of the invention include a method of processing an image, the method comprising receiving an image from an image source; automatically determining a priority for the image using a microprocessor; determining how the image should be tagged based on the priority; tagging the image to produce one or more tags, the one or more tags characterizing contents of the image; and storing the image and the one or more tags in a data structure.
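  • The priority-based embodiments above could be approximated as follows; the scoring weights and the three-way policy (manual, automatic, skip) are invented for the sketch and represent only one of many possible policies.

    def assign_priority(confidence: float, realtime_request: bool) -> int:
        """Illustrative priority: low automated confidence and real-time requests rank higher."""
        priority = 0
        if confidence < 0.5:
            priority += 2
        elif confidence < 0.85:
            priority += 1
        if realtime_request:
            priority += 2
        return priority

    def choose_tagging_process(priority: int) -> str:
        # Assumed policy: high priority goes to a human reviewer, mid priority to
        # automated-only tagging, and the lowest priority images are not tagged.
        if priority >= 3:
            return "manual"
        if priority >= 1:
            return "automatic"
        return "skip"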
  • Various embodiments of the invention include a method of generating a stream of images, the method comprising: receiving a first image from a first remote client; posting the first image to an automatic identification system configured to identify contents of the first image and to provide one or more first image tags that characterize these contents; receiving a second image from a second remote client; posting the second image to an automatic identification system configured to identify contents of the second image and to provide one or more second image tags that characterize these contents; adding the first image and the second image to an image sequence; and providing the image sequence to a third remote client.
  • Various embodiments of the invention include an image processing system comprising: an I/O configured to communicate a first image, a second image and image tags over a communication network, the first image and second image being received from first and second remote clients respectively; an automatic identification interface configured to communicate the first of the images to an automatic identification system and to receive a computer generated review of the first image from the automatic identification system, the computer generated review including one or more first image tags identifying contents of the first image; sequence assembly logic configured to add the first image and the second image to an image sequence, the image sequence including a plurality of images received from a plurality of remote clients; streaming logic configured to provide the image sequence to a third remote client; memory configured to store the one or more image tags in a data structure; and a microprocessor configured to execute at least the sequence assembly logic.
  • These embodiments optionally further comprise destination logic configured to determine a destination to which to send the first image, for a first manual review of the first image by a first human reviewer; image posting logic configured to post the first image to the destination; and review logic configured to receive the first manual review of the image from the destination, the first manual review including one or more second image tags identifying contents of the image.
  • Various embodiments of the invention include a portable computing device comprising: a display configured to present a user interface and to display an advertisement; a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network and logic configured to generate image tags, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream, the neural network and the camera sharing a same power supply; execution memory configured for execution of the visual processing application; wireless communication circuits configured to send the image tags generated by the visual processing application to a remote advertising server and to receive the advertisement from the advertising server, the advertisement being selected based on the image tags; and a microprocessor configured to execute the visual processing application to generate the image tags on the portable computing device.
  • An inventory management device comprising: a display configured to present a user interface and to display user messages; a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network optionally requiring less than 250MB of the application memory and being configured to generate image tags, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream, the neural network and the camera both being disposed within a same housing; inventory logic configured to generate inventory data based on the image tags; a wireless communication circuit configured to send inventory data to a remote server; execution memory configured for execution of the visual processing application; and a microprocessor configured to execute the visual processing application to generate the image tags on the inventory management device.
  • Various embodiments of the invention include a detection system configured to detect an object, an event or a series of events comprising a story, the system comprising: a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network and being configured to generate one or more image tags by processing the image, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream, the neural network and the camera sharing a same power supply or the neural network being within a central processing device with one or more connected cameras; execution memory configured for execution of the visual processing application; tag memory configured to store reference tags; alert logic configured to send an alert to a remote server in response to one of the one or more image tags matching one of the reference tags; and a wireless communication circuit configured to send the alert and the image tags generated by the visual processing application to the remote server.
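  • One way to read the alert logic above is as a simple comparison of generated tags against stored reference tags, as in this sketch; the watch-list contents and the alert transport are placeholders, not part of the disclosed system.

    from typing import List, Set

    REFERENCE_TAGS: Set[str] = {"intruder", "weapon", "fire"}  # assumed watch list

    def matching_reference_tags(image_tags: List[str]) -> List[str]:
        """Return the subset of generated tags that match stored reference tags."""
        return [tag for tag in image_tags if tag.lower() in REFERENCE_TAGS]

    def send_alert_to_server(matches: List[str], image_tags: List[str]) -> None:
        """Stub standing in for the wireless upload of an alert and the image tags."""
        print(f"ALERT: {matches} detected; tags={image_tags}")

    def process_frame_tags(image_tags: List[str]) -> None:
        matches = matching_reference_tags(image_tags)
        if matches:
            send_alert_to_server(matches, image_tags)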
  • Various embodiments of the invention include a method of processing an image on a mobile computing device, the method comprising: capturing an image or video stream using a camera of the mobile computing device; an optional secondary camera configured to capture depth data; an optional LiDAR configured to capture depth and surface data; processing the image or video stream on the mobile computing device using a microprocessor and a visual processing application, the processing including generation of one or more image tags characterizing an identity of a three-dimensional object, an action within the image, or a series of actions comprising a story; the visual processing application including a neural network stored on the mobile computing device; sending the one or more image tags to a remote server; receiving an advertisement from the remote server, the advertisement being selected based on the one or more image tags; and displaying the advertisement on a display of the mobile computing device.
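  • The on-device flow in the preceding method might look roughly like this: inference runs locally, only the tags leave the device, and the returned advertisement is displayed. The model and advertising-server calls are stubs; no particular network, model, or ad service is implied.

    from typing import List

    def run_on_device_model(frame_bytes: bytes) -> List[str]:
        """Stub for the bundled neural network that generates image tags locally."""
        return ["boy playing with ball on beach"]

    def fetch_advertisement(tags: List[str]) -> str:
        """Stub for the round trip to a remote advertising server."""
        return f"ad selected for: {', '.join(tags)}"

    def process_frame(frame_bytes: bytes) -> str:
        tags = run_on_device_model(frame_bytes)      # inference stays on the device
        advertisement = fetch_advertisement(tags)    # only the tags leave the device
        return advertisement                         # to be shown on the device display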
  • FIG. 1 illustrates an image processing system, according to various embodiments of the invention.
  • FIG. 2 illustrates an image capture screen, according to various embodiments of the invention.
  • FIG. 3 illustrates search results based on an image analysis, according to various embodiments of the invention.
  • FIG. 4 illustrates methods of processing an image, according to various embodiments of the invention.
  • FIG. 5 illustrates alternative methods of processing an image, according to various embodiments of the invention.
  • FIG. 6 illustrates methods of managing a reviewer pool, according to various embodiments of the invention.
  • FIG. 7 illustrates methods of receiving image tags in real-time, according to various embodiments of the invention.
  • FIG. 8 illustrates methods of upgrading an image review, according to various embodiments of the invention.
  • FIG. 9 illustrates an example of Image Source 120A including electronic glasses, according to various embodiments of the invention.
  • FIG. 10 illustrates a method of processing an image on an image source, according to various embodiments of the invention.
  • FIG. 11 illustrates a method of processing an image based on image descriptors, according to various embodiments of the invention.
  • FIG. 12 illustrates a method of processing an image using feedback, according to various embodiments of the invention.
  • FIGs. 13 and 14 illustrate methods of providing image tags based on image descriptors, according to various embodiments of the invention.
  • FIG. 15 illustrates methods of prioritizing image tagging, according to various embodiments of the invention.
  • FIG. 16 illustrates an image analysis system according to various embodiments of the invention.
  • FIG. 17 illustrates methods of processing an image on a client device according to various embodiments of the invention.
  • FIG. 18 illustrates methods of providing information to an AR/VR device, according to various embodiments of the invention.
  • FIG. 1 illustrates an Image Processing System 110, according to various embodiments of the invention.
  • Image Processing System 110 is configured for tagging of images and may include one or more distributed computing devices.
  • Image Processing System 110 may include one or more servers located at geographically different places.
  • Image Processing System 110 is configured to communicate via a Network 115.
  • Network 115 can include a wide variety of communication networks, such as the internet and/or a cellular telephone system.
  • Network 115 is typically configured to communicate data using standard protocols such as TCP/IP, FTP, etc.
  • The images processed by Image Processing System 110 are received from Image Sources 120 (individually labeled 120A, 120B, etc.).
  • Image Sources 120 can include computing resources connected to the internet and/or personal mobile computing devices.
  • Image Source 120A may be a web server configured to provide a social networking website or a photo sharing service.
  • Image Source 120B may be a smart phone, camera, a wearable camera, electronic glasses or other portable image capture device.
  • Image sources may be identified by a universal resource locator, an internet protocol address, a MAC address, a cellular telephone identifier, and/or the like.
  • Image Processing System 110 is configured to receive images from a large number of Image Sources 120.
  • Part of the image tagging performed by Image Processing System 110 includes sending images to Destinations 125 (individually labeled 125A, 125B, etc.).
  • Destinations 125 are computing devices of human image reviewers and are typically geographically remote from Image Processing System 110.
  • Destinations 125 include at least a display and data entry devices such as a touch screen, keyboard and/or microphone.
  • Destinations 125 may be in a different building, city, state and/or country than Image Processing System 110.
  • Destinations 125 may include personal computers, computing tablets, smartphones, etc.
  • Destinations 125 include a (computing) application specifically configured to facilitate review of images. This application is optionally provided to Destinations 125 from Image Processing System 110.
  • Image Processing System 110 is configured for human image reviewers to log into a user account from Destinations 125.
  • Destinations 125 are typically associated with an individual reviewer and may be identified by an internet protocol address, a MAC address, a login session identifier, cellular telephone identifier, and/or the like.
  • Destinations 125 include an audio-to-text converter.
  • Image tagging data provided by a human image reviewer at a member of Destinations 125 is sent to Image Processing System 110.
  • The image tagging data can include textual image tags, audio data including verbalized tags, and/or non-tag information such as upgrade requests or inappropriate (explicit) material designations.
  • Image Processing System 110 includes an I/O (input/output) 130 configured for communicating with external systems.
  • I/O 130 can include routers, switches, modems, firewalls, and/or the like.
  • I/O 130 is configured to receive images from Image Sources 120, to send the images to Destinations 125, to receive tagging data from Destinations 125, and optionally to send image tags to Image Sources 120.
  • I/O 130 includes communication hardware and optionally an application program interface (API).
  • Image Processing System 110 further includes Memory 135.
  • Memory 135 includes hardware configured for the non-transient storage of data such as images, image tags, computing instructions, and other data discussed herein.
  • Memory 135 may include, for example, random access memory (RAM), hard drives, optical storage media, and/or the like.
  • Memory 135 is configured to store specific data, as described herein, through the use of specific data structures, indexing, file structures, data access routines, security protocols, and/or the like.
  • Image Processing System 110 further includes at least one Processor 140.
  • Processor 140 is a hardware device such as an electronic microprocessor.
  • Processor 140 is configured to perform specific functions through hardware, firmware or loading of software instructions into registers of Processor 140.
  • Image Processing System 110 optionally includes a plurality of Processors 140.
  • Processor 140 is configured to execute the various types of logic discussed herein.
  • Image Queue 145 is an ordered list of images pending review. Images stored in Image Queue 145 are typically stored in association with image identifiers used to reference the images and may have different priorities. For example, images received from a photo sharing website may have lower priority than images received from a smartphone. Generally, those images for which a requester is waiting to receive image tags representing an image in real-time are given higher priority relative to those for which the image tags are used for some other purpose. Image Queue 145 is optionally stored in Memory 135.
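  • A priority queue is one natural realization of Image Queue 145; the sketch below uses a heap keyed on priority, with an insertion counter to keep ordering stable among equal priorities. The priority values and image identifiers are arbitrary examples.

    import heapq
    import itertools
    from typing import List, Tuple

    _counter = itertools.count()
    _image_queue: List[Tuple[int, int, str]] = []  # (negated priority, tiebreak, image id)

    def enqueue_image(image_id: str, priority: int) -> None:
        heapq.heappush(_image_queue, (-priority, next(_counter), image_id))

    def next_image_for_review() -> str:
        _, _, image_id = heapq.heappop(_image_queue)
        return image_id

    # Example: a real-time smartphone request outranks a photo-sharing batch image.
    enqueue_image("photo-share-0001", priority=1)
    enqueue_image("smartphone-rt-0042", priority=5)
    assert next_image_for_review() == "smartphone-rt-0042"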
  • Images are optionally stored in association with an image identifier or index, and other data associated with each image.
  • An image may be associated with source data relating to one of Image Sources 120.
  • The source data can include geographic information such as global positioning system coordinates, a street and/or city name, a zip code, and/or the like.
  • The source data may include an internet protocol address, a universal resource locator, an account name, an identifier of a smartphone, and/or the like.
  • Source data can further include information about a language used on a member of Image Sources 120, a requested priority, a search request (e.g., a request to do an internet search based on image tags resulting from the image), and/or the like.
  • An image within Image Queue 145 is stored in association with an indication of a particular subset of the image, the subset typically including an item of particular interest.
  • A requestor of image tags may be interested in obtaining image tags relating to the contents of a particular subset of an image. This can occur when an image includes several objects.
  • For example, if an image includes a ring among other objects, the user may wish to designate the ring as being a particular area of interest.
  • Some embodiments of the invention include an application configured for a user to specify the particular item of interest by clicking on the object or touching the object on a display of Image Source 120B. This specification typically occurs prior to sending the image to Image Processing System 110.
  • Image Marking Logic 147 is optionally used to place a mark on the image, the mark being disposed to highlight the particular subset. This mark may be made by modifying pixels of the image corresponding to the subset, and it allows a human image reviewer to focus on the marked subset.
  • the image may be marked with a rectangle or circle prior to the image being posted to one or more of Destinations 125.
  • highlighting a subset of the image or an object within the image can include applying a filter to the object or subset, and/or changing a color of the object or subset.
  • Image Marking Logic 147 is included within an application configured to execute on one or more of Image Sources 120 or Destinations 125.
  • Image Marking Logic 147 includes hardware, firmware, and/or software stored on a non-transient computer readable medium. As discussed elsewhere herein, Marking Logic 147 is optionally configured to place a mark on the image in real-time, as the image is being generated.
  • Marking Logic 147 is configured to use image features detected within an image to identify particular objects that may be marked.
  • The detection of image features is discussed elsewhere herein and is optionally part of image processing that occurs on the client side, e.g., on Image Source 120A.
  • Features such as edges may be detected using a processor of Image Source 120A. These features can first be used in highlighting objects for detection and then also sent from Image Source 120A to Image Processing System 110, where they are then used to generate image descriptors as part of processing the image. In this way, automated processing of the image is distributed between Image Source 120A, Image Processing System 110 and/or Automatic Identification System 152.
  • Automatic Identification Interface 150 includes logic configured to communicate the image, and optionally any data associated with the image, to an Automatic Identification System 152.
  • The logic is hardware, firmware, and/or software stored on a computer readable medium.
  • Automatic Identification Interface 150 is further configured to receive a computer generated review of the image from Automatic Identification System 152, the computer generated review including one or more image tags identifying contents of the image.
  • Automatic Identification Interface 150 is configured to communicate the image and data via Network 115 in a format appropriate for an application programming interface (API) of Automatic Identification System 152.
  • In some embodiments, Automatic Identification System 152 is included within Image Processing System 110 and Automatic Identification Interface 150 includes, for example, function calls within a program and/or a system call within an operating system or over a local area network.
  • Automatic Identification System 152 is a computer automated system configured to review images without a need for human input on a per picture basis.
  • The output of Automatic Identification System 152 is a computer generated image review (e.g., a review produced without human input on a per picture basis). Rudimentary examples of such systems are known in the art. See, for example, Kooaba, Clarifai, Alchemy API and Catchoom.
  • Automatic Identification System 152 is typically configured to automatically identify objects within a two dimensional image based on shapes, characters and/or patterns detected within the image.
  • Automatic Identification System 152 is optionally configured to perform optical character recognition and/or barcode interpretation.
  • Automatic Identification System 152 is distinguished from systems of the prior art in that Automatic Identification System 152 is configured to provide a computer generated review that is based on the image subset indication(s) and/or image source data, discussed elsewhere herein.
  • Automatic Identification System 152 is optionally configured to determine if a copy of the image received from a different image source has already been tagged. For example, the same image may be included in multiple webpages. If the image is extracted from a first of these webpages and tagged, Automatic Identification System 152 may recognize that the image has already been tagged and automatically assign these tags to each instance of the image found. Recognizing that an image has already been tagged optionally includes comparing the image, a part of the image, or data representative of the image to a database of previously tagged images. The image may have been previously tagged automatically or manually. In various embodiments, Automatic Identification System 152 is configured to identify action within image sequences, e.g., video.
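  • Recognizing an already-tagged copy could be as simple as looking up a content fingerprint of the image, as sketched here; an exact hash only catches byte-identical copies, and a perceptual hash would be needed for resized or re-encoded duplicates. The store and the fingerprint choice are assumptions for illustration.

    import hashlib
    from typing import Dict, List, Optional

    # Hypothetical store of previously tagged images keyed by a content hash.
    TAGGED_IMAGES: Dict[str, List[str]] = {}

    def image_fingerprint(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    def tags_for_known_copy(image_bytes: bytes) -> Optional[List[str]]:
        """Return existing tags if this exact image has been tagged before."""
        return TAGGED_IMAGES.get(image_fingerprint(image_bytes))

    def remember_tags(image_bytes: bytes, tags: List[str]) -> None:
        TAGGED_IMAGES[image_fingerprint(image_bytes)] = tags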
  • Automatic Identification System 152 may be configured to recognize actions, e.g., gestures, as described in U.S. provisional patent application Ser. No. 63/066,081, filed August 14, 2020, which is hereby incorporated herein by reference.
  • Images may be tagged with words or phrases representing the identified action.
  • Automatic Identification System 152 is configured to identify relationships between objects in an image or sequence thereof.
  • Automatic Identification System 152 may be configured to provide a tag “boy with ball on beach,” distinct from “boy and ball on beach.” The first tag indicates a relationship between the ball and the boy, whereas the second only indicates the presence of various detected objects in the scene.
  • Automatic Identification System 152 may be configured to include the actions by which various actors are interacting with objects in the scene. As an example, actions may be expressed as “boy playing with ball on beach” or “boy throwing ball to dog on beach.”
  • the neural network may be configured to identify spatial and conceptual relationships between various objects in the scene based on their linguistic and semantic relationship to each other.
  • the neural network may also be configured to identify temporal relationships in video, or in other words, how the objects are interacting with each other through time. The importance and nature of the objects’ relationships are learned through the repeated linguistic representations presented to the language model portion of the neural network.
  • a computer-generated review generated by Automatic Identification System 152 optionally includes a measure of confidence representative of a confidence that the one or more image tags correctly identify the contents of the image. For example, a computer-generated review of an image that is primarily characters or easily recognizable shapes may have a greater confidence measure than a computer generated review of an image that consists of abstract or ill-defined shapes. Different automated image recognition systems may produce different confidence levels for different types of images. Automatic Identification Interface 150 and Automatic Identification System 152 are optional in embodiments in which automatic identification is performed by a third party.
  • Image Processing System 110 further includes a Reviewer Pool 155 and Reviewer Logic 157 configured to manage the Reviewer Pool 155.
  • Reviewer Pool 155 includes a pool (e.g., group or set) of human image reviewers.
  • Each of the human image reviewers is typically associated with a different member of Destinations 125.
  • each of the different members of Destinations 125 may be known to be operated by a different human image reviewer or to be logged into an account of a different human image reviewer.
  • Memory 135 is optionally configured to store Reviewer Pool 155.
  • The human image reviewers included in Reviewer Pool 155 are classified as “active” or “inactive.”
  • An active human image reviewer is considered to be one that is currently providing image tags or has indicated that they are prepared to provide image tags with minimal delay.
  • The active reviewers are those that are provided images for review.
  • The number of active reviewers may be moderated in real-time in response to a demand for image reviews.
  • The classification of a human image reviewer may be changed from inactive to active based on a number of unviewed images in Image Queue 145.
  • An inactive reviewer is one that is not yet active, that has let the review of an image expire, and/or has indicated that they are not available to review images. Inactive reviewers may request to become active reviewers. Inactive reviewers who have made such a request can be reclassified as active human image reviewers when additional active human image reviewers are needed. The determination of which inactive reviewers are reclassified as active reviewers is optionally dependent on a reviewer score (discussed elsewhere herein).
  • Reviewer Logic 157 is configured to manage Reviewer Pool 155. This management optionally includes the classification of human image reviewers as active or inactive. For example, Reviewer Logic 157 may be configured to monitor a time that a human image reviewer takes to review an image and, if a predetermined maximum review time (referred to herein as an image expiration time) is exceeded, to change the classification of the human image reviewer from active to inactive. In another example, Reviewer Logic 157 may be configured to calculate a review score for a human image reviewer. In some embodiments, the review score is indicative of the completeness, speed and/or accuracy of image reviews performed by the particular human image reviewer. The review score can be calculated or changed based on review times and occasional test images.
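  • The expiration and scoring behavior described above can be illustrated with a small data structure; the 60-second expiration and the scoring formula are invented for the sketch and stand in for whatever policy Reviewer Logic 157 actually applies.

    import time
    from dataclasses import dataclass, field

    IMAGE_EXPIRATION_SECONDS = 60.0  # assumed maximum review time

    @dataclass
    class Reviewer:
        reviewer_id: str
        active: bool = True
        review_score: float = 1.0
        review_started_at: float = field(default_factory=time.time)

    def check_expiration(reviewer: Reviewer) -> None:
        """Deactivate a reviewer whose current review exceeded the allotted time."""
        if time.time() - reviewer.review_started_at > IMAGE_EXPIRATION_SECONDS:
            reviewer.active = False

    def update_score(reviewer: Reviewer, accuracy: float, review_seconds: float) -> None:
        # Illustrative scoring: exponential moving average rewarding accurate, fast reviews.
        rate = accuracy / max(review_seconds, 1.0)
        reviewer.review_score = 0.8 * reviewer.review_score + 0.2 * rate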
  • Test images may be, for example, images placed in Image Queue 145 that have been previously reviewed by a different human image reviewer.
  • The review score may also be a function of monetary costs associated with the human image reviewer.
  • Reviewer Logic 157 includes hardware, firmware, and/or software stored on a non-transient computer readable medium.
  • reviewer scores are manually determined by human moderators. These human moderators review images and the tags assigned to these images by human image reviewers. Moderators are optionally sent a statistical sampling of reviewed images and they assign a score to the tagging of the images. This score is optionally used in determining reviewer scores.
  • Reviewer Logic 157 is configured to monitor status of human image reviewers in real-time.
  • Reviewer Logic 157 may be configured to monitor the entry of individual words or keystrokes as entered by a reviewer at Destination 125A. This monitoring can be used to determine which reviewers are actively reviewing images, which reviewers have just completed review of an image, and/or which reviewers have not been providing tag input for a number of seconds or minutes. The entry of tag words using an audio device may also be monitored by Reviewer Logic 157.
  • Members of Reviewer Pool 155 are associated with a specialty in which the human image reviewer has expertise or special knowledge.
  • A reviewer may be an expert in automobiles and be associated with that specialty.
  • Other specialties may include art, plants, animals, electronics, music, food, medical specialties, clothing, clothing accessories, collectables, etc.
  • A specialty of a reviewer may be used to select that reviewer during an initial manual review and/or during a review upgrade.
  • Reviewer Logic 157 determines which inactive reviewer to make active, when additional active reviewers are required.
  • Reviewer Logic 157 includes hardware, firmware, and/or software stored on a non-transient computer readable medium.
  • Image Processing System 110 further includes Destination Logic 160.
  • Destination Logic 160 is configured to determine one or more destinations (e.g., Destinations 125) to send an image to for manual review. Each of Destinations 125 is associated with a respective human image reviewer of Reviewer Pool 155. The determinations made by Destination Logic 160 are optionally based on characteristics of the human image reviewer at the determined destination.
  • The destination may be a computing device, smartphone, tablet computer, personal computer, etc. of the human image reviewer.
  • The destination is a browser from which the reviewer has logged into Image Processing System 110.
  • Determining the destination includes determining a MAC address, session identifier, internet protocol address and/or universal resource locator of one of Destinations 125.
  • Destination Logic 160 includes hardware, firmware and/or software stored on a non-transient computer readable medium.
  • Destination Logic 160 is configured to determine Destinations 125 associated with active rather than inactive human image reviewers as determined by Reviewer Logic 157. Destination Logic 160 is also typically configured to determine Destinations 125 based on review scores of reviewers. For example, those reviewers having higher reviewer scores may be selected for higher priority reviews relative to reviewers having lower reviewer scores. Thus, the determination of a member of Destinations 125 can be based on both reviewer scores and image review priority.
  • Destination Logic 160 is configured to determine one or more members of Destinations 125 based on the real-time monitoring of the associated reviewers’ input activity. As discussed elsewhere herein, this monitoring may be performed by Reviewer Logic 157 and can include detection of individual words or keystrokes entered by a human image reviewer. In some embodiments, Destination Logic 160 is configured to favor selecting Destination 125A at which a human image reviewer has just completed a review of an image relative to Destination 125B at which a human image reviewer is currently typing image tags on a keyboard.
  • Destination Logic 160 is configured to use image tags received via Automatic Identification System 152 to determine one or more members of Destinations 125. For example, if an image tag of “car” is received via Automatic Identification Interface 150 then Destination Logic 160 can use this information to select a member of Destinations 125 associated with a human image reviewer that has a specialty in automobiles.
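  • As an illustration only, the destination selection described above might be sketched in Python as below; the reviewer fields, the specialty mapping, the weights, and the function names are illustrative assumptions rather than part of any specific embodiment.

      # Hypothetical sketch: pick a destination for manual review, favoring active,
      # idle reviewers whose specialty matches an automatically generated tag.
      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class Reviewer:
          destination: str                      # e.g., a session identifier of Destination 125A
          active: bool                          # as determined by Reviewer Logic 157
          score: float                          # reviewer score, higher is better
          specialties: List[str] = field(default_factory=list)
          idle: bool = True                     # True if not currently typing tags

      def select_destination(reviewers: List[Reviewer],
                             auto_tags: List[str],
                             priority: float) -> Optional[str]:
          """Rank active reviewers by score (weighted by review priority), with
          bonuses for a matching specialty and for having just finished a review."""
          specialty_map = {"car": "automobiles", "truck": "automobiles", "dress": "clothing"}
          wanted = {specialty_map[t] for t in auto_tags if t in specialty_map}
          best, best_rank = None, float("-inf")
          for r in reviewers:
              if not r.active:
                  continue
              rank = r.score * priority
              if wanted & set(r.specialties):
                  rank += 10.0                  # prefer a matching specialty
              if r.idle:
                  rank += 1.0                   # prefer reviewers who are between reviews
              if rank > best_rank:
                  best, best_rank = r.destination, rank
          return best

      print(select_destination(
          [Reviewer("Destination 125A", True, 0.90, ["automobiles"], idle=True),
           Reviewer("Destination 125B", True, 0.95, ["clothing"], idle=False)],
          auto_tags=["car"], priority=2.0))    # -> Destination 125A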
  • the value of an image review may also be considered in the selection of a destination for manual review.
  • an image review of high value may lead to the determination of a destination associated with a human image reviewer having a relatively high review score
  • an image review of lower value may lead to the determination of a destination associated with a human image reviewer having a relatively lower review score.
  • Destination Logic 160 is configured to select among Destinations 125 so as to minimize a time required to review an image, e.g., to minimize a time until the image tags of the manual review are provided to Network 115.
  • Destination Logic 160 is optionally configured to determine multiple destinations for a single image. For example, a first destination may be selected and then, following an upgrade request, a second destination may be determined. The upgrade request may come from the Image Source 120A or from a human image reviewer associated with the first destination. In some embodiments, Destination Logic 160 is configured to determine multiple destinations, to which the image will be posted in parallel. For example, two, three or more destinations, each associated with a different human image reviewer, may be determined and the same image posted to all determined destinations in parallel. As used in this context, “in parallel” means that the image is posted to at least a second destination before any part of a review is received from the first destination.
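  • A minimal sketch of such parallel posting, assuming a placeholder posting function (the destination names and the image identifier below are hypothetical):

      # Hypothetical sketch: post one image to several destinations "in parallel",
      # i.e., all posts are issued before any review can be received back.
      from concurrent.futures import ThreadPoolExecutor

      def post_image(destination: str, image_id: str) -> None:
          # Placeholder for communicating the image to the destination via Network 115.
          print(f"posted image {image_id} to {destination}")

      def post_in_parallel(destinations: list[str], image_id: str) -> None:
          with ThreadPoolExecutor(max_workers=max(1, len(destinations))) as pool:
              for destination in destinations:
                  pool.submit(post_image, destination, image_id)

      post_in_parallel(["Destination 125A", "Destination 125B"], "img-001")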
  • a request for an upgraded review may require a human image reviewer having a particular specialty.
  • Referring to the automotive example, an image that is first tagged with the tag “white car” may result in an upgrade request for more information.
  • Destination Logic 160 may be configured to then select a destination associated with a human image reviewer having a specialty in automobiles, e.g., a reviewer who can provide the tags “1976 Ford Granada.”
  • An upgrade request indicates that the image is subject to further review, e.g. the image requires or may benefit from further review.
  • the upgrade request may be represented by a computing object such as a flag, command or data value, etc.
  • Another instance that may require a second destination occurs when the manual review of an image takes too long.
  • the tagging of an image should occur within an allotted time period or the review is considered to expire.
  • the allotted time period is optionally a function of the priority of the image review. Those reviews that are intended to occur in real-time may have a shorter time period relative to lower priority reviews. If the review of an image expires, Image Processing System 110 is optionally configured to provide the image to an additional human image reviewer associated with a destination determined by Destination Logic 160.
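  • One way to express the expiration check is sketched below; the allotted periods per priority level and the function name are illustrative assumptions only.

      # Hypothetical sketch: decide whether a manual review has expired, using an
      # allotted period that shrinks as the review priority increases.
      import time

      ALLOTTED_SECONDS = {"real-time": 30, "normal": 300, "low": 3600}   # illustrative values

      def review_expired(posted_at: float, priority: str, now: float | None = None) -> bool:
          """Return True if the review has exceeded its allotted period."""
          now = time.time() if now is None else now
          return (now - posted_at) > ALLOTTED_SECONDS.get(priority, ALLOTTED_SECONDS["normal"])

      # If expired, the image would be re-posted to a destination determined by Destination Logic 160.
      if review_expired(posted_at=time.time() - 600, priority="real-time"):
          print("review expired; select an additional reviewer")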
  • Another instance that may require a second destination occurs when a first human reviewer makes an upgrade request.
  • the request to upgrade the review resulting in a tag of “car” may come from the human image reviewer that provided the tag “car.” While this example is simplistic, other examples may include images of more esoteric subject matter such as packaged integrated circuits.
  • Image Processing System 110 further includes Image Posting Logic 165 configured to post images for manual review to Destinations 125 determined by Destination Logic 160.
  • Image Posting typically includes communicating the images to one or more Destinations 125 via Network 115.
  • Image Posting Logic 165 is further configured to provide information associated with the image to Destinations 125.
  • Image Posting Logic 165 may post, along with the image, an indication of a subset of the image (e.g., subset identification), an image marked by Image Marking Logic 147, information identifying a source of the image (e.g., source data discussed elsewhere herein), a priority of the review of the image, an image expiration period, location information associated with the image, and/or the like.
  • source data can include a universal resource locator, global positioning coordinates, longitude and latitude, an account identifier, an internet protocol address, a social account, a photo sharing account, and/or the like.
  • Image Posting Logic 165 is configured to provide an image for manual review to more than one of Destinations 125 at approximately the same time. For example, an image may be provided to Destination 125A and Destination 125B in parallel. “Parallel delivery” means, for example, that the image is provided to both Destinations 125A and 125B before tagging information is received back from either of these Destinations 125.
  • Image Posting Logic 165 is configured to provide an image for manual review to one or more of Destinations 125 prior to receiving image tags from Automatic Identification System 152. Alternatively, in some embodiments, Image Posting Logic 165 is configured to wait until a computer-generated review for the image is received from Automatic Identification System 152, prior to posting the image to one or more of Destinations 125. In these embodiments, the computer-generated review (including image tags) is optionally also posted to the one or more of Destinations 125 in association with the image.
  • Image Posting Logic 165 is optionally configured to post identifiers of images along with the images.
  • Image Posting Logic 165 includes hardware, firmware and/or software stored on a non-transient computer readable medium.
  • Image Processing System 110 further includes Review Logic 170 configured to manage the manual and automated reviews of images. This management includes monitoring progress of reviews, receiving reviews from Automatic Identification System 152 and/or Destinations 125. The received reviews include image tags as discussed elsewhere herein.
  • Review Logic 170 is configured to control posting of the image to one of Destinations 125 based on a measure of confidence. The measure of confidence is representative of a confidence that one or more image tags already received are correct.
  • These one or more image tags may be received from Automatic Identification System 152 and/or one of Destinations 125. For example, in some embodiments if the confidence of an image review by Automatic Identification System 152 is greater than a predetermined threshold, then Review Logic 170 may determine that manual review of the image is not necessary.
  • the predetermined threshold can be a function of the value of the image review, of the priority of the image review, of the number and quality of the available Destinations 125, and/or the like.
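  • A minimal sketch of this gating, assuming a threshold that rises with the value and priority of the review (the baseline and scaling constants below are illustrative assumptions):

      # Hypothetical sketch: decide whether manual review is needed, given the
      # confidence of the automated review and the value/priority of the review.
      def manual_review_needed(confidence: float, value: float, priority: float) -> bool:
          """`confidence` is the automated review's confidence in [0, 1]; the threshold
          grows with value and priority, so valuable or urgent reviews are more likely
          to also receive a manual review."""
          base_threshold = 0.70                                   # illustrative baseline
          threshold = min(0.99, base_threshold + 0.05 * value + 0.05 * priority)
          return confidence < threshold

      print(manual_review_needed(confidence=0.80, value=1.0, priority=2.0))  # True -> send to a reviewer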
  • Review Logic 170 includes hardware, firmware, and/or software stored on a non-transient computer readable medium.
  • if an image was sent to Automatic Identification System 152 in parallel with being sent to one or more of Destinations 125, then the receipt of a review from Automatic Identification System 152 having a confidence above a predetermined threshold may result in cancellation of the manual review at the one or more of Destinations 125 by Review Logic 170.
  • Review Logic 170 is optionally configured to cancel the review requests for the image at the other Destinations 125.
  • Review Logic 170 is configured to cancel the review request at the other Destinations 125 once a keystroke or word is received from the first of the Destinations 125.
  • Review Logic 170 is configured to monitor activity of a human image reviewer in real-time. This monitoring can include receiving review inputs from Destinations 125 on a word by word or individual keystroke basis. As discussed elsewhere herein, the words and/or keystrokes are optionally passed on to one of Image Sources 120 as they are received by Review Logic 170.
  • the monitoring of a manual reviewer’s activity can be used to determine when the review of an image expires and/or the progress in completing a manual image review.
  • the status of a human image reviewer may be provided by Review Logic 170 to Reviewer Logic 157 in real-time. Using this status, Reviewer Logic 157 may change the status of the reviewer from active to inactive, adjust a stored review score of the reviewer, establish or change a specialty for the reviewer, and/or the like.
  • Review Logic 170 is configured to control posting of images to Destinations 125 by receiving measures of confidence (e.g., of the accuracy of image reviews) and sending responsive signals to Destination Logic 160 and/or Image Posting Logic 165.
  • Review Logic 170 can be configured to control posting of an image to one or more of Destinations 125 based on a measure of confidence.
  • the measure of confidence is representative of a confidence that one or more image tags correctly identify the contents of the image.
  • Review Logic 170 is configured to receive reviews from manual image reviewers that include information other than image tags. For example, Review Logic 170 may receive an upgrade request from a human image reviewer and cause an upgraded image review to be requested.
  • Review Logic 170 is optionally configured to process other non-tag information received in a manual or computer-generated review.
  • This information can include identification of the image as being improper (e.g., obscene), identification of the image as containing no identifiable objects, identification of the image as having been sent to a reviewer of the wrong specialty, and/or the like.
  • Review Logic 170 is configured to adjust the confidence of an image review by comparing image reviews of the same image from multiple sources. These image reviews may all be computer generated, all be manual reviews, or include at least one computer generated review and at least one manual review.
  • Review Logic 170 is configured to take image tags received as part of a first (computer generated or manual) review and to provide the received image tags to a human image reviewer at Destination 125B.
  • An agent (e.g., a browser or special purpose application) executing on Destination 125B is optionally configured to provide the image tags of the first review to a display of Destination 125B.
  • the human image reviewer at Destination 125B can edit (add to, delete and/or replace) the image tags of the first review.
  • image tags received from Destination 125A may be provided to Destination 125B for modification.
  • Review Logic 170 is configured to calculate review scores based on the results of image reviews received from Destinations 125, the time taken for these image reviews, and the accuracy of these image reviews.
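  • A possible scoring sketch combining accuracy and review time is given below; the 0.8/0.2 weights and the target review time are illustrative assumptions, not values prescribed by the system.

      # Hypothetical sketch: combine review accuracy with speed relative to a target time.
      def reviewer_score(accuracy: float, avg_seconds: float, target_seconds: float = 60.0) -> float:
          """accuracy is in [0, 1]; faster-than-target reviews earn full speed credit."""
          speed = min(1.0, target_seconds / max(avg_seconds, 1.0))
          return 0.8 * accuracy + 0.2 * speed

      print(round(reviewer_score(accuracy=0.95, avg_seconds=45.0), 3))   # -> 0.96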
  • Review Logic 170 is configured to provide image reviews to a source of the image, e.g., one of Image Sources 120, using a Response Logic 175.
  • the image reviews may be provided when the image review is complete, on a character-by-character basis, or on a word-by-word basis.
  • the image tags are optionally provided to the source of the image as the characters or words are received from a human image reviewer.
  • Response Logic 175 is configured to provide the image review via Network 115.
  • Image reviews are not necessarily returned to one of Image Sources 120.
  • if Image Source 120A is a photo sharing service or a social networking website, image reviews of images from Image Source 120A may be stored in association with an account on the photo sharing service or the social networking website. This storage can be in Memory 135 or at a location external to Image Processing System 110, such as at a webserver hosting the website.
  • Image reviews are optionally both returned to one of Image Sources 120 and stored elsewhere.
  • Response Logic 175 is configured to execute a search based on image tags received in a computer generated and/or manual image review. The results of this search can be provided to a source of the image, e.g., Image Source 120A or 120B.
  • a user uses a smartphone to create an image with a camera of Image Source 120A.
  • the image is provided to Image Processing System 110 which generates an image review of the image using Automatic Identification System 152 and Destination 125A.
  • the image review includes image tags that are then automatically used to perform an internet search (e.g., a Google or Yahoo search) on the image tags. The results of this internet search are then provided to the user’s smartphone.
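  • For instance, the tags might be joined into a search request as sketched below; only the query construction is shown, and the endpoint is a placeholder rather than an actual search service address.

      # Hypothetical sketch: build a search URL from the image tags of a review.
      from urllib.parse import urlencode

      def build_search_url(image_tags: list[str],
                           endpoint: str = "https://www.example.com/search") -> str:
          """Join the image tags into a single query string for an internet search."""
          return endpoint + "?" + urlencode({"q": " ".join(image_tags)})

      print(build_search_url(["white", "Starbucks", "coffee", "cup"]))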
  • Response Logic 175 is configured to provide image tags of a computer generated and/or manual review to an Advertising System 180.
  • Advertising System 180 is configured to select advertisements based on the image tags. The selected advertisements are optionally provided to the source of the image used to generate the image tags. For example, Response Logic 175 may provide the tags “1976 Ford Granada with broken headlight” to Advertising System 180 and, in response, Advertising System 180 may select advertisements for replacement headlights. If the source of the image used to generate these tags is a website, the advertisements may be displayed on the website. Specifically, if the source of the image is an account on a photo sharing or social networking website, then the advertisements may be displayed on that account. Advertising System 180 is optionally included in Image Processing System 110.
  • Advertising System 180 is optionally configured to take bids for providing advertising in response to specific tags.
  • Advertising System 180 optionally includes Google’s Adwords, and/or the like.
  • Selected advertisements are optionally returned to members of Image Sources 120 in association with image tags, by Response Logic 175.
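  • Selecting an advertisement from tags, including a simple highest-bid rule, might look like the sketch below; the bid table, advertiser names, and keyword matching are illustrative assumptions.

      # Hypothetical sketch: pick the advertiser with the highest bid on any keyword
      # that appears within the image tags.
      BIDS = {
          "headlight": ("ACME Auto Parts", 120),    # keyword -> (advertiser, bid in cents)
          "cat": ("Cat Food Co.", 45),
          "dress": ("Fashion Outlet", 80),
      }

      def select_advertisement(image_tags: list[str]):
          matches = [(bid, advertiser)
                     for tag in image_tags
                     for keyword, (advertiser, bid) in BIDS.items()
                     if keyword in tag.lower()]
          return max(matches)[1] if matches else None

      print(select_advertisement(["1976 Ford Granada with broken headlight"]))  # -> ACME Auto Parts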
  • Image Processing System 110 optionally further includes Content Processing Logic 185 configured to extract images for tagging from members of Image Sources 120.
  • Content Processing Logic 185 is configured to parse webpages including images and optionally text, and extract images from these webpages for tagging. The resulting image tags may then be provided to Advertising System 180 for selection of advertisements that can be placed on the webpage from which the image was extracted.
  • Content Processing Logic 185 is configured to emulate browser functions in order to load images that would normally be displayed on a webpage. These images may be displayed on a webpage associated with a specific account, a social networking site, a photo sharing site, a blogging site, a news site, a dating site, a sports site, and/or the like.
  • Content Processing Logic 185 is optionally configured to parse metadata tags in order to identify images.
  • Content Processing Logic 185 is optionally configured to parse text disposed on the same webpage as an image. This text may be used by Automatic Identification System 152 in tagging of the image, in combination with content of the image. For example, Content Processing Logic 185 may be configured to identify a caption for an image, comments made about an image, text referring to the image, webpage title or headings, people or objects tagged within an image, text within an image (as determined by optical character recognition (OCR)), and/or the like. The text parsed by Content Processing Logic 185, or a subset thereof, may be used to improve quality and/or speed of tagging.
  • the text parsed is provided to Automatic Identification System 152 and/or provided to one of Destinations 125 for tagging by a human reviewer.
  • Automatic Identification System 152 is configured to use the provided text in the generation of tags for the image.
  • the provided text may be used to provide context, identify a lexicon, ontology, language, and/or information that improves the accuracy, precision, computational efficiency, and/or other quality of automatically and/or manually generated image tags.
  • the provided text is typically not relied on solely as a source of the generated tags, but is used as an input to improve the processing of the image.
  • the resulting tags may include words other than those found in the provided text.
  • Image Posting Logic 165 is configured to provide both an image and text found on the same webpage as the image to Destinations 125. For example, an image of a girl and a bicycle at a park may have a caption “Mountain Bike Sale” or a comment “Happy Birthday Julie.” At one of Destinations 125, this text may be presented to a human reviewer together with the image. The human reviewer may use this information to better understand the focus and/or context of the image, and thereby provide better image tags.
  • Automatic Identification Interface 150 is configured to provide both an image and text found on the same webpage as the image to Automatic Identification System 152.
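  • A minimal sketch of extracting image URLs and caption-like text from a webpage, using Python’s standard HTML parser; the choice of <img> alt text and <figcaption> elements as caption sources is an illustrative assumption.

      # Hypothetical sketch: collect <img> sources plus alt text and <figcaption> content.
      from html.parser import HTMLParser

      class ImageAndCaptionParser(HTMLParser):
          def __init__(self):
              super().__init__()
              self.images, self.captions = [], []
              self._in_caption = False

          def handle_starttag(self, tag, attrs):
              attrs = dict(attrs)
              if tag == "img" and "src" in attrs:
                  self.images.append(attrs["src"])
                  if attrs.get("alt"):
                      self.captions.append(attrs["alt"])
              elif tag == "figcaption":
                  self._in_caption = True

          def handle_endtag(self, tag):
              if tag == "figcaption":
                  self._in_caption = False

          def handle_data(self, data):
              if self._in_caption and data.strip():
                  self.captions.append(data.strip())

      parser = ImageAndCaptionParser()
      parser.feed('<figure><img src="bike.jpg" alt="girl with bicycle">'
                  '<figcaption>Mountain Bike Sale</figcaption></figure>')
      print(parser.images, parser.captions)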
  • Image Processing System 110 optionally further includes an Image Ranker 190.
  • Image Ranker 190 is configured to determine a rank (e.g., priority) for tagging an image. The priority may be used to determine how or if at all to tag an image.
  • the determination of priority may be based on, for example, a source of the image, a number of times the image is loaded onto a webpage, a position of the image on a webpage, a number of times the image is viewed on a webpage, a number of webpages on which the image is included, a ranking of one or more webpages including the image, an identity of a webpage including the image, a ranking of a second image on the webpage including the image, an owner of a webpage including the image, a domain name of a webpage including the image, a keyword on a webpage including the image, text found on a webpage including the image, metadata found on a webpage including the image, a number of times the image is clicked on the webpage, a number of times other images are clicked on the webpage, whether the image is part of a video, image tags automatically generated using Automatic Identification System 152, any combination of these examples, and/or the like.
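  • One simple way to combine such factors is a weighted sum, as in the sketch below; the factor names and weights are illustrative assumptions only, not values taken from the system.

      # Hypothetical sketch: compute an image priority as a weighted sum of ranking factors.
      WEIGHTS = {
          "views_per_day": 0.002,      # number of times the image is viewed
          "page_rank": 1.5,            # ranking of the webpage including the image
          "pages_containing": 0.5,     # number of webpages on which the image appears
          "keyword_value": 1.0,        # estimated value of keywords on the webpage
      }

      def image_priority(factors: dict) -> float:
          """Weighted sum of whatever ranking factors are available for the image."""
          return sum(WEIGHTS.get(name, 0.0) * value for name, value in factors.items())

      print(image_priority({"views_per_day": 500, "page_rank": 2.0, "pages_containing": 25}))  # -> 16.5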
  • Image Ranker 190 includes logic in the form of hardware, firmware, and/or software stored on a computer readable medium.
  • the priority determined by Image Ranker 190 includes two levels (tag or no-tag), three levels (automatic tagging, manual tagging, or no-tag), ten priority levels, or some other ranking scheme.
  • Destination Logic 160 is optionally configured to select a destination of manual tagging of an image based on the priority of the image.
  • the number of loads or views may be counted per a fixed time period, such as per day or per month.
  • the number can be determined by including a line of Java or HTML script on the webpage, as is well known in the art.
  • the position of the image on the webpage may be considered as some images may require that a viewer scroll down before the image is viewed.
  • Image Ranker 190 is optionally configured to assign priority to an image based on a number of times the image is clicked on the webpage or on other webpages, and/or a number of times other images are clicked on the webpage.
  • Image Ranker 190 is optionally configured to determine priority based on a number of times an image is viewed on more than one webpage. For example, if the image is found on 25 different webpages, then the sum of the views on all the webpages may be used to determine priority for the image. In some embodiments Image Ranker 190 is configured to determine priority based on a number of times an image is loaded in a browser.
  • Image Ranker 190 may be configured to calculate the priority of an image as a function of the number of webpages on which it is included and/or the number of webpages that include a link to the image. Image Ranker 190 is optionally configured to identify an image as being included on multiple, possibly otherwise unrelated, webpages. In some embodiments Image Ranker 190 is configured to use a third party service, such as TinEye.com, to determine the number of webpages on which an image is located. Typically, the greater the number of webpages on which an image is included, the greater the priority assigned to the image.
  • Image Ranker 190 is configured to calculate a priority of an image based on a ranking of one or more webpages that include the image. For example, if a webpage is highly ranked in a search engine, is linked to by a significant number of other webpages, or is well ranked on some other criteria, then an image on the webpage may be given a priority that is a function of the webpage’s ranking. Typically, the higher a webpage’s ranking, the greater the priority assigned to an image on that webpage. Webpage ranking is optionally obtained from a third party source, such as a search engine.
  • Image Ranker 190 is optionally configured to assign a priority to an image based on an identity of a webpage including the image. For example, an image on a home page for a URL may be assigned greater priority than an image at another webpage for the same website. Further, images may be assigned a priority based on specific types of webpages on which the image is included. For example, images on social networking websites may be given higher priority relative to images on company websites or personal blogs. In another example, images on reference webpages such as dictionary.com or Wikipedia.com may be given higher priority relative to some other types of webpages. The priority assigned to an image is optionally based on the identity of an owner of the webpage.
  • Image Ranker 190 is configured to determine a priority of a first image on a webpage as a function of the priority of a second image on the same webpage. For example, if the second image has a high priority, the priority of the first image may be increased accordingly.
  • Image Ranker 190 is optionally configured to assign a priority to an image based on other contents of a webpage on which the image is included. For example, if the webpage includes text and/or metadata the presence of specific terms or keywords in this text or metadata may be used to assign the priority of the image. Specifically, if a webpage includes a valuable keyword then an image on that webpage may be assigned a higher priority. The estimated monetary value of a keyword is associated with the value of the word for advertising or some other purpose, e.g., a word that has value on Google’s Adwords®. An image on a webpage that includes terms that would be valued highly as Adwords may be assigned a proportionally high priority.
  • The frequency of use of these terms, as well as their number on a webpage, may also be considered by Image Ranker 190 in determining image priority.
  • the text and/or metadata considered may be included in the URL of the webpage, within a figure caption, within a comment made on a figure, within a tag assigned to the image by a third party, near text referring to the image, a person’s name, a brand name, a trademark, a corporate name, and/or the like.
  • Image Ranker 190 is configured to receive text derived from an image using optical character recognition and to determine a priority for the image based on this text.
  • Image Ranker 190 may receive text generated by processing an image using Automatic Identification System 152, and assign a priority to the image based on this text.
  • Image Ranker 190 is configured to give a higher priority to a first image on a webpage, relative to images that occur further down the webpage.
  • Image Ranker 190 is further configured to determine how, if at all, to tag an image based on the assigned priority. For example, images of lowest priority may not be tagged at all. Images with somewhat higher priority may be tagged using Automatic Identification System 152, and images with yet higher priority may be tagged by a human reviewer at one of Destinations 125. Those images having priority sufficiently high to be tagged by a human reviewer are optionally further divided into higher and lower priority groups wherein images in the higher priority group are given more attention and tagged more thoroughly or carefully by the human reviewer.
  • Image Posting Logic 165 is optionally configured to provide an indication of the priority of an image, along with the image, to members of Destinations 125.
  • images are first processed using Automatic Identification System 152. Then the images may be sent to one or more members of Destinations 125 based on both a priority for the image and a confidence in the automated tagging performed by Automatic Identification System 152. For example, if the image has relatively low priority then the confidence standard for sending the image to a human reviewer is set relatively low. (A low confidence standard meaning that the automated tags are likely to be deemed sufficient and the image not sent for human analysis.) If the image has a relatively high priority then the confidence standard for sending the image to a human reviewer is relatively higher. Thus, high priority images require a greater confidence to rely just on the automated tagging and are more likely to be sent to a human reviewer.
  • the processing paths that may be selected by Image Ranker 190 for an image include, for example, a) not tagging at all, b) tagging using just Automatic Identification System 152, c) tagging using Automatic Identification System 152 with optional human follow-up based on the importance and/or confidence of the resulting tags d) automated tagging followed by human review of the automated tags, e) tagging by a human reviewer, and/or f) tagging by a human reviewer based on a suggested level of attention to be given by the human reviewer.
  • These processing paths are, at least in part, selected based on the priority assigned to the image by Image Ranker 190. Any combination of these processing paths may be found in various different embodiments.
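  • The path selection might be sketched as a simple priority/confidence decision, as below; the cutoff values and the function name are illustrative assumptions rather than values specified by any embodiment.

      # Hypothetical sketch: map priority (and, if available, automated-tagging
      # confidence) onto one of the processing paths described above.
      def choose_processing_path(priority: float, auto_confidence: float | None = None) -> str:
          if priority < 1.0:
              return "no tagging"
          if priority < 3.0:
              return "automated tagging only"
          if auto_confidence is not None and auto_confidence >= 0.9:
              return "automated tagging; no human follow-up needed"
          if priority < 7.0:
              return "automated tagging followed by human review of the tags"
          return "direct tagging by a human reviewer"

      print(choose_processing_path(priority=5.0, auto_confidence=0.6))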
  • controlling the type of processing used to tag an image results in those images that are potentially more valuable having a greater probability of being tagged.
  • the human tagging resources are applied to the highest priority (most valuable) images.
  • Image Ranker 190 is configured to assign a priority for an image based on how often an advertisement displayed adjacent to or over an image is clicked on. For example, if an image is on a frequently viewed webpage, but advertisements placed over the image are rarely clicked, then the image may be given a relatively high priority for tagging. In this example, an image may be tagged more than once. If advertisements based on initial tags are not clicked on with an expected frequency, then the image may be retagged. Retagging is optionally performed by a human reviewer who receives, via Image Posting Logic 165, the image and the initial (inadequate) tags. The human reviewer can use this information to provide improved tags.
  • Image Processing System 110 optionally further includes Sequence Assembly Logic 192.
  • Sequence Assembly Logic 192 is configured to add images to an image sequence. These images are typically those received from Image Sources 120 and are optionally tagged by Image Processing System 110.
  • Sequence Assembly Logic 192 may be configured to add a first image received from Image Source 120 A and a second image received from Image Source 120B to an image sequence; and/or to add two images received from Image Source 120A to an image sequence.
  • An “image sequence” is a plurality of images configured to be presented in an order. Examples of image sequences include a video or an ordered set of still images.
  • An image sequence may also include a two or three dimensional arrangement of images, such as a mosaic or collage of images.
  • adding an image to an image sequence includes adding the image to multiple frames of a video. These frames may be configured to be presented in less than 1, 2, 3, 5, 10, 15 seconds, or more, or any interval therebetween.
  • adding an image to an image sequence includes placing the image in an image queue for delivery to an external display device, such as a display of Image Source 120C.
  • the external device optionally includes an application configured to present the images one after another in succession.
  • the image sequences generated by Sequence Assembly Logic 192 optionally include specific transitions between images such as fading, zooming, cutting, wiping, splitting, random bars, flashing, covering, uncovering, dissolving, etc. Examples of these and other transitions can be found in Microsoft PowerPoint™ 2013.
  • Sequence Assembly Logic 192 is configured to add audio (e.g., music, or a reading of image tags) to an image sequence and/or apply filters to one or more images within an image sequence.
  • the audio is optionally included as part of a standard audio or video format.
  • Image filters applied to an image or entire image sequence can include linear filtering, pixilation, principal components analysis, anisotropic diffusion, or any other techniques used in digital image processing. Different filters are optionally applied to different images within an image sequence.
  • Sequence Assembly Logic 192 includes a video encoder configured to encode and/or compress an image sequence into a standard video format, e.g., flash video, QuickTime™ video, Windows™ Media video, RealMedia™, and/or the like. Images may be sent to remote clients one image at a time or as part of a video.
  • the video can include key frames and b-frames.
  • Sequence Assembly Logic 192 may be configured to generate a first image sequence including images having a first set of image tags and to generate a second image sequence including images having a second set of image tags.
  • Sequence Assembly Logic 192 may be configured to include images having animal related tags (e.g., cat, puppy, cow) in a first image sequence and images having automotive related tags (e.g., truck, Ford, convertible) in a second image sequence.
  • image sequences can include specific themes by identifying tags to be used to select images to be included in the sequence. These themes can be predefined as in the examples above, or may be chosen by a user.
  • an end user may wish to see an image sequence including images having the image tag “Dalmatian” and can define this sequence by adding “Dalmatian” to a list of image tags to be used to generate the image sequence.
  • This list can include one, two or more image tags.
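  • Assembling a themed sequence from such a tag list could be sketched as below; the image records, their tags, and the ordering are illustrative assumptions only.

      # Hypothetical sketch: select images for a sequence whose tags intersect a user-defined tag list.
      IMAGES = [
          ("img-1", {"dalmatian", "dog", "park"}),
          ("img-2", {"truck", "ford"}),
          ("img-3", {"dalmatian", "puppy"}),
      ]

      def build_sequence(tag_list: set[str], images=IMAGES) -> list[str]:
          """Return an ordered list of image ids whose tags intersect the tag list."""
          return [image_id for image_id, tags in images if tags & tag_list]

      print(build_sequence({"dalmatian"}))   # -> ['img-1', 'img-3']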
  • images may be selected for inclusion in an image sequence based on a variety of other factors. For example, images may be selected based on colors within an image, comments made regarding the image, a popularity of the image, and/or who provided the image.
  • Sequence Assembly Logic 192 is configured to parse comments made about an image and identify words used in those comments. The identified words may then be used just like image tags, for selecting the image for inclusion in an image sequence. For example, a comment of “nice dress” may be used to select an image just as a tag “dress” may be used as described elsewhere herein.
  • the comments can be provided by the provider of the image or by one or more third parties.
  • a user can specify that popular images be included in an image sequence.
  • a sequence may be defined to include the 1, 5 or 10% most popular images.
  • popularity can be determined in a variety of ways.
  • a user may designate that an image sequence include images provided by a specific user or users; this user may be a friend on a social network and/or a well-known person.
  • a user may designate that an image sequence include images provided by one specific male actor, any female celebrities, and three friends.
  • one or more lists of tags are associated with a user account.
  • the user may define and customize these lists by accessing Image Processing System 110 from one of Image Sources 120.
  • a user of Image Source 120A, or some other client may log into Image Processing System 110 and define three lists.
  • the first list includes the tags “puppy,” “kitten” and “panda.”
  • the second list includes “Paris,” “Rome,” “Venice” and “London.”
  • the third list includes “Andromeda,” “Saturn,” “whisky,” “bourbon” and “bed.”
  • the user can then select among different image streams using these lists. For example, the user can select to receive an image stream of images having tags from the first list, or the first and third list.
  • Some embodiments include premade or shared lists. These lists can include data (other than or in addition to specific tags) specifying the most popular images, currently trending images, images suggested by friends, friends’ favorite images, images generated by friends, images previously viewed and indicated as being a favorite of the user, etc.
  • the data may include identifiers of groups of image tags. For example, “animal” may be predefined to indicate a set of tags including “cat,” “dog,” “hyena,” “salamander” and any other image tag that would be considered to be within the general categories of animals. Other general categories may include “clothing,” “automobiles,” “jewelry”, etc. A user may optionally define categories of their own.
  • User account information, lists, identifiers of groups of tags, favorites and the like are optionally stored in Memory 135 on Image Processing System 110 and/or Memory 135 on a member of Clients 120. Default premade lists are typically stored in Memory 135 on Image Processing System 110.
  • Sequence Assembly Logic 192 is optionally configured to add advertisements to image sequences. These advertisements may be selected based on image tags associated with images included in the image sequences, based on group identifiers used to generate the image sequences, and/or based on an account associated with the remote client to whom the image sequence is sent. For example, an advertisement for cat food may be added to an image sequence based on the image tag “cat.” An advertisement for a car may be added to an image sequence based on a group identifier automobile. An advertisement of a baby product may be added to an image sequence to be sent to a client associated with an account of a person known to be interested in such products.
  • This interest may be derived from image tags of pictures received from the client, demographics of the person, other image sequences requested by the person, social network accounts of the person, interests of the person, images liked by the person, cookies identifying a browsing pattern of the person, tweets sent or received by the person, images previously uploaded to Image Processing System 110 by the person, events in the person’s life, postings by the person, comments made by the person on webpages and/or about images, images stored in Memory 135, images posted to systems other than Image Processing System 110, and/or the like. Further, an advertisement may be added to an image sequence based on one or more of the same factors as applied to a person’s friends or contacts on a social network or an e-mail system (address book).
  • Sequence Assembly Logic 192 is configured to automatically retrieve data and/or images from a social network account of users.
  • Image Processing System 110 is configured to automatically tag images stored in the memory of a user’s mobile device and based on these tags determine interests of the user.
  • the images stored on the user’s mobile device may or may not be uploaded to Image Processing System 110.
  • Image Processing Logic 960 is optionally used for this tagging and only image characteristics need to be uploaded.
  • images that are automatically obtained from the memory of the mobile device (e.g., obtained without explicit per-image acknowledgement of the user) are only tagged using Automatic Identification System 152 and not included in image sequences sent to others. This allows the privacy of the user to be maintained.
  • Some embodiments of the invention include Popularity Logic 197, which is configured for tracking and calculating measure(s) of the popularity of an image.
  • Popularity Logic 197 is configured to calculate the popularity of an image based on a number of favorable responses for the image. These responses can include “like,” “favorite” and/or some other positive indication.
  • the popularity of an image may also be calculated based on how often an image is reposted or shared. For example, an image may be considered more popular if it is shared by many people in diverse groups.
  • Popularity Logic 197 may be configured to calculate a popularity measure that is time dependent.
  • favorable responses that are older may not be as highly weighted as recent favorable responses, or a rapid increase in favorable response may be weighted differently than a constant rate of favorable responses.
  • Popularity may be calculated based on a combination of the factors discussed herein, different factors optionally being weighted differently.
  • the popularity of an image may be considered when determining a potential value of an image.
  • Image Ranker 190 is optionally configured to calculate a value of an image based, at least in part, on a popularity calculated using Popularity Logic 197, or a popularity measure obtained from elsewhere.
  • the popularity is optionally used to determine if the image should be included in an image sequence and/or to determine a value of the image.
  • Popularity Logic 197 may be configured to calculate popularity based on how often the image is shared, how often an image is shared on a social networking website, who shares the image, the number of “likes,” “favorites” and/or comments an image receives, the complexity of the image’s sharing, and/or the like.
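  • A time-weighted popularity measure might be sketched as below, with an exponential decay whose half-life is an illustrative assumption; the event representation is hypothetical.

      # Hypothetical sketch: sum favorable responses (likes, shares, comments),
      # each weighted by how recently it occurred.
      import math
      import time

      def popularity(events: list[tuple[float, float]], half_life_days: float = 7.0,
                     now: float | None = None) -> float:
          """`events` is a list of (timestamp, weight) pairs; recent events count more."""
          now = time.time() if now is None else now
          decay = math.log(2) / (half_life_days * 86400.0)
          return sum(w * math.exp(-decay * (now - ts)) for ts, w in events)

      t = time.time()
      print(round(popularity([(t - 86400, 1.0), (t - 14 * 86400, 1.0)]), 3))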
  • Popularity Logic 197 includes hardware, firmware, and/or software stored on a non-transient computer readable medium.
  • Popularity Logic 197 is configured to automatically retrieve images and/or data from a social network account of a user.
  • an image that is shared by many friends may receive a higher popularity value relative to an image that is shared less or shared by non-friends.
  • the sharing may be within a social network such as Facebook™ or Instagram™.
  • sharing may occur using a messaging service such as Snapchat™, Skype™, text message and/or e-mail.
  • An image shared by an important person may be calculated as having a higher popularity relative to a less important person.
  • the importance of a person can be dependent on celebrity status, number of friends, number of followers, etc. For example, sharing of a photo by a well-known celebrity or by a person having many followers may count more towards popularity of the image relative to an unknown person with few followers.
  • the popularity of an image may be greater if it is “liked” or marked as a “favorite,” and/or commented on by many people relative to an image “liked,” marked as a “favorite,” and/or commented on by fewer people.
  • the complexity of image sharing is a function of the diversity of the network over which the image is shared. For example, if the image is shared by a closed group of friends having only one or two degrees of separation, this may count less toward popularity than sharing among a diverse group having a high degree of separation.
  • Image Processing System 110 optionally further includes Social Management Logic 133.
  • Social Management Logic 133 is configured to manage and/or monitor social connections of a user.
  • Social Management Logic 133 is optionally configured to retrieve and/or receive data (e.g., images, text, social information (e.g., friends lists), messages, comments, action logs, etc.) from social networking websites such as those discussed elsewhere herein.
  • Social Management Logic 133 may be configured to receive/retrieve comments provided about an image posted on a social networking website, track how an image is shared on a social networking website, and/or the like.
  • the retrieved and/or received data may come from an original source of the image, one of Image Sources 120, and/or a third party.
  • Social Management Logic 133 is further configured to monitor, retrieve and/or receive data regarding communications made using other communication systems discussed herein, e.g., instant messaging and/or e-mail.
  • Social Management Logic 133 may be configured to track how often an image is sent using the instant messaging system of Facebook®.
  • Social Management Logic 133 is optionally configured to track the identities of a user’s friends and/or the identities of celebrities. For example, Social Management Logic 133 may be configured to follow one or more celebrities and track images posted by these celebrities. In some embodiments, Social Management Logic 133 is configured to receive or calculate the degrees of separation between people. As discussed elsewhere herein, such information may be used to determine the popularity and/or value of an image.
  • the information collected by Social Management Logic 133 is optionally stored in Memory 135 and/or received from members of Image Sources 120. Images received/retrieved by Social Management Logic 133 are optionally tagged using Image Processing System 110 as discussed elsewhere herein. Social Management Logic 133 includes hardware, firmware, and/or software stored on a non-transient computer readable medium.
  • Image Processing System 110 optionally further includes Streaming Logic 195.
  • Streaming Logic 195 is configured to provide image sequences to one or more clients of Image Processing System 110, e.g., one or more of Image Sources 120. The image sequence is typically provided using I/O 130 and Network 115.
  • Streaming Logic 195 is configured to stream one or more image sequences to a plurality of remote clients at the same time.
  • the provided image sequences may include a series of still images and/or a video.
  • advertisements are provided as part of and/or in association with the image sequence.
  • FIG. 2 illustrates an Image Capture Screen 210, according to various embodiments of the invention.
  • Image Capture Screen 210 as illustrated is generated by, for example, an application executing on a smartphone, electronic glasses or other Image Source 120.
  • Image Capture Screen 210 includes features configured to capture an image, mark a specific area of interest, and receive image tags.
  • Image Capture Screen 210 includes a Shutter Button 220 configured to take a picture. Once the picture is taken it is optionally automatically sent via Network 115 to Image Processing System 110 for tagging.
  • Image Capture Screen 210 optionally further includes a Rectangle 230 configured to highlight a point of interest within the image.
  • Rectangle 230 is controllable (e.g., movable) by selecting and/or dragging on the screen using a user input device.
  • this user input device may include a touch screen responsive to a finger touch.
  • the point/region of interest may be provided to Image Processing System 110 in association with an image to be tagged.
  • Image Capture Screen 210 further includes a Field 240 showing a previously captured image and resulting image tags.
  • As shown, the previously captured image includes the same white cup without Rectangle 230, and the image tags include “White Starbucks Coffee Cup.” Also shown is text stating “Slide for options.”
  • FIG. 3 illustrates search results based on an image analysis, according to various embodiments of the invention. These results are optionally displayed automatically or in response to selecting the “Slide for options” input shown in FIG. 2. They may be generated by automatically executing an internet search on the image tags. Illustrated in FIG. 3 are a Sponsored Advertisement 310, Related Images 320 and other search results 330. The search results are optionally generated using Advertising System 180 and image tags generated using Image Processing System 110. A user may have the option of reviewing previously tagged images. This history can be stored on Image Source 120A or in Memory 135.
  • FIG. 4 illustrates methods of processing an image, according to various embodiments of the invention.
  • an image is received.
  • the image is provided to both Automatic Identification System 152 and at least one of Destinations 125.
  • both computer generated and manual image reviews are produced.
  • the methods illustrated in FIG. 4 are optionally performed using embodiments of the system illustrated in FIG. 1.
  • the method steps illustrated in FIGs. 4-8 may be performed in a variety of alternative orders.
  • In a Receive Image Step 410, an image is received by Image Processing System 110.
  • the image is optionally received from one of Image Sources 120 via Network 115.
  • the image may be in a standard format such as TIF, JPG, PNG, GIF, etc.
  • the image may be one of a sequence of images that form an image sequence or a video.
  • the image may have been captured by a user using a camera.
  • the image may have been captured by a user from a movie or television show.
  • Receive Image Step 410 includes a user using an image capture application to capture the image and communicate the image to Image Processing System 110. This application may be disposed within a camera, television, video display device, multimedia device, and/or the like.
  • Receive Image Step 410 is optionally facilitated using Content Processing Logic 185.
  • the image is received from an image sequence, e.g., a video.
  • the video is displayed on a monitor, television, goggles, glasses, or other display device.
  • the video is optionally received via a video streaming service such as youtube.com or Netflix.com® and/or displayed within a browser.
  • Logic within the display system (e.g., Image Marking Logic 147 within Image Source 120A) may be configured to receive an advertisement selected in response to image tags generated from the image and to display the advertisement over or at the same time as the video. Selection of advertisements based on image tags is discussed further elsewhere herein.
  • a user may select an object within a video or movie for tagging and in response optionally receive tags characterizing that object.
  • the user may also or alternatively receive an advertisement selected based on the tags.
  • the advertisement may be displayed in real-time in conjunction with the video (e.g., as an overlay or added video sequence) or provided to the user via other communication channels (e.g., e-mail).
  • a user sees an object within a video that they like. They select the object and this selection is received in Receive Image Step 410. In response they receive an advertisement related to the object.
  • the advertisement is displayed as an overlay, bar or caption on the video in real-time as the video is viewed on the display.
  • the advertisement may be added to the image, or the image may be added to the advertisement.
  • the image may be placed in a subset of the advertisement or the advertisement may be placed in a subset of the image.
  • the advertisement is optionally interactive in that it includes a link to make a purchase.
  • objects within an image may include particular characteristics configured to assist in identifying the object. For example, a particular pattern of data bits may be encoded within the image or within an object of the image. These data bits may encode for an image tag.
  • In an optional Receive Subset Identification Step 415, data identifying one or more subsets of the image is received by Image Processing System 110.
  • the one or more subsets include a set of image pixels in which an item of particular interest is located.
  • the one or more subsets may be identified by pixel locations, screen coordinates, areas, and/or points on the received image.
  • the subsets are selected by a user using a touch screen or cursor of one of Image Sources 120.
  • In a Receive Source Data Step 420, source data regarding the source of the image received in Receive Image Step 410 is received by Image Processing System 110.
  • the source data can include geographic information, an internet protocol address, a universal resource locator, an account name, an identifier of a smartphone, information about a language used on a member of Image Sources 120, a search request, user account information, and/or the like.
  • source data is automatically sent by an application/agent running on Image Source 120. For example, global positioning system coordinates may automatically be generated on a smartphone and provided to Image Processing System 110.
  • a priority for the tagging of the image, received in Receive Image Step 410, is received within Image Processing System 110.
  • the priority is manually entered by a user of Image Source 120A.
  • the priority is dependent on an amount paid for the review of the image.
  • the priority is dependent on a type of Image Source 120A. For example, images received from a static website may automatically be given a lower priority relative to images received from a handheld mobile device. An image whose source is identified by a universal resource locator may be given a lower priority relative to images whose source is identified by a mobile telephone number. As such, the priority is optionally derived from the source data received in Receive Source Data Step 420.
  • the data received in Steps 410-425 is optionally received together and optionally stored in Memory 135.
  • In a Distribute Image Step 430, the image, and optionally any associated data received in Steps 415-425, is distributed to Automatic Identification System 152 via Automatic Identification Interface 150. This distribution may be internal to Image Processing System 110 or via Network 115.
  • a computer-generated image review is received from Automatic Identification System 152.
  • the computer-generated image review includes one or more image tags assigned to the image by Automatic Identification System 152.
  • the computer-generated image review also includes a measure of confidence.
  • the measure of confidence is a measure of confidence that the image tags assigned to the image correctly characterize contents of the image. For example, an image including primarily easily recognizable characters may receive a higher measure of confidence relative to an image of abstract shapes.
  • In an optional Determine Confidence Step 440, the measure of confidence included in the image review is compared with one or more predetermined levels.
  • the predetermined levels are optionally a function of the priority of the image review, a price of the image review, a source of the image, and/or the like.
  • the process proceeds to an optional Perform Search Step 450 if the confidence of the computer generated image review is above the predetermined threshold(s) and proceeds to a Queue Image Step 460 if the confidence of the computer generated image review is below the predetermined threshold(s).
  • Determine Confidence Step 440 is optionally performed using Review Logic 170.
  • In Perform Search Step 450, the image tags assigned to an image are used to perform a search.
  • the image tag “Ford car” may be used to automatically perform a search, e.g., a google or database search, using the words “Ford” and “car.”
  • the database searched is optionally a database of advertisements.
  • Perform Search Step 450 is optionally followed by a Select Advertisement Step 452 in which one or more advertisements are selected based on the image tags.
  • image tags related to a wedding may be used to select wedding related advertisements.
  • the selection of an advertisement may be performed by Advertising System 180 and/or may include a real-time bidding process in which potential advertisers bid on the amount they are willing to pay for having their advertisement associated with specific keywords (image tags). Any of the methods disclosed herein that include the generation of image tags may further include Select Advertisement Step 452.
  • In a Provide Results Step 455, the image tags assigned to the image, and optionally the results of a search performed in Perform Search Step 450, are provided to a requester of the image review. For example, if the image was received from Image Source 120A and Image Source 120A is a smartphone, then the image tags and search results are typically provided to the smartphone. If the image was received from a member of Image Sources 120, such as a website, then the image tags and optional search results may be provided to a host of the website, to a third party, to Advertising System 180, and/or the like. In some embodiments, the image tags are automatically added to the website such that the image tags are searchable, e.g., can be searched on to find the reviewed image.
  • Provide Results Step 455 optionally includes providing an advertisement to the requester as an alternative to or in addition to providing the results of the search performed in Perform Search Step 450.
  • the image is placed in Image Queue 145. This placement optionally includes marking a subset of the image using Image Marking Logic 147.
  • the marking is typically configured to identify objects of particular interest in the image. Advancement of the image in Image Queue 145 may be dependent on the image’s review priority, the source of the image, available human image reviewers, the measure of confidence of the computer-generated review of the image, and/or the like.
  • In a Determine Destination Step 465, one or more members of Destinations 125 are determined for the manual review of the image.
  • the determination of a destination is optionally based on image tags included in a computer-generated image review received from Automatic Identification System 152; optionally based on specialties of human image reviewers at different Destinations 125; optionally based on review scores of these human image reviewers, and/or based on other criteria discussed herein.
  • Determine Destination Step 465 is based on the data characterizing the image and a specialty of the human reviewer.
  • the data characterizing the image can be image features, image descriptors, and/or information derived therefrom. As is discussed elsewhere herein, the image features and/or image descriptors are optionally received along with the image from a member of Image Sources 120. Information derived therefrom may be generated at the member of Image Sources 120, at Image Processing System 110 and/or at Automatic Identification System 152.
  • In a Post Image Step 470, the image is posted to at least one of the Destinations 125 determined in Determine Destination Step 465.
  • Post Image Step 470 includes posting the image to more than one of Destinations 125 in parallel.
  • the image is optionally posted via Network 115 and is optionally posted along with a mark highlighting a subset of the image, source data for the image, a time before review expiration for the image, image tags for the image received from Automatic Identification System 152, and/or the like.
  • In a Receive Review Step 475, a manual review of the image is received from one or more of the determined Destination(s) 125.
  • the manual image review may include one or more image tags assigned to the image by a human image reviewer.
  • the one or more image tags are representative of the content of the image.
  • the manual review may also include an upgrade request, an indication that the image is unreviewable, an indication that the image is improper, an indication that the review expired, and/or the like.
  • In an Image Tagged? Step 480, the progress of the method is dependent on whether image tags were received in Receive Review Step 475. If image tags characterizing the content of the image were received, then the method optionally proceeds with Perform Search Step 450 and Provide Results Step 455. In these steps the image tags included in the manual image review and optionally the computer-generated image review are used. Use of the image tags in the computer-generated image review may be dependent on the confidence measure of this review.
  • Steps 460-475 are optional if in Step 445 the confidence measure is found to be above the predetermined threshold(s).
• In Step 485, the progress of the method is dependent on whether an upgrade request has been received. If such a request has been received, then the method proceeds to Determine Destination Step 465 wherein a second/different member of Destinations 125 is determined. The determination may depend on image tags received in the manual image review received in Receive Review Step 475.
  • the upgrade request may be received from a human image reviewer or from a requester of the image review (from Image Source 120A or 120B, etc.).
  • the upgrade request may be received after the requestor has had a chance to review the image tags provided in Provide Results Step 455. For example, the requestor may first receive image tags consisting of “white car” and then request a review upgrade because they desire further information.
  • the review upgrade may result in the image being provided to a human image reviewer with a specialty in automobiles.
• This human image reviewer can add to the existing image tags to produce “white car, 1976 Ford Granada.”
  • the requester can add source data indicating a subset of the image when requesting a review upgrade.
• the reviewer may wish to indicate particular interest in a broken headlight. This serves to direct the human image reviewer’s attention to this feature of the image, produce tags that include “broken headlight,” and result in a search (Perform Search Step 450) directed toward broken headlights for a 1976 Ford Granada.
• upgrade requests are generated automatically by Review Logic 170. For example, if an image review appears too brief, e.g., just “car,” then Review Logic 170 may automatically initiate a review upgrade.
• the automatic generation of upgrade requests is based on the presence of keywords within a manual image review. For example, certain review specialties are associated with lists of keywords. In some embodiments, when one of these keywords is received in a manual image review, an automated review upgrade is initiated.
  • the review upgrade preferably includes a human image reviewer having a specialty associated with the received keyword.
  • one specialty includes “automobiles” and is associated with the keywords “car,” “truck,” “van,” “convertible,” and “Ford.”
  • Review Logic 170 checks with Review Logic 157 to determine if a human image reviewer having a specialty in “automobiles” is currently active. If so, then an automatic upgrade is initiated and the image is sent to the Destination 125B of the reviewer having the “automobiles” specialty.
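• A minimal sketch of this keyword-driven upgrade check, assuming a simple mapping from specialties to keyword lists (the specialties, keywords and function names below are illustrative assumptions):

    SPECIALTY_KEYWORDS = {
        "automobiles": {"car", "truck", "van", "convertible", "ford"},
        "botany": {"leaf", "tree", "flower", "plant"},
    }

    def specialty_for_upgrade(review_tags, active_specialties):
        """Return a specialty to upgrade to when a received tag matches a
        keyword list and a reviewer with that specialty is currently active."""
        tags = {tag.lower() for tag in review_tags}
        for specialty, keywords in SPECIALTY_KEYWORDS.items():
            if tags & keywords and specialty in active_specialties:
                return specialty
        return None   # no automatic upgrade

    # Example: specialty_for_upgrade(["white", "car"], {"automobiles"}) -> "automobiles"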
  • FIG. 5 illustrates alternative methods of processing an image, according to various embodiments of the invention.
  • Steps 430-445 are performed in parallel with at least some of Steps 460-475.
• the manual image review in Steps 460-475 may begin before the computer-generated review of Steps 430-445 is complete; thus, the manual image review is started before the confidence measure of the computer-generated review is known. If, in Confident? Step 445, the confidence measure is found to be above the predetermined threshold(s), then Steps 460-475 are optionally aborted.
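• One way such parallel processing could be sketched in Python, assuming callables for the automatic and manual reviews (the names and threshold are assumptions):

    from concurrent.futures import ThreadPoolExecutor

    def review_in_parallel(image, auto_review, manual_review, threshold=0.9):
        """Start both reviews; if the automatic review is confident enough,
        abandon the manual review (the optional abort described above)."""
        with ThreadPoolExecutor(max_workers=2) as pool:
            auto_future = pool.submit(auto_review, image)
            manual_future = pool.submit(manual_review, image)
            tags, confidence = auto_future.result()
            if confidence >= threshold:
                manual_future.cancel()   # abort if the manual review has not started
                return tags
            return manual_future.result()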
  • FIG. 6 illustrates methods of managing a reviewer pool, according to various embodiments of the invention.
  • the status of a reviewer may be changed based on their performance in reviewing images.
• the steps illustrated may be part of, and performed in concert with, the methods illustrated by FIGs. 4 and 5. For example, they may be performed in part between Receive Image Step 410 and Receive Review Step 475.
  • the methods illustrated include sending an image to more than one of Destinations 125.
• In Receive Image Step 410, an image is received.
• the image may be received at Image Processing System 110 via Network 115.
  • the image may be generated by a camera and/or obtained from a webpage. In some embodiments, the image is received along with information about how often the web page is viewed.
• In a Select 1st Destination Step 610, a first destination is selected for manual or automated analysis of the image. Select 1st Destination Step 610 is performed using Destination Logic 160 and is an embodiment of Determine Destination Step 465.
  • the determination of a destination for the image may be based on a wide variety of factors, including the status of a human reviewer and scores associated with reviewers. For example, typically a member of Destinations 125 associated with an active reviewer will be selected, rather than one without an active reviewer.
  • the selected destination may be a member of Destinations 125 and/or Automatic Identification System 152.
• In Post Image Step 470, the image received in Receive Image Step 410 is posted to the selected member of Destinations 125.
  • posting of the image can include communicating the image via Network 115 using standard network protocols such as TCP or UDP.
• In a Monitor Step 620, Reviewer Logic 170 is used to monitor progress of a manual image review of the image at the member of Destinations 125 selected in Select 1st Destination Step 610.
  • the monitoring can include detection of input by a human reviewer, time taken for the image review, a number of words provided that characterize the image, and/or the like.
• Monitoring optionally includes measuring a time taken to tag the image. Where monitoring includes detection of input by a human reviewer, the monitoring can be on a keystroke-by-keystroke basis, a word-by-word basis, and/or a line-by-line basis.
  • Reviewer Logic 170 may be configured to receive data characterizing the image a character, word or line at a time.
• In a Remove Step 630, the image is removed from processing at the member of Destinations 125 selected in Select 1st Destination Step 610.
• “Removal” can include notifying the human reviewer at the selected member of Destinations 125 that he or she is no longer primarily responsible for reviewing the image, relieving the human reviewer of primary responsibility (without necessarily notifying the human reviewer), removing the image from a display of the human reviewer, and/or the like.
  • Remove Step 630 includes merely placing a human reviewer in a ranking to have secondary or shared responsibility for reviewing an image.
  • the responsibility may now be shared or assigned to other reviewers associated with other members of Destinations 125. In this case it is the primary responsibility that is “removed.”
• Remove Step 630 may occur, for example, if manual review of the image is taking too long.
• Other examples of triggering events for Remove Step 630 include: loss of communication with the selected member of Destinations 125, exceeding a predetermined time allotment for review of the image, improper or inappropriate image tags received from the human reviewer, inaccurate (not characterizing the image) image tags received from the human reviewer, a referral from a first human reviewer to a second human reviewer, an upgrade request of the image review, and/or the like.
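• A simplified sketch of Monitor Step 620 and Remove Step 630, assuming a hypothetical destination object that exposes connection, input and removal methods (the time limit and method names are assumptions):

    import time

    def monitor_review(destination, image_id, time_limit=120.0, poll=1.0):
        """Poll a destination for reviewer input; if none arrives within the
        time limit, or the connection is lost, remove the image so it can be
        reassigned to another member of Destinations 125."""
        start = time.monotonic()
        while time.monotonic() - start < time_limit:
            if destination.has_lost_connection():
                break
            if destination.has_input(image_id):
                return True          # review is progressing normally
            time.sleep(poll)
        destination.remove(image_id)  # Remove Step 630
        return False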
• In a Select 2nd Destination Step 640, a second member of Destinations 125 (or Automatic Identification System 152) is selected using Destination Logic 160.
  • the second member may be selected based on any of the criteria discussed above with regard to Select 1 st Destination Step 610 and Determine Destination Step 465. Further, in some embodiments the selection of a second member may be based on a specific referral by a human reviewer associated with the first member of Destinations 125. For example, a first human reviewer may identify the content of an image to be a specialty of a second human reviewer and may refer the image to the member of Destinations 125 associated with the second human reviewer.
• the selection of a second member of Destinations 125 in Select 2nd Destination Step 640 is optionally dependent on automated processing of the image using Automatic Identification System 152.
• In Post Image Step 470, the image is posted to the member of Destinations 125 selected in Select 2nd Destination Step 640.
  • more than one human reviewer may review an image in parallel. They may perform the review independently or in cooperation.
  • One reviewer may have primary responsibility for review of the image or each reviewer may have equal responsibility.
  • One reviewer may have supervisory responsibility over one or more other reviewers.
• Select 2nd Destination Step 640 is performed and the image is posted to two or more members of Destinations 125 prior to Monitor Step 620 and/or Remove Step 630.
• In Receive Review Step 475, a review of the image, e.g., image tags, is received as discussed elsewhere herein.
  • the review typically includes image tags characterizing contents of the image.
  • Reviews may be received from more than one of Destinations 125.
• tags characterizing an image may be received from the members of Destinations 125 selected in both Select 1st Destination Step 610 and Select 2nd Destination Step 640.
  • Receive Review Step 475 is optionally performed in real-time as characters or words are provided by human reviewer(s).
• In an optional Associate Tags Step 650, one or more image tags characterizing the image are stored in association with the image.
  • the stored tags optionally include tags provided by more than one human reviewer and may be stored in Memory 135. As described elsewhere herein, the tags may further be provided to a member of Image Sources 120 (e.g., in an embodiment of Provide Results Step 455) or used to select advertisements using Advertising System 180.
  • the tags may also be provided to Automatic Identification System 152 to provide training of automatic image recognition processes.
  • Associate Tags Step 650 is optionally followed by any combination of Perform Search Step 450, Select Advertisement Step 452 and Provide Results Step 455, as discussed elsewhere herein. For example, these steps may be included in the methods illustrated in FIGs. 6-8, 13 or 15.
• FIG. 7 illustrates methods of receiving image tags in real-time, according to various embodiments of the invention. These methods are optionally performed by Image Processing System 110, and the image tags may be a result of a manual image review. These methods may be performed in concert with the other methods described herein, for example as part of the methods illustrated by FIG. 4.
  • the methods begin with Post Image Step 470 in which an image is provided to one or more members of Destinations 125, as discussed elsewhere herein.
  • the methods illustrated in FIG. 7 are optionally preceded by Receive Image Step 410 in which an image is received from a remote computing device.
• In a Receive Input Step 710, input is received from the one or more members of Destinations 125.
  • This input typically includes characters provided by a human reviewer.
  • the input may be characters typed by a human reviewer at Destination 125 A.
• Receive Input Step 710 is continued as other steps shown in FIG. 7 are performed.
• In a Detect 1st Word Step 720, a word is detected in the input received in Receive Input Step 710.
  • the word may be detected by the presence of a whitespace character such as an ASCII space or carriage return. Spell checking is optionally performed on the detected word. If the word is not included in a spellcheck dictionary, then an attempt at correction may be made or the human reviewer may be notified of the failure to recognize the word.
• Detection of the word in Detect 1st Word Step 720 results in execution of a Deliver 1st Word Step 730 in which the word is communicated to a source of the image. For example, once a word is detected it may be provided to Image Source 120A in real-time. At Image Source 120A the word can be displayed to a user. Displaying one word at a time can provide an impression that the analysis of the image is occurring in a shorter amount of time, as compared to waiting until an entire set of image tags are received before displaying the set.
• In a Detect 2nd Word Step 740, a second word is detected in the input received in Receive Input Step 710.
  • the word may be detected by the presence of a whitespace character and can occur after providing the first word to the user at Image Source 120 A.
  • Both the first and second words are expected to be tags characterizing the image.
• Detection of the second word in Detect 2nd Word Step 740 triggers a Deliver 2nd Word Step 750 in which the second word is delivered to the image source, e.g., Image Source 120A.
• Detect 2nd Word Step 740 and Deliver 2nd Word Step 750 may be repeated for third, fourth and additional words, each being part of the image tags.
• In a Detect Completion Step 760, data indicating that processing of the image is complete, e.g., that the words detected comprise all of the words (image tags) to be provided by the human reviewer, is received.
  • the data may include a metadata tag such as “/endtags,” an ASCII carriage return, and/or the like.
  • Detect Completion Step 760 occurs after one, two or more image tags have been received.
• In Associate Tags Step 650, the received image tags are associated and/or stored with the image as discussed elsewhere herein.
• Receive Input Step 710 may continue in parallel with Steps 720-740 and/or 750. Steps 710-760 may be included as part of Receive Review Step 475, discussed elsewhere herein.
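• The word-by-word delivery of FIG. 7 could be sketched as follows, assuming a character stream from the reviewer and a delivery callback (both names are assumptions made for illustration):

    def stream_tags(char_source, deliver_word, end_marker="/endtags"):
        """Detect words as whitespace arrives and deliver each word to the
        image source immediately, stopping at a completion marker."""
        word = []
        for ch in char_source:
            if ch.isspace():
                if word:
                    token = "".join(word)
                    if token == end_marker:
                        return            # Detect Completion Step 760
                    deliver_word(token)   # Deliver 1st/2nd Word Steps 730/750
                    word = []
            else:
                word.append(ch)
        if word and "".join(word) != end_marker:
            deliver_word("".join(word))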
  • FIG. 8 illustrates methods of upgrading an image review, according to various embodiments of the invention.
  • an image receives more than one phase of image review.
  • the image review is upgraded and reviewed further (phase 2).
  • the request for upgrade can be automatically generated, initiated by a first human reviewer, and/or be in response to a request from a source of the image.
  • Both the first and second review may be manual, i.e., performed by a human reviewer.
  • the first review may be automatic and one or more subsequent reviews may be manual, or the first review may be manual and one or more subsequent reviews automatic.
• In Receive Image Step 410, an image is received. A first member of Destinations 125 is selected for the image in Select 1st Destination Step 610. The image is then posted in Post Image Step 470. These steps are discussed elsewhere herein.
  • a first review of the image is received.
  • This first review may include one or more image tags characterizing the contents of the image.
• the image review may include the words “black spider” in response to a picture including an image of a black spider; or the image review may include the words “red car” in response to an image including a red automobile.
• Receive 1st Review Step is optionally an embodiment of Receive Review Step 475, and may include the real-time communication of image tags as discussed in regard to FIG. 7.
  • the first review can include an indication, provided by the human reviewer that performed the first image review, that the processing of the image should be upgraded.
  • the first human reviewer may manually indicate a field of expertise for a second (optionally specialized) human reviewer.
  • a first human reviewer may provide the “red car” image tags and suggest an upgraded review be performed by a reviewer with automotive expertise.
  • the first review can include image tags that are considered particularly valuable. For example, an automatic review that indicates a 72% probability that the image includes a wedding dress may trigger an automated upgrade to a manual review because the image tags “wedding dress” are potentially of greater commercial value than other image tags.
• this automated upgrade is performed by Review Logic 170 and is based on a list of relatively important or valuable keywords stored in Memory 135. This list can include keywords and an associated measure of their value.
  • automatic upgrades performed by Review Logic 170 are optionally based on image tags automatically generated using Automatic Identification System 152 and/or information predictive of how often an image will be viewed. These factors are optionally applied using an algorithm that maximizes the potential value of tagging the image using a human reviewer and providing advertisements based on these tags. Examples of more valuable image tags may be related to shoes, cars, jewelry, travel destinations, books, games, clothing, holidays, food, drink, real estate, banks, accidents, etc.
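• A hedged sketch of such a value-maximizing upgrade decision; the keyword values, the view-count scaling and the threshold are illustrative assumptions rather than the algorithm actually used:

    TAG_VALUES = {"wedding dress": 0.9, "ring": 0.8, "shoe": 0.6, "car": 0.5}

    def should_upgrade_to_manual(auto_tags, confidence, expected_views, threshold=50.0):
        """Weigh the commercial value of the automatic tags against expected
        viewership to decide whether a manual review is worth requesting."""
        tag_value = max((TAG_VALUES.get(t.lower(), 0.0) for t in auto_tags), default=0.0)
        expected_value = tag_value * confidence * expected_views
        return expected_value >= threshold

    # Example: should_upgrade_to_manual(["wedding dress"], 0.72, 1000) -> True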
  • upgrades of image reviews are automatic.
  • a tag of “black spider” may automatically result in an upgrade of the image review that includes sending the image to a human reviewer having a particular specialty, e.g., a spider expert.
  • the identification of particular plant or animal life often includes (depends on) location information, as the location of the plant or animal can be important for proper identification.
  • upgraded reviews may be requested by the person who originally requested that the image be reviewed.
• a user of Image Source 120A may provide an image of a dog and receive image tags comprising “black dog.” The user may then request further detail by providing the word “breed?”
  • the image review may be upgraded and sent to a human reviewer with specific knowledge of dog breeds.
  • the user is charged for the upgrade or is required to have a premium account in order to request upgrades. The user may specify a particular part of an image when requesting an image review upgrade.
• The presence of an upgrade request (automatically and/or manually generated) is detected in a Detect Upgrade Request Step 820.
  • the detection may be based on data or a command received from a member of Image Sources 120, from a member of Destinations 125, Automatic Identification Interface 150, and/or from a component of Image Processing System 110 such as Review Logic 170, Content Processing Logic 185 or Response Logic 175.
• In Select 2nd Destination Step 640, Destination Logic 160 is used to select a second member of Destinations 125 and/or Automatic Identification System 152 for review of the image. This selection can be based on any of the criteria on which Select 1st Destination Step 610 was based and, in addition, the image tags and/or other information resulting from the first review. For example, the selection of a second member of Destinations 125 may be based, at least in part, on an image tag manually or automatically generated in the first image review. Specifically, a tag of “black spider” may be used by Destination Logic 160 to select a member of Destinations 125 associated with a human reviewer having expertise in the identification of spiders.
  • selection of a second member of Destinations 125 may be based on a word provided by a user requesting the image review. Specifically, if the first review produces image tags “white shoe” and the user responds with “brand?” then Destination Logic 160 may use this information to select a member of Destinations 125 associated with a human reviewer having expertise in shoe brands.
• Destination Logic 160 is optionally configured to select Automatic Identification System 152 for the second review of the image, rather than a member of Destinations 125. This may occur, for example, when the image has been tagged with the name of an actor and the upgrade request requests “movie name?” In such a case, the image may be searched for in a library of movie images. The same approach may be taken for other reproducible objects such as currency, paintings, car models, trademarks, barcodes, QR codes, well-known persons, etc.
• In Post Image Step 470, the image is posted to the second selected member of Destinations 125 or Automatic Identification System 152 for a second review of the image.
• In a Receive 2nd Review Step 830, image tags characterizing the content of the image are typically received.
• Receive 2nd Review Step 830 is optionally an embodiment of Receive Review Step 475.
  • an additional referral, indication that the image cannot be tagged for some reason or other information may be received.
  • the image tags are received from the member of Destinations 125 or Automatic Identification System 152 to which the image was posted. Steps 820, 640, 470 and 830 may be repeated if needed.
• In Step 650, received image tags are associated with the image and/or provided to the source of the image, as discussed elsewhere herein.
  • an image is received from a web page.
  • the image is sent to Automatic Identification System 152 for automated review.
  • the result of the automated review includes an image tag “ring.”
  • This tag is processed using Review Logic 170 and identified as a potentially valuable image for use in advertising. As discussed elsewhere herein, this identification is optionally also based on other factors such as how often the image is viewed on the web page.
  • the review of the image is automatically upgraded and sent to a member of Destinations 125 that is associated with a human reviewer having expertise in Jewelry.
  • the human reviewer modifies the image tags to include “gold wedding ring” and these tags are associated with the image.
  • an image is received from an application executing on a mobile device.
  • the image includes a scene of a street and is sent to Destination 125A for review by a human reviewer.
• the human reviewer responds with image tags “street scene” and these tags are provided to the mobile device.
  • a request for a review upgrade is then received from the mobile device.
  • This request includes the text “car model?” and an identification of part of the image including a car.
• the image, text and identification are sent to Destination 125A or another member of Destinations 125 for further manual review.
  • the further review results in the image tags “1909 Model-T” which are then forwarded to the mobile device.
  • an image is received from a computing device.
• the image includes several pieces of paper currency lying on a plate and is sent to Destination 125A for manual review.
  • the resulting tags include “US currency on white plate” and are sent to the computing device.
  • a request for an image review upgrade is received.
  • the request includes “how much?”
  • the image is sent to Automatic Identification System 152 where automated currency identification logic is used to identify the amounts of the currency and optionally provides a sum. This information is then sent back to the computing device.
  • an image is received from a mobile device.
  • the image includes a leaf of a plant and is sent to Destination 125A.
• the human reviewer at Destination 125A provides the image tags “green leaf” and also upgrades the image review.
  • the image is sent to Destination 125B, which is associated with a second human image reviewer having expertise in botany.
• the selection of Destination 125B is based, in part, on the image tags “green leaf.”
• the image tags “green leaf” are sent to the mobile device.
• the second human image reviewer adds the words (tags) “poison ivy” to the already existing image tags. These additional tags are then also sent to the mobile device.
• the words “green leaf” are first displayed and then the words “poison ivy” are added to the display once available.
• FIG. 9 illustrates an example of Image Source 120A including electronic glasses, according to various embodiments of the invention.
• Electronic glasses include glasses to be worn by a person. Examples include “Google Glass” by Google®, “M100 Smart Glasses” by Vuzix®, the iOptik™ contact lens by Innovega, and/or the like. These systems are configured to allow a user to view both the real world and an electronic display at the same time.
• Electronic glasses may also include virtual reality systems such as the Oculus Rift™ by Oculus VR. These types of systems display images to a user using an electronic display, but do not provide a simultaneous direct view of the real world.
  • a direct view is a view that is not digitized for viewing, e.g., a view through a glass or lens.
• electronic glasses provide an interactive interface in which a user can select a subset of the image in real-time as the image is being viewed in or through the electronic glasses.
• “real-time” selection means that the image is being viewed as it is being captured, with only inconsequential delay.
  • an image viewed in real-time may be captured by a camera and processed using a graphics engine and displayed with only a delay resulting from electronic processing times.
  • Real-time viewing allows the user to position objects of interest within the viewed image by moving the image capture device as the image is being viewed.
  • real time viewing excludes viewing of images that have been stored for substantial periods before viewing.
  • Image Source 120A includes a Camera 910 configured to capture images.
  • the captured images can include still images or video comprising a sequence of images.
  • a Display 920 is configured to present the captured images to a user of Camera 910.
  • Display 920 includes a touch screen configured to function as a view finder for Camera 910, to display captured images and to display Image Capture Screen 210 (described elsewhere herein).
  • Display Logic 925 is configured to manage the display of images and other content on Display 920.
  • Display Logic 925 can include hardware, firmware and/or software stored on a computer readable medium.
  • Display 920 is optionally configured to display both an image and an advertisement.
  • Display 920 may be configured to display an advertisement and an image at the same time (e.g., as an overlay).
  • Display 920 may further be configured to display an image sequence received from Image Processing System 110.
  • the image sequence can include a series of still images or a video, and may be filtered, as described further elsewhere herein.
  • Display 920 includes a video decoder configured to decode video.
  • Display 920 may include logic configured to display a sequence of images one after another. Such display functionality is also further discussed elsewhere herein.
• Image Source 120A further comprises Selection Logic 930, configured for the user of Image Source 120A to indicate a subset of a captured image. This indication can be made within Image Capture Screen 210 in real-time as the image is being displayed and/or captured on Display 920. As discussed elsewhere herein, such an indication is made to select a particular object of interest within the image. Following selection, the subset of the image is optionally marked using Image Marking Logic 147 as discussed elsewhere herein. Image Marking Logic 147 may be used to add a mark to the image as shown within Display 920. As such, the user can see the location that has been marked. Selection Logic 930 is optionally configured for the user to remark the subset of the image until the user is satisfied with the selection.
  • Selection Logic 930 includes Tracking Logic 935 configured to track movement of the user’s eyes.
  • Tracking Logic 935 is optionally included within electronic glasses. Eye tracking can include detection of the focal point of both eyes, the direction of one or more eyes (eyeball direction), the focus of one or more eyes, blinking, eyeball movement, and/or the like.
  • Tracking Logic 935 is optionally configured to correlate a state of the user’s eyes with a location within captured images. Geometry data representing a geometric relation between Camera 910 and the physical elements of Selection Logic 930 are used to associate the state of the user’s eyes with the location within an image captured using Camera 910.
  • Tracking Logic 935 optionally includes a second camera directed at the eyes of the user. This camera may be mounted on electronic glasses or be part of other embodiments of Image Sources 120.
  • Tracking Logic 935 configured for tracking a user’s eyes may be included in a web camera, a smartphone, a computer monitor, a television, a tablet computer, and/or the like.
  • Tracking Logic 935 is configured to detect blinking of one or more eyes.
  • Tracking logic 935 may be configured to detect a single eye blink or a pattern of eye blinks. When such an event is detected, Selection Logic 930 may select a position within an image based on eye position data received from Tracking Logic 935, or alternatively select a position at the center of the currently viewed image.
  • the location and/or area of the marking can be displayed to the user on the marked image within Display 920.
  • the image, plus a red “X” at the marked location may be displayed to the user within Image Capture Screen 210.
  • the user may then confirm the selection using Confirmation Logic 940.
  • Confirmation Logic 940 is optionally responsive to Tracking Logic 935. For example, confirmation may be provided using a blink or other eye movement, an audio command, a verbal command, or a touch command.
  • Tracking Logic 935 is configured to detect, and interpret as a command, movement of the eyes into an unnatural position (e.g., cross-eyed). Such a movement can be used to provide a confirmation command. Confirmation is optionally required prior to sending the image to Network 115.
  • Selection Logic 930 includes Tracking Logic 935 that is configured to track something other than or in addition to the eyes.
• Tracking Logic 935 may be configured to detect a pointing finger of a user, an electronic device worn on a finger or wrist, and/or the like.
  • Selection Logic 930 is configured to infer a location within an image based on the detected object.
  • Tracking Logic 935 is configured to detect the location of a pointing finger within an image and infer that the location to be selected is at the tip of the finger.
  • Tracking Logic 935 is configured to detect the location of a wireless electronic device relative to Image Source 120A and infer that the location to be selected is along a line between the wireless electronic device and a part of Image Source 120A.
• Image Source 120A further includes an I/O 945 configured for Image Source 120A to communicate with Image Processing System 110 via Network 115.
  • I/O 945 can include wired and/or wireless connections.
• I/O 945 is configured to communicate wirelessly from electronic glasses to a cellular phone using a Bluetooth™ connection and then for the communication to be forwarded from the cellular phone to Network 115 using WiFi or a cellular service.
• Image Source 120A optionally further includes Channel Logic 965 configured for a user to specify image tags to be used by Sequence Assembly Logic 192 for selecting images to be included in an image sequence.
  • This image sequence may be provided to Image Source 120A and/or other clients remote from Image Processing System 110.
  • Image sequences defined using Channel Logic 965 on Image Source 120A are optionally subscribed to by and provided to third parties.
• a set of image tags, tag group, and/or other information discussed herein used to select the contents of an image sequence is optionally considered an image sequence “channel.” Once defined, the information for selecting images to be included in an image sequence channel can be shared and/or repeatedly used to generate image sequences.
  • Channel Logic 965 includes a graphical user interface configured for a user to select tags, etc. for the purposes of generating image sequences. The selections made using Channel Logic 965 are optionally shared, stored in Memory 135, and/or used to generate multiple image sequences by different people at different times.
  • Channel Logic 965 is optionally further configured to determine how an image sequence is presented on Image Source 120A. For example, Channel Logic 965 may be used to select a rate at which images are presented, if images are presented one image at a time, transitions between images, how many images are shown at once, if images are shown in a column or row, if images are shown in a two-dimensional mosaic or collage, and/or the like.
• Channel Logic 965 is configured for a user to pause display of a sequence, to save an image from an image sequence, to “like” an image, and/or to designate an image as a “favorite.” The pausing or saving of an image is optionally used by Popularity Logic 197 in determining the popularity of an image.
  • Channel Logic 965 is configured to use images of an image sequence as a screensaver and/or wallpaper on the client.
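• The information Channel Logic 965 manages might be represented roughly as below; the field names, defaults and matching rule are assumptions made for illustration only:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Channel:
        """Definition of an image sequence "channel" and how it is presented."""
        name: str
        include_tags: List[str] = field(default_factory=list)
        exclude_tags: List[str] = field(default_factory=list)
        images_per_minute: int = 12
        layout: str = "single"      # e.g., "single", "row", "mosaic"
        transition: str = "fade"

        def accepts(self, image_tags):
            tags = set(image_tags)
            return bool(tags & set(self.include_tags)) and not tags & set(self.exclude_tags)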
  • Image Source 120A optionally further includes Filter Logic 970.
• Filter Logic 970 is configured to apply a filter to images within an image sequence. As is described further elsewhere herein, more than one filter may be applied and different filters can be applied to different images within a sequence.
• Filter Logic 970 is optionally alternatively included in Image Processing System 110, in which case filters are applied before images within an image sequence are provided to members of Image Sources 120 and/or other clients of Image Processing System 110. Note that, in various embodiments, these clients of Image Processing System 110 can include any combination of the elements illustrated in FIG. 9. For example, they need not necessarily include Camera 910.
  • Image Source 120 further includes an embodiment of Memory 135 configured to store images captured using Camera 910, geometric data, account data, data used by Channel Logic 965, data used by Filter Logic 970, and/or the like.
  • Memory 135 includes non-transient memory such as Random Access Memory (RAM) or Read Only Memory (ROM).
  • Memory 135 typically includes data structures configured to store captured images and marking locations within these images.
  • Image Source 120 further includes a Processor 950.
  • Processor 950 is a digital processor configured to execute computing instructions.
  • Processor 950 is encoded with computing instructions to execute Display Logic 925, Selection Logic 930, Image Marking Logic 147, Channel Logic 965, Filter Logic 970, and/or Tracking Logic 935.
• Processor 950 optionally includes an Application Specific Integrated Circuit (ASIC) or Programmable Logic Array (PLA).
  • Image Source 120 optionally further includes Object Tracking Logic 955.
  • Object Tracking Logic 955 is configured to track movement of an object of interest within a sequence of images. For example, in some embodiments, a user may use Selection Logic 930 to select a subset of an image or an aspect of the image for which information is requested. This subset may include one or more pixels.
  • Object Tracking logic 955 is configured to use automatic (computer based) image interpretation logic to identify a specific object occupying the selected subset. This object may be a person, a vehicle, an animal, or any other object. The boundaries or other pixels of the selected object are optionally highlighted in Display 920 (discussed further below) by Object Tracking Logic 955.
  • This highlight can track the object as it moves within the sequence of images and can include changing pixel characteristics.
  • the highlighting optionally moves with the object on the display.
  • An aspect of the image may be a brand of an object within the image, a movie from which the image is obtained, a location of the content of an image, etc.
  • aspects of the images can be specified as being of interest using text such as “shoe brand?,” “movie?,” “actor?,” “location,” “breed?,” etc. Such specifications may be provided in an original request to tag an image and/or in an upgrade request.
  • images communicated from Image Sources 120 to Image Processing System 110 are part of a sequence of images that comprise a short video sequence. These video sequences may be tagged using the systems and methods described elsewhere herein.
  • One advantage of tagging a video sequence is that the tag(s) may characterize a specific action that occurs in the video. For example, tags of a figure skater may characterize specific jumps (double Lutz, etc.) that are better identified in video than in a still image.
  • Various embodiments include a specific limit on the length of the image sequence, e.g., the video must be no more than 3, 5, 7 or 10 seconds.
• While the embodiments illustrated by FIG. 9 include electronic glasses, these embodiments may be adapted to any device having eye tracking technology, including cell phones, video display monitors (e.g., computer or television screens), tablet computers, advertising displays, etc.
  • the embodiments of Image Source 120A illustrated in FIG. 9 include a television having an eye tracking camera configured to determine at which part of a television screen a user is looking.
  • Image Source 120A optionally further includes Image Processing Logic 960 configured to perform one or more steps for the purpose of tagging an image.
• Image Processing Logic 960 is optionally configured to reduce the load on Image Processing System 110 by performing these one or more steps locally at Image Source 120A.
  • Image Processing Logic 960 may be configured for performing initial steps in tagging of an image and then send the results of these initial steps to Image Processing System 110 for generation of image tags.
  • Image Processing Logic 960 is capable of completing the tagging process for some but not necessarily all images.
  • Image Processing Logic 960 includes hardware, firmware and/or software stored on computer readable media.
  • some embodiments include an instance of Processor 950 specifically configured to perform the functions of Image Processing Logic 960 discussed herein.
• Image Processing System 110 is configured to provide Image Processing Logic 960 to Image Sources 120, optionally via an “app store” such as the Apple App Store. Where applicable, providing Image Processing Logic 960 to a member of Image Sources 120 is an optional step in the various methods illustrated herein. Image Processing Logic 960 can be provided as an “app” or computer instructions that further include other logic discussed herein, for example the logic discussed in relation to FIG. 9.
• Image Processing Logic 960 is configured to identify specific features within an image. Feature identification includes determining if specific points within an image are or are not part of a feature of a given type. Types of features include, but are not limited to, edges, corners, blobs and ridges. Generally, a feature is an “interesting” or “useful” part of an image, for the purpose of identifying contents of the image. Image Processing Logic 960 may be configured to perform one or more of a number of different feature detection algorithms. In some embodiments, Image Processing Logic 960 is configured to select from among a number of different algorithms based on available processing power and/or contents of the image.
• Examples of known feature detection algorithms include “Canny,” “Sobel,” “Harris & Stephens/Plessy,” “SUSAN,” “Shi & Tomasi,” “Level curve curvature,” “FAST,” “Laplacian of Gaussian,” “Difference of Gaussians,” “Determinant of Hessian,” “MSER,” “PCBR” and “Grey-level blobs.” These types of algorithms are executed on a computing device and other such algorithms will be apparent to one of ordinary skill in the art.
  • the results of feature identification include identification of a specific feature type at a specific location within the image. This may be encoded in a “feature descriptor” or “feature vector,” etc.
  • the results of feature detection may also include a value representing a confidence level at which the feature is identified.
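• A minimal sketch of client-side feature identification, assuming the OpenCV library is available (the choice of ORB and Canny, and the crude confidence estimate, are assumptions; the patent does not mandate any particular library):

    import cv2

    def identify_features(image_path, max_keypoints=500):
        """Detect keypoints/descriptors and edges, and report a rough
        confidence based on how many keypoints were found."""
        image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=max_keypoints)
        keypoints, descriptors = orb.detectAndCompute(image, None)
        edges = cv2.Canny(image, 100, 200)
        confidence = min(1.0, len(keypoints) / float(max_keypoints))
        return keypoints, descriptors, edges, confidence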
  • Image Processing Logic 960 is further configured to calculate image descriptors based on identified image features.
  • Image descriptors are visual features of the contents of an image and include characteristics such as shape, color, texture and motion (in video).
  • Image descriptors may be part of a specific descriptor domain, such as descriptors related to the domains of face recognition or currency recognition.
  • the derivation of image descriptors is typically based on image features. For example, derivation of a 3-D shape descriptor may be based on detected edge features.
  • Image descriptors may characterize one or more identified objects within an image.
  • Image Processing Logic 960 and/or Image Processing System 110 are configured to first attempt identification of image features and derivation of image descriptors of various types and then to select from among a plurality of alternative image processing algorithms based on the levels of confidence at which the image descriptors are derived. For example, if image descriptors in a facial recognition domain are derived with a high level of confidence, then an image processing algorithm specific to facial recognition may be selected to generate image tags from these image descriptors.
• the task of tagging an image can be distributed between Image Sources 120 and Image Processing System 110. How the task is distributed may be fixed or may be dynamic. In embodiments where the distribution is fixed, specific steps are performed consistently on specific devices. In embodiments where the distribution is dynamic, the distribution of steps may be responsive to, for example, communication bandwidth, image type (still or video), processing power on Image Source 120A, current load on Image Processing System 110, availability of image reviewers at Destinations 125, the confidence to which steps are accomplished on Image Source 120A, and/or image descriptor data present on Image Source 120A. Any combination of these factors may be used to dynamically allocate distribution of processing steps.
• For example, if the derivation of image descriptors occurs with a low degree of confidence (relative to a predetermined requirement) on Image Source 120A, then the image features and/or image may be communicated to Image Processing System 110 for derivation of image descriptors using more powerful or alternative image processing algorithms. In contrast, if the derivation of image descriptors occurs on Image Source 120A with an adequate degree of confidence, then this step need not typically be performed on Image Processing System 110.
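• This confidence-gated division of labor could be sketched as follows; all of the callables and the threshold are assumptions standing in for the client- and server-side components described above:

    def tag_locally_or_remotely(image, derive_descriptors, match_local_library,
                                send_to_server, confidence_threshold=0.7):
        """Derive descriptors on the client and only fall back to
        Image Processing System 110 when local confidence is too low."""
        descriptors, confidence = derive_descriptors(image)
        if confidence >= confidence_threshold:
            tags = match_local_library(descriptors)
            if tags:
                return tags               # tagged entirely on the client
        # Low confidence or no local match: use more powerful server-side algorithms.
        return send_to_server(image, descriptors)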
• Following processing by Image Processing Logic 960, the results of these steps and/or the image may be communicated to Image Processing System 110 using I/O 945.
  • image descriptors may be used in an attempt to automatically tag the image or may be provided to a human image reviewer at one or more of Destinations 125.
  • the image descriptors may be used to identify a descriptor domain and this domain then used to select a member of Destinations 125 to which the image is sent.
• a descriptor domain of “vehicles” may be used to select an image reviewer having expertise in vehicles.
  • the classification of an image into a domain based on image descriptors may occur on either Image Processing System 110 or Image Processing Logic 960.
  • automatic tagging of an image is attempted based on derived image descriptors. In various embodiments, this may occur using Image Processing Logic 960 and/or Automatic Identification System 152.
  • Classification optionally occurs by comparing the image descriptors derived from the image with a library of image descriptors associated with different classes. For example, an image descriptor identifying a vehicle shape may match with a previously stored image descriptor associated with a “vehicle” class. If the class is suitable (in type, scope, etc.) the identification of a class may be sufficient to automatically select a tag for the image. For example, image descriptors matching those of a class “child face” may be sufficient to generate the tags “child’s face.”
• Image Processing System 110 includes a larger library of image descriptors associated with different classes relative to Image Source 120A. These libraries are optionally stored in Memory 135 of Image Processing System 110, Image Source 120A, or Automatic Identification System 152. Libraries of image descriptors stored in Image Source 120A are optionally based on images previously processed using Image Source 120A. For example, if several images from Image Source 120A are identified as having descriptors and tags relating to currency, a library of descriptors in the currency domain/class may be stored in Memory 135 of Image Source 120A.
• Image Processing Logic 960 is optionally configured to automatically tag the image using the associated tags. While the descriptor library may be received from Image Processing System 110, or may be developed using image tags received from Image Processing System 110, the tagging in the above example is not dependent on real-time communication with Image Processing System 110.
• In various embodiments, data characterizing relationships between image descriptors and classes and/or tags may be developed on Image Processing System 110, Image Source 120A, Destination 125A and/or Automatic Identification System 152. Once developed, the data may be transferred to improve and/or supplement the libraries at any of the other devices.
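• A rough sketch of matching a derived descriptor against such a locally stored library, assuming descriptors are numeric vectors and using Euclidean distance (the distance metric and threshold are assumptions):

    import numpy as np

    def classify_descriptor(descriptor, library, max_distance=0.4):
        """Return the class (e.g., "vehicle", "child face") of the closest
        stored reference descriptor, or None if nothing matches well enough."""
        best_class, best_dist = None, float("inf")
        for class_name, reference in library.items():
            dist = float(np.linalg.norm(np.asarray(descriptor) - np.asarray(reference)))
            if dist < best_dist:
                best_class, best_dist = class_name, dist
        return best_class if best_dist <= max_distance else None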
  • Image Sources 120 and Destinations 125 are connected in a peer-to-peer architecture.
  • any combination of the elements illustrated in Image Processing System 110 may be included in Image Sources 120 and/or Destinations 125.
  • One of Image Sources 120 may perform the image tagging and processing tasks discussed herein on an image received from another of Image Sources 120.
• FIG. 10 illustrates a method of processing an image at least partially on Image Source 120A, according to various embodiments of the invention.
• the methods illustrated by FIG. 10 can include a range of different processing steps performed on Image Source 120A. For example, those steps involving image descriptors are optionally performed on Image Processing System 110.
• In a Receive Image Step 1010, an image is received at Image Source 120A.
• the image may be received from a camera included in Image Source 120A, from Image Source 120B, from Network 115, from Image Processing System 110, from a wireless device, from a memory device, and/or the like.
  • the received image is optionally one of a sequence of images that form a video.
• In an Identify Features Step 1020, Image Processing Logic 960 is used to identify image features within the received image. As discussed elsewhere herein, methods of identifying image features are known in the art. Identify Features Step 1020 may apply one, two or more of these methods. The identification of features optionally includes a confidence level reflecting an estimated accuracy and/or completeness of the feature identification.
• In an optional Send Features Step 1030, the image features identified in Identify Features Step 1020 are sent to Image Processing System 110.
  • the features may be sent with or without the associated image and may be sent via Network 115.
• If Send Features Step 1030 is included in the method, the method optionally next proceeds to a Generate/Receive Tags Step 1070 in which tags for the image are received from Image Processing System 110.
  • Image Processing Logic 960 is optionally configured to perform Send Features Step 1030 based on a confidence level of the features calculated in Identify Features Step 1020. For example, if the confidence is below a threshold the step may be executed and both the image and the features sent.
• In a Derive Descriptors Step 1040, Image Processing Logic 960 is used to derive image descriptors from the image features identified in Identify Features Step 1020. As discussed herein, a wide assortment of methods is known in the art for deriving image descriptors. In some embodiments, Derive Descriptors Step 1040 includes using more than one method. The derivation may include a confidence level reflecting an estimated accuracy and/or completeness of the descriptor derivation. The types and content of descriptors derived are typically dependent on the image recognition algorithm(s) used.
• In an optional Send Descriptors Step 1050, the image descriptors derived in Derive Descriptors Step 1040 are sent to Image Processing System 110.
  • the image descriptors may be sent with or without the associated image and may be sent via Network 115.
• If Send Descriptors Step 1050 is included in the method, the method optionally next proceeds to a Generate/Receive Tags Step 1070 in which tags for the image are received from Image Processing System 110.
• Image Processing Logic 960 is optionally configured to perform Send Descriptors Step 1050 based on a confidence level of the image descriptors derived in Derive Descriptors Step 1040. For example, if the confidence is below a threshold the step may be executed and both the image and the descriptors sent.
• In a Compare Descriptors Step 1060, the one or more image descriptors derived in Derive Descriptors Step 1040 are compared with one or more image descriptors stored locally. As discussed elsewhere herein, these locally stored image descriptors are associated with image classes and/or image tags. The comparison may include calculation of a characteristic reflecting the quality of the match.
• both Send Descriptors Step 1050 and Compare Descriptors Step 1060 are performed. In this case, processing of the image descriptors can occur both on Image Source 120A and Image Processing System 110. Likewise, in some embodiments both Send Features Step 1030 and Derive Descriptors Step 1040 are performed and the image features are processed on both systems/devices.
• In an Assign/Receive Tags Step 1070, image tags characterizing the image are generated and/or received. For example, if the image, image features or image descriptors have been sent to Image Processing System 110, then corresponding tags may be received from Image Processing System 110 in Assign/Receive Tags Step 1070. If a match is found between the derived descriptors and the locally stored descriptors in Compare Descriptors Step 1060, then tags associated with the matched locally stored image descriptors are retrieved from local memory and assigned to the image. Tags may be both locally assigned and received for the same image.
• the image tags are optionally generated using image features and/or descriptors, e.g., without Image Processing System 110 receiving the actual image.
  • Assign/Receive Tags Step 1070 includes assigning a classification to an image, sending the image and the classification to Image Processing System 110, and receiving corresponding tags back from Image Processing System 110.
  • the tags may be identified using the methods illustrated in FIG. 4.
  • the classification may be used by Image Processing System 110 to generate the tags using Automatic Identification system 152 and/or a human reviewer at Destination 125 A.
• FIG. 11 illustrates a method of processing an image based on image descriptors, according to various embodiments of the invention. This method is typically performed on one of Image Sources 120. In the illustrated embodiments, Steps 1010, 1020, 1040 and 1060 are performed as described elsewhere herein. In a Classify Image Step 1110, the image being processed is classified based on a match between one or more image descriptors derived in Derive Descriptors Step 1040 and image descriptors previously stored on the one of Image Sources 120. The class or classes assigned to the image are the class or classes associated with the matched, previously stored image descriptors.
• In a Send Step 1120, the image and the class or classes assigned to the image are sent to Image Processing System 110.
  • the image is there processed as described elsewhere herein to produce image tags assigned to the image.
• the processing optionally includes use of the class or classes to select a human image reviewer or to assist in automatically tagging the image.
• In a Receive Tags Step 1130, the tags assigned to the image are received by the one of Image Sources 120 on which Receive Image Step 1010 was performed. The tags are then presented in Provide Results Step 455.
  • FIG. 12 illustrates a method of processing an image using feedback, according to various embodiments of the invention.
• This method is optionally performed on Image Source 120A and includes several communications between Image Source 120A and Image Processing System 110 in order to improve tagging of an image.
• In a Provide Image Step 1210, an image is provided from Image Source 120A to Image Processing System 110.
• In a Receive 1st Response Step 1220, a first response is received from Image Processing System 110. This response may include one or more image tags.
• In a Provide Feedback Step 1230, feedback regarding the received image tags is provided from Image Source 120A to Image Processing System 110.
  • This feedback is optionally manually entered by a human user of Image Source 120A and may include an upgrade request as discussed elsewhere herein.
• Feedback may include corrections to one or more of the received tags.
  • the feedback may include an indication that one of the tags is not representative of the image.
  • the feedback may include a classification of the image.
  • a second response is received from Image Processing System 110.
  • the second response is typically generated using the feedback provided in Provide Feedback Step 1230.
• For example, the first response includes the tag “car,” the feedback includes the term “toy,” and the second response includes the tags “Fisher-Price Superwagon.”
  • the methods illustrated by FIG. 12 are optionally used to improve the accuracy of image tagging.
• FIGs. 13 and 14 illustrate methods of providing image tags based on image descriptors, according to various embodiments of the invention.
  • the image descriptors are used to generate image tags that are then communicated over a computing network to a source of the image descriptors.
  • the image descriptors are used to determine a Destination 125 for an image.
  • the methods illustrated in FIGs. 13 and 14 are optionally performed in conjunction with methods illustrated elsewhere herein. For example the steps of these methods may be combined with those illustrated in FIG. 4.
• In a Receive Descriptors Step 1310, one or more image descriptors characterizing an image are received at Image Processing System 110. These image descriptors are optionally received without the associated image. Receiving only the descriptors typically requires less bandwidth than receiving the image.
  • the image descriptors are optionally received from Image Source 120A via Network 115 and generated using the methods illustrated in FIGs. 10 or 11.
• In a Compare Descriptors Step 1320, the received image descriptors are compared to one or more image descriptors previously stored at Image Processing System 110, e.g., stored in Memory 135. This comparison is made to determine if any of the received descriptors match the stored descriptors.
  • the stored descriptors are stored in association with one or more image tags and/or classifications. For example, one set of stored descriptors may be associated with the image tags “oak tree.”
• In a Retrieve Tags Step 1330, one or more image tags are retrieved responsive to a match between the received descriptors and the stored descriptors.
• the retrieved image tags are those associated with the matched set.
• In a Provide Tags Step 1340, the retrieved image tags are provided back to the source of the received descriptors, e.g., to Image Source 120A. They may there be presented to a user or otherwise processed as described elsewhere herein.
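• Steps 1310-1340 could be sketched on the server side roughly as below, again treating descriptors as numeric vectors; the stored-library layout and distance threshold are assumptions:

    import numpy as np

    # Hypothetical stored library: (reference descriptor, associated tags) pairs.
    STORED_DESCRIPTORS = [
        (np.array([0.12, 0.80, 0.33]), ["oak", "tree"]),
        (np.array([0.91, 0.05, 0.44]), ["white", "shoe"]),
    ]

    def tags_for_descriptors(received_descriptors, max_distance=0.4):
        """Compare received descriptors against the stored library and return
        the tags of the best match, without ever receiving the image itself."""
        best_tags, best_dist = None, float("inf")
        for descriptor in received_descriptors:
            for reference, tags in STORED_DESCRIPTORS:
                dist = float(np.linalg.norm(np.asarray(descriptor) - reference))
                if dist < best_dist:
                    best_tags, best_dist = tags, dist
        return best_tags if best_dist <= max_distance else None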
• FIG. 14 illustrates methods in which an image and data characterizing the image are processed at Image Processing System 110.
• In a Receive Image & Data Step 1410, the image and the data characterizing the image are received at Image Processing System 110.
  • the data characterizing the image can include, for example, a classification of the image or image descriptors characterizing the image.
• the image and characterizing data are optionally received from Image Source 120A.
  • Receive Image & Data Step 1410 is optionally an embodiment of Receive Image Step 410 and Receive Source Data Step 420.
• In a Determine Destination Step 1420, a destination for the image is determined based on the data characterizing the image.
• the destination may be one of Destinations 125 and/or Automatic Identification System 152.
• the data characterizing the image may include a specific classification, and the determined destination may be one of Destinations 125 associated with a human image reviewer having expertise in that classification.
  • Determine Destination Step 1420 is optionally an embodiment of Determine Destination Step 465.
  • in a Post Image Step 1430, the image, and optionally the classification, are communicated to the determined destination.
  • in a Receive Tags Step 1440, one or more image tags are received. The image tags are based on the image and are selected to characterize the image.
  • in a Provide Tags Step 1340, the image tags are provided to the source of the image, e.g., Image Source 125A.
  • Post Image Step 1430 is optionally an embodiment of Post Image Step 470.
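Below is a minimal sketch of the destination routing described for FIG. 14 (Determine Destination Step 1420 and Post Image Step 1430). The routing table, the destination identifiers, and the fallback to automated identification are hypothetical choices for illustration, not the claimed method.

```python
# Hypothetical routing table: classifications map to review destinations whose
# human reviewers have matching expertise (Determine Destination Step 1420).
DESTINATIONS_BY_CLASSIFICATION = {
    "fashion": "destination_125A",      # reviewers with fashion expertise
    "automotive": "destination_125B",
    "produce": "destination_125C",
}
AUTOMATIC_SYSTEM = "automatic_identification_system_152"

def determine_destination(image_data):
    """Pick a destination for the image based on its characterizing data."""
    classification = image_data.get("classification")
    # Unclassified images fall back to automated identification in this sketch.
    return DESTINATIONS_BY_CLASSIFICATION.get(classification, AUTOMATIC_SYSTEM)

def post_image(image_bytes, image_data):
    """Post Image Step 1430: send the image (and optionally its classification)
    to the determined destination; transport (HTTP, queue, etc.) is omitted."""
    destination = determine_destination(image_data)
    return destination

print(post_image(b"...", {"classification": "automotive"}))  # -> destination_125B
```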
  • FIG. 15 illustrates methods of prioritizing image tagging, according to various embodiments of the invention.
  • Image Ranker 190 is used to assign a priority to an image and the priority is used to determine how, if at all, the image is tagged.
  • in a Receive Image Step 410, an image is received at Image Processing System 110, as discussed elsewhere herein.
  • the image may be from one of Image Sources 120, and may be received by crawling webpages for images.
  • one or more of Image Sources 120 include logic configured to crawl websites and retrieve images from these websites.
  • Information received along with the image may include data regarding a webpage from which the image was retrieved. For example, the image may be received along with text and metadata from the webpage, data indicating how often the webpage is loaded (viewed), a URL of the webpage, and/or any other data on which image priority may be determined as discussed elsewhere herein.
  • in an Assign Priority Step 1520, Image Ranker 190 is used to automatically assign a priority to the received image.
  • the priority is optionally represented by a numerical value from 1-100, by a letter grade, or the like.
  • Priority optionally implies an (ordered) ranking of images. As described elsewhere herein, the priority may be determined based on a wide variety of factors.
  • in a Determine Processing Step 1530, a method of tagging (processing) the image is determined. The determination is based on the assigned priority of the image. In some embodiments, images with the lowest priority are not processed (tagged) at all.
  • the methods of tagging include automated tagging and/or manual tagging by a human reviewer, as described elsewhere herein.
  • in an optional Automatic Tagging Step 1540, the image is tagged using Automatic Identification System 152.
  • Automatic Tagging Step 1540 is optional in embodiments where the method of tagging determined in Determine Processing Step 1530 does not include use of Automatic Identification System 152.
  • Automatic Tagging Step 1540 is optionally performed prior to Assign Priority Step 1520. For example, an image may be tagged using Automatic Identification System 152, and a confidence level for the automatically generated tags may then be used in Assign Priority Step 1520 to determine a priority for manual (human) tagging. If the confidence of the automatically generated tags is high, then the priority for manual tagging may be set low; if the confidence is relatively low, then the priority for manual tagging may be set relatively high.
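A brief sketch of the inverse confidence-to-priority relationship just described, using the 1-100 priority scale mentioned above. The mapping function and the thresholds in the processing decision are assumptions for illustration only.

```python
def manual_tagging_priority(auto_confidence):
    """Map the confidence of automatically generated tags (0.0-1.0) to a
    manual-tagging priority on a 1-100 scale: high confidence -> low priority,
    low confidence -> high priority (the inverse relationship described above)."""
    auto_confidence = min(max(auto_confidence, 0.0), 1.0)
    return round(1 + (1.0 - auto_confidence) * 99)

def determine_processing(priority, skip_below=10, manual_above=60):
    """Determine Processing Step 1530 with hypothetical thresholds: lowest-priority
    images are not tagged at all, mid-priority images keep their automatic tags,
    and high-priority images are queued for manual review."""
    if priority < skip_below:
        return "skip"
    if priority > manual_above:
        return "manual_tagging"
    return "automatic_tags_only"

print(manual_tagging_priority(0.97), determine_processing(manual_tagging_priority(0.97)))  # 4 skip
print(manual_tagging_priority(0.30), determine_processing(manual_tagging_priority(0.30)))  # 70 manual_tagging
```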
  • in a Manual Tagging Step 1550, the image is sent to one of Destinations 125 for tagging by a human reviewer.
  • the image may be sent with tags generated using Automatic Identification System 152 and/or a variety of other information as described elsewhere herein.
  • Manual Tagging Step 1550 may include any of the steps illustrated by FIGs. 6-8.
  • in an Auction Tag Step 1560, an advertisement is assigned to the image for display on a webpage.
  • This webpage is optionally the webpage from which the image was obtained in Receive Image Step 410.
  • Auction Tag Step 1560 is optionally performed in real time as a request for the webpage is received. At that time, the tag(s) assigned to the image can be auctioned off to the party willing to provide the greatest consideration for placing an advertisement over or beside the image.
  • Auction Tag Step 1560 is optionally performed using Advertising System 180, and the auction process may be managed by a third party, such as Google’s AdSense®.
  • in a Retag Step 1570, an image is retagged.
  • Retag Step 1570 may include an analysis of how often advertisement(s) assigned to the image are clicked as compared to an expected click rate. For example, if advertisements assigned to an image based on a first tagging are not clicked on at an expected rate, then the tags may not be an optimal representation of the image. The image may be retagged in an attempt to improve the click rate of assigned advertisements.
  • Retag Step 1570 may include any of the tagging methods disclosed herein, e.g., those methods discussed in relation to FIGs. 6-8 and 15. Retag Step 1570 may use the knowledge that the tags resulting from the first tagging were not optimal.
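A minimal sketch of the retagging decision described above, comparing an observed click rate against an expected rate. The tolerance factor and the specific counts are assumed values for illustration.

```python
def should_retag(clicks, impressions, expected_click_rate, tolerance=0.5):
    """Flag an image for Retag Step 1570 when advertisements assigned under the
    current tags are clicked at well below the expected rate.

    `tolerance` is a hypothetical fraction: an observed rate below
    tolerance * expected_click_rate triggers retagging."""
    if impressions == 0:
        return False  # no evidence either way
    observed_rate = clicks / impressions
    return observed_rate < tolerance * expected_click_rate

# Expected 2% click rate; only 12 clicks in 2,000 impressions (0.6%) -> retag.
print(should_retag(clicks=12, impressions=2000, expected_click_rate=0.02))  # True
```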
  • the methods illustrated by FIG. 15 may also be applied to image sequences, e.g., video.
  • the image sequence may be presented in a browser or using a variety of alternative applications.
  • video may be provided to a member of Image Sources 120 from a website such as youtube.com or from streaming services such as Netflix, Comcast cable television, DirecTV, Roku, or Hulu.
  • Factors used to determine if an image within a video should be manually tagged include: how often the video is viewed and/or the estimated value of expected tags.
  • Expected tags may be indicated by an automatic review of the image using Automatic Identification System 152, dialog within the video, text accompanying the video (e.g., a description, caption or title), and/or the like.
  • the advertisement may be a video appended at the beginning or end of the image sequence, or spliced within the image sequence. Thus, the advertisement and video can be presented together in association.
  • the advertisement may include an overlay placed over part of the image sequence, typically a part including the tagged image.
  • Image Processing System 110 and/or Automatic Identification System 152 are configured to overcome the limitations of machine-readable labels by observing salient object features sufficient to identify and confirm an observed object, a collection of objects, or to comprehend the actions describing a story in which a collection of objects interacts.
  • Automatic Identification System 152 and/or Image Processing System 110 may be used to identify an object without use of a machine-readable label or to confirm an identification made using a machine-readable label.
  • this new type of analysis can be applied incrementally to legacy systems, offering quality control to traditionally machine-read objects.
  • Image Processing System 110 may be deployed into self-checkout kiosks to visually confirm that an object matches a read barcode.
  • Automatic Identification System 152 may be embodied on a mobile device.
  • Automatic Identification System 152 may be disposed on a smartphone, in a vehicle, on a camera, etc. In this way, the processing of images can be performed on the same device used to collect the images.
  • Automatic Identification System 152, and optionally any other elements of Image Processing System 110 may be disposed on a device locally connected to a device configured to capture images.
  • Automatic Identification System 152 may be disposed on a network hub connected to a set of security cameras. The connections between the network hub and the cameras may be wired or may use a local wireless connection such as WiFi, Bluetooth, and/or the like.
  • Automatic Identification System 152 may be disposed on a smartphone locally connected to smart glasses (e.g., AR/VR glasses) which are optionally configured to both collect images and/or display image tags generated by Automatic Identification System 152.
  • the network hub or smartphone may be connected to other devices using long range communications such as Network 115 (e.g., cellular communications and the internet).
  • Automatic Identification System 152 and optionally any other elements of Image Processing System 110 are optionally disposed in Internet-of-Things (IoT) devices.
  • IoT devices include, for example, surveillance cameras, smartphones, point of sale devices, barcode readers, robotics, drones, appliances, door locks, civil/traffic management systems, and/or automobiles.
  • One advantage of including Automatic Identification System 152 on such devices is that the ability to process images gives these devices the intelligence required to comprehend their environment, without having to communicate all the processed images to a remote server.
  • this “edge” processing of images on IoT devices can reduce the bandwidth (relative to remote server processing) required to understand the images by at least 5, 10, 20, 50, or 100 times.
  • each IoT device can participate in a clustered learning environment, whereby they independently gather salient data about their individual environments and contribute to a shared, learned understanding as the data is aggregated through parent nodes in the network.
  • parent nodes, which may include a remote server and/or an instance of Image Processing System 110, are configured to observe differences in the visual features, or other intermediate neural network data, being received from edge nodes and to determine salience, interest, and other metrics that are optionally used to further train a neural network of Automatic Identification System 152, or to perform some global image analysis.
  • in some embodiments, IoT devices including Automatic Identification System 152 are arranged in a layered network; such a network may include 2, 3, or more layers. For example, the IoT devices may include a set of cameras, intermediate nodes may include a set of network hubs, and a parent node may include an image processing system, such as Image Processing System 110.
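A minimal sketch of the layered arrangement just described, in which only tags (rather than pixels) flow upward from cameras through hubs to a parent node. The node functions and the example tags are placeholders, not the claimed implementation.

```python
from collections import Counter

def camera_node(image):
    """Edge node: run local identification and emit tags rather than pixels.
    Placeholder for Automatic Identification System 152 on the device."""
    return ["person", "bicycle"] if image else []

def hub_node(camera_images):
    """Intermediate node (e.g., a network hub): aggregate tags from its cameras."""
    tags = []
    for image in camera_images:
        tags.extend(camera_node(image))
    return tags

def parent_node(hub_tag_lists):
    """Parent node (e.g., Image Processing System 110): global view over all hubs."""
    return Counter(tag for tags in hub_tag_lists for tag in tags)

print(parent_node([hub_node([b"img1", b"img2"]), hub_node([b"img3"])]))
# Counter({'person': 3, 'bicycle': 3})
```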
  • FIG. 16 illustrates an Image Analysis System 1700 according to various embodiments of the invention.
  • Image Analysis System 1700 is optionally an embodiment of the system(s) illustrated in FIG.
  • Image Analysis System 1700 may include Automatic Identification System 152, Advertising System 180, Image Sources 120, and/or any practical combination of elements included in Image Processing System 110.
  • Image Analysis System 1700 optionally includes Training Server 1720 and optionally more than one Computing Device 1710.
  • Computing Devices 1710 are optionally embodiments of Image Source 120A.
  • Network 115 transmits data bidirectionally through a wired or wireless network between one or more of Computing Device 1710 and a Training Server 1720.
  • Image Analysis System 1700 optionally includes one or more of Advertising System 180 and/or Inventory Server 1735.
  • Network 115 may optionally transmit data bidirectionally between Training Server 1720, Advertising System 180, and/or Inventory Server 1735.
  • Computing Devices 1710 include Visual Processing Logic 1747 configured to process images or image sequences (and optionally 3D information).
  • Visual Processing Logic 1747 is typically an embodiment of Automatic Identification System 152 and may include other elements of Image Processing System 110.
  • Visual Processing Logic 1747 may include an embodiment of Image Ranker 190 and/or Destination Logic 160 configured to select images based on rank and/or to select a destination to send those images.
  • Only a fraction of the images processed by Visual Processing Logic 1747 on Computing Devices 1710 may be communicated elsewhere. Those images that are communicated may be selected for communication based on the actions and/or objects identified in the images by Visual Processing Logic 1747.
  • Visual Processing Logic 1747 disposed on Computing Device 1710 is optionally optimized with regard to its processing hardware and/or memory requirements. For example, when a neural network is prepared for a lightweight footprint, vocabulary may be pruned and optimized for the application’s domain, the model may be quantized, or the nodes pruned.
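One of the optimizations mentioned above, quantization, is illustrated by the sketch below using simple symmetric linear quantization of floating-point weights to signed integers. The example weights and bit width are arbitrary, and vocabulary or node pruning would be handled separately; this is not the claimed optimization procedure.

```python
def quantize_weights(weights, num_bits=8):
    """Symmetric linear quantization of floating-point weights to signed integers,
    a common way to shrink a model's memory footprint."""
    q_max = 2 ** (num_bits - 1) - 1               # 127 for int8
    scale = max(abs(w) for w in weights) / q_max or 1.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize_weights(quantized, scale):
    """Recover approximate floating-point weights from the quantized values."""
    return [q * scale for q in quantized]

weights = [0.82, -0.15, 0.03, -0.91]
q, s = quantize_weights(weights)
print(q)                         # [114, -21, 4, -127]
print(dequantize_weights(q, s))  # approximately the original weights
```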
  • Visual Processing Logic 1747 may include a downloadable application (stored on a non-transient computer readable medium) configured to execute on a mobile device, may be embedded in hardware, and/or may include computing instructions as firmware on a logic circuit.
  • Image Analysis System 1700 optionally includes a Display 1740, which is configured to present image tags to the user.
  • Display 1740 is configured to provide an interface by which to select the Input Device 1745 (e.g., a Camera) field of view and/or focus.
  • Display 1740 is configured to provide a visual interface to allow for the selection of object(s), actions, or a series of actions comprising a story for which to apply Visual Processing Logic 1747.
  • Display 1740 is configured to allow a selection of items in the Input Device 1745 field of view for which to apply the Visual Processing Logic 1747.
  • Display 1740 is configured to display surface and depth data from Input Device 1745, allowing the user to differentiate between foreground and background objects or other scenery features for which to apply the Visual Processing Logic 1747.
  • Display 1740 may be disposed on Computing Device 1710 and/or may be included in a Monitoring System 1781 configured for remote control of Computing Device 1710, for example, a drone or remotely controlled camera.
  • Image Analysis System 1700 includes Input Device 1745, which comprises one or more visual or spatial sensors.
  • Input Device 1745 is optionally an embodiment of Image Source 120A or included in an embodiment of Image source 120A.
  • Input Device 1745 may comprise a single visual-spectrum camera, an infrared-spectrum camera, or both visual and infrared cameras.
  • Input Device 1745 may comprise both a primary camera and a secondary camera, where the secondary camera is optionally configured to compute a depth map and other spatial data of the scene.
  • Input Device 1745 may comprise LIDAR, which is optionally configured to gather depth, surface, contour, and other spatial data from the scene.
  • Input Device 1745 includes both Image Source 120A (e.g., a camera) and a barcode reader, where Image Source 120A is configured to generate images of an object associated with a barcode.
  • the barcode reader may include a scanning laser or may be configured to interpret a barcode based on a captured image.
  • Visual Processing Logic 1747 may be disposed in the same device as Input Device 1745, or may be disposed in a device locally connected to Input Device 1745.
  • Visual Processing Logic 1747 may be disposed in a local inventory system or point-of-sale system, and images and barcodes are communicated from Input Device 1745 to Visual Processing Logic 1747 via a wire or short-range wireless signals.
  • Visual Processing Logic 1747 is optionally configured to both detect and interpret a barcode and also tag other objects within an image.
  • the combined barcode interpretation and image tagging are optionally performed on a same image, and the resulting image tags may be compared to the barcode interpretation to confirm the accuracy of the barcode read and to confirm that the barcode is properly associated with an object identified in the image.
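A minimal sketch of the cross-check just described, comparing a barcode-derived product name against image tags. The word-overlap matching rule and the example products are simplifications chosen for illustration.

```python
def barcode_matches_image(barcode_product_name, image_tags):
    """Cross-check a barcode read against tags generated from an image of the
    same object: the read is confirmed when the product name and the tags
    share descriptive words (a deliberately simple matching rule)."""
    product_words = set(barcode_product_name.lower().split())
    tag_words = {word for tag in image_tags for word in tag.lower().split()}
    return bool(product_words & tag_words)

# Barcode resolves to a cereal box, but the camera sees a bag of apples:
print(barcode_matches_image("Oat Crunch Cereal", ["red apples", "produce bag"]))  # False
print(barcode_matches_image("Oat Crunch Cereal", ["cereal box", "oat crunch"]))   # True
```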
  • Display 1740 and/or Input Device 1745 is part of a wearable device while other parts of Image Analysis System 1700 are disposed in a separate mobile device.
  • a lightweight wearable device may communicate images to a local mobile device on which image analysis (tagging/captioning) is performed.
  • Such embodiments may include, for example, smart glasses and a smartphone, wherein the smart glasses include the Display 1740 and Input Device 1745 and the smartphone tags image contents locally using Visual Processing Logic 1747.
  • Communications between the smart glasses (e.g., AR/VR system) and the smartphone may occur using a wired connection or short-range radio protocols such as Bluetooth or WiFi, while communication between the smartphone and other devices via Network 115 are accomplished using cellular protocols and/or Internet protocols.
  • This allows an Augmented Reality and/or Virtual Reality (AR/VR) system to receive image tags (e.g., captions) using the processing power of the smartphone and without sending all of the images to a remote device, e.g., to a server via the internet.
  • the AR/VR system can be embodied in a device of lower processing and/or power requirements (relative to the smartphone).
  • the AR/VR system can, thus, be embodied in lightweight, stylish glasses or another wearable device.
  • Image Analysis System 1700 is optionally configured with a keyboard or other input device.
  • the keyboard provides a mechanism by which to provide corrective feedback on the output of Visual Processing Logic 1747.
  • the keyboard may be configured virtually as part of Display 1740 or as a separate physical keyboard attached to Computing Device 1710.
  • Image Analysis System 1700 includes Communication Circuit 1750, which is optionally enclosed in Computing Device 1710 and configured to manage network traffic, signal flow, and data between Display 1740 and Microprocessor 1790, and one or more of Input Device 1745.
  • Image Analysis System 1700 includes Communication Circuit 1750, which is both external and internal to Computing Device 1710 and is configured to manage network traffic, signal flow, and data between Display 1740 and Microprocessor 1790, and one or more of Input Device 1745.
  • Image Analysis System 1700 is installed in a municipality in one or more processing nodes, each of which has at least 25, 50, 100 or more connected cameras. In this embodiment, image processing at the edge nodes results in a reduction in the amount of data communicated by Communication Circuit 1750.
  • Communication Circuit 1750 optionally includes embodiments of Network 115.
  • Image Analysis System 1700 includes Microprocessor 1790 which is configured to execute Visual Processing Logic 1747 and/or device Update Logic 1728.
  • Microprocessor 1790 is connected to Memory 1755 and both are contained within Computing Device 1710, sharing both a common power source and data communication bus.
  • Microprocessor 1790 is configured to execute any combination of Visual Processing Logic 1747, Inventory Logic 1775, and/or Alert Logic 1780 (discussed elsewhere herein).
  • Microprocessor 1790 is optionally an embodiment of Processor 140.
  • Memory 1755 includes, for example, Application Memory 1760, Execution Memory 1765, and Tag Memory 1770.
  • Application Memory 1760 includes non-transient memory configured to store computing instructions, firmware, and/or hardware.
  • Application Memory 1760 is configured to store any combination of Inventory Logic 1775, Device Update Logic 1729, Visual Processing Logic 1747, and/or Alert Logic 1780.
  • Application Memory 1760 may be configured to store a neural network of Visual Processing Logic 1747, associated neural network control logic, and neural network output interpretation logic.
  • Execution Memory 1765 is configured to store the executing application code for Microprocessor 1790, and in some embodiments may optionally be connected to Microprocessor 1790 with a specialized data bus.
  • Tag Memory 1770 is configured to store vocabulary (words used for tags or captions) for the neural network interpretation logic.
  • Tag Memory 1770 may be updated remotely by Device Update Logic 1729.
  • Neural network interpretation logic (typically part of Visual Processing Logic 1747) is logic configured to take an output of a neural network of Visual Processing Logic 1747 and convert the output into a phrase (e.g., image caption) which optionally includes both identification of objects within one or more images and relationships between those objects.
  • Analysis System 100 optionally includes Device Update Logic 1729, which is configured to receive full or partial updates to Visual Processing Logic 1747 and/or the vocabulary stored in Tag Memory 1770, from Training Server 1720.
  • Device Update Logic 1729 is configured to schedule and apply an update to the Visual Processing Logic 1747 from Training Server 1720.
  • a scheduled update may include a full or partial update in either of Visual Processing Logic 1747, including the neural network of Visual Processing Logic 1747, and/or Tag Memory 1770.
  • in cases where Image Analysis System 1700 is deployed in inventory management, warehousing, robotics, archival, retail, and other industries whereby material objects are retrieved by their natural visual representation (i.e., objects without a predetermined computer code such as a barcode, QR code, or any other coded label), various embodiments of Image Analysis System 1700 optionally include Inventory Logic 1775, which is configured to retain and match various objects to inventory records through Visual Processing Logic 1747. As noted elsewhere herein, Image Analysis System 1700 may be used in combination with various types of barcode (e.g., QR code) readers.
  • optional Alert Logic 1780 is configured to monitor the output of Visual Processing Logic 1747 for various lists of trigger words, actions, and/or concepts, for example, a tag “gun” or a relationship “hitting.” When a trigger word is detected, Alert Logic 1780 may be configured to transmit an alert indicating the occurrence of such trigger words, actions, and/or concepts (and optionally related images) through Network 115 to other systems outside Image Analysis System 1700.
  • in one example using Alert Logic 1780, multiple surveillance cameras, or other Input Devices 1745, may be connected throughout a city to several Computing Device 1710 edge nodes, whereby each Computing Device 1710 is responsible for one or more locally generated camera video feeds.
  • the images from the cameras are processed using Visual Processing Logic 1747, and any triggers resulting from this processing result in Alert Logic 1780 sending an alert to a city-wide central monitoring station, such as an Office of Emergency Management (OEM) monitoring center, only when such actions or sequences of actions exist in the salient video feeds.
  • Such a configuration may also be used in residential or commercial surveillance and security systems, whereby a home’s Computing Device 1710, e.g., a residential security system control panel, may be configured to create curated image and/or video feeds that only include triggering content, reducing a homeowner’s concern over the privacy of a camera until such time as images indicating that human life or property is at risk are processed.
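A minimal sketch of the trigger-word monitoring performed by Alert Logic 1780 as described above. The trigger list, frame identifiers, and tag stream are hypothetical; in a deployment the alert would be transmitted over Network 115 rather than returned.

```python
TRIGGER_WORDS = {"gun", "hitting", "fire", "intruder"}   # hypothetical trigger list

def check_alerts(tag_stream):
    """Scan the tags emitted by Visual Processing Logic 1747 and collect an alert
    for each frame whose tags include any configured trigger word."""
    alerts = []
    for frame_id, tags in tag_stream:
        triggered = TRIGGER_WORDS & {t.lower() for t in tags}
        if triggered:
            alerts.append((frame_id, sorted(triggered)))
    return alerts

stream = [(101, ["sidewalk", "dog"]), (102, ["person", "gun"]), (103, ["car"])]
print(check_alerts(stream))  # [(102, ['gun'])]
```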
  • Training Server 1720 is configured to receive image and video streams, image features, and/or other intermediate neural network data, and one or more image tags via Network 115 from one or more Computing Device 1710.
  • one or more images may be captured by a wearable camera or augmented reality device (e.g., headset) and the images communicated wirelessly to a nearby smartphone for processing.
  • the smartphone, or other Computing Device 1710, may then communicate a fraction of the captured and processed images to Training Server 1720 and/or Image Processing System 110, e.g., less than 1/10th, 1/50th, 1/100th, or 1/1000th of the captured images, for the purpose of quality control and/or training.
  • the fraction of captured and processed images may be sent directly from any instance of Computing Device 1710. As only a fraction of images are transmitted to either Image Processing System 110 and/or Training Server 1720, the required bandwidth and server processing power can be substantially lower than if more images (e.g., all processed images) are transmitted.
  • Training Server 1720 contains Confirmation Logic 1724 which is configured to analyze corrective and other feedback data from one or more Computing Device 1710.
  • Data received by Confirmation Logic 1724 are optionally compared using a configurable threshold in quorum, salience, disagreement, general differential, cosine similarity, Euclidean distance, or other comparative analysis across one or more input data.
  • Confirmation Logic 1724 may be configured to receive corrective data through direct and explicit user feedback or indirect user behavior data.
  • An exemplary embodiment is configured to monitor user interaction in the form of acceptable user behaviors, such as a purchase or successful advertisement interaction (click, view, lead, etc.), which serves as one or more positive confirmation steps in validating neural network success.
  • Such comprehensible input may also be optionally configured in the inverse, by reducing the weight of the corresponding neural network results.
  • Confirmation Logic 1724 is configured to compare analyses of several images including the same object or objects from different views such that the results of the analyses can be compared.
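A minimal sketch of one of the comparative analyses listed above, a quorum check across analyses of the same object taken from different views. The quorum fraction and example tag sets are assumptions for illustration; cosine similarity or Euclidean distance over feature vectors would be alternative comparisons.

```python
from collections import Counter

def quorum_confirms(tag_sets, quorum_fraction=0.6):
    """Confirmation Logic sketch: several analyses of the same object (e.g., images
    taken from different views) each produce a tag set; a tag is confirmed when it
    appears in at least `quorum_fraction` of the analyses."""
    counts = Counter(tag for tags in tag_sets for tag in set(tags))
    needed = quorum_fraction * len(tag_sets)
    return sorted(tag for tag, n in counts.items() if n >= needed)

views = [
    ["bicycle", "person"],
    ["bicycle", "helmet"],
    ["bicycle", "person"],
]
print(quorum_confirms(views))  # ['bicycle', 'person']
```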
  • Training Server 1720 may include any combination of the elements of Image Processing System 110 and/or Visual Processing Logic 1747.
  • Training Server 1720 is optionally configured to store a master copy of a neural network. This master neural network serves as a reference for neural networks deployed in one or more Computing Device 1710.
  • NN Training Logic 1726 receives corrective adjustment data from Confirmation Logic 1724, which is then applied to a master copy of the neural network weights.
  • a partial neural network is stored in Training Server 1720.
  • neural network data stored in Training Server 1720 is then used to compute a delta through Update Logic 1728, which then packages and distributes either a full or partial update through Network 115 to one or more Computing Device 1710.
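A minimal sketch of computing and applying a partial update (delta) to deployed network weights, as described above for Update Logic 1728 and Device Update Logic 1729. The named scalar weights and the change threshold are placeholders; a real update would operate on weight tensors.

```python
def compute_delta(master_weights, deployed_weights, min_change=1e-3):
    """Server side: compare the master copy of the network weights with the copy
    deployed on a device and package only the weights that changed meaningfully,
    producing a partial update."""
    return {
        name: master_weights[name]
        for name in master_weights
        if abs(master_weights[name] - deployed_weights.get(name, 0.0)) >= min_change
    }

def apply_update(deployed_weights, delta):
    """Device side: merge the received partial update in place."""
    deployed_weights.update(delta)
    return deployed_weights

master   = {"w0": 0.502, "w1": -0.130, "w2": 0.077}
deployed = {"w0": 0.500, "w1": -0.210, "w2": 0.077}
delta = compute_delta(master, deployed)
print(delta)                          # {'w0': 0.502, 'w1': -0.13}
print(apply_update(deployed, delta))  # deployed now matches the master copy
```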
  • Image Analysis System 1700 is configured to communicate with or include Advertising System 180.
  • Selection Logic 1733 is configured to receive a request through Network 115 from one or more Computing Device 1710, the request containing one or more image tags, image captions, image features, or other intermediate neural network data.
  • Selection Logic 1733 is configured to utilize received image tags to retrieve one or more advertisements from Advertisement Storage 1737.
  • Selection Logic 1733 may be optionally configured to compare image features or other intermediate neural network data for selection of advertisements.
  • for example, hiking advertisements may be configured to match similar or related image tags, such as “hiking boots”, or a cluster of image features, or other intermediate neural network data, belonging to a particular cohort of users interested in “hiking boots” or similar topics.
  • Image Analysis System 1700 is optionally configured with Inventory Server 1735 including Inventory Logic 1775.
  • Inventory Logic 1775 can include any inventory management system configured to receive object identities, such as object identities determined by Visual Processing Logic 1747 and optionally confirmed by a barcode reading.
  • a warehouse inventory management system is configured to select one or more products matching or similar to image tags “3 Door French Door Stainless Steel Refrigerator” received from Computing Device 1710 when Input Device 1745 captures the visual, surface, or other spatial data from an object or scene and such data has been interpreted by Visual Processing Logic 1747. This approach may be used for tracking inventory and/or sales.
  • Smart wearable devices have increased in popularity in recent years. As described herein, it is advantageous to have an intermediary device perform some computations on behalf of a wearable device. For example, a smartphone may perform image analysis on behalf of a smartwatch or smart glasses using a wired or local wireless connection. The smartphone is an “intermediary” device because it may also be connected to a remote server, i.e., via the internet.
  • a wearable device such as a watch, headset, clothing, pendant, glasses, bodycam, pin, etc. includes a camera configured to capture images. These images are communicated to a mobile device such as a smartphone or vehicle, etc. On the mobile device images are processed and the results of the processing are then communicated to the wearable device and/or a remote server.
  • the wearable device may be an example of Input Device 1745 configured to record the images. These results can include identification of subject matter within the images, e.g., image tags.
  • a first neural network is used on the mobile device to make the identification based on processing of the image. On the wearable device the identification can be communicated to a user either in the form of text or audio.
  • a subset of the images may be communicated from the mobile device to a remote server, such as Image Processing System 110 and/or Automatic Identification System 152.
  • the remote server is configured for scoring the accuracy of the identification.
  • the server may provide a score representing an accuracy of an image tag determined using Visual Processing Logic 1747.
  • the determination of a score can be done manually (by a person) and/or can be done using a second neural network that has greater processing power and/or is better trained relative to the first neural network.
  • the server e.g., Training Server 1720, is configured to send an update to the first neural network to the mobile device, the update being based on images received from multiple mobile devices, e.g., multiple instances of Computing Device 1710.
  • a mobile device acts as an intermediary local to an image capture device, where there is more communication of images between the mobile device and the image capture device relative to communication of images between the mobile device and a server.
  • a smartphone intermediary may support a VR/AR headset having less processing power than the smartphone.
  • the server receives a fraction of the images and provides quality control, neural network updates, and/or other information such as advertisements.
  • FIG. 17 illustrates methods of processing an image on a client device according to various embodiments of the invention. These methods are optionally performed using Image Analysis System 1700 and/or Image Processing System 110.
  • Image or video content is captured by Input Device 1745 and encoded into a data file or stream.
  • Image or video content may be displayed via Display 1740 to allow an operator of Analysis System 100 to control, focus, or otherwise adjust Input Device 150, or to select an object or objects for identification.
  • in a Process Step 1815, image and video data from Capture Step 1810 are transmitted to Visual Processing Logic 1747.
  • data are optionally pre-processed to align dimension and format with those acceptable to a first neural network of Visual Processing Logic 1747 and then forwarded through the first and optionally more neural networks for further processing.
  • Process Step 1815 includes, for example, processing the image or video stream on the mobile computing device using a microprocessor and a visual processing application, the processing including generation of one or more image tags characterizing an identity of a three-dimensional object, an action within the image, and/or a series of actions comprising a story.
  • the visual processing application optionally including a neural network stored on the mobile computing device.
  • image tags produced in Process Step 1815 are sent to one of several optional servers in a Send Tags Step 1820 in order to receive product inventory, ad inventory, or various data further describing the object or scene captured during the Capture Step 1810.
  • Inventory Logic 1775 and/or Alert Logic 1780 may be invoked to determine whether image tags are salient or interesting enough for any downstream services, such as Advertising System 180 or Inventory Server 1735.
  • Send Tags Step 1820 is invoked, sending image tags to Inventory Server 1735, which then retrieves product inventory to fulfill the user’s search.
  • a camera continuously captures scene data through Input Device 1745 and/or Image Source 120A and through Process Step 1815 produces a stream of image tags, the tags which are then processed via Alert Logic 1780 to determine if tags are interesting to a number of downstream services or devices (SMS or other message delivery systems, emergency services, security monitoring, etc.).
  • in a Receive Ad Step 1825, one or more advertisements are received from the advertisement inventory of Advertising System 180, the advertisement(s) being selected based on the tags received as a result of Send Tags Step 1820.
  • Send Tags Step 1820 is invoked, sending image tags to Advertising System 180.
  • image tag data generated using Visual Processing Logic 200 are combined and exhibited to a user, and/or an advertisement obtained from Advertising System 180 is exhibited to the user.
  • Data displayed to the user may optionally include the image tags as generated by Visual Processing Logic 200, or an interpretation of the image tags as they relate to the application of Image Processing Device 110. For example, an interpretation may be “Inventory Confirmed” or “Inspection Passed.”
  • Image tags produced in Process Step 1815 are optionally compared with validation data of similar visual content. If these data are determined to be salient or interesting through analysis, or on random/periodic selection, a Send Image Step 1835 is invoked.
  • Communication Circuit 1750 is used to transmit visual data, image tags, image features, and/or intermediate neural network data to a remote server such as Image Processing System 110. For example, in a periodic selection, every 100th image is sent for quality control and/or training purposes.
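A minimal sketch of the periodic selection just mentioned, in which only every 100th processed image is forwarded for quality control and/or training. The sampling interval is configurable; the value 100 is taken from the example above.

```python
def select_for_upload(frame_index, every_nth=100):
    """Periodic selection: send every 100th processed image (about 1% of frames)
    to the remote server for quality control and/or training."""
    return frame_index % every_nth == 0

uploaded = [i for i in range(1, 501) if select_for_upload(i)]
print(uploaded)             # [100, 200, 300, 400, 500]
print(len(uploaded) / 500)  # 0.01 -> roughly a 100x reduction in images sent
```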
  • Computing Device 1710 receives either a full or partial update of Visual Processing Logic 1747 from Training Server 1720 through Network 115 and applies the update in either or both of Application Memory 1760 and/or Tag memory 1770.
  • one or more Computing Device 1710 capture visual data related to a traffic collision, process these data into image tags, and send the tags and other intermediate neural network data via Network 115 to Training Server 1720.
  • Training Server 1720 compares differentials related to salience and uniqueness, and optionally also accepts any corrective input from field experts.
  • Training Server 1720 determines an update to the neural network which is sent back to Computing Device 1710 via Network 115.
  • Visual Processing Logic 1747 is then replaced or partially updated based on the received update.
  • the methods illustrated by FIG. 17 include: sending the image, video stream, depth data, or surface data to the remote server and receiving an update to the neural network, the update being based on a correctness of the one or more image tags in characterizing the identity of the object or action within the image.
  • the image, video stream, depth data, or surface data is optionally one of a set of similar type (whether image, video, etc) comprising a configurable quorum for which associated or corrective tags are communicated to the remote server.
  • the image, video stream, depth data, or surface data optionally contains a determinable novelty threshold between one of a set of images, video stream, depth data, or surface data sources, for which associated or corrective tags are communicated to the remote server.
  • the image, video stream, depth data, or surface data optionally contain a configurable novelty threshold between previously trained inputs, for which associated or corrective tags are communicated to the remote server.
  • the image, video stream, depth data, or surface data optionally contain a configurable salience threshold between previously recorded input data, for which associated or corrective tags are communicated to the remote server.
  • FIG. 18 illustrates methods of providing information to an AR/VR device, according to various embodiments of the invention.
  • the possible small footprint (memory and/or processing power requirements) of Visual Processing Logic 1747 is used to provide image processing locally, as opposed to at a distant server.
  • the image processing may occur on the same device used to capture images or on a nearby device.
  • a body cam, smart glasses, AR/VR headset or heads-up display, vehicle camera, drone, and/or the like may be used to both capture images and process the captured images to generate image tags.
  • any of the above devices may be used to capture images and a local device (intermediary) such as a smartphone, vehicle computer, wearable computer, drone controller, personal assistant device (e.g., Amazon Alexa), network hub, and/or the like may be used to process the captured images to generate image tags.
  • the image tags (or associated information) are then communicated back to the device used to capture the images.
  • a fraction of processed images is typically also provided to a remote server for the purposes of quality control, training, etc.
  • the mode of communication between the local devices and remote servers is optionally different, e.g., using the Internet or cellular networks versus using wired or short-range radio frequency communication standards.
  • the use of an intermediary is optional.
  • in a Capture Step 1810, one or more images are captured using Input Device 1745 or Image Source 120A.
  • the captured images may include a sequence of images embodied in a video standard.
  • Capture Step 1810 can include capture of images and/or 3D spatial information, and may be accomplished using any of the image capture devices discussed herein.
  • in a Transmit Step 1910, the captured images are transmitted to a local device including Visual Processing Logic 1747, e.g., to Computing Device 1710. This transmission can occur using a wired connection or short-range radio frequency protocols.
  • images captured using an AR/VR headset or smart glasses are communicated to a smartphone or wearable device.
  • images captured using vehicle navigation cameras and/or LIDAR are communicated to a vehicle computer.
  • in a Process Step 1815, the communicated images are processed using Visual Processing Logic 1747 and/or Image Processing System 110 to generate image tags characterizing actions, objects, and/or relationships between objects within the communicated images.
  • in a Transmit Tags Step 1930, the image tags generated in Process Step 1815 are transmitted back to the device that captured the images or to a different local device, for example, back to the smart glasses or AR/VR device.
  • images may be captured by vehicle navigation cameras, communicated to a vehicle computing device including Visual Processing Logic 1747, and then transmitted to an augmented reality heads-up display position for viewing by a driver.
  • the image tags are displayed to a user. They may be displayed on an AR/VR device, on a heads-up display, and/or the like.
  • An example of a derivative of the tags can include an indication “improper license plate” derived from tags indicating a vehicle make and model and tags indicating a license plate number, where the license plate does not match the vehicle or is expired.
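A minimal sketch of deriving the “improper license plate” indication described above from a vehicle tag and a plate-number tag. The registration lookup table, plate numbers, and result strings are hypothetical.

```python
# Hypothetical registration lookup: plate number -> (registered make/model, expired?)
REGISTRATIONS = {
    "ABC1234": ("ford f-150", False),
    "XYZ9876": ("toyota camry", True),
}

def plate_anomaly(vehicle_tag, plate_tag):
    """Derive an 'improper license plate' indication from two tags: one naming the
    vehicle make and model, one naming the license plate number."""
    record = REGISTRATIONS.get(plate_tag)
    if record is None:
        return "improper license plate: unregistered"
    registered_vehicle, expired = record
    if expired:
        return "improper license plate: expired registration"
    if registered_vehicle != vehicle_tag.lower():
        return "improper license plate: plate does not match vehicle"
    return "ok"

print(plate_anomaly("Honda Civic", "ABC1234"))   # plate does not match vehicle
print(plate_anomaly("Toyota Camry", "XYZ9876"))  # expired registration
```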
  • in a Send Image Step 1835, a fraction of the images processed in Process Step 1815 are sent from the device including Visual Processing Logic 1747 to a separate device such as Training Server 1720 and/or Image Processing System 110. As noted elsewhere herein, the fraction sent may be less than 1/10th, 1/100th, or 1/1000th of the total images processed.
  • Send Image Step 1835 may be performed using a communication standard having longer range than Transmit Step 1910. For example, cellular protocols and/or the internet.
  • Example applications for use of the image tagging systems and/or methods discussed herein include: Marketing (what does the user look at - eye tracking w/ heartrate/sweat detection to measure excitement, provide advertisements based on length of view or eye focus); Describe a scene (accessibility for vision impaired); Estimating weight/sizes (recognize a couch and estimate its weight, moving & storage); Estimate work to be done (automated measurements based on images, recognizing surfaces such as floor, corners and walls, etc., contractor) - measurements; Detect violations/issues/problems (inspection); Medical diagnostics and support (surgeon in ER or via.
  • Anomaly detection, for example, identifying license plates that don’t match a car make/model/year, or identifying license plates of vehicles that have not been registered
  • Aeronautics pilot support - process visual field - safety/hazard detection
  • motorcycle/Bicycle helmet safety/hazard detection
  • Contact tracing; Business/Networking/Political support (facial detection); Public Safety (facial / anomaly / illegal activity detection); and Industrial warehousing (automatically identify items on a pick list in an AR device, label items and inventory / picker).
  • Various embodiments of the invention include use of Image Processing System 110 to facilitate autonomous operations, such as autonomous flight. Such embodiments can include use of any combination of the systems and method disclosed elsewhere herein.
  • Computing Device 1710 may be included in an autonomous or semi-autonomous drone or other autonomous or semi-autonomous vehicle. Such a drone or other vehicle may operate in conditions wherein limits on direct radio communications prevent optimal remotely piloted operation.
  • Various embodiments of the invention include Navigation Logic 188 configured to navigate a vehicle based at least in part on image analysis (e.g., the descriptive tagging) performed using Image Processing System 110 and/or Computing Device 1710.
  • by including Computing Device 1710, image analysis on a small mobile device, such as a drone, can enable navigation normally only found in larger devices.
  • Computing Device 1710 may weigh less than 250, 200 or 170 grams.
  • Visual Processing Logic 1747 is executed using an instance of Microprocessor 1790 having less than 70, 50, 40 or 30 GFLOPs.
  • Visual Processing Logic 1747 includes an image processing model less than 1 GB, 600 MB or 450 MB in size. In various embodiments, Visual Processing Logic 1747 is configured to process images having an image size of 224x224, 331x331 or 448x448, or any size therebetween, or more.
  • Image Processing System 110 and/or Computing Device 1710 may be included in a vehicle, wherein radio or optical communications are intermittent or undesirable. Intermittent communications can include situations where radio or direct laser communications are unreliable, such as where they may be subject to interference, jamming, or lack of line-of-sight. Undesirable communications can include situations in which communications are subject to monitoring, spoofing, or detection of a party involved in the communications, for example, where one or more senders in the communications do not wish to be located. In these embodiments, vehicle operation (e.g., drone flight) may fall over from management by a remote pilot to management dependent on Image Processing System 110 and/or Computing Device 1710.
  • some embodiments of the invention include Fall-Over Logic 187, configured to determine when Image Processing System 110 and/or Computing Device 1710 are partially and/or fully responsible for vehicle operation (e.g., navigation), optionally as compared to full and/or real-time remote human vehicle operation.
  • “Fall-over” as used herein is intended to include full or partial exchange of responsibility from a first entity to a second entity, when the first entity is compromised.
  • fall-over is optionally between different vehicles, for example, between different drones. This may occur when 1) operation is switched from being dependent on one source to another source, or 2) operation is switched from being dependent on one source to multiple sources.
  • fall-over from a remote operator to Navigation Logic 188 may occur when a radio signal is jammed or otherwise lost.
  • Image Processing System 110 and/or Computing Device 1710 are configured to receive and process images from several mobile devices working in concert. For example, a group of two or more drones may each obtain images of an area and/or object. These images can be used by Computing Device 1710 to confirm identity of the object and/or action within the images. Visual Processing Logic 1747 may use the multiple images to produce a single descriptive tag or may generate a descriptive tag for each image and then compare the tags produced to confirm their accuracy.
  • Image Processing System 110 and/or Computing Device 1710 include Targeting Logic 189 configured to use the descriptive tags generated by image processing (e.g., by Visual Processing Logic 1747) to identify and/or track targets.
  • Targeting Logic 189 may be subject to fall-over (e.g., under the control of Fall-Over Logic 187) from a remote human operator to operation at least partially dependent on outputs of Image Processing System 110 and/or Computing Device 1710; and Targeting Logic 189 may also make use of processing of images obtained from multiple vehicles working in concert.
  • Computing Device 1710 is configured to use images obtained by one or more drones to identify an object (e.g., another drone, aircraft, flying object, or ground-based vehicle). The identification may be used to avoid, track, navigate to, follow, target, locate, and/or otherwise interact with the object. For example, identification of a missing person may be used to generate and communicate a location of that person, while following the person. In another example, images generated by one or more drones may be used to identify another drone and navigate to, target, fire on, and/or follow the other drone.
  • In some embodiments, two or more of Computing Device 1710 are configured to form a peer-to-peer network, optionally with each in communication with the others and each including Visual Processing Logic 1747.
  • the Computing Devices 1710 may communicate via radio and/or optical signals. Each may capture images of an object and/or area, and optionally each is capable of processing the captured images. The processed images include views of the object and/or area from different viewpoints. As noted elsewhere herein, the images from different Computing Devices 1710 may be used for confirmation of descriptive tags, and/or two or more images may be processed as a group by one or more instances of Visual Processing Logic 1747.
  • Computing Device 1710 is included in a drone configured to be piloted by a remote user via a wireless signal.
  • the drone includes Fall-Over Logic 187 configured to switch between piloting by the remote user and piloting using Navigation Logic 188, the Navigation Logic 188 being responsive to the output of Visual Processing Logic 1747.
  • Fall-Over Logic 187 is configured to make this switch responsive to loss and/or restoration of the wireless signal.
  • Fall- Over Logic 187 is configured to switch between these two piloting modes based on a schedule and/or a detection event.
  • the drone may be configured to spend a majority of flight time in radio silence (e.g., at least 50, 70, 90, 95 or 98% of the time).
  • during this time, Navigation Logic 188 and Visual Processing Logic 1747 are used for navigation, piloting, and/or target acquisition.
  • a switch between the autonomous piloting and remote piloting is optionally performed responsive to detection of a target object, as determined by Visual Processing Logic 1747.
  • the target may be, for example, a person, a specific person, a vehicle, a license plate, an animal, an object of a predetermined color, detection of an object of a specific type (e.g., a vehicle model, another drone, a tracked vehicle, a vehicle greater than a certain size, a weapon, a flying object, a drone, a helicopter, a jet, a boat/ship, ... ), detection of a moving vehicle, a type of movement (e.g., flying, running, crawling, shooting, burning, ... ), detection of a heat signature, and/or the like.
  • Fall-Over Logic 187 may be configured to automatically switch between piloting modes. For example, switching navigation from an autonomous mode to a remote human pilot. Fall-Over Logic 187 may be configured to activate fall-over before or after target acquisition. Fall-Over Logic 187 may be configured to activate fall-over based on detection of a jamming signal.
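A minimal sketch of the mode switching just described for Fall-Over Logic 187, covering signal loss or jamming (fall over to autonomous navigation) and target detection (hand back to a remote pilot for confirmation). The rule set, parameter names, and mode strings are assumptions; the actual policy is configurable as described above.

```python
def select_pilot_mode(radio_ok, jamming_detected, target_detected, current_mode):
    """Fall-Over Logic sketch: choose between remote and autonomous piloting based
    on the events described above. Encodes one plausible policy, not the claimed one."""
    if jamming_detected or not radio_ok:
        return "autonomous"   # fall over to Navigation Logic 188 / Visual Processing Logic 1747
    if target_detected and current_mode == "autonomous":
        return "remote"       # hand back for human confirmation of the detected target
    return current_mode

print(select_pilot_mode(radio_ok=False, jamming_detected=False,
                        target_detected=False, current_mode="remote"))     # autonomous
print(select_pilot_mode(radio_ok=True, jamming_detected=False,
                        target_detected=True, current_mode="autonomous"))  # remote
```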
  • Image Processing System 110 and/or Computing Device 1710 are disposed within a “mother” vehicle configured to carry one or more sub-vehicles.
  • the sub-vehicles may or may not include additional instances of Image Processing System 110 and/or Computing Device 1710.
  • a drone is configured to carry at least 1, 2, 4, 5 or 8 smaller drones.
  • the mother drone is configured to release a smaller drone, which may include another instance of Computing Device 1710 and/or Input Device 1745.
  • the smaller drone may be guided to the target by its own instance of Visual Processing Logic 1747, by the Visual Processing Logic 1747 of the mother drone, or both.
  • the smaller drone may be replaced by a missile, optionally including Visual Processing Logic 1747.
  • the mother drone may be configured to loiter in an area until a target is detected, optionally, switch to remote pilot navigation to confirm the detected target, and then release the smaller drones to attack the target.
  • images from the smaller drones are optionally used to confirm and/or track the target, the images being processed on the smaller drones or the mother drone.
  • Image Processing System 110 includes multiple devices, e.g., Computing Devices 1710.
  • image analysis in any of the embodiments described herein may be distributed among the multiple devices, and the image analysis may be based on data, e.g., images, received from the multiple devices. Further, the output of each instance of Image Processing System 110, Image Source 120, and/or Computing Device 1710 may be evaluated at a subset thereof. For example, a system may include multiple flying vehicles, with one, two, or more vehicles including Visual Processing Logic 1747 and further vehicles including cameras configured to capture images to be communicated to the one, two, or more vehicles for processing.
  • Various embodiments of the invention include methods of operating a remote vehicle, the methods comprising the following steps. 1) Guiding the remote vehicle using instructions received via a radio signal from a remote operator.
  • the radio signal is optionally operated using frequency hopping and/or a burst mode. 2) Detecting an event requiring fall-over (a “fall-over event”) from control by the remote operator (a “remote mode”) to Navigation Logic 188 supported by Visual Processing Logic 1747.
  • the event may include any of the events discussed elsewhere herein, such as jamming of the radio signal or detection of a target.
  • 3) Falling over from the “remote mode” to an autonomous mode. 4) Navigating the remote vehicle in the autonomous mode based on image processing by Visual Processing Logic 1747. 5) Optionally receiving from the remote operator a confirmation of the target.
  • Computing Device 1710 can include any combination of Fall-Over Logic 187, Navigation Logic 188, and/or Targeting Logic 189.
  • Computing Device 1710 may be in communication with a remote Monitoring System 1781, on which Display 1740 is optionally located, and which is configured for a human or AI operator to navigate in the remote mode.
  • Images discussed herein are optionally part of a video sequence of a video.
  • Human image reviewers may provide image tags at Destinations 125 using audio input.
  • the audio input can be converted to text in real-time using audio to text conversion logic disposed on Destinations 125 and/or Image Processing System 110.
  • Image tags are optionally processed by spellcheck logic.
  • the “smartphone” discussed herein may be replaced by other (optionally mobile) devices, such as a police or emergency services radio, a tablet computer, a home/commercial security system, a vehicle computing system, a bodycam system, a vehicle camera system, a sentry system, a border control system, a drone, and/or the like, any of which may include Computing Device 1710.
  • the term “Real-time” means without unnecessary delay such that a user can easily wait for completion.
  • the systems and methods described herein are optionally used to tag audio content, such as music or dialog. This audio content may be part of a video or otherwise associated with an image. In some embodiments, audio content is automatically converted to text and this text is used to assist in manually or automatically tagging an image. Text generated from audio content may be used in manners similar to those described herein for text found on a webpage including an image, to assist in tagging the image.
  • Computing systems referred to herein can comprise an integrated circuit, a microprocessor, a personal computer, a server, a distributed computing system, a communication device, a network device, or the like, and various combinations of the same.
  • a computing system may also comprise volatile and/or non-volatile memory such as random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), magnetic media, optical media, nano-media, a hard drive, a compact disk, a digital versatile disc (DVD), and/or other devices configured for storing analog or digital information, such as in a database.
  • logic or “applications” noted above can comprise hardware, firmware, or software stored on a non-transient computer-readable medium, or combinations thereof. This logic may be implemented in an electronic device, e.g., circuit, to produce a special purpose computing system.
  • Computer-implemented steps of the methods noted herein can comprise a set of instructions stored on a computer-readable medium that when executed cause the computing system to perform the steps.
  • a computing system programmed to perform particular functions pursuant to instructions from program software is a special purpose computing system for performing those particular functions.
  • Data that is manipulated by a special purpose computing system while performing those particular functions is at least electronically saved in buffers of the computing system, physically changing the special purpose computing system from one state to the next with each change to the stored data.

Abstract

The computer generated and manual image reviews can be performed sequentially or in parallel. The generated image tags may be provided to a requester in real-time, be used to select an advertisement, and/or be used as the basis of an internet search. In some embodiments generated image tags are used as a basis for an upgraded image review. A confidence of a computer generated image review may be used to determine whether or not to perform a manual image review. Images and their associated image tags are optionally added to an image sequence. Image recognition may occur on a client and/or mobile device, e.g., a drone.

Description

Mobile Client Based Image Analysis CROSS-REFERENCE TO RELATED APPLICATIONS [001] This application is a continuation in part of U.S. non-provisional Application No. 17/399,024, filed August 10, 2021, and this application claims priority to and the benefit of U.S. provisional application No. 63/165,054, filed March 23, 2021. All of the above patent applications are hereby incorporated herein by reference.
BACKGROUND
[002] Field of the Invention
[003] The invention is in the field of image processing, and more particularly in the field of characterizing content of images.
[004] Related Art
[005] It is typically more difficult to extract information from images as compared to text data. However, a significant fraction of information is found in images. The reliability of automated image recognition systems is highly dependent on the contents of an image. For example, optical character recognition is more reliable than facial recognition. It is a goal of image recognition to tag an image. Tagging refers to the identification of tags (words) that characterize the content of an image. For example, an image of a car may be tagged with the words “car,” “Ford Granada,” or “White 1976 Ford Granada with broken headlight.” These tags include varying amounts of information and, as such, may vary in usefulness.
[006] Analysis of images can include identification of objects and/or actions within an image or sequence of images. Typically, such analysis is performed on a suitably powerful computer such as a server. Applications executable on portable devices, such as smartphones or tablets, are known to communicate images and video to remote servers for analysis. The remote servers process the visual content and return information derived from visual media, such as tags, captions, full image descriptors, and other types of metadata (hereinafter referred to as image tags), to the portable devices. See, for example, the systems described in US Patent Ser. Nos. 9,569,465, 9,575,995, 9,830,522, 9,665,595, 9,959,467, 10,140,631, 9,639,867, 10,185,898, 10,223,454 and 10,831,820, the disclosures of which are incorporated herein by reference.
[007] Current visual analyses on mobile devices include “image feature” identification and “similarity analysis.” For example, Apple, Inc.’s iPhone can automatically detect image features, such as edges, surfaces, and faces. These capabilities can be used to classify pictures as having similar faces and are provided as tools in the form of a software framework, or SDK, that developers may leverage when developing software on Apple’s iOS operating system. These feature sets, unique to each OEM’s platform, attract developers to a platform and, thus, increase a company’s application portfolio and revenue. Furthermore, highly novel and sophisticated feature sets create a type of vendor lock-in with developers, given that these foundational elements are necessary for their applications to function.
SUMMARY
[008] Embodiments of the invention include a two-pronged approach to tagging of images.
The first prong is to perform automated image recognition on an image. The automated image recognition results in a review of the image. The image review includes one or more tags identifying contents of the image and optionally also a measure of confidence representative of the reliability of the automated image recognition. The second prong in the approach to tagging of images includes a manual tagging of the image. Manual tagging includes a person viewing each image, considering the content of the image, and manually providing tags representative of the content of the image. Automated image recognition has an advantage in that the cost, in time or money, of analyzing each image can be relatively low. Manual tagging of images has an advantage of higher accuracy and reliability.
[009] Embodiments of the invention combine both automated image recognition and manual image recognition. In some embodiments automated image recognition is performed first. The resulting image review typically includes both one or more tags characterizing the image and a measure of confidence in the accuracy of these tags. If the confidence is above a predetermined threshold, then these tags are associated with the image and provided as an output of the tagging process. If the confidence is below the predetermined threshold, then a manual review of the image is performed. The manual review results in additional and/or different tags that characterize the contents of the image. In some embodiments, the automated image recognition and the manual review of the image are performed in parallel. The manual review is optionally cancelled or aborted if the automated image recognition results in one or more tags having a confidence above the predetermined threshold.
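By way of non-limiting illustration only, the following Python sketch shows one possible form of the sequential variant of this two-pronged approach, in which a manual review is requested only when the automated confidence falls below the predetermined threshold. The names auto_review and manual_review, the ImageReview structure, and the 0.8 threshold are hypothetical placeholders and are not required by any embodiment.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ImageReview:
        tags: List[str]
        confidence: float   # 0.0 - 1.0; confidence reported by the automated recognizer

    CONFIDENCE_THRESHOLD = 0.8  # hypothetical predetermined threshold

    def tag_image(image: bytes,
                  auto_review: Callable[[bytes], ImageReview],
                  manual_review: Callable[[bytes], ImageReview]) -> List[str]:
        # First prong: automated image recognition.
        review = auto_review(image)
        if review.confidence >= CONFIDENCE_THRESHOLD:
            return review.tags            # automated tags are accepted as-is
        # Second prong: manual review when automated confidence is too low.
        human = manual_review(image)
        # Merge automated and manual tags, preserving order and dropping duplicates.
        return list(dict.fromkeys(review.tags + human.tags))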
[0010] The image processing systems described herein are optionally configured to provide an image sequence as output. For example, images received from a first and second remote client may be included in an image sequence provided to a third remote client. In various embodiments, the image sequence can include images received from at least 10, 100 or 1000 remote clients. The image sequence can be presented in a variety of ways including, for example, as a video, a mosaic, a set of still images, and/or derivatives of images. The selection of images is optionally dependent on the various information discussed elsewhere herein. For example, the inclusion of an image in an image sequence is optionally dependent on image tags such as those generated using the systems and methods disclosed herein. An image sequence can be provided to a plurality of remote clients, including clients from which the one or more of the images within the sequence were received.
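A minimal sketch of how images received from multiple remote clients might be assembled into an image sequence based on their image tags is shown below; the dictionary keys "tags" and "captured_at" are assumptions made for illustration only.

    from typing import Dict, List

    def build_image_sequence(tagged_images: List[Dict], required_tag: str,
                             max_length: int = 100) -> List[Dict]:
        # Keep only images whose tags include the tag of interest.
        selected = [img for img in tagged_images if required_tag in img["tags"]]
        # Order by capture time so the sequence can be presented as a video or mosaic;
        # "captured_at" is an assumed item of source data, not defined by the disclosure.
        selected.sort(key=lambda img: img["captured_at"])
        return selected[:max_length]

    sequence = build_image_sequence(
        [{"client": "A", "tags": ["beach", "dog"], "captured_at": 2},
         {"client": "B", "tags": ["beach", "ball"], "captured_at": 1}],
        required_tag="beach")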
[0011] In some embodiments, recognition of an image can be upgraded. Upgrading of the image recognition process includes a request for further or improved tags representative of the content of the image. For example, if automated image recognition results in the tags “white car,” an upgrade of this recognition may result in the tags “white Ford Granada.” In some embodiments, an upgraded review makes use of an expert human reviewer. For example, the above example may include the use of a human reviewer with expert knowledge of automobiles. Other examples of reviewer expertise are discussed elsewhere herein.
[0012] Some embodiments include a compact and highly accurate visual analysis application capable of performing visual analysis on a device such as a security camera, a smartphone, a tablet computer, or another type of handheld device. The visual analysis includes identification of objects, actions, or a series of actions comprising a story within visual content, such as in an image or video. This identification results in metadata, such as image tags and optionally full image descriptors, which characterize the identity of objects or actions within the presented visual content. In an exemplary embodiment, the visual analysis application, including a trained neural network, requires less than 250MB of device storage. After analysis of the visual content, the image data or video data stream, descriptive metadata, and/or resulting image tags may be provided to a remote server for a variety of purposes such as selection of an advertisement, quality control, inventory management, security monitoring and surveillance, safety assessment, and/or training.
[0013] Image tags generated on a device, or the corresponding user feedback and behavioral data, may be communicated (with or without the corresponding images) for the purposes of performing a search, quality control, inventory management, activating alerts, selecting advertisements, surveillance, media archive retrieval and cataloging, building user behavior or demographic profiles, relating similar cohorts and marketing groups, and/or the like.
[0014] Various embodiments of the invention include features directed toward improving the accuracy of image recognition while also minimizing cost. By way of example, these features include efficient use of human reviewers, real-time delivery of image tags, and/or seamless upgrades of image recognition. The approaches to image recognition disclosed herein are optionally used to generate image tags suitable for performing internet searches and/or selecting advertisements. For example, in some embodiments, image tags are automatically used to perform a Google search and/or sell advertising based on Google’s AdWords.
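As a purely illustrative sketch of this use of image tags, the snippet below builds a web search URL from a list of tags; the endpoint and query parameter represent a generic web search and are not a description of any particular vendor's search or advertising API.

    from urllib.parse import urlencode

    def search_url_from_tags(tags):
        # The endpoint and parameter name are illustrative of a generic web search,
        # not a description of any particular vendor's advertising or search API.
        return "https://www.google.com/search?" + urlencode({"q": " ".join(tags)})

    print(search_url_from_tags(["white", "1976", "Ford Granada"]))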
[0015] Various embodiments of the invention include an image processing system comprising an I/O configured to communicate an image and image tags over a communication network; an automatic identification interface configured to communicate the image to an automatic identification system and to receive a computer generated review of the image from the automatic identification system, the computer generated review including one or more image tags identifying contents of the image; destination logic configured to determine a first destination to send the image to, for a first manual review of the image by a first human reviewer; image posting logic configured to post the image to the destination; review logic configured to receive a manual review of the image from the destination and to receive the computer generated review, the manual review including one or more image tags identifying contents of the image; response logic configured to provide the image tags of the computer generated review and the image tags of the manual review to the communication network; memory configured to store the image; and a microprocessor configured to execute at least the destination logic.
[0016] Various embodiments of the invention include a method of processing an image, the method comprising receiving an image from an image source; distributing the image to an automated image identification system; receiving a computer generated review from the automated image identification system, the computer generated review including one or more image tags assigned to the image by the automated image identification system and a measure of confidence, the measure of confidence being a measure of confidence that the image tags assigned to the image correctly characterize contents of the image; placing the image in an image queue; determining a destination; posting the image for manual review to a first destination, the first destination including a display device of a human image reviewer; and receiving a manual image review of the image from the destination, the image review including one or more image tags assigned to the image by the human image reviewer, the one or more image tags characterizing contents of the image.
[0017] Various embodiments of the invention include an image source comprising a camera configured to capture an image; a display configured to present the image to a user; eye tracking logic configured to detect an action of one or more eyes of the user; optional image marking logic configured to place a mark on the image, the mark being configured to indicate a particular subset of the image and being responsive to the detected action; display logic configured to display the mark on the image in real time; an I/O configured to provide the image to a computer network; and a processor configured to execute at least the display logic.
[0018] Various embodiments of the invention include an image source comprising a camera configured to capture an image; a display configured to present the image to a user; eye tracking logic configured to detect an action of one or more eyes of the user; image marking logic configured for a user to indicate a particular subset of the image and to highlight an object within the subset, the indication being responsive to the detected action; display logic configured to display the highlighted object on the image in real time; an I/O configured to provide the image and the indication of the particular subset to a computer network; and a processor configured to execute at least the display logic.
[0019] Various embodiments of the invention include an image source comprising a camera configured to capture an image; a display configured to present the image to a user; selection logic configured for selecting; image marking logic configured for a user to indicate a particular subset of the image and to highlight an object within the subset, the indication being responsive to the detected finger; an I/O configured to provide the image and the indication of the particular subset to a computer network; display logic configured to display the image in real time and to display image tags received from the computer network in response to the image, the image tags characterizing contents of the image; and a processor configured to execute at least the display logic.
[0020] Various embodiments of the invention include an image processing system comprising an I/O configured to communicate an image sequence and image tags over a communication network; an optional automatic identification interface configured to communicate the image sequence to an automatic identification system and to receive a computer generated review of the image from the automatic identification system, the computer generated review including one or more image tags identifying contents of the image; destination logic configured to determine a first destination to send the image sequence to, for a first manual review of the image sequence by a first human reviewer; image posting logic configured to post the image sequence to the destination; review logic configured to receive a manual review of the image sequence from the destination and optionally to receive the computer generated review, the manual review including one or more image tags identifying an action within the image sequence; response logic configured to provide the image tags of the manual review to the communication network; memory configured to store the image sequence; and a microprocessor configured to execute at least the destination logic.
[0021] Various embodiments of the invention include a method of processing an image, the method comprising: receiving one or more first descriptors of an image at an image processing server, from a remote client via a communication network; comparing the received first descriptors to second descriptors stored locally to the image processing server, to determine if the first descriptors match a set of the second descriptors; responsive to the first descriptors matching the set of second descriptors, retrieving one or more image tags stored in association with the set of second descriptors; and providing the one or more image tags to the client.
[0022] Various embodiments of the invention include a method of processing an image at an image processing server, the method comprising: receiving an image and data characterizing the image from a remote client; determining a destination for the image, the destination being associated with a human image reviewer, the determination of the destination being based on a match between the data characterizing the image and a specialty of the human reviewer; posting the image to the determined destination; receiving one or more image tags characterizing the image, from the destination; and providing the one or more image tags to the client.
[0023] Various embodiments of the invention include a method of processing an image, the method comprising: receiving data characterizing the image from a mobile device, the data characterizing the image including identified features of an image or descriptors of an image; generating image tags based on the data characterizing the image; providing the image tags to the mobile device.
[0024] Various embodiments of the invention include a method of processing an image, the method comprising: receiving an image using a portable device; identifying features of the image using a processor of the portable device; providing the features to a remote image processing server via a communication network; receiving image tags based on the features from the image processing server; and displaying the image tags on a display of the portable device.
[0025] Various embodiments of the invention include a method of processing an image, the method comprising: receiving an image using a portable device; identifying features of the image using a processor of the portable device; deriving image descriptors based on the identified features; providing the descriptors to a remote image processing server via a communication network; receiving image tags based on the descriptors from the image processing server; and displaying the image tags on a display of the portable device.
[0026] Various embodiments of the invention include a method of processing an image, the method comprising: receiving an image using a portable device; identifying features of the image using a processor of the portable device; deriving image descriptors based on the identified features; comparing the image descriptors with a set of image descriptors previously stored on the portable device to determine if there is a match between the image descriptors and the stored set of image descriptors; if there is a match between the image descriptors and the stored set of image descriptors, retrieving one or more image tags associated with the set of image descriptors from memory of the portable device; and displaying the retrieved one or more image tags on a display of the portable device.
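The following Python sketch illustrates, under stated assumptions, one way such an on-device lookup of previously stored descriptors might work: cosine similarity is used as the comparison, the cache is a list of dictionaries, and the 0.9 similarity cutoff is hypothetical; none of these choices is mandated by the disclosure.

    import math
    from typing import Dict, List, Optional, Sequence

    def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def lookup_cached_tags(descriptor: Sequence[float],
                           cache: List[Dict],
                           threshold: float = 0.9) -> Optional[List[str]]:
        # Return the tags stored with the best-matching cached descriptor, or None
        # when no cached descriptor is similar enough (in which case the image
        # would instead be sent to the remote image processing server).
        best, best_score = None, 0.0
        for entry in cache:
            score = cosine_similarity(descriptor, entry["descriptor"])
            if score > best_score:
                best, best_score = entry, score
        return best["tags"] if best and best_score >= threshold else None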
[0027] Various embodiments of the invention include a method of processing an image, the method comprising: receiving an image using a portable device; identifying features of the image using a processor of the portable device; deriving image descriptors based on the identified features; comparing the image descriptors with a set of image descriptors previously stored on the portable device to determine if there is a match between the image descriptors and the stored set of image descriptors; classifying the image based on the match between the image descriptors and the stored set of image descriptors; sending the image and the classification of the image to a remote image processing server; receiving one or more image tags based on the image; and displaying the one or more image tags on a display of the portable device.
[0028] Various embodiments of the invention include an image processing system comprising an I/O configured to communicate an image and image tags over a communication network; an image ranker configured to determine a priority for tagging the image; destination logic configured to determine a first destination to send the image to, for a first manual review of the image by a first human reviewer; image posting logic configured to post the image to the destination; review logic configured to receive a manual review of the image from the destination, the manual review including one or more image tags identifying contents of the image; memory configured to store the one or more image tags in a data structure; and a microprocessor configured to execute at least the image ranker.
[0029] Various embodiments of the invention include an image processing system comprising an I/O configured to receive an image over a communication network; an image ranker configured to determine a priority of the image and to determine whether or not to tag the image based on the priority and/or how to tag the image; manual or automatic means for tagging the image to produce one or more image tags characterizing the image; memory configured to store the image and the one or more image tags characterizing the image, in a data structure; and a microprocessor configured to execute at least the image ranker.
[0030] Various embodiments of the invention include an image processing system comprising an I/O configured to receive an image over a communication network; an image ranker configured to determine a priority of the image and to select a process of tagging the image based on the priority; means for tagging the image to produce one or more image tags characterizing the image; memory configured to store the image and the one or more image tags characterizing the image, in a data structure; and a microprocessor configured to execute at least the image ranker.
[0031] Various embodiments of the invention include an image processing system comprising an I/O configured to communicate an image and image tags over a communication network; an image ranker configured to determine a priority for tagging the image based on how many times a video including the image is viewed; destination logic configured to determine a destination to send the image to, for a manual review of the image by a human reviewer; image posting logic configured to post the image to the destination; review logic configured to receive the manual review of the image from the destination, the manual review including one or more image tags identifying contents of the image; memory configured to store the one or more image tags in a data structure; and a microprocessor configured to execute at least the image ranker.
[0032] Various embodiments of the invention include a method of processing an image, the method comprising receiving an image from an image source; distributing the image to an automated image identification system; receiving a computer generated review from the automated image identification system, the computer generated review including one or more image tags assigned to the image by the automated image identification system and a measure of confidence, the measure of confidence being a measure of confidence that the image tags assigned to the image correctly characterize contents of the image; assigning a priority to the image based on the measure of confidence; determining that the image should be manually tagged based on the priority; posting the image for manual review to a first destination, the first destination including a display device of a human image reviewer; and receiving a manual image review of the image from the destination, the image review including one or more image tags assigned to the image by the human image reviewer, the one or more image tags assigned by the human image reviewer characterizing contents of the image.
[0033] Various embodiments of the invention include a method of processing an image, the method comprising receiving an image from an image source; automatically determining a priority for the image using a microprocessor; determining how the image should be tagged based on the priority; tagging the image to produce one or more tags, the one or more tags characterizing contents of the image; and storing the image and the one or more tags in a data structure.
[0034] Various embodiments of the invention include a method of generating a stream of images, the method comprising: receiving a first image from a first remote client; posting the first image to an automatic identification system configured to identify contents of the first image and to provide one or more first image tags that characterize these contents; receiving a second image from a second remote client; posting the second image to an automatic identification system configured to identify contents of the second image and to provide one or more second image tags that characterize these contents; adding the first image and the second image to an image sequence; and providing the image sequence to a third remote client.
[0035] Various embodiments of the invention include an image processing system comprising: an I/O configured to communicate a first image, a second image and image tags over a communication network, the first image and second image being received from first and second remote clients respectively; an automatic identification interface configured to communicate the first of the images to an automatic identification system and to receive a computer generated review of the first image from the automatic identification system, the computer generated review including one or more first image tags identifying contents of the first image; sequence assembly logic configured to add the first image and the second image to an image sequence, the image sequence including a plurality of images received from a plurality of remote clients; streaming logic configured to provide the image sequence to a third remote client; memory configured to store the one or more image tags in a data structure; and a microprocessor configured to execute at least the sequence assembly logic. These embodiments optionally further comprise destination logic configured to determine a destination to which to send the first image, for a first manual review of the first image by a first human reviewer; image posting logic configured to post the first image to the destination; and review logic configured to receive the first manual review of the image from the destination, the first manual review including one or more second image tags identifying contents of the image.
[0036] Various embodiments of the invention include a portable computing device comprising: a display configured to present a user interface and to display an advertisement; a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network and logic configured to generate image tags, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream, the neural network and the camera sharing a same power supply; execution memory configured for execution of the visual processing application; wireless communication circuits configured to send the image tags generated by the visual processing application to a remote advertising server and to receive the advertisement from the advertising server, the advertisement being selected based on the image tags; and a microprocessor configured to execute the visual processing application to generate the image tags on the portable computing device.
[0037] Various embodiments of the invention include an inventory management device comprising: a display configured to present a user interface and to display user messages; a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network optionally requiring less than 250MB of the application memory and being configured to generate image tags, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream, the neural network and the camera both being disposed within a housing of the inventory management device; inventory logic configured to generate inventory data based on the image tags; a wireless communication circuit configured to send inventory data to a remote server; execution memory configured for execution of the visual processing application; and a microprocessor configured to execute the visual processing application to generate the image tags on the inventory management device.
[0038] Various embodiments of the invention include a detection system configured to detect an object, an event or a series of events comprising a story, the system comprising: a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network and being configured to generate one or more image tags by processing the image, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream, the neural network and the camera sharing a same power supply or the neural network being within a central processing device with one or more connected cameras; execution memory configured for execution of the visual processing application; tag memory configured to store reference tags; alert logic configured to send an alert to a remote server in response to one of the one or more image tags matching one of the reference tags; and a wireless communication circuit configured to send the alert and the image tags generated by the visual processing application to the remote server.
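A minimal sketch of the alert logic described above, in which image tags generated on the device are compared against stored reference tags, is given below; the alert payload fields and the case-insensitive matching rule are assumptions made for illustration only.

    from typing import Iterable, Optional

    def check_for_alert(image_tags: Iterable[str],
                        reference_tags: Iterable[str]) -> Optional[dict]:
        # Compare tags generated on the device against the stored reference tags;
        # a case-insensitive match produces an alert payload for the remote server.
        refs = {t.lower() for t in reference_tags}
        matches = [t for t in image_tags if t.lower() in refs]
        if not matches:
            return None
        return {"event": "reference_tag_match", "matched_tags": matches}

    print(check_for_alert(["person", "crowbar"], ["crowbar", "firearm"]))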
[0039] Various embodiments of the invention include a method of processing an image on a mobile computing device, the method comprising: capturing an image or video stream using a camera of the mobile computing device; an optional secondary camera configured to capture depth data; an optional LiDAR configured to capture depth and surface data; processing the image or video stream on the mobile computing device using a microprocessor and a visual processing application, the processing including generation of one or more image tags characterizing an identity of a three-dimensional object, an action within the image, or a series of actions comprising a story; the visual processing application including a neural network stored on the mobile computing device; sending the one or more image tags to a remote server; receiving an advertisement from the remote server, the advertisement being selected based on the one or more image tags; and displaying the advertisement on a display of the mobile computing device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0040] FIG. 1 illustrates an image processing system, according to various embodiments of the invention.
[0041] FIG. 2 illustrates an image capture screen, according to various embodiments of the invention.
[0042] FIG. 3 illustrates search results based on an image analysis, according to various embodiments of the invention.
[0043] FIG. 4 illustrates methods of processing an image, according to various embodiments of the invention.
[0044] FIG. 5 illustrates alternative methods of processing an image, according to various embodiments of the invention.
[0045] FIG. 6 illustrates methods of managing a reviewer pool, according to various embodiments of the invention.
[0046] FIG. 7 illustrates methods of receiving image tags in real-time, according to various embodiments of the invention.
[0047] FIG. 8 illustrates methods of upgrading an image review, according to various embodiments of the invention.
[0048] FIG. 9 illustrates an example of Image Source 120A including electronic glasses, according to various embodiments of the invention.
[0049] FIG. 10 illustrates a method of processing an image on an image source, according to various embodiments of the invention.
[0050] FIG. 11 illustrates a method of processing an image based on image descriptors, according to various embodiments of the invention.
[0051] FIG. 12 illustrates a method of processing an image using feedback, according to various embodiments of the invention.
[0052] FIG. 13 and 14 illustrates methods of providing image tags based on image descriptors, according to various embodiments of the invention.
[0053] FIG. 15 illustrates methods of prioritizing image tagging, according to various embodiments of the invention.
[0054] FIG. 16 illustrates an image analysis system according to various embodiments of the invention.
[0055] FIG. 17 illustrates methods of processing an image on a client device according to various embodiments of the invention.
[0056] FIG. 18 illustrates methods of providing information to an AR/VR device, according to various embodiments of the invention.
DETAILED DESCRIPTION
[0057] FIG. 1 illustrates an Image Processing System 110, according to various embodiments of the invention. Image Processing System 110 is configured for tagging of images and may include one or more distributed computing devices. For example, Image Processing System 110 may include one or more servers located at geographically different places. Image Processing System 110 is configured to communicate via a Network 115. Network 115 can include a wide variety of communication networks, such as the internet and/or a cellular telephone system. Network 115 is typically configured to communicate data using standard protocols such as TCP/IP, FTP, etc. The images processed by Image Processing System 110 are received from Image Sources 120 (individually labeled 120A, 120B, etc.). Image Sources 120 can include computing resources connected to the internet and/or personal mobile computing devices. For example, Image Source 120A may be a web server configured to provide a social networking website or a photo sharing service. Image Source 120B may be a smart phone, camera, a wearable camera, electronic glasses or other portable image capture device. Image sources may be identified by a universal resource locator, an internet protocol address, a MAC address, a cellular telephone identifier, and/or the like. In some embodiments, Image Processing System 110 is configured to receive images from a large number of Image Sources 120.
[0058] Part of the image tagging performed by Image Processing System 110 includes sending images to Destinations 125 (individually labeled 125A, 125B, etc.). Destinations 125 are computing devices of human image reviewers and are typically geographically remote from Image Processing System 110. Destinations 125 include at least a display and data entry devices such as a touch screen, keyboard and/or microphone. For example, Destinations 125 may be in a different building, city, state and/or country than Image Processing System 110. Destinations 125 may include personal computers, computing tablets, smartphones, etc. In some embodiments, Destinations 125 include a (computing) application specifically configured to facilitate review of images. This application is optionally provided to Destinations 125 from Image Processing System 110. In some embodiments, Image Processing System 110 is configured for human image reviewers to log into a user account from Destinations 125. Destinations 125 are typically associated with an individual reviewer and may be identified by an internet protocol address, a MAC address, a login session identifier, cellular telephone identifier, and/or the like. In some embodiments, Destinations 125 include an audio to text converter.
Image tagging data provided by a human image reviewer at a member of Destinations 125 is sent to Image Processing System 110. The image tagging data can include textual image tags, audio data including verbalized tags, and/or non-tag information such as upgrade requests or inappropriate (explicit) material designations.
[0059] Image Processing System 110 includes an I/O (input/output) 130 configured for communicating with external systems. I/O 130 can include routers, switches, modems, firewalls, and/or the like. I/O 130 is configured to receive images from Image Sources 120, to send the images to Destinations 125, to receive tagging data from Destinations 125, and optionally to send image tags to Image Sources 120. I/O 130 includes communication hardware and optionally an application program interface (API).
[0060] Image Processing System 110 further includes Memory 135. Memory 135 includes hardware configured for the non-transient storage of data such as images, image tags, computing instructions, and other data discussed herein. Memory 135 may include, for example, random access memory (RAM), hard drives, optical storage media, and/or the like. Memory 135 is configured to store specific data, as described herein, through the use of specific data structures, indexing, file structures, data access routines, security protocols, and/or the like.
[0061] Image Processing System 110 further includes at least one Processor 140. Processor 140 is a hardware device such as an electronic microprocessor. Processor 140 is configured to perform specific functions through hardware, firmware or loading of software instructions into registers of Processor 140. Image Processing System 110 optionally includes a plurality of Processors 140. Processor 140 is configured to execute the various types of logic discussed herein.
[0062] Images received by Image Processing System 110 are first stored in an Image Queue 145. Image Queue 145 is an ordered list of images pending review. Images stored in Image Queue 145 are typically stored in association with image identifiers used to reference the images and may have different priorities. For example, images received from a photo sharing website may have lower priority than images received from a smartphone. Generally, those images for which a requester is waiting to receive image tags in real-time are given higher priority relative to those for which the image tags are used for some other purpose. Image Queue 145 is optionally stored in Memory 135.
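By way of non-limiting illustration, the following Python sketch shows one possible realization of such a prioritized queue of images pending review; the numeric priority values and image identifiers are hypothetical.

    import heapq
    import itertools

    class ImageQueue:
        """Ordered list of images pending review; lower numbers are served first,
        so real-time requests can be prioritized over batch sources."""

        def __init__(self):
            self._heap = []
            self._counter = itertools.count()  # preserves arrival order within a priority

        def put(self, image_id: str, priority: int) -> None:
            heapq.heappush(self._heap, (priority, next(self._counter), image_id))

        def get(self) -> str:
            return heapq.heappop(self._heap)[-1]

    queue = ImageQueue()
    queue.put("photo-sharing-site-image", priority=5)      # lower priority source
    queue.put("smartphone-realtime-request", priority=1)   # real-time request
    print(queue.get())   # -> "smartphone-realtime-request"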
[0063] Within Image Queue 145 images are optionally stored in association with an image identifier or index, and other data associated with each image. For example, an image may be associated with source data relating to one of Image Sources 120. The source data can include geographic information such as global positioning system coordinates, a street and/or city name, a zip code, and/or the like. The source data may include an internet protocol address, a universal resource locator, an account name, an identifier of a smartphone, and/or the like. Source data can further include information about a language used on a member of Image Sources 120, a requested priority, a search request (e.g., a request to do an internet search based on image tags resulting from the image), and/or the like.
[0064] In some embodiments, an image within Image Queue 145 is stored in association with an indication of a particular subset of the image, the subset typically including an item of particular interest. For example, a requestor of image tags may be interested in obtaining image tags relating to the contents of a particular subset of an image. This can occur when an image includes several objects. To illustrate, consider an image of a hand with a ring on one of the fingers; the user may wish to designate the ring as a particular area of interest. Some embodiments of the invention include an application configured for a user to specify the particular item of interest by clicking on the object or touching the object on a display of Image Source 120B. This specification typically occurs prior to sending the image to Image Processing System 110.
[0065] If an image is stored in association with an indication that a particular subset of the image is of particular importance, then Image Marking Logic 147 is optionally used to place a mark on the image, the mark being disposed to highlight the particular subset. The mark may be made by modifying pixels of the image corresponding to the subset and allows a human image reviewer to focus on the marked subset. For example, the image may be marked with a rectangle or circle prior to the image being posted to one or more of Destinations 125. Highlighting a subset of the image or an object within the image can include applying a filter to the object or subset, and/or changing a color of the object or subset. In alternative embodiments, Image Marking Logic 147 is included within an application configured to execute on one or more of Image Sources 120 or Destinations 125. Image Marking Logic 147 includes hardware, firmware, and/or software stored on a non-transient computer readable medium. As discussed elsewhere herein, Marking Logic 147 is optionally configured to place a mark on the image in real-time, as the image is being generated.
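A minimal sketch of marking a user-indicated subset by modifying pixels is shown below. It assumes the Pillow imaging library and a (left, top, right, bottom) bounding box as the form of the subset indication; any comparable image library or subset representation could be substituted.

    from PIL import Image, ImageDraw

    def mark_subset(input_path: str, box, output_path: str) -> None:
        # 'box' is an assumed (left, top, right, bottom) pixel bounding box
        # indicating the subset of interest, e.g. the ring on a finger.
        image = Image.open(input_path).convert("RGB")
        ImageDraw.Draw(image).rectangle(box, outline=(255, 0, 0), width=4)
        image.save(output_path)

    # Example (hypothetical file names and coordinates):
    # mark_subset("hand.jpg", (120, 200, 180, 260), "hand_marked.jpg")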
[0066] In some embodiments, Marking Logic 147 is configured to use image features detected within an image to identify particular objects that may be marked. The detection of image features is discussed elsewhere herein and is optionally part of image processing that occurs on the client side, e.g., on Image Source 120A. For example, features such as edges may be detected using a processor of Image Source 120A. These features can first be used in highlighting objects for detection and then also sent from Image Source 120A to Image Processing System 110 where they are then used to generate image descriptors as part of processing the image. In this way automated processing of the image is distributed between Image Source 120A, Image Processing System 110 and/or Automatic Identification System 152.
[0067] Under the control of Processor 140, images within Image Queue 145 are provided to an Automatic Identification Interface 150. The images are provided as a function of their priority and position in Image Queue 145. Automatic Identification Interface 150 includes logic configured to communicate the image, and optionally any data associated with the image, to an Automatic Identification System 152. The logic is hardware, firmware, and/or software stored on a computer readable medium. Automatic Identification Interface 150 is further configured to receive a computer generated review of the image from Automatic Identification System 152, the computer generated review including one or more image tags identifying contents of the image.
In some embodiments, Automatic Identification Interface 150 is configured to communicate the image and data via Network 115 in a format appropriate for an application programming interface (API) of Automatic Identification System 152. In some embodiments, Automatic Identification System 152 is included within Image Processing System 110 and Automatic Identification Interface 150 includes, for example, function calls within a program and/or a system call within an operating system or over a local area network.
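As a non-limiting sketch of the networked case, the snippet below submits an image over HTTP and returns the computer generated review. The endpoint URL, form field names, and response shape are hypothetical and are not a description of any particular automatic identification service's actual API.

    import requests

    AUTO_ID_ENDPOINT = "https://auto-id.example.com/v1/identify"   # hypothetical URL

    def request_computer_generated_review(image_bytes: bytes, subset: str = None) -> dict:
        # Submit the image, plus an optional subset indication, and return the
        # computer generated review, e.g. {"tags": ["white car"], "confidence": 0.72}.
        files = {"image": ("image.jpg", image_bytes, "image/jpeg")}
        data = {"subset": subset} if subset else {}
        response = requests.post(AUTO_ID_ENDPOINT, files=files, data=data, timeout=10)
        response.raise_for_status()
        return response.json()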
[0068] Automatic Identification System 152 is a computer automated system configured to review images without a need for human input on a per picture basis. The output of Automatic Identification System 152 is a computer generated image review (e.g., a review produced without human input on a per picture basis.) Rudimentary examples of such systems are known in the art. See, for example, Kooaba, Clarifai, Alchemy API and Catchoom. Automatic Identification System 152 is typically configured to automatically identify objects within a two dimensional image based on shapes, characters and/or patterns detected within the image. Automatic Identification System 152 is optionally configured to perform optical character recognition and/or barcode interpretation. In some embodiments, Automatic Identification System 152 is distinguished from systems of the prior art in that Automatic Identification System 152 is configured to provide a computer generated review that is based on the image subset indication(s) and/or image source data, discussed elsewhere herein.
[0069] Automatic Identification System 152 is optionally configured to determine if a copy of the image received from a different image source has already been tagged. For example, the same image may be included in multiple webpages. If the image is extracted from a first of these webpages and tagged, Automatic Identification System 152 may recognize that the image has already been tagged and automatically assign these tags to each instance of the image found. Recognizing that an image has already been tagged optionally includes comparing the image, a part of the image, or data representative of the image to a database of previously tagged images. The image may have been previously tagged automatically or manually.
[0070] In various embodiments, Automatic Identification System 152 is configured to identify actions within image sequences, e.g., video. For example, Automatic Identification System 152 may be configured to recognize actions, e.g., gestures, as described in U.S. provisional patent application Ser. No. 63/066,081, filed August 14, 2020, which is hereby incorporated herein by reference. In these embodiments, images may be tagged with words or phrases representing the identified action. In various embodiments, Automatic Identification System 152 is configured to identify relationships between objects in an image or sequence thereof. For example, Automatic Identification System 152 may be configured to provide a tag “boy with ball on beach,” distinct from “boy and ball on beach.” The first tag indicates a relationship between the ball and the boy, whereas the second only indicates the presence of various detected objects in the scene. In some embodiments, Automatic Identification System 152 may be configured to include the actions by which various actors are interacting with objects in the scene. As an example, actions may be expressed as “boy playing with ball on beach” or “boy throwing ball to dog on beach.” The neural network may be configured to identify spatial and conceptual relationships between various objects in the scene based on their linguistic and semantic relationship to each other.
The neural network may also be configured to identify temporal relationships in video, or in other words, how the objects are interacting with each other through time. The importance and nature of the objects’ relationships are learned through the repeated linguistic representations presented to the language model portion of the neural network.
[0071] In addition to one or more image tag(s), a computer-generated review generated by Automatic Identification System 152 optionally includes a measure of confidence representative of a confidence that the one or more image tags correctly identify the contents of the image. For example, a computer-generated review of an image that is primarily characters or easily recognizable shapes may have a greater confidence measure than a computer generated review of an image that consists of abstract or ill-defined shapes. Different automated image recognition systems may produce different confidence levels for different types of images. Automatic Identification Interface 150 and Automatic Identification System 152 are optional in embodiments in which automatic identification is performed by a third party.
[0072] Image Processing System 110 further includes a Reviewer Pool 155 and Reviewer Logic 157 configured to manage the Reviewer Pool 155. Reviewer Pool 155 includes a pool (e.g., group or set) of human image reviewers. Each of the human image reviewers is typically associated with a different member of Destinations 125. For example, each of the different members of Destinations 125 may be known to be operated by a different human image reviewer or to be logged into an account of a different human image reviewer. Memory 135 is optionally configured to store Reviewer Pool 155. In some embodiments, the human image reviewers included in Reviewer Pool 155 are classified as “active” and “inactive.” For the purposes of this disclosure, an active human image reviewer is considered to be one that is currently providing image tags or has indicated that they are prepared to provide image tags with minimal delay. In embodiments that include both active and inactive human image reviewers, the active reviewers are those that are provided images for review. The number of active reviewers may be moderated in real-time in response to a demand for image reviews. For example, the classification of a human image reviewer may be changed from inactive to active based on a number of unviewed images in Image Queue 145. An inactive reviewer is one that is not yet active, that has let the review of an image expire, and/or has indicated that they are not available to review images. Inactive reviewers may request to become active reviewers. Inactive reviewers who have made such a request can be reclassified as active human image reviewers when additional active human image reviewers are needed. The determination of which inactive reviewers are reclassified as active reviewers is optionally dependent on a reviewer score (discussed elsewhere herein).
[0073] Reviewer Logic 157 is configured to manage Reviewer Pool 155. This management optionally includes the classification of human image reviewers as active or inactive. For example, Reviewer Logic 157 may be configured to monitor a time that a human image reviewer takes to review an image and, if a predetermined maximum review time (referred to herein as an image expiration time) is exceeded, to change the classification of the human image reviewer from active to inactive. In another example, Reviewer Logic 157 may be configured to calculate a review score for a human image reviewer. In some embodiments, the review score is indicative of the completeness, speed and/or accuracy of image reviews performed by the particular human image reviewer. The review score can be calculated or changed based on review times and occasional test images. These test images may be, for example, images placed in Image Queue 145 that have been previously reviewed by a different human image reviewer. The review score may also be a function of monetary costs associated with the human image reviewer. Reviewer Logic 157 includes hardware, firmware, and/or software stored on a non-transient computer readable medium. In some embodiments, reviewer scores are manually determined by human moderators. These human moderators review images and the tags assigned to these images by human image reviewers. Moderators are optionally sent a statistical sampling of reviewed images and they assign a score to the tagging of the images. This score is optionally used in determining reviewer scores.
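The following Python sketch illustrates, for explanatory purposes only, how expiration-based reclassification and a running review score might be computed; the 60 second expiration time and the weighting constants are assumptions, not values specified by the disclosure.

    import time
    from dataclasses import dataclass, field

    IMAGE_EXPIRATION_SECONDS = 60.0   # hypothetical maximum review time

    @dataclass
    class Reviewer:
        reviewer_id: str
        active: bool = True
        score: float = 0.5            # 0.0 (poor) to 1.0 (excellent)
        review_started_at: float = field(default_factory=time.time)

    def enforce_expiration(reviewer: Reviewer, now: float = None) -> None:
        # Reclassify a reviewer as inactive when the current review has expired.
        now = time.time() if now is None else now
        if reviewer.active and now - reviewer.review_started_at > IMAGE_EXPIRATION_SECONDS:
            reviewer.active = False

    def update_score(reviewer: Reviewer, accuracy: float, review_seconds: float) -> None:
        # Blend accuracy (e.g. measured with test images) and speed into a running score.
        speed = max(0.0, 1.0 - review_seconds / IMAGE_EXPIRATION_SECONDS)
        reviewer.score = 0.9 * reviewer.score + 0.1 * (0.7 * accuracy + 0.3 * speed)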
[0074] In some embodiments, Reviewer Logic 157 is configured to monitor status of human image reviewers in real-time. For example, Reviewer Logic 157 may be configured to monitor the entry of individual words or keystrokes as entered by a reviewer at Destination 125A. This monitoring can be used to determine which reviewers are actively reviewing images, which reviewers have just completed review of an image, and/or which reviewers have not been providing tag input for a number of seconds or minutes. The entry of tag words using an audio device may also be monitored by Reviewer Logic 157.
[0075] In some embodiments, members of Reviewer Pool 155 are associated with a specialty in which the human image reviewer has expertise or special knowledge. For example, a reviewer may be an expert in automobiles and be associated with that specialty. Other specialties may include art, plants, animals, electronics, music, food, medical specialties, clothing, clothing accessories, collectables, etc. As is discussed elsewhere herein, a specialty of a reviewer may be used to select that reviewer during an initial manual review and/or during a review upgrade.
[0076] The review score and/or specialty associated with a human image reviewer are optionally used by Reviewer Logic 157 to determine which inactive reviewer to make active, when additional active reviewers are required. Reviewer Logic 157 includes hardware, firmware, and/or software stored on a non-transient computer readable medium.
[0077] Image Processing System 110 further includes Destination Logic 160. Destination Logic 160 is configured to determine one or more destinations (e.g., Destinations 125) to send an image to for manual review. Each of Destinations 125 is associated with a respective human image reviewer of Reviewer Pool 155. The determinations made by Destination Logic 160 are optionally based on characteristics of the human image reviewer at the determined destination. The destination may be a computing device, smartphone, tablet computer, personal computer, etc. of the human image reviewer. In some embodiments, the destination is a browser from which the reviewer has logged into Image Processing System 110. In some embodiments, determining the destination includes determining an MAC address, session identifier, internet protocol and/or universal resource locator of one of Destinations 125. Destination Logic 160 includes hardware, firmware and/or software stored on a non-transient computer readable medium.
[0078] Typically, Destination Logic 160 is configured to determine Destinations 125 associated with active rather than inactive human image reviewers as determined by Reviewer Logic 157. Destination Logic 160 is also typically configured to determine Destinations 125 based on review scores of reviewers. For example, those reviewers having higher reviewer scores may be selected for higher priority reviews relative to reviewers having lower reviewer scores. Thus, the determination of a member of Destinations 125 can be based on both reviewer scores and image review priority.
[0079] In some embodiments, Destination Logic 160 is configured to determine one or more members of Destinations 125 based on the real-time monitoring of the associated reviewers’ input activity. As discussed elsewhere herein, this monitoring may be performed by Reviewer Logic 157 and can include detection of individual words or keystrokes entered by a human image reviewer. In some embodiments, Destination Logic 160 is configured to favor selecting Destination 125A at which a human image reviewer has just completed a review of an image relative to Destination 125B at which a human image reviewer is currently typing image tags on a keyboard.
[0080] In some embodiments, Destination Logic 160 is configured to use image tags received via Automatic Identification System 152 to determine one or more members of Destinations 125. For example, if an image tag of “car” is received via Automatic Identification Interface 150 then Destination Logic 160 can use this information to select a member of Destinations 125 associated with a human image reviewer that has a specialty in automobiles.
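As a non-limiting sketch of this specialty- and score-based selection, the snippet below picks a destination from a list of dictionaries; the dictionary fields ("active", "score", "specialties") and the minimum-score parameter are assumptions made for illustration only.

    from typing import Dict, List, Optional

    def choose_destination(destinations: List[Dict],
                           auto_tags: List[str],
                           min_score: float = 0.0) -> Optional[Dict]:
        # Consider only destinations whose reviewers are active and score highly enough.
        active = [d for d in destinations if d["active"] and d["score"] >= min_score]
        # Prefer reviewers whose specialty matches an automatically generated tag.
        specialists = [d for d in active
                       if any(tag in d.get("specialties", []) for tag in auto_tags)]
        pool = specialists or active
        return max(pool, key=lambda d: d["score"], default=None)

    destination = choose_destination(
        [{"id": "125A", "active": True, "score": 0.90, "specialties": ["car"]},
         {"id": "125B", "active": True, "score": 0.95, "specialties": ["art"]}],
        auto_tags=["car"])
    # -> the "125A" entry, because its reviewer specializes in automobiles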
[0081] The value of an image review may also be considered in the selection of a destination for manual review. For example, an image review of high value may lead to the determination of a destination associated with a human image reviewer having a relatively high review score, while an image review of lower value may lead to the determination of a destination associated with a human image reviewer having a relatively lower review score. In some embodiments, for some image reviews, Destination Logic 160 is configured to select among Destinations 125 so as to minimize a time required to review an image, e.g., to minimize a time until the image tags of the manual review are provided to Network 115.
[0082] Destination Logic 160 is optionally configured to determine multiple destinations for a single image. For example, a first destination may be selected and then, following an upgrade request, a second destination may be determined. The upgrade request may come from Image Source 120A or from a human image reviewer associated with the first destination. In some embodiments, Destination Logic 160 is configured to determine multiple destinations, to which the image will be posted in parallel. For example, two, three or more destinations, each associated with a different human image reviewer, may be determined and the same image posted to all determined destinations in parallel. As used in this context, “in parallel” means that the image is posted to at least a second destination before any part of a review is received from the first destination.
[0083] In various embodiments, there are a variety of reasons that two or more destinations may be determined by Destination Logic 160. For example, a request for an upgraded review may require a human image reviewer having a particular specialty. Referring to the automotive example, an image that is first tagged with the tag “white car” may result in an upgrade request for more information. Destination Logic 160 may be configured to then select a destination associated with a human image reviewer having a specialty in automobiles, e.g., a reviewer who can provide the tags “1976 Ford Granada.” An upgrade request indicates that the image is subject to further review, e.g., the image requires or may benefit from further review. The upgrade request may be represented by a computing object such as a flag, command or data value, etc.
[0084] Another instance that may require a second destination occurs when the manual review of an image takes too long. Typically, the tagging of an image should occur within an allotted time period or the review is considered to expire. The allotted time period is optionally a function of the priority of the image review. Those reviews that are intended to occur in real-time may have a shorter time period relative to lower priority reviews. If the review of an image expires, Image Processing System 110 is optionally configured to provide the image to an additional human image reviewer associated with a destination determined by Destination Logic 160.
[0085] Another instance that may require a second destination occurs when a first human reviewer makes an upgrade request. For example, the request to upgrade the review resulting in a tag of “car” may come from the human image reviewer that provided the tag “car.” While this example is simplistic, other examples may include images of more esoteric subject matter such as packaged integrated circuits.
[0086] Image Processing System 110 further includes Image Posting Logic 165 configured to post images for manual review to Destinations 125 determined by Destination Logic 160.
Posting typically includes communicating the images to one or more Destinations 125 via Network 115. In various embodiments, Image Posting Logic 165 is further configured to provide information associated with the image to Destinations 125. For example, Image Posting Logic 165 may post, along with the image, an indication of a subset of the image (e.g., subset identification), an image marked by Image Marking Logic 147, information identifying a source of the image (e.g., source data discussed elsewhere herein), a priority of the review of the image, an image expiration period, location information associated with the image, and/or the like. As discussed elsewhere herein, source data can include a universal resource locator, global positioning coordinates, longitude and latitude, an account identifier, an internet protocol address, a social account, a photo sharing account, and/or the like.
[0087] In some embodiments, Image Posting Logic 165 is configured to provide an image for manual review to more than one of Destinations 125 at approximately the same time. For example, an image may be provided to Destination 125A and Destination 125B in parallel. “Parallel delivery” means, for example, that the image is provided to both Destinations 125A and 125B before tagging information is received back from either of these Destinations 125.
[0088] In some embodiments, Image Posting Logic 165 is configured to provide an image for manual review to one or more of Destinations 125 prior to receiving image tags from Automatic Identification System 152. Alternatively, in some embodiments, Image Posting Logic 165 is configured to wait until a computer-generated review for the image is received from Automatic Identification System 152, prior to posting the image to one or more of Destinations 125. In these embodiments, the computer-generated review (including image tags) is optionally also posted to the one or more of Destinations 125 in association with the image.
[0089] Image Posting Logic 165 is optionally configured to post identifiers of images along with the images. Image Posting Logic 165 includes hardware, firmware and/or software stored on a non-transient computer readable medium.
[0090] Image Processing System 110 further includes Review Logic 170 configured to manage the manual and automated reviews of images. This management includes monitoring progress of reviews and receiving reviews from Automatic Identification System 152 and/or Destinations 125. The received reviews include image tags as discussed elsewhere herein. In some embodiments, Review Logic 170 is configured to control posting of the image to one of Destinations 125 based on a measure of confidence. The measure of confidence is representative of a confidence that one or more image tags already received are correct. These one or more image tags may be received from Automatic Identification System 152 and/or one of Destinations 125. For example, in some embodiments, if the confidence of an image review by Automatic Identification System 152 is greater than a predetermined threshold, then Review Logic 170 may determine that manual review of the image is not necessary. The predetermined threshold can be a function of the value of the image review, of the priority of the image review, of the number and quality of the available Destinations 125, and/or the like. Review Logic 170 includes hardware, firmware, and/or software stored on a non-transient computer readable medium.
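One way to express this confidence-gated decision is sketched below; the baseline threshold and the adjustments for value and priority are illustrative assumptions, not values taken from the disclosure.

    def manual_review_needed(auto_confidence, review_value, review_priority,
                             destinations_available):
        """Decide whether an image should also be posted for manual review.

        The threshold rises with the value and priority of the review, so
        high-value reviews demand more confidence from the automated tags.
        """
        threshold = 0.80                      # assumed baseline
        threshold += 0.10 * review_value      # review_value in [0, 1] (assumed scale)
        threshold += 0.05 * review_priority   # review_priority in [0, 1] (assumed scale)
        if not destinations_available:
            return False                      # no qualified destination to post to
        return auto_confidence < min(threshold, 0.99)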
[0091] In some embodiments, if an image was sent to Automatic Identification System 152 in parallel with being sent to one or more of Destinations 125, then the receipt of a review from Automatic Identification System 152 having a confidence above a predetermined threshold may result in cancellation of the manual review at the one or more of Destinations 125 by Review Logic 170. Likewise, if an image is sent to multiple Destinations 125 in parallel, and an image review is received from a first of these Destinations 125, then Review Logic 170 is optionally configured to cancel the review requests for the image at the other Destinations 125. In some embodiments, Review Logic 170 is configured to cancel the review request at the other Destinations 125 once a keystroke or word is received from the first of the Destinations 125.
[0092] In some embodiments, Review Logic 170 is configured to monitor activity of a human image reviewer in real-time. This monitoring can include receiving review inputs from Destinations 125 on a word-by-word or individual-keystroke basis. As discussed elsewhere herein, the words and/or keystrokes are optionally passed on to one of Image Sources 120 as they are received by Review Logic 170. The monitoring of a manual reviewer’s activity can be used to determine when the review of an image expires and/or the progress in completing a manual image review. The status of a human image reviewer may be provided by Review Logic 170 to Reviewer Logic 157 in real-time. Using this status, Reviewer Logic 157 may change the status of the reviewer from active to inactive, adjust a stored review score of the reviewer, establish or change a specialty for the reviewer, and/or the like.
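The parallel posting and cancellation behavior might be arranged as in the following asyncio sketch; post and cancel are assumed helper coroutines standing in for Image Posting Logic 165 and the cancellation messages sent by Review Logic 170.

    import asyncio

    async def review_in_parallel(image, destinations, post, cancel):
        """Post one image to several destinations and keep only the first review.

        post(dest, image) awaits a manual review from one destination;
        cancel(dest, image) withdraws the request. Both are assumed helpers.
        """
        tasks = {asyncio.create_task(post(dest, image)): dest for dest in destinations}
        done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        first_review = done.pop().result()
        for task in pending:
            task.cancel()                        # stop waiting on the slower reviews
            await cancel(tasks[task], image)     # notify the other destinations
        return first_review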
[0093] In some embodiments, Review Logic 170 is configured to control posting of images to Destinations 125 by receiving measures of confidence (e.g., of the accuracy of image reviews) and sending responsive signals to Destination Logic 160 and/or Image Posting Logic 165. As such, Review Logic 170 can be configured to control posting of an image to one or more of Destinations 125 based on a measure of confidence. The measure of confidence is representative of a confidence that one or more image tags correctly identify the contents of the image. In some embodiments, Review Logic 170 is configured to receive reviews from manual image reviewers that include information other than image tags. For example, Review Logic 170 may receive an upgrade request from a human image reviewer and cause an upgraded image review to be requested. Review Logic 170 is optionally configured to process other non-tag information received in a manual or computer-generated review. This information can include identification of the image as being improper (e.g., obscene), identification of the image as containing no identifiable objects, identification of the image as having been sent to a reviewer of the wrong specialty, and/or the like.
[0094] In some embodiments, Review Logic 170 is configured to adjust the confidence of an image review by comparing image reviews of the same image from multiple sources. These image reviews may all be computer generated, all be manual reviews, or include at least one computer generated review and at least one manual review.
[0095] In some embodiments, Review Logic 170 is configured to receive image tags as part of a first (computer generated or manual) review and to provide the received image tags to a human image reviewer at Destination 125B. An agent (e.g., a browser or special purpose application) executing on Destination 125B is optionally configured to provide the image tags of the first review to a display of Destination 125B. In this manner, the human image reviewer at Destination 125B can edit (add to, delete and/or replace) the image tags of the first review. For example, image tags received from Destination 125A may be provided to Destination 125B for modification.
[0096] In some embodiments, Review Logic 170 is configured to calculate review scores based on the results of image reviews received from Destinations 125, the time taken for these image reviews, and the accuracy of these image reviews.
[0097] In some embodiments, Review Logic 170 is configured to provide image reviews to a source of the image, e.g., one of Image Sources 120, using Response Logic 175. The image reviews may be provided when the image review is complete, on a character-by-character basis, or on a word-by-word basis. When provided on a character-by-character basis or a word-by-word basis, the image tags are optionally provided to the source of the image as the characters or words are received from a human image reviewer. Optionally, Response Logic 175 is configured to provide the image review via Network 115.
[0098] Image reviews are not necessarily returned to one of Image Sources 120. For example, if Image Source 120A is a photo sharing service or a social networking website, image reviews of images from Image Source 120A may be stored in association with an account on the photo sharing service or the social networking website. This storage can be in Memory 135 or at a location external to Image Processing System 110, such as at a webserver hosting the website. Image reviews are optionally both returned to one of Image Sources 120 and stored elsewhere.
[0099] In some embodiments, Response Logic 175 is configured to execute a search based on image tags received in a computer generated and/or manual image review. The results of this search can be provided to a source of the image, e.g., Image Source 120A or 120B. For example, in some embodiments a user uses a smartphone to create an image with a camera of Image Source 120A. The image is provided to Image Processing System 110, which generates an image review of the image using Automatic Identification System 152 and Destination 125A. The image review includes image tags that are then automatically used to perform an internet search (e.g., a Google or Yahoo search) on the image tags. The results of this internet search are then provided to the user’s smartphone.
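A hedged sketch of the automatic search follows; the endpoint is a placeholder, and in practice the tags would be sent to whichever search service or advertisement database the deployment uses.

    import urllib.parse
    import urllib.request

    def search_on_tags(tags, endpoint="https://search.example/search"):
        """Build a query from image tags and return the raw search results."""
        query = urllib.parse.urlencode({"q": " ".join(tags)})
        with urllib.request.urlopen(f"{endpoint}?{query}") as response:
            return response.read()

    # For instance, tags produced from a smartphone photo:
    # results = search_on_tags(["white", "Starbucks", "coffee", "cup"])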
[00100] In some embodiments, Response Logic 175 is configured to provide image tags of a computer generated and/or manual review to an Advertising System 180. Advertising System 180 is configured to select advertisements based on the image tags. The selected advertisements are optionally provided to the source of the image used to generate the image tags. For example, Response Logic 175 may provide the tags “1976 Ford Granada with broken headlight” to Advertising System 180 and, in response, Advertising System 180 may select advertisements for replacement headlights. If the source of the image used to generate these tags is a website, the advertisements may be displayed on the website. Specifically, if the source of the image is an account on a photo sharing or social networking website, then the advertisements may be displayed on that account. Advertising System 180 is optionally included in Image Processing System 110. Advertising System 180 is optionally configured to take bids for providing advertising in response to specific tags. Advertising System 180 optionally includes Google’s Adwords, and/or the like. Selected advertisements are optionally returned to members of Image Sources 120 in association with image tags, by Response Logic 175.
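The tag-to-advertisement selection, including the bidding behavior mentioned above, could look roughly like the following; the bids structure is an assumed stand-in for whatever Advertising System 180 actually stores.

    def select_advertisement(image_tags, bids):
        """Return the highest-bidding advertisement whose keyword matches a tag.

        bids maps keyword -> list of (advertiser, bid_amount, ad_id) tuples; this
        structure is an assumption made for illustration.
        """
        best = None
        for word in (w.lower() for tag in image_tags for w in tag.split()):
            for advertiser, amount, ad_id in bids.get(word, []):
                if best is None or amount > best[1]:
                    best = (advertiser, amount, ad_id)
        return best[2] if best else None

    # Tags such as "1976 Ford Granada with broken headlight" could match a bid
    # on "headlight" and return a replacement-headlight advertisement.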
[00101] Image Processing System 110 optionally further includes Content Processing Logic 185 configured to extract images for tagging from members of Image Sources 120. Content Processing Logic 185 is configured to parse webpages including images and optionally text, and extract images from these webpages for tagging. The resulting image tags may then be provided to Advertising System 180 for selection of advertisements that can be placed on the webpage from which the image was extracted. In some embodiments, Content Processing Logic 185 is configured to emulate browser functions in order to load images that would normally be displayed on a webpage. These images may be displayed on a webpage associated with a specific account, a social networking site, a photo sharing site, a blogging site, a news site, a dating site, a sports site, and/or the like. Content Processing Logic 185 is optionally configured to parse metadata tags in order to identify images.
[00102] Content Processing Logic 185 is optionally configured to parse text disposed on the same webpage as an image. This text may be used by Automatic Identification System 152 in tagging of the image, in combination with content of the image. For example, Content Processing Logic 185 may be configured to identify a caption for an image, comments made about an image, text referring to the image, webpage title or headings, people or objects tagged within an image, text within an image (as determined by optical character recognition (OCR)), and/or the like. The text parsed by Content Processing Logic 185, or a subset thereof, may be used to improve quality and/or speed of tagging. The text parsed is provided to Automatic Identification System 152 and/or provided to one of Destinations 125 for tagging by a human reviewer. In some embodiments Automatic Identification System 152 is configured to use the provided text in the generation of tags for the image. For example, the provided text may be used to provide context, identify a lexicon, ontology, language, and/or information that improves the accuracy, precision, computational efficiency, and/or other quality of automatically and/or manually generated image tags. The provided text is typically not relied on solely as a source of the generated tags, but is used as an input to improve the processing of the image. As such, the resulting tags may include words other than those found in the provided text.
[00103] In some embodiments, Image Posting Logic 165 is configured to provide both an image and text found on the same webpage as the image to Destinations 125. For example, an image of a girl and a bicycle at a park may have a caption “Mountain Bike Sale” or a comment “Happy Birthday Julie.” At Destination 125 this text may be presented to a human reviewer together with the image. The human reviewer may use this information to better understand the focus and/or context of the image, and thereby provide better image tags. Likewise, in some embodiments, Automatic Identification Interface 150 is configured to provide both an image and text found on the same webpage as the image to Automatic Identification System 152. At Automatic Identification System 152 the provided text is used to improve the automated tagging of the image based on contents of the image. In the above example, the provided text may suggest to Automatic Identification System 152 that emphasis should be placed on the bike or on Julie. This may result in such widely different tags as “Schwinn Bike” or “Birthday Girl.”
[00104] Image Processing System 110 optionally further includes an Image Ranker 190. Image Ranker 190 is configured to determine a rank (e.g., priority) for tagging an image. The priority may be used to determine how, or if at all, to tag an image. The determination of priority may be based on, for example, a source of the image, a number of times the image is loaded onto a webpage, a position of the image on a webpage, a number of times the image is viewed on a webpage, a number of webpages on which an image is included, a ranking of one or more webpages including the image, an identity of a webpage including the image, a ranking of a second image on the webpage including the image, an owner of a webpage including the image, a domain name of a webpage including the image, a keyword on a webpage including the image, text found on a webpage including the image, metadata found on a webpage including the image, a number of times the image is clicked on the webpage, a number of times other images are clicked on the webpage, whether the image is part of a video, image tags automatically generated using Automatic Identification System 152, any combination of these examples, and/or the like. Image Ranker 190 includes logic in the form of hardware, firmware, and/or software stored on a computer readable medium. In various embodiments, the priority determined by Image Ranker 190 includes two levels (tag or no-tag), three levels (automatic tagging, manual tagging, or no-tag), ten priority levels, or some other ranking scheme.
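A weighted-sum sketch of such a ranking is shown below; the particular signals, weights, and level boundaries are illustrative assumptions rather than values from the disclosure.

    def rank_image(signals):
        """Combine webpage signals into a tagging priority (higher scores tag sooner).

        signals is a dict of per-image statistics such as views, clicks, and the
        rank of the hosting webpage; the weights below are purely illustrative.
        """
        score = 0.0
        score += 1.0 * signals.get("views_per_day", 0)
        score += 5.0 * signals.get("clicks_per_day", 0)
        score += 2.0 * signals.get("webpage_count", 0)    # webpages including the image
        score += 10.0 * signals.get("webpage_rank", 0.0)  # rank of the hosting webpage
        score -= 0.5 * signals.get("scroll_depth", 0.0)   # penalize images far down a page
        if score <= 0:
            return "no-tag"
        if score < 50:
            return "automatic-tagging"
        return "manual-tagging"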
[00105] Destination Logic 160 is optionally configured to select a destination for manual tagging of an image based on the priority of the image.
[00106] In those embodiments wherein a number of times the image is loaded onto a webpage is used to determine priority, the number may be per a fixed time period such as per day or per month. The number can be determined by including a line of JavaScript or HTML on the webpage, as is well known in the art. The position of the image on the webpage may be considered, as some images may require that a viewer scroll down before the image is viewed.
As such, the number of times the image is actually viewed may be used to calculate the image’s priority. Typically, greater priority is assigned to images that are viewed more often. Image Ranker 190 is optionally configured to assign priority to an image based on a number of times the image is clicked on the webpage or on other webpages, and/or a number of times other images are clicked on the webpage. Image Ranker 190 is optionally configured to determine priority based on a number of times an image is viewed on more than one webpage. For example, if the image is found on 25 different webpages, then the sum of the views on all the webpages may be used to determine priority for the image. In some embodiments Image Ranker 190 is configured to determine priority based on a number of times an image is loaded in a browser.
[00107] Popular images may be included in a number of webpages. For example, an image that is widely shared on a social media website may be included on numerous webpages. Image Ranker 190 may be configured to calculate the priority of an image as a function of the number of webpages on which it is included and/or the number of webpages that include a link to the image. Image Ranker 190 is optionally configured to identify an image as being included on multiple, possibly otherwise unrelated, webpages. In some embodiments, Image Ranker 190 is configured to use a third party service, such as TinEye.com, to determine the number of webpages on which an image is located. Typically, the greater the number of webpages on which an image is included, the greater the priority assigned to the image.
[00108] In some embodiments, Image Ranker 190 is configured to calculate a priority of an image based on a ranking of one or more webpages that include the image. For example, if a webpage is highly ranked in a search engine, is linked to by a significant number of other webpages, or is well ranked on some other criteria, then an image on the webpage may be given a priority that is a function of the webpage’s ranking. Typically, the higher a webpage’s ranking, the greater the priority assigned to an image on the webpage. Webpage ranking is optionally obtained from a third party source, such as a search engine.
[00109] Image Ranker 190 is optionally configured to assign a priority to an image based on an identity of a webpage including the image. For example, an image on a home page for a URL may be assigned greater priority than an image at another webpage for the same website. Further, images may be assigned a priority based on specific types of webpages on which the image is included. For example, images on social networking websites may be given higher priority relative to images on company websites or personal blogs. In another example, images on reference webpages such as dictionary.com or Wikipedia.com may be given higher priority relative to some other types of webpages. The priority assigned to an image is optionally based on the identity of an owner of the webpage.
[00110] In some embodiments, Image Ranker 190 is configured to determine a priority of a first image on a webpage as a function of the priority of a second image on the same webpage. For example, if the second image has a high priority, the priority of the first image may be increased accordingly.
[00111] Image Ranker 190 is optionally configured to assign a priority to an image based on other contents of a webpage on which the image is included. For example, if the webpage includes text and/or metadata, the presence of specific terms or keywords in this text or metadata may be used to assign the priority of the image. Specifically, if a webpage includes a valuable keyword, then an image on that webpage may be assigned a higher priority. The estimated monetary value of a keyword is associated with the value of the word for advertising or some other purpose, e.g., a word that has value on Google’s Adwords®. An image on a webpage that includes terms that would be valued highly as Adwords may be assigned a proportionally high priority. The frequency of use of these terms as well as their number on a webpage may also be considered by Image Ranker 190 in determining image priority. The text and/or metadata considered may be included in the URL of the webpage, within a figure caption, within a comment made on a figure, within a tag assigned to the image by a third party, or near text referring to the image, and may include a person’s name, a brand name, a trademark, a corporate name, and/or the like.
[00112] In some embodiments, Image Ranker 190 is configured to receive text derived from an image using optical character recognition and to determine a priority for the image based on this text. For example, Image Ranker 190 may receive text generated by processing an image using Automatic Identification System 152, and assign a priority to the image based on this text. In some embodiments, Image Ranker 190 is configured to give a higher priority to a first image on a webpage, relative to images that occur further down the webpage.
[00113] Image Ranker 190 is further configured to determine how, if at all, to tag an image based on the assigned priority. For example, images of lowest priority may not be tagged at all. Images with somewhat higher priority may be tagged using Automatic Identification System 152, and images with yet higher priority may be tagged by a human reviewer at one of Destinations 125. Those images having priority sufficiently high to be tagged by a human reviewer are optionally further divided into higher and lower priority groups, wherein images in the higher priority group are given more attention and tagged more thoroughly or carefully by the human reviewer. Image Posting Logic 165 is optionally configured to provide an indication of the priority of an image, along with the image, to members of Destinations 125.
[00114] In some embodiments, images are first processed using Automatic Identification System 152. Then the images may be sent to one or more members of Destinations 125 based on both a priority for the image and a confidence in the automated tagging performed by Automatic Identification System 152. For example, if the image has a relatively low priority, then the confidence standard for sending the image to a human reviewer is set relatively low. (A low confidence standard means that the automated tags are likely to be deemed sufficient and the image not sent for human analysis.) If the image has a relatively high priority, then the confidence standard for sending the image to a human reviewer is relatively higher. Thus, high-priority images require a greater confidence to rely just on the automated tagging and are more likely to be sent to a human reviewer.
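This priority-dependent confidence standard could be expressed as below; the three thresholds are assumed values chosen only to illustrate the relationship.

    def send_to_human(priority, auto_confidence):
        """Return True if the image should be forwarded to a human reviewer.

        Low-priority images accept low-confidence automated tags, while
        high-priority images are forwarded unless the automated confidence
        exceeds a stricter standard. Thresholds are illustrative assumptions.
        """
        confidence_standard = {"low": 0.50, "medium": 0.75, "high": 0.95}[priority]
        return auto_confidence < confidence_standard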
[00115] The processing paths that may be selected by Image Ranker 190 for an image include, for example, a) not tagging at all, b) tagging using just Automatic Identification System 152, c) tagging using Automatic Identification System 152 with optional human follow-up based on the importance and/or confidence of the resulting tags, d) automated tagging followed by human review of the automated tags, e) tagging by a human reviewer, and/or f) tagging by a human reviewer based on a suggested level of attention to be given by the human reviewer. These processing paths are, at least in part, selected based on the priority assigned to the image by Image Ranker 190. Any combination of these processing paths may be found in various different embodiments. In some embodiments, controlling the type of processing used to tag an image results in potentially more valuable images having a greater probability of being tagged. As a result, the human tagging resources are applied to the highest priority, most valuable images.
[00116] In some embodiments, Image Ranker 190 is configured to assign a priority for an image based on how often an advertisement displayed adjacent to or over an image is clicked on. For example, if an image is on a frequently viewed webpage, but advertisements placed over the image are rarely clicked, then the image may be given a relatively high priority for tagging. In this example, an image may be tagged more than once. If advertisements based on initial tags are not clicked on with an expected frequency, then the image may be retagged. Retagging is optionally performed by a human reviewer who receives, via Image Posting Logic 165, the image and the initial (inadequate) tags. The human reviewer can use this information to provide improved tags.
[00117] Image Processing System 110 optionally further includes Sequence Assembly Logic 192. Sequence Assembly Logic 192 is configured to add images to an image sequence. These images are typically those received from Image Sources 120 and are optionally tagged by Image Processing System 110. For example, Sequence Assembly Logic 192 may be configured to add a first image received from Image Source 120A and a second image received from Image Source 120B to an image sequence; and/or to add two images received from Image Source 120A to an image sequence. An “image sequence” is a plurality of images configured to be presented in an order. Examples of image sequences include a video or an ordered set of still images. An image sequence may also include a two or three dimensional arrangement of images, such as a mosaic or collage of images. In some embodiments, adding an image to an image sequence includes adding the image to multiple frames of a video. These frames may be configured to be presented in less than 1, 2, 3, 5, 10, 15 seconds, or more, or any interval therebetween. In some embodiments, adding an image to an image sequence includes placing the image in an image queue for delivery to an external display device, such as a display of Image Source 120C. In these embodiments, the external device optionally includes an application configured to present the images one after another in succession.
[00118] The image sequences generated by Sequence Assembly Logic 192 optionally include specific transitions between images such as fading, zooming, cutting, wiping, splitting, random bars, flashing, covering, uncovering, dissolving, etc. Examples of these and other transitions can be found in Microsoft Powerpoint™ 2013. The recipient of an image sequence can optionally select the type of transition to be used. In various embodiments, Sequence Assembly Logic 192 is configured to add audio (e.g., music, or a reading of image tags) to an image sequence and/or apply filters to one or more images within an image sequence. The audio is optionally included as part of a standard audio or video format. Image filters applied to an image or entire image sequence can include linear filtering, pixilation, principal components analysis, anisotropic diffusion, or any other techniques used in digital image processing. Different filters are optionally applied to different images within an image sequence. In some embodiments, Sequence Assembly Logic 192 includes a video encoder configured to encode and/or compress an image sequence into a standard video format, e.g., flash video, Quicktime™ video, Windows™ media video, Realmedia™, and/or the like. Images may be sent to remote clients one image at a time or as part of a video. The video can include key frames and b-frames.
[00119] The first and/or second images are optionally added to the image sequence based on image tags associated with those images. For example, Sequence Assembly Logic 192 may be configured to generate a first image sequence including images having a first set of image tags and generate a second image sequence including images having a second set of image tags. In a more specific example, Sequence Assembly Logic 192 may be configured to include images having animal-related tags (e.g., cat, puppy, cow) in a first image sequence and images having automotive-related tags (e.g., truck, Ford, convertible) in a second image sequence. As such, image sequences can include specific themes by identifying tags to be used to select images to be included in the sequence. These themes can be predefined as in the examples above, or may be chosen by a user. For example, an end user may wish to see an image sequence including images having the image tag “Dalmatian” and can define this sequence by adding “Dalmatian” to a list of image tags to be used to generate the image sequence. This list can include one, two or more image tags.
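A minimal sketch of theme-based selection, assuming images arrive as (image_id, tags) pairs, is given below; the function name and data shapes are hypothetical.

    def assemble_sequence(tagged_images, theme_tags):
        """Build an ordered image sequence from images whose tags match a theme.

        tagged_images is an iterable of (image_id, tags) pairs; theme_tags is the
        collection of tags defining the sequence (e.g., {"cat", "puppy", "cow"}).
        """
        theme = {t.lower() for t in theme_tags}
        return [image_id for image_id, tags in tagged_images
                if theme & {t.lower() for t in tags}]

    # animal_sequence = assemble_sequence(tagged_images, {"cat", "puppy", "cow"})
    # auto_sequence = assemble_sequence(tagged_images, {"truck", "ford", "convertible"})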
[00120] In addition to or as an alternative to image tags, images may be selected for inclusion in an image sequence based on a variety of other factors. For example, images may be selected based on colors within an image, comments made regarding the image, a popularity of the image, and/or who provided the image. Specifically, in some embodiments, Sequence Assembly Logic 192 is configured to parse comments made about an image and identify words used in those comments. The identified words may then be used just like image tags, for selecting the image for inclusion in an image sequence. For example, a comment of “nice dress” may be used to select an image just as a tag “dress” may be used as described elsewhere herein. The comments can be provided by the provider of the image or by one or more third parties.
[00121] In some embodiments, a user can specify that popular images be included in an image sequence. For example, a sequence may be defined to include the 1, 5 or 10% most popular images. As is described further elsewhere herein, popularity can be determined in a variety of ways. A user may designate that an image sequence include images provided by a specific user or users; this user may be a friend on a social network and/or a well-known person. For example, a user may designate that an image sequence include images provided by one specific male actor, any female celebrities, and three friends.
[00122] In some embodiments, one or more lists of tags are associated with a user account. The user may define and customize these lists by accessing Image Processing System 110 from one of Image Sources 120. For example, a user of Image Source 120A, or some other client, may log into Image Processing System 110 and define three lists. The first list includes the tags “puppy,” “kitten” and “panda.” The second list includes “Paris,” “Rome,” “Venice” and “London.” The third list includes “Andromeda,” “Saturn,” “whisky,” “bourbon” and “bed.” Once these lists are established, the user can then select among different image streams using these lists. For example, the user can select to receive an image stream of images having tags from the first list, or the first and third lists. These lists are the basis for different sequence “channels” that a user can subscribe to. The images added to a particular image sequence can, thus, be based on an identity of an account associated with a remote client. This remote client is optionally a member of Image Sources 120.
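Continuing the sketch above, the per-account lists could back this “channel” selection roughly as follows; the list names and contents mirror the example in the preceding paragraph, but the data structure itself is an assumption.

    # Hypothetical per-account channel definitions, mirroring the example lists above.
    USER_LISTS = {
        "list1": ["puppy", "kitten", "panda"],
        "list2": ["Paris", "Rome", "Venice", "London"],
        "list3": ["Andromeda", "Saturn", "whisky", "bourbon", "bed"],
    }

    def channel_tags(selected_lists, lists=USER_LISTS):
        """Merge one or more named lists into the tag set defining a channel."""
        tags = set()
        for name in selected_lists:
            tags.update(lists[name])
        return tags

    # A stream drawing on the first and third lists:
    # stream = assemble_sequence(tagged_images, channel_tags(["list1", "list3"]))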
[00123] Some embodiments include premade or shared lists. These lists can include data (other than or in addition to specific tags) specifying, for example, the most popular images, currently trending images, images suggested by friends, friends’ favorite images, images generated by friends, images previously viewed and indicated as being a favorite of the user, etc. The data may include identifiers of groups of image tags. For example, “animal” may be predefined to indicate a set of tags including “cat,” “dog,” “hyena,” “salamander” and any other image tag that would be considered to be within the general category of animals. Other general categories may include “clothing,” “automobiles,” “jewelry,” etc. A user may optionally define categories of their own.
[00124] User account information, lists, identifiers of groups of tags, favorites and the like are optionally stored in Memory 135 on Image Processing System 110 and/or Memory 135 on a member of Clients 120. Default premade lists are typically stored in Memory 135 on Image Processing System 110.
[00125] Sequence Assembly Logic 192 is optionally configured to add advertisements to image sequences. These advertisements may be selected based on image tags associated with images included in the image sequences, based on group identifiers used to generate the image sequences, and/or based on an account associated with the remote client to whom the image sequence is sent. For example, an advertisement for cat food may be added to an image sequence based on the image tag “cat.” An advertisement for a car may be added to an image sequence based on the group identifier “automobile.” An advertisement for a baby product may be added to an image sequence to be sent to a client associated with an account of a person known to be interested in such products. This interest may be derived from image tags of pictures received from the client, demographics of the person, other image sequences requested by the person, social network accounts of the person, interests of the person, images liked by the person, cookies identifying a browsing pattern of the person, tweets sent or received by the person, images previously uploaded to Image Processing System 110 by the person, events in the person’s life, postings by the person, comments made by the person on webpages and/or about images, images stored in Memory 135, images posted to systems other than Image Processing System 110, and/or the like. Further, an advertisement may be added to an image sequence based on one or more of the same factors as applied to a person’s friends or contacts on a social network or an e-mail system (address book). For example, an image including a yoga position and/or a related yoga advertisement may be added to an image sequence provided to a user based on many of the user’s friends liking images having “yoga” as an image tag. In another example, an image of a baby and/or an advertisement for baby supplies may be added to an image sequence provided to a user based on images stored in the memory of the user’s smart phone, e.g., Memory 135, and tags associated with these images. In some embodiments, Sequence Assembly Logic 192 is configured to automatically retrieve data and/or images from a social network account of a user.
[00126] In some embodiments, Image Processing System 110 is configured to automatically tag images stored in the memory of a user’s mobile device and, based on these tags, determine interests of the user. The images stored on the user’s mobile device may or may not be uploaded to Image Processing System 110. For example, Image Processing Logic 960 is optionally used for this tagging and only image characteristics need to be uploaded. In some embodiments, images that are automatically obtained from the memory of the mobile device, e.g., obtained without explicit per-image acknowledgement of the user, are only tagged using Automatic Identification System 152 and not included in image sequences sent to others. This allows the privacy of the user to be maintained.
[00127] Some embodiments of the invention include Popularity Logic 197, which is configured for tracking and calculating measure(s) of the popularity of an image. In some embodiments, Popularity Logic 197 is configured to calculate the popularity of an image based on a number of favorable responses for the image. These responses can include “like,” “favorite” and/or some other positive indication. The popularity of an image may also be calculated based on how often an image is reposted or shared. For example, an image may be considered more popular if it is shared by many people in diverse groups. Further, Popularity Logic 197 may be configured to calculate a popularity measure that is time dependent. For example, favorable responses that are older may not be as highly weighted as recent favorable responses, or a rapid increase in favorable responses may be weighted differently than a constant rate of favorable responses. Popularity may be calculated based on a combination of the factors discussed herein, different factors optionally being weighted differently. Further, the popularity of an image may be considered when determining a potential value of an image. For example, Image Ranker 190 is optionally configured to calculate a value of an image based, at least in part, on a popularity calculated using Popularity Logic 197, or a popularity measure obtained from elsewhere.
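One possible time-weighted formulation is sketched below, using exponential decay as the weighting; the half-life and the relative weight given to shares are assumptions for illustration only.

    import math
    import time

    def popularity(responses, shares, half_life_days=7.0, now=None):
        """Compute a time-weighted popularity score for an image.

        responses and shares are lists of (timestamp, weight) pairs, where the
        weight might reflect who responded or shared (e.g., follower count).
        Older events decay exponentially; the half-life is an assumed parameter.
        """
        now = time.time() if now is None else now
        half_life_s = half_life_days * 86400.0

        def decayed(events):
            return sum(weight * math.exp(-math.log(2) * (now - ts) / half_life_s)
                       for ts, weight in events)

        return decayed(responses) + 2.0 * decayed(shares)  # shares weighted more heavily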
[00128] As discussed elsewhere herein, the popularity is optionally used to determine if the image should be included in an image sequence and/or to determine a value of the image. Popularity Logic 197 may be configured to calculate popularity based on how often the image is shared, how often an image is shared on a social networking website, who shares the image, the number of “likes,” “favorites” and/or comments an image receives, the complexity of the image’s sharing, and/or the like. Popularity Logic 197 includes hardware, firmware, and/or software stored on a non-transient computer readable medium. In some embodiments, Popularity Logic 197 is configured to automatically retrieve images and/or data from a social network account of a user.
[00129] For example, an image that is shared by many friends may receive a higher popularity value relative to an image that is shared less or shared by non-friends. The sharing may be within a social network such as Facebook™ or Instagram™. Alternatively, sharing may occur using a messaging service such as Snapchat™, Skype™, text message and/or e-mail. An image shared by an important person may be calculated as having a higher popularity relative to a less important person. The importance of a person can be dependent on celebrity status, number of friends, number of followers, etc. For example, sharing of a photo by a well-known celebrity or by a person having many followers may count more towards popularity of the image relative to an unknown person with few followers. The popularity of an image may be greater if it is “liked” or marked as a “favorite,” and/or commented on by many people relative to an image “liked,” marked as a “favorite,” and/or commented on by fewer people. The complexity of image sharing is a function of the diversity of the network over which the image is shared. For example, if the image is shared by a closed group of friends having only one or two degrees of separation, this may count less toward popularity than sharing among a diverse group having a high degree of separation.
[00130] Image Processing System 110 optionally further includes Social Management Logic 133. Social Management Logic 133 is configured to manage and/or monitor social connections of a user. Social Management Logic 133 is optionally configured to retrieve and/or receive data (e.g., images, text, social information (e.g., friends lists), messages, comments, action logs, etc.) from social networking websites such as those discussed elsewhere herein. For example, Social Management Logic 133 may be configured to receive/retrieve comments provided about an image posted on a social networking website, track how an image is shared on a social networking website, and/or the like. The retrieved and/or received data may come from an original source of the image, one of Image Sources 120, and/or a third party.
[00131] In some embodiments, Social Management Logic 133 is further configured to monitor, retrieve and/or receive data regarding communications made using other communication systems discussed herein, e.g., instant messaging and/or e-mail. For example, Social Management Logic 133 may be configured to track how often an image is sent using the instant messaging system of Facebook®.
[00132] Social Management Logic 133 is optionally configured to track the identities of a user’s friends and/or the identities of celebrities. For example, Social Management Logic 133 may be configured to follow one or more celebrities and track images posted by these celebrities. In some embodiments, Social Management Logic 133 is configured to receive or calculate the degrees of separation between people. As discussed elsewhere herein, such information may be used to determine the popularity and/or value of an image.
[00133] The information collected by Social Management Logic 133 is optionally stored in Memory 135 and/or received from members of Image Sources 120. Images received/retrieved by Social Management Logic 133 are optionally tagged using Image Processing System 110 as discussed elsewhere herein. Social Management Logic 133 includes hardware, firmware, and/or software stored on a non-transient computer readable medium.
[00134] Image Processing System 110 optionally further includes Streaming Logic 195. Streaming Logic 195 is configured to provide image sequences to one or more clients of Image Processing System 110, e.g., one or more of Image Sources 120. The image sequence is typically provided using I/O 130 and Network 115. In some embodiments, Streaming Logic 195 is configured to stream one or more image sequences to a plurality of remote clients at the same time. The provided image sequences may include a series of still images and/or a video. In some embodiments, advertisements are provided as part of and/or in association with the image sequence.
[00135] FIG. 2 illustrates an Image Capture Screen 210, according to various embodiments of the invention. Image Capture Screen 210 as illustrated is generated by, for example, an application executing on a smartphone, electronic glasses or other Image Source 120. Image Capture Screen 210 includes features configured to capture an image, mark a specific area of interest, and receive image tags. Specifically, Image Capture Screen 210 includes a Shutter Button 220 configured to take a picture. Once the picture is taken it is optionally automatically sent via Network 115 to Image Processing System 110 for tagging. Image Capture Screen 210 optionally further includes a Rectangle 230 configured to highlight a point of interest within the image. Rectangle 230 is controllable (e.g., movable) by selecting and/or dragging on the screen using a user input device. On a typical smartphone this user input device may include a touch screen responsive to a finger touch. As described elsewhere herein, the point/region of interest may be provided to Image Processing System 110 in association with an image to be tagged.
[00136] Image Capture Screen 210 further includes a Field 240 showing a previously captured image and resulting image tags. In the example shown, the previously captured image includes the same white cup without the Rectangle 230, and the image tags include “White Starbucks Coffee Cup.” Also shown is text stating “Slide for options.”
[00137] FIG. 3 illustrates search results based on an image analysis, according to various embodiments of the invention. These results are optionally displayed automatically or in response to selecting the “Slide for options” input shown in FIG. 2. They may be generated by automatically executing an internet search on the image tags. Illustrated in FIG. 3 are a Sponsored Advertisement 310, Related Images 320 and other search results 330. The search results are optionally generated using Advertising System 180 and image tags generated using Image Processing System 110. A user may have the option of reviewing previously tagged images. This history can be stored on Image Source 120A or in Memory 135.
[00138] FIG. 4 illustrates methods of processing an image, according to various embodiments of the invention. In these methods an image is received. The image is provided to both Automatic Identification System 152 and at least one of Destinations 125. As a result, both computer generated and manual image reviews are produced. The methods illustrated in FIG. 4 are optionally performed using embodiments of the system illustrated in FIG. 1. The method steps illustrated in FIGs. 4-8 may be performed in a variety of alternative orders.
[00139] In a Receive Image Step 410, an image is received by Image Processing System 110. The image is optionally received from one of Image Sources 120 via Network 115. The image may be in a standard format such as TIF, JPG, PNG, GIF, etc. The image may be one of a sequence of images that form an image sequence or a video. The image may have been captured by a user using a camera. The image may have been captured by a user from a movie or television show. In some embodiments Receive Image Step 410 includes a user using an image capture application to capture the image and communicate the image to Image Processing System 110. This application may be disposed within a camera, television, video display device, multimedia device, and/or the like. Receive Image Step 410 is optionally facilitated using Content Processing Logic 185.
[00140] In one illustrative example, the image is received from an image sequence, e.g., a video. The video is displayed on a monitor, television, goggles, glasses, or other display device. The video is optionally received via a video streaming service such as youtube.com or Netflix.com® and/or displayed within a browser. Logic within the display system (e.g., Image Marking Logic 147 within Image Source 120A) is configured for a user to indicate a particular subset of the images within the video. The same logic may be configured to receive an advertisement selected in response to image tags generated from the image and to display the advertisement over or at the same time as the video. Selection of advertisements based on image tags is discussed further elsewhere herein.
[00141] Specifically, using this system, a user may select an object within a video or movie for tagging and in response optionally receive tags characterizing that object. The user may also or alternatively receive an advertisement selected based on the tags. The advertisement may be displayed in real-time in conjunction with the video (e.g., as an overlay or added video sequence) or provided to the user via other communication channels (e.g., e-mail). In one illustrative example, a user sees an object within a video that they like. They select the object and this selection is received in Receive Image Step 410. In response they receive an advertisement related to the object. The advertisement is displayed as an overlay, bar or caption on the video in real-time as the video is viewed on the display. The advertisement may be added to the image, or the image may be added to the advertisement. For example, the image may be placed in a subset of the advertisement or the advertisement may be placed in a subset of the image. The advertisement is optionally interactive in that it includes a link to make a purchase. [00142] In some embodiments, objects within an image may include particular characteristics configured to assist in identifying the object. For example, a particular pattern of data bits may be encoded within the image or within object of the image. These data bits may encode for an image tag.
[00143] In an optional Receive Subset Identification Step 415, data identifying one or more subsets of the image is received by Image Processing System 110. Typically, the one or more subsets include a set of image pixels in which an item of particular interest is located. The one or more subsets may be identified by pixel locations, screen coordinates, areas, and/or points on the received image. In some embodiments, the subsets are selected by a user using a touch screen or cursor of one of Image Sources 120.
[00144] In an optional Receive Source Data Step 420, source data regarding the source of the image, received in Receive Image Step 410, is received by Image Processing System 110.
As discussed elsewhere herein, the source data can include geographic information, an internet protocol address, a universal resource locator, an account name, an identifier of a smartphone, information about a language used on a member of Image Sources 120, a search request, user account information, and/or the like. In some embodiments, source data is automatically sent by an application/agent running on Image Source 120. For example, global positioning system coordinates may automatically be generated on a smartphone and provided to Image Processing System 110.
[00145] In an optional Receive Analysis Priority Step 425, a priority for the tagging of the image, received in Receive Image Step 410, is received within Image Processing System 110. In some embodiments, the priority is manually entered by a user of Image Source 120A. In some embodiments, the priority is dependent on an amount paid for the review of the image. In some embodiments, the priority is dependent on a type of Image Source 120A. For example, images received from a static website may automatically be given a lower priority relative to images received from a handheld mobile device. An image whose source is identified by a universal resource locator may be given a lower priority relative to images whose source is identified by a mobile telephone number. As such, the priority is optionally derived from the source data received in Receive Source Data Step 420.
[00146] The image and data received in Steps 410-425 are optionally received together and optionally stored in Memory 135.
[00147] In a Distribute Image Step 430, the image, and optionally any associated data received in Steps 415-425, is distributed to Automatic Identification System 152 via Automatic Identification Interface 150. This distribution may be internal to Image Processing System 110 or via Network 115.
[00148] In a Receive Automated Response Step 435, a computer-generated image review is received from Automatic Identification System 152. The computer-generated image review includes one or more image tags assigned to the image by Automatic Identification System 152. The computer-generated image review also includes a measure of confidence. The measure of confidence is a measure of confidence that the image tags assigned to the image correctly characterize contents of the image. For example, an image including primarily easily recognizable characters may receive a higher measure of confidence relative to an image of abstract shapes.
[00149] In an optional Determine Confidence Step 440, the measure of confidence included in the image review is compared with one or more predetermined levels. The predetermined levels are optionally a function of the priority of the image review, a price of the image review, a source of the image, and/or the like. In an optional Confident? Step 445, the process proceeds to an optional Perform Search Step 450 if the confidence of the computer-generated image review is above the predetermined threshold(s) and proceeds to a Queue Image Step 460 if the confidence of the computer-generated image review is below the predetermined threshold(s). Determine Confidence Step 440 is optionally performed using Review Logic 170.
[00150] In Perform Search Step 450, the image tags assigned to an image are used to perform a search. For example, the image tag “Ford car” may be used to automatically perform a search, e.g., a Google or database search, using the words “Ford” and “car.” The database searched is optionally a database of advertisements. Perform Search Step 450 is optionally followed by a Select Advertisement Step 452 in which one or more advertisements are selected based on the image tags. For example, image tags related to a wedding may be used to select wedding-related advertisements. The selection of an advertisement may be performed by Advertising System 180 and/or may include a real-time bidding process in which potential advertisers bid on the amount they are willing to pay for having their advertisement associated with specific keywords (image tags). Any of the methods disclosed herein that include the generation of image tags may further include Select Advertisement Step 452.
[00151] In a Provide Results Step 455, the image tags assigned to the image and optionally the results of a search performed in Perform Search Step 450 are provided to a requester of the image review. For example, if the image was received from Image Source 120A and Image Source 120A is a smartphone, then the image tags and search results are typically provided to the smartphone. If the image was received from a member of Image Sources 120, such as a website, then the image tags and optional search results may be provided to a host of the website, to a third party, to Advertising System 180, and/or the like. In some embodiments, the image tags are automatically added to the website such that the image tags are searchable, e.g., can be searched on to find the reviewed image. Provide Results Step 455 optionally includes providing an advertisement to the requester as an alternative to or in addition to providing the results of the search performed in Perform Search Step 450.
[00152] In Queue Image Step 460, the image is placed in Image Queue 145. This placement optionally includes marking a subset of the image using Image Marking Logic 147.
As described elsewhere herein, the marking is typically configured to identify objects of particular interest in the image. Advancement of the image in Image Queue 145 may be dependent on the image’s review priority, the source of the image, available human image reviewers, the measure of confidence of the computer-generated review of the image, and/or the like.
[00153] In a Determine Destination Step 465, one or more members of Destinations 125 are determined for the manual review of the image. The determination of a destination is optionally based on image tags included in a computer-generated image review received from Automatic Identification System 152; optionally based on specialties of human image reviewers at different Destinations 125; optionally based on review scores of these human image reviewers; and/or based on other criteria discussed herein. In some embodiments, Determine Destination Step 465 is based on the data characterizing the image and a specialty of the human reviewer.
The data characterizing the image can be image features, image descriptors, and/or information derived therefrom. As is discussed elsewhere herein, the image features and/or image descriptors are optionally received along with the image from a member of Image Sources 120. Information derived therefrom may be generated at the member of Image Sources 120, at Image Processing System 110 and/or at Automatic Identification System 152.
[00154] In a Post Image Step 470, the image is posted to at least one of the Destinations 125 determined in Determine Destination Step 465. In some embodiments, Post Image Step 470 includes posting the image to more than one of Destinations 125 in parallel. The image is optionally posted via Network 115 and is optionally posted along with a mark highlighting a subset of the image, source data for the image, a time before review expiration for the image, image tags for the image received from Automatic Identification System 152, and/or the like. [00155] In a Receive Review Step 475, a manual review of the image is received from one or more of the determined Destination(s) 125. The manual image review may include one or more image tags assigned to the image by a human image reviewer. The one or more image tags are representative of the content of the image. The manual review may also include an upgrade request, an indication that the image is unreviewable, an indication that the image is improper, an indication that the review expired, and/or the like.
[00156] In an Image Tagged? Step 480 the progress of the method is dependent on whether image tags were received in Receive Review Step 475. If image tags characterizing the content of the image were received then the method optionally proceeds with Perform Search Step 450 and Provide Results Step 455. In these steps the image tags included in the manual image review and optionally the computer-generated image review are used. Use of the image tags in the computer-generated image review may be dependent on the confidence measure of this review.
[00157] Steps 460-475 are optional if in Step 445 the confidence measure is found to be above the predetermined threshold(s).
[00158] In an optional Upgrade? Step 485 the progress of the method is dependent on whether an upgrade request has been received. If such a request has been received, then the method proceeds to Determine Destination Step 465 wherein a second/different member of Destinations 125 is determined. The determination may depend on image tags received in the manual image review received in Receive Review Step 475. The upgrade request may be received from a human image reviewer or from a requester of the image review (from Image Source 120A or 120B, etc.). The upgrade request may be received after the requestor has had a chance to review the image tags provided in Provide Results Step 455. For example, the requestor may first receive image tags consisting of "white car" and then request a review upgrade because they desire further information. The review upgrade may result in the image being provided to a human image reviewer with a specialty in automobiles. This human image reviewer can add to the existing image tags to produce "white car, 1976 Ford Granada." In some embodiments, the requester can add source data indicating a subset of the image when requesting a review upgrade. For example, the requester may wish to indicate particular interest in a broken headlight. This serves to direct the human image reviewer's attention to this feature of the image, produce tags that include "broken headlight," and result in a search (Perform Search Step 450) directed toward broken headlights for a 1976 Ford Granada.
[00159] In some embodiments, upgrade requests are generated automatically by Review Logic 170. For example, if an image review appears too brief, e.g., just "car," then Review Logic 170 may automatically initiate a review upgrade. In some embodiments, the automatic generation of upgrade requests is based on the presence of keywords within a manual image review. For example, certain review specialties are associated with lists of keywords. In some embodiments, when one of these keywords is received in a manual image review, an automated review upgrade is initiated. The review upgrade preferably includes a human image reviewer having a specialty associated with the received keyword. In a specific example, one specialty includes "automobiles" and is associated with the keywords "car," "truck," "van," "convertible," and "Ford." When one of these keywords is received in a manual image review, Review Logic 170 checks with Review Logic 157 to determine if a human image reviewer having a specialty in "automobiles" is currently active. If so, then an automatic upgrade is initiated and the image is sent to the Destination 125B of the reviewer having the "automobiles" specialty.
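A minimal sketch of such a keyword check is shown below, assuming a keyword-to-specialty table and a set of currently active specialties; the table contents, threshold for a "too brief" review, and function name are hypothetical.

```python
# Illustrative sketch (not the actual Review Logic 170) of keyword-driven
# automatic upgrade requests; keyword lists and names are assumed.
SPECIALTY_KEYWORDS = {
    "automobiles": {"car", "truck", "van", "convertible", "ford"},
    "arachnology": {"spider", "tarantula"},
}

def upgrade_specialty(manual_tags, active_specialties):
    """Return a specialty to upgrade to, if a tag keyword maps to an active reviewer."""
    words = {w.lower() for w in manual_tags}
    for specialty, keywords in SPECIALTY_KEYWORDS.items():
        if words & keywords and specialty in active_specialties:
            return specialty          # initiate an upgrade to this specialty
    if len(words) < 2:
        return "general"              # review looked too brief, e.g., just "car"
    return None

# "car" matches the automobiles keyword list, so an upgrade is initiated.
print(upgrade_specialty(["white", "car"], active_specialties={"automobiles"}))  # automobiles
```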
[00160] If no upgrade requests are made, then in an End Step 490, the process is completed.
[00161] FIG. 5 illustrates alternative methods of processing an image, according to various embodiments of the invention. In these methods, at least some of Steps 430-445 are performed in parallel with at least some of Steps 460-475. The manual image review of Steps 460-475 may begin before the computer-generated review of Steps 430-445 is complete; thus, the manual image review is started before the confidence measure of the computer-generated review is known. If, in Confident? Step 445, the confidence measure is found to be above the predetermined threshold(s), then Steps 460-475 are optionally aborted.
[00162] FIG. 6 illustrates methods of managing a reviewer pool, according to various embodiments of the invention. In this method the status of a reviewer may be changed based on their performance in reviewing images. The steps illustrated may be part of and performed in consort with the methods illustrated by FIGs. 4 and 5. For example, they may be performed in part between Receive Image Step 410 and Receive Review Step 475. The methods illustrated include sending an image to more than one of Destinations 125.
[00163] In Receive Image Step 410 an image is received. As is discussed elsewhere herein, the image may be received at Image Processing system 110 via Network 115. The image may be generated by a camera and/or obtained from a webpage. In some embodiments, the image is received along with information about how often the web page is viewed. [00164] In a Select 1st destination Step 610 a first destination is selected for manual or automated analysis of the image. Select 1st Destination Step 610 is performed using Destination Logic 160 and is an embodiment of Determine Destination Step 465. As described elsewhere herein, the determination of a destination for the image may be based on a wide variety of factors, including the status of a human reviewer and scores associated with reviewers. For example, typically a member of Destinations 125 associated with an active reviewer will be selected, rather than one without an active reviewer. The selected destination may be a member of Destinations 125 and/or Automatic Identification System 152.
[00165] In Post Image Step 470, the image received in Receive Image Step 410 is posted to the selected member of Destinations 125. As discussed elsewhere herein, posting of the image can include communicating the image via Network 115 using standard network protocols such as TCP or UDP.
[00166] In an optional Monitor Step 620, Reviewer Logic 170 is used to monitor progress of a manual image review of the image at the member of Destinations 125 selected in Select 1st Destination Step 610. The monitoring can include detection of input by a human reviewer, time taken for the image review, a number of words provided that characterize the image, and/or the like. Monitoring optionally includes measuring a time taken to tag the image. Where monitoring includes detection of input by a human reviewer, the monitoring can be on a keystroke-by-keystroke basis, on a word-by-word basis and/or on a line-by-line basis. As such, Reviewer Logic 170 may be configured to receive data characterizing the image a character, word or line at a time.
[00167] In Remove Step 630, the image is removed from processing at the member of Destinations 125 Selected in Select 1st Destination Step 610. “Removal” can include, notifying the human reviewer at the selected member of Destinations 125 that he or she is no longer primarily responsible for reviewing the image, relieving the human reviewer of primary responsibility (without necessarily notifying the human reviewer), removing the image from a display of the human reviewer, and/or the like. In some embodiments, Remove Step 630 includes merely placing a human reviewer in a ranking to have secondary or shared responsibility for reviewing an image. For example, if the human reviewer associated with the member of Destinations 125 selected in Select 1st Destination Step 610 had primary responsibility for reviewing an image, the responsibility may now be shared or assigned to other reviewers associated with other members of Destinations 125. In this case it is the primary responsibility that is “removed.”
[00168] Remove Step 630 may occur if manual review of the image is taking too long.
For example, if in Monitor Step 620 it is found that the reviewer has not started typing after a predetermined time, then Remove Step 630 may be performed. Other examples of triggering events for Remove Step 630 include: loss of communication with the selected member of Destinations 125, exceeding a predetermined time allotment for review of the image, improper or inappropriate image tags received from the human reviewer, inaccurate (not characterizing the image) image tags received from the human reviewer, a referral from a first human reviewer to a second human reviewer, an upgrade request of the image review, and/or the like.
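Purely as an illustration of the timeout-style triggers above, the following sketch evaluates whether primary responsibility for an image should be removed; the threshold values and function signature are assumptions and do not reflect the actual Reviewer Logic 170.

```python
# A minimal sketch, with assumed thresholds, of evaluating Remove Step 630 triggers
# from Monitor Step 620 data; not the actual Reviewer Logic 170.
import time

def should_remove(assigned_at, first_keystroke_at, connected,
                  start_typing_limit=30.0, total_review_limit=120.0, now=None):
    """Return True when primary responsibility for the image should be removed."""
    now = time.time() if now is None else now
    if not connected:                          # lost communication with the destination
        return True
    if first_keystroke_at is None and now - assigned_at > start_typing_limit:
        return True                            # reviewer never started typing
    if now - assigned_at > total_review_limit:
        return True                            # exceeded the overall time allotment
    return False

# Assigned 45 seconds ago, no keystrokes yet: reassign the image.
print(should_remove(time.time() - 45, None, connected=True))   # True
```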
[00169] In a Select 2nd Destination Step 640 a second member of Destinations 125 (or Automatic Identification System 152) is selected using Destination Logic 160. The second member may be selected based on any of the criteria discussed above with regard to Select 1st Destination Step 610 and Determine Destination Step 465. Further, in some embodiments the selection of a second member may be based on a specific referral by a human reviewer associated with the first member of Destinations 125. For example, a first human reviewer may identify the content of an image to be a specialty of a second human reviewer and may refer the image to the member of Destinations 125 associated with the second human reviewer. The selection of a second member of Destinations 125 in Select 2nd Destination Step 640 is optionally dependent on automated processing on an image using Automatic Identification System 152. [00170] In another Post Image Step 470 the image is posted to the member of Destinations 125 Selected in Select 2nd Destination Step 640. In some embodiments, more than one human reviewer may review an image in parallel. They may perform the review independently or in cooperation. One reviewer may have primary responsibility for review of the image or each reviewer may have equal responsibility. One reviewer may have supervisory responsibility over one or more other reviewers. In some embodiments, Select 2nd Destination step 640 is performed and the image is posted to two or more members of Destinations 125 prior to Monitor Step 620 and/or Remove Step 630.
[00171] In Receive Review Step 475 a review of the image, e.g., image tags, is received as discussed elsewhere herein. The review typically includes image tags characterizing contents of the image. Reviews may be received from more than one of Destinations 125. For example, tags characterizing an image may be received from the members of Destinations 125 selected in both Select 1st Destination Step 610 and Select 2nd Destination Step 640. Receive Review Step 475 is optionally performed in real-time as characters or words are provided by human reviewer(s).
[00172] In an optional Associate Tags Step 650, one or more image tags characterizing the image are stored in association with the image. The stored tags optionally include tags provided by more than one human reviewer and may be stored in Memory 135. As described elsewhere herein, the tags may further be provided to a member of Image Sources 120 (e.g., in an embodiment of Provide Results Step 455) or used to select advertisements using Advertising System 180. The tags may also be provided to Automatic Identification System 152 to provide training of automatic image recognition processes. Associate Tags Step 650 is optionally followed by any combination of Perform Search Step 450, Select Advertisement Step 452 and Provide Results Step 455, as discussed elsewhere herein. For example, these steps may be included in the methods illustrated in FIGs. 6-8, 13 or 15. These steps may be used to provide an advertisement to a source of the image, e.g., a requestor or a website that includes the image. [00173] FIG. 7 illustrates methods of receiving image tags in real-time, according to various embodiments of the invention. These methods are optionally performed by Image Processing System 110 and the image tags may be a result of a manual image review. These methods may be performed in consort with the other methods described herein, for example as part of the methods illustrated by FIG. 4. The methods begin with Post Image Step 470 in which an image is provided to one or more members of Destinations 125, as discussed elsewhere herein. The methods illustrated in FIG. 7 are optionally preceded by Receive Image Step 410 in which an image is received from a remote computing device.
[00174] In a Receive Input Step 710, input is received from the one or more members of Destinations 125. This input typically includes characters provided by a human reviewer. For example, the input may be characters typed by a human reviewer at Destination 125A.
Typically, Receive Input Step 710 is continued as other steps shown in FIG. 7 are performed. [00175] In a Detect 1st Word Step 720 a word is detected in the input received in Receive Input Step 710. The word may be detected by the presence of a whitespace character such as an ASCII space or carriage return. Spell checking is optionally performed on the detected word. If the word is not included in a spellcheck dictionary, then an attempt at correction may be made or the human reviewer may be notified of the failure to recognize the word.
[00176] Detection of the word in Detect 1st Word Step 720 results in execution of a Deliver 1st Word Step 730 in which the word is communicated to a source of the image. For example, once a word is detected it may be provided to Image Source 120A in real-time. At Image Source 120A the word can be displayed to a user. Displaying one word at a time can provide an impression that the analysis of the image is occurring in a shorter amount of time, as compared to waiting until an entire set of image tags is received before displaying the set. [00177] In a Detect 2nd Word Step 740 a second word is detected in the input received in Receive Input Step 710. Again, the word may be detected by the presence of a whitespace character and can occur after providing the first word to the user at Image Source 120A. Both the first and second words are expected to be tags characterizing the image. Detection of the second word in Detect 2nd Word Step 740 triggers a Deliver 2nd Word Step 750 in which the second word is delivered to the image source, e.g., Image Source 120A. Detect 2nd Word Step 740 and Deliver 2nd Word Step 750 may be repeated for third, fourth and additional words, each being part of the image tags.
[00178] In a Detect Completion Step 760, data is received indicating that processing of the image is complete, e.g., that the words detected comprise all of the words (image tags) to be provided by the human reviewer. The data may include a metadata tag such as "/endtags," an ASCII carriage return, and/or the like. Typically, Detect Completion Step 760 occurs after one, two or more image tags have been received. In optional Associate Tags Step 650 the received image tags are associated and/or stored with the image as discussed elsewhere herein.
[00179] While FIG. 7 illustrates detection and delivery of a word at a time, in alternative embodiments, individual keystrokes are detected and delivered. Receive Input Step 710 may continue in parallel with Steps 720-740 and/or 750. Steps 710-760 may be included as part of Receive Review Step 475, discussed elsewhere herein.
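A minimal sketch of this word-at-a-time delivery is shown below; the keystroke stream and delivery callback are assumed, and only the "/endtags" completion marker comes from the example above.

```python
# Hypothetical sketch of word-at-a-time tag delivery (FIG. 7). The "/endtags"
# completion marker follows the example in the text; everything else is assumed.
def stream_tags(keystrokes, deliver):
    """Detect whitespace-delimited words in a keystroke stream and deliver each
    completed word to the image source immediately."""
    buffer = ""
    for ch in keystrokes:
        if ch in (" ", "\n", "\r"):            # whitespace ends the current word
            if buffer == "/endtags":           # completion marker: stop streaming
                return
            if buffer:
                deliver(buffer)                # e.g., push the tag to Image Source 120A
            buffer = ""
        else:
            buffer += ch
    if buffer and buffer != "/endtags":
        deliver(buffer)

# The requester sees "white", then "car", as soon as each word is typed.
stream_tags("white car /endtags", print)
```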
[00180] FIG. 8 illustrates methods of upgrading an image review, according to various embodiments of the invention. In these methods an image receives more than one phase of image review. Following a first review (phase 1) the image review is upgraded and reviewed further (phase 2). The request for upgrade can be automatically generated, initiated by a first human reviewer, and/or be in response to a request from a source of the image. Both the first and second review may be manual, i.e., performed by a human reviewer. Alternatively, the first review may be automatic and one or more subsequent reviews may be manual, or the first review may be manual and one or more subsequent reviews automatic.
[00181] In Receive Image Step 410 an image is received. A first member of Destinations 125 is selected for the image in Select 1st Destination Step 610. The image is then posted in Post Image Step 470. These steps are discussed elsewhere herein.
[00182] In a Receive 1st Review Step 810 a first review of the image is received. This first review may include one or more image tags characterizing the contents of the image. For example, the image review may include the words "black spider" in response to a picture including an image of a black spider; or the image review may include the words "red car" in response to an image including a red automobile. Receive 1st Review Step 810 is optionally an embodiment of Receive Review Step 475, and may include the real-time communication of image tags as discussed in regard to FIG. 7.
[00183] In some embodiments, the first review can include an indication, provided by the human reviewer that performed the first image review, that the processing of the image should be upgraded. For example, the first human reviewer may manually indicate a field of expertise for a second (optionally specialized) human reviewer. For example, a first human reviewer may provide the "red car" image tags and suggest an upgraded review be performed by a reviewer with automotive expertise. Alternatively, the first review can include image tags that are considered particularly valuable. For example, an automatic review that indicates a 72% probability that the image includes a wedding dress may trigger an automated upgrade to a manual review because the image tags "wedding dress" are potentially of greater commercial value than other image tags. In some embodiments, this automated upgrade is performed by Review Logic 170 and is based on a list of relatively important or valuable keywords stored in Memory 135. This list can include keywords and an associated measure of their value. As discussed elsewhere herein, automatic upgrades performed by Review Logic 170 are optionally based on image tags automatically generated using Automatic Identification System 152 and/or information predictive of how often an image will be viewed. These factors are optionally applied using an algorithm that maximizes the potential value of tagging the image using a human reviewer and providing advertisements based on these tags. Examples of more valuable image tags may be related to shoes, cars, jewelry, travel destinations, books, games, clothing, holidays, food, drink, real estate, banks, accidents, etc.
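One way such a valuation might be computed is sketched below; the keyword values, the view-count weighting, and the threshold are placeholders assumed for illustration rather than values from the disclosure.

```python
# Illustrative valuation check, assuming a keyword-value table in Memory 135 and a
# predicted view count; the numbers and threshold are placeholders.
KEYWORD_VALUE = {"wedding dress": 0.9, "ring": 0.8, "car": 0.5, "tree": 0.1}

def should_upgrade(auto_tags, confidence, predicted_views, threshold=50.0):
    """Upgrade to a manual review when the expected advertising value justifies it."""
    best_value = max((KEYWORD_VALUE.get(t, 0.0) for t in auto_tags), default=0.0)
    expected_value = best_value * confidence * predicted_views
    return expected_value > threshold

# 72% confident "wedding dress" on a page viewed ~1000 times per day: upgrade.
print(should_upgrade(["wedding dress"], confidence=0.72, predicted_views=1000))  # True
```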
[00184] In some embodiments, upgrades of image reviews are automatic. For example, a tag of “black spider” may automatically result in an upgrade of the image review that includes sending the image to a human reviewer having a particular specialty, e.g., a spider expert. The identification of particular plant or animal life often includes (depends on) location information, as the location of the plant or animal can be important for proper identification.
[00185] In some embodiments, as discussed elsewhere herein, upgraded reviews may be requested by the person who originally requested that the image be reviewed. For example, a user of Image Source 120A may provide an image of a dog and receive image tags comprising "black dog." The user may then request further detail by providing the word "breed?" In this case the image review may be upgraded and sent to a human reviewer with specific knowledge of dog breeds. In some embodiments, the user is charged for the upgrade or is required to have a premium account in order to request upgrades. The user may specify a particular part of an image when requesting an image review upgrade.
[00186] The presence of an upgrade request (automatically and/or manually generated) is detected in a Detect Upgrade Request Step 820. The detection may be based on data or a command received from a member of Image Sources 120, from a member of Destinations 125, Automatic Identification Interface 150, and/or from a component of Image Processing System 110 such as Review Logic 170, Content Processing Logic 185 or Response Logic 175.
[00187] In a Select 2nd Destination Step 640, Destination Logic 160 is used to select a second member of Destinations 125 and/or Automated Identification System 152 for review of the image. This selection can be based on any of the criteria on which Select 1st Destination Step 610 was based and, in addition, the image tags and/or other information resulting from the first review. For example, the selection of a second member of Destinations 125 may be based, at least in part, on an image tag manually or automatically generated in the first image review. Specifically, a tag of “black spider” may be used by Destination Logic 160 to select a member of Destinations 125 associated with a human reviewer having expertise in the identification of spiders. In another example, selection of a second member of Destinations 125 may be based on a word provided by a user requesting the image review. Specifically, if the first review produces image tags “white shoe” and the user responds with “brand?” then Destination Logic 160 may use this information to select a member of Destinations 125 associated with a human reviewer having expertise in shoe brands.
[00188] In some embodiments of Select 2nd Destination Step 640, Destination Logic 160 may select Automatic Identification System 152 for the second review of the image, rather than a member of Destinations 125. This may occur, for example, when the image has been tagged with the name of an actor and the upgrade request requests "movie name?" In such a case, the image may be searched for in a library of movie images. The same approach may be taken for other reproducible objects such as currency, paintings, car models, trademarks, barcodes, QR codes, well-known persons, etc.
[00189] In another instance of Post Image Step 470 the image is posted to the second selected member of Destinations 125 or Automatic Identification System 152 for a second review of the image. In a Receive 2nd Review Step 830, image tags characterizing the content of the image are typically received. Receive 2nd Review Step 830 is optionally an embodiment of Receive Review Step 475. Alternatively, an additional referral, indication that the image cannot be tagged for some reason or other information may be received. The image tags are received from the member of Destinations 125 or Automatic Identification System 152 to which the image was posted. Steps 820, 640, 470 and 830 may be repeated if needed.
[00190] In Associate Tags Step 650 received image tags are associated with the image and/or provided to the source of the image, as discussed elsewhere herein.
[00191] In one illustrative example of the methods illustrated by FIG. 8 an image is received from a web page. The image is sent to Automatic Identification System 152 for automated review. The result of the automated review includes an image tag “ring.” This tag is processed using Review Logic 170 and identified as a potentially valuable image for use in advertising. As discussed elsewhere herein, this identification is optionally also based on other factors such as how often the image is viewed on the web page. As a result of the identification, the review of the image is automatically upgraded and sent to a member of Destinations 125 that is associated with a human reviewer having expertise in Jewelry. The human reviewer modifies the image tags to include “gold wedding ring” and these tags are associated with the image.
These image tags may then be used to select advertisements using the systems and methods described elsewhere herein. [00192] In one illustrative example of the method illustrated by FIG. 8 an image is received from an application executing on a mobile device. The image includes a scene of a street and is sent to Destination 125A for review by a human reviewer. The human reviewer responds with image tags "street scene" and these tags are provided to the mobile device. A request for a review upgrade is then received from the mobile device. This request includes the text "car model?" and an identification of part of the image including a car. As a result of this request the image, text and identification are sent to Destination 125A or another member of Destinations 125 for further manual review. The further review results in the image tags "1909 Model-T" which are then forwarded to the mobile device.
[00193] In one illustrative example of the method illustrated by FIG. 8 an image is received from a computing device. The image includes several pieces of paper currency lying on a plate and is sent to Destination 125A for manual review. The resulting tags include "US currency on white plate" and are sent to the computing device. A request for an image review upgrade is received. The request includes "how much?" As a result of this request the image is sent to Automatic Identification System 152 where automated currency identification logic is used to identify the amounts of the currency and optionally provide a sum. This information is then sent back to the computing device.
[00194] In one illustrative example of the method illustrated by FIG. 8 an image is received from a mobile device. The image includes a leaf of a plant and is sent to Destination 125A. The human reviewer at Destination 125A provides the image tags "green leaf" and also upgrades the image review. As a result of the upgrade and the image tags the image is sent to Destination 125B, which is associated with a second human image reviewer having expertise in botany. The selection of Destination 125B is based, in part, on the image tags "green leaf." In parallel, the image tags "green leaf" are sent to the mobile device. At Destination 125B the second human image reviewer adds the words (tags) "poison ivy" to the already existing image tags. These additional tags are then also sent to the mobile device. On the mobile device the words "green leaf" are first displayed and then the words "poison ivy" are added to the display once available.
[00195] The methods illustrated by FIG. 8 are optionally used in consort with other methods described herein. For example, the image tags may be used in Auction Tag Step 1560 described elsewhere herein. The tags may be used to select an advertisement and the advertisement provided to a remote browser for display on a webpage along with the image. [00196] FIG. 9 illustrates an example of Image Source 120A including electronic glasses, according to various embodiments of the invention. Electronic glasses include glasses to be worn by a person. Examples include "Google Glass" by Google®, "M100 Smart Glasses" by Vuzix®, the iOptik™ contact lens by Innovega, and/or the like. These systems are configured to allow a user to view both the real world and an electronic display at the same time. Electronic glasses may also include virtual reality systems such as the Oculus Rift™ by Oculus VR. These types of systems display images to a user using an electronic display, but do not provide simultaneous direct view of the real world. A direct view is a view that is not digitized for viewing, e.g., a view through a glass or lens.
[00197] Generally, electronic glasses provide an interactive interface in which a user can select a subset of the image in real-time as the image is being viewed in or through the electronic glasses. As used herein, "real-time" selection is meant to mean that the image is being viewed as it is being captured, with only inconsequential delay. For example, an image viewed in real-time may be captured by a camera and processed using a graphics engine and displayed with only a delay resulting from electronic processing times. Real-time viewing allows the user to position objects of interest within the viewed image by moving the image capture device as the image is being viewed. Thus, real-time viewing excludes viewing of images that have been stored for substantial periods before viewing.
[00198] As illustrated in FIG. 9, Image Source 120A includes a Camera 910 configured to capture images. The captured images can include still images or video comprising a sequence of images. A Display 920 is configured to present the captured images to a user of Camera 910. In some embodiments, such as those in which Image Source 120A is a smart phone, Display 920 includes a touch screen configured to function as a view finder for Camera 910, to display captured images and to display Image Capture Screen 210 (described elsewhere herein). Display Logic 925 is configured to manage the display of images and other content on Display 920. Display Logic 925 can include hardware, firmware and/or software stored on a computer readable medium.
[00199] Display 920 is optionally configured to display both an image and an advertisement. For example, Display 920 may be configured to display an advertisement and an image at the same time (e.g., as an overlay). Display 920 may further be configured to display an image sequence received from Image Processing System 110. The image sequence can include a series of still images or a video, and may be filtered, as described further elsewhere herein. For example, in some embodiments, Display 920 includes a video decoder configured to decode video. Display 920 may include logic configured to display a sequence of images one after another. Such display functionality is also further discussed elsewhere herein.
[00200] In the embodiments illustrated by FIG. 9, Image Source 120A further comprises
Selection Logic 930 configured for the user of Image Source 120A to indicate a subset of a captured image. This indication can be made within Image Capture Screen 210 in real-time as the image is being displayed and/or captured on Display 920. As discussed elsewhere herein, such an indication is made to select a particular object of interest within the image. Following selection, the subset of the image is optionally marked using Image Marking Logic 147 as discussed elsewhere herein. Image Marking Logic 147 may be used to add a mark to the image as shown within Display 920. As such, the user can see the location that has been marked. Selection Logic 930 is optionally configured for the user to remark the subset of the image until the user is satisfied with the selection.
[00201] In some embodiments, Selection Logic 930 includes Tracking Logic 935 configured to track movement of the user’s eyes. Tracking Logic 935 is optionally included within electronic glasses. Eye tracking can include detection of the focal point of both eyes, the direction of one or more eyes (eyeball direction), the focus of one or more eyes, blinking, eyeball movement, and/or the like. Tracking Logic 935 is optionally configured to correlate a state of the user’s eyes with a location within captured images. Geometry data representing a geometric relation between Camera 910 and the physical elements of Selection Logic 930 are used to associate the state of the user’s eyes with the location within an image captured using Camera 910.
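As a rough illustration of how such geometry data might be applied, the following pinhole-camera sketch maps gaze angles, assumed to already be expressed relative to the camera axis, to a pixel location within the captured image; the field-of-view values and function name are illustrative only and are not part of the disclosed Tracking Logic 935.

```python
# A minimal pinhole-camera sketch; not the actual geometry data or Tracking Logic 935.
import math

def gaze_to_pixel(yaw_deg, pitch_deg, width, height, hfov_deg=60.0, vfov_deg=40.0):
    """Map gaze yaw/pitch (degrees, 0 = image center) to pixel coordinates."""
    fx = (width / 2) / math.tan(math.radians(hfov_deg / 2))   # focal length in pixels
    fy = (height / 2) / math.tan(math.radians(vfov_deg / 2))
    x = width / 2 + fx * math.tan(math.radians(yaw_deg))
    y = height / 2 + fy * math.tan(math.radians(pitch_deg))
    return int(round(x)), int(round(y))

# Looking 10 degrees right of center on a 1920x1080 frame.
print(gaze_to_pixel(10, 0, 1920, 1080))   # roughly (1253, 540)
```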
[00202] Tracking Logic 935 optionally includes a second camera directed at the eyes of the user. This camera may be mounted on electronic glasses or be part of other embodiments of Image Sources 120. For example, Tracking Logic 935 configured for tracking a user’s eyes may be included in a web camera, a smartphone, a computer monitor, a television, a tablet computer, and/or the like. [00203] In some embodiments, Tracking Logic 935 is configured to detect blinking of one or more eyes. For example, Tracking logic 935 may be configured to detect a single eye blink or a pattern of eye blinks. When such an event is detected, Selection Logic 930 may select a position within an image based on eye position data received from Tracking Logic 935, or alternatively select a position at the center of the currently viewed image.
[00204] Once an image has been marked using Marking Logic 147 and Selection Logic 930, the location and/or area of the marking can be displayed to the user on the marked image within Display 920. For example, the image, plus a red “X” at the marked location, may be displayed to the user within Image Capture Screen 210. In some embodiments, the user may then confirm the selection using Confirmation Logic 940. Confirmation Logic 940 is optionally responsive to Tracking Logic 935. For example, confirmation may be provided using a blink or other eye movement, an audio command, a verbal command, or a touch command. In some embodiments, Tracking Logic 935 is configured to detect, and interpret as a command, movement of the eyes into an unnatural position (e.g., cross-eyed). Such a movement can be used to provide a confirmation command. Confirmation is optionally required prior to sending the image to Network 115.
[00205] In some embodiments, Selection Logic 930 includes Tracking Logic 935 that is configured to track something other than or in addition to the eyes. For example, Tracking Logic 935 may be configured to detect a pointing finger of a user, an electronic device worn on a finger or wrist, and/or the like. In these embodiments, Selection Logic 930 is configured to infer a location within an image based on the detected object. In one embodiment, Tracking Logic 935 is configured to detect the location of a pointing finger within an image and infer that the location to be selected is at the tip of the finger. A user can point to an object within their field of view, provide an audio, eye-based, and/or touch-based command to Image Source 120A, and the position of the pointing finger will be used to make the selection of a position within the image. In one embodiment, Tracking Logic 935 is configured to detect the location of a wireless electronic device relative to Image Source 120A and infer that the location to be selected is along a line between the wireless electronic device and a part of Image Source 120A.
[00206] Image Source 120A further includes an I/O 945 configured for Image Source 120A to communicate with Image Processing System 110 via Network 115. I/O 945 can include wired and/or wireless connections. For example, in some embodiments, I/O 945 is configured to communicate wirelessly from electronic glasses to a cellular phone using a Bluetooth™ connection and then for the communication to be forwarded from the cellular phone to Network 115 using WiFi or a cellular service.
[00207] Image Source 120A optionally further includes Channel Logic 965 configured for a user to specify image tags to be used by Sequence Assembly Logic 192 for selecting images to be included in an image sequence. This image sequence may be provided to Image Source 120A and/or other clients remote from Image Processing System 110. Image sequences defined using Channel Logic 965 on Image Source 120A are optionally subscribed to by and provided to third parties. A set of image tags, a tag group, and/or other information discussed herein used to select the contents of an image sequence is optionally considered an image sequence "channel." Once defined, the information for selecting images to be included in an image sequence channel can be shared and/or repeatedly used to generate image sequences. In some embodiments Channel Logic 965 includes a graphical user interface configured for a user to select tags, etc. for the purposes of generating image sequences. The selections made using Channel Logic 965 are optionally shared, stored in Memory 135, and/or used to generate multiple image sequences by different people at different times.
[00208] Channel Logic 965 is optionally further configured to determine how an image sequence is presented on Image Source 120A. For example, Channel Logic 965 may be used to select a rate at which images are presented, if images are presented one image at a time, transitions between images, how many images are shown at once, if images are shown in a column or row, if images are shown in a two-dimensional mosaic or collage, and/or the like. In various embodiments, Channel Logic 965 is configured for a user to pause display of a sequence, to save an image from an image sequence, to "like" an image, and/or designate an image as a "favorite." The pausing or saving of an image is optionally used by Popularity Logic 197 in determining the popularity of an image. In various embodiments, Channel Logic 965 is configured to use images of an image sequence as a screensaver and/or wallpaper on the client. [00209] Image Source 120A optionally further includes Filter Logic 970. Filter Logic 970 is configured to apply a filter to images within an image sequence. As is described further elsewhere herein, more than one filter may be applied and different filters can be applied to different images within a sequence. The filters that may be applied are described elsewhere herein. [00210] Filter Logic 970 is optionally alternatively included in Image Processing System 110, in which case filters are applied before images within an image sequence are provided to members of Image Sources 120 and/or other clients of Image Processing System 110. Note that, in various embodiments, these clients of Image Processing System 110 can include any combination of the elements illustrated in FIG. 9. For example, they need not necessarily include Camera 910.
[00211] Image Source 120 further includes an embodiment of Memory 135 configured to store images captured using Camera 910, geometric data, account data, data used by Channel Logic 965, data used by Filter Logic 970, and/or the like. Memory 135 includes non-transient memory such as Random Access Memory (RAM) or Read Only Memory (ROM). Memory 135 typically includes data structures configured to store captured images and marking locations within these images.
[00212] Image Source 120 further includes a Processor 950. Processor 950 is a digital processor configured to execute computing instructions. For example, in some embodiments Processor 950 is encoded with computing instructions to execute Display Logic 925, Selection Logic 930, Image Marking Logic 147, Channel Logic 965, Filter Logic 970, and/or Tracking Logic 935. Processor 950 optionally includes an Application Specific Integrated Circuit (ASIC) or Programmable Logic Array (PLA).
[00213] Image Source 120 optionally further includes Object Tracking Logic 955. Object Tracking Logic 955 is configured to track movement of an object of interest within a sequence of images. For example, in some embodiments, a user may use Selection Logic 930 to select a subset of an image or an aspect of the image for which information is requested. This subset may include one or more pixels. Object Tracking logic 955 is configured to use automatic (computer based) image interpretation logic to identify a specific object occupying the selected subset. This object may be a person, a vehicle, an animal, or any other object. The boundaries or other pixels of the selected object are optionally highlighted in Display 920 (discussed further below) by Object Tracking Logic 955. This highlight can track the object as it moves within the sequence of images and can include changing pixel characteristics. The highlighting optionally moves with the object on the display. An aspect of the image may be a brand of an object within the image, a movie from which the image is obtained, a location of the content of an image, etc. In some embodiments, aspects of the images can be specified as being of interest using text such as “shoe brand?,” “movie?,” “actor?,” “location,” “breed?,” etc. Such specifications may be provided in an original request to tag an image and/or in an upgrade request.
[00214] In some embodiments, images communicated from Image Sources 120 to Image Processing System 110 are part of a sequence of images that comprise a short video sequence. These video sequences may be tagged using the systems and methods described elsewhere herein. One advantage of tagging a video sequence is that the tag(s) may characterize a specific action that occurs in the video. For example, tags of a figure skater may characterize specific jumps (double Lutz, etc.) that are better identified in video than in a still image. Various embodiments include a specific limit on the length of the image sequence, e.g., the video must be no more than 3, 5, 7 or 10 seconds.
[00215] While the embodiments illustrated by FIG. 9 include electronic glasses, these embodiments may be adapted to any device having eye tracking technology including cell phones, video display monitors (e.g., computers or television screens), tablet computers, advertising displays, etc. For example, the embodiments of Image Source 120A illustrated in FIG. 9 include a television having an eye tracking camera configured to determine at which part of a television screen a user is looking.
[00216] Image Source 120A optionally further includes Image Processing Logic 960 configured to perform one or more steps for the purpose of tagging an image. Image Processing Logic 960 is optionally configured to reduce the load on Image Processing System 110 by performing these one or more steps locally to Image Source 120 A. For example, Image Processing Logic 960 may be configured for performing initial steps in tagging of an image and then send the results of these initial steps to Image Processing System 110 for generation of image tags. In some embodiments, Image Processing Logic 960 is capable of completing the tagging process for some but not necessarily all images. Image Processing Logic 960 includes hardware, firmware and/or software stored on computer readable media. For example, some embodiments include an instance of Processor 950 specifically configured to perform the functions of Image Processing Logic 960 discussed herein.
[00217] In some embodiments, Image Processing System 110 is configured to provide Image Processing Logic 960 to Image Sources 120. This is optionally via an "app store" such as the Apple App Store. Where applicable, providing Image Processing Logic 960 to a member of Image Sources 120 is an optional step in the various methods illustrated herein. Image Processing Logic 960 can be provided as an "app" or computer instructions that further include other logic discussed herein, for example the logic discussed in relation to FIG. 9.
[00218] In some embodiments, Image Processing Logic 960 is configured to identify specific features within an image. Feature identification includes determining if specific points within an image are or are not part of a feature of a given type. Types of features include, but are not limited to, edges, corners, blobs and ridges. Generally, a feature is an "interesting" or "useful" part of an image, for the purpose of identifying contents of the image. Image Processing Logic 960 may be configured to perform one or more of a number of different feature detection algorithms. In some embodiments, Image Processing Logic 960 is configured to select from among a number of different algorithms based on available processing power and/or contents of the image. Examples of known feature detection algorithms include "Canny," "Sobel," "Harris & Stephens/Plessy," "SUSAN," "Shi & Tomasi," "Level curve curvature," "FAST," "Laplacian of Gaussian," "Difference of Gaussians," "Determinant of Hessian," "MSER," "PCBR" and "Grey-level blobs." These types of algorithms are executed on a computing device and other such algorithms will be apparent to one of ordinary skill in the art. The results of feature identification include identification of a specific feature type at a specific location within the image. This may be encoded in a "feature descriptor" or "feature vector," etc. The results of feature detection may also include a value representing a confidence level at which the feature is identified.
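As one concrete, purely illustrative possibility, feature identification of this kind could be implemented on the client with the OpenCV library (assumed to be available as the opencv-python package); the ORB detector and the confidence heuristic below are assumptions, not algorithms the disclosure requires.

```python
# Illustrative client-side feature identification using OpenCV; the confidence
# heuristic is an assumption for the example.
import cv2

def identify_features(image_path):
    """Detect corner-like keypoints and return (keypoints, descriptors, confidence)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)               # FAST keypoints + BRIEF descriptors
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    # Crude confidence: fraction of the requested keypoints actually found.
    confidence = 0.0 if descriptors is None else len(keypoints) / 500.0
    return keypoints, descriptors, confidence

# Edge features could be extracted separately, e.g., with the Canny detector named above:
# edges = cv2.Canny(gray, threshold1=100, threshold2=200)
```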
[00219] In some embodiments, Image Processing Logic 960 is further configured to calculate image descriptors based on identified image features. Image descriptors are visual features of the contents of an image and include characteristics such as shape, color, texture and motion (in video). Image descriptors may be part of a specific descriptor domain, such as descriptors related to the domains of face recognition or currency recognition. The derivation of image descriptors is typically based on image features. For example, derivation of a 3-D shape descriptor may be based on detected edge features. Image descriptors may characterize one or more identified objects within an image.
[00220] The particular image features and image descriptors used in a particular embodiment are dependent on the particular image recognition algorithms used. A large number of image recognition algorithms are known in the art. In some embodiments, Image Processing Logic 960 and/or Image Processing System 110 are configured to first attempt identification of image features and derivation of image descriptors of various types and then to select from among a plurality of alternative image processing algorithms based on the levels of confidence at which the image descriptors are derived. For example, if image descriptors in a facial recognition domain are derived with a high level of confidence, then an image processing algorithm specific to facial recognition may be selected to generate image tags from these image descriptors.
[00221] In those embodiments that include Image Processing Logic 960 the task of tagging an image can be distributed between Image Sources 120 and Image Processing System 110. How the task is distributed may be fixed or may be dynamic. In embodiments where the distribution is fixed, specific steps are performed consistently on specific devices. In embodiments where the distribution is dynamic, the distribution of steps may be responsive to, for example, communication bandwidth, image type (still or video), processing power on Image Source 120A, current load on Image Processing System 110, availability of image reviewers at Destinations 125, the confidence to which steps are accomplished on Image Source 120A, and/or image descriptor data present on Image Source 120A. Any combination of these factors may be used to dynamically allocate distribution of processing steps. For example, if the derivation of image descriptors occurs with a low degree of confidence (relative to a predetermined requirement) on Image Source 120A, then the image features and/or image may be communicated to Image Processing System 110 for derivation of image descriptors using more powerful or alternative image processing algorithms. In contrast, if the derivation of image descriptors occurs on Image Source 120A with an adequate degree of confidence, then this step need not typically be performed on Image Processing System 110.
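The dynamic allocation decision itself could be as simple as the following sketch, which picks what the client uploads based on locally computed confidence values and available bandwidth; the thresholds and return labels are assumptions made for illustration.

```python
# Hypothetical decision logic for splitting the tagging task between Image Source 120A
# and Image Processing System 110; thresholds are assumptions.
def choose_payload(feature_conf, descriptor_conf, bandwidth_kbps,
                   feature_threshold=0.6, descriptor_threshold=0.7):
    """Return which artifacts the client should upload for this image."""
    if descriptor_conf >= descriptor_threshold:
        return "descriptors"       # compact, and confident enough to tag from
    if feature_conf >= feature_threshold and bandwidth_kbps < 500:
        return "features"          # low bandwidth: let the server derive descriptors
    return "image"                 # fall back to full server-side processing

print(choose_payload(feature_conf=0.8, descriptor_conf=0.4, bandwidth_kbps=200))  # features
```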
[00222] If image processing steps are successfully performed on Image Source 120A by
Image Processing Logic 960, the results of these steps and/or the image may be communicated to Image Processing System 110 using I/O 945. For example, in some embodiments both an image and image descriptors are communicated to Image Processing System 110. The image descriptors may be used in an attempt to automatically tag the image or may be provided to a human image reviewer at one or more of Destinations 125. The image descriptors may be used to identify a descriptor domain and this domain then used to select a member of Destinations 125 to which the image is sent. For example, a descriptor domain of "vehicles" may be used to select an image reviewer having expertise in vehicles. The classification of an image into a domain based on image descriptors may occur on either Image Processing System 110 or Image Processing Logic 960.
[00223] In some embodiments automatic tagging of an image is attempted based on derived image descriptors. In various embodiments, this may occur using Image Processing Logic 960 and/or Automatic Identification System 152. Classification optionally occurs by comparing the image descriptors derived from the image with a library of image descriptors associated with different classes. For example, an image descriptor identifying a vehicle shape may match with a previously stored image descriptor associated with a “vehicle” class. If the class is suitable (in type, scope, etc.) the identification of a class may be sufficient to automatically select a tag for the image. For example, image descriptors matching those of a class “child face” may be sufficient to generate the tags “child’s face.”
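A toy sketch of this descriptor-to-class matching appears below; the vector format, the cosine-similarity measure, the class library contents, and the threshold are all assumptions made for the example and not the disclosed library format.

```python
# Illustrative descriptor-to-class matching; not the patent's actual library format.
import math

CLASS_LIBRARY = {                      # class -> (stored descriptor, associated tags)
    "vehicle":    ([0.9, 0.1, 0.0], ["car"]),
    "child face": ([0.1, 0.8, 0.3], ["child's face"]),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def auto_tag(descriptor, threshold=0.85):
    """Return tags for the best-matching class, or None if no match is good enough."""
    best_class, best_sim = None, 0.0
    for cls, (stored, _tags) in CLASS_LIBRARY.items():
        sim = cosine(descriptor, stored)
        if sim > best_sim:
            best_class, best_sim = cls, sim
    return CLASS_LIBRARY[best_class][1] if best_sim >= threshold else None

print(auto_tag([0.85, 0.15, 0.05]))    # close to the "vehicle" descriptor -> ['car']
```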
[00224] Typically, Image Processing System 110 includes a larger library of image descriptors associated with different classes relative to Image Source 120A. These libraries are optionally stored in Memory 135 of Image Processing System 110 or Image Source 120 A, or Automatic Identification System 152. Libraries of image descriptors stored in Image Source 120 A are optionally based on images previously processed using Image Source 120 A. For example, if several images from Image Source 120 A are identified as having descriptors and tags relating to currency, a library of descriptors in the currency domain/class may be stored in Memory 135 of Image Source 120 A. These descriptors may be associated with tags such as “US $5 bill.” When a new image is received having the same set of descriptors, Image Processing Logic 960 is optionally configured to automatically tag the image using the associated tags. While the descriptor library may be received from Image Processing System 110, or may be developed using image tags received from Image Processing System 110, the tagging in the above example is not dependent on real-time communication with Image Processing System 110. [00225] In various embodiments, data characterizing relationships between image descriptors and classes and/or tags may be developed on Image Processing System 110, Image Source 120A, Destination 125 A and/or Automatic Identification System 152. Once developed the data may be transferred to improve and/or supplement the libraries at any of the other devices.
[00226] While the systems illustrated show a client-server architecture, in alternative embodiments Image Sources 120 and Destinations 125 are connected in a peer-to-peer architecture. In these embodiments, any combination of the elements illustrated in Image Processing System 110 may be included in Image Sources 120 and/or Destinations 125. One of Image Sources 120 may perform the image tagging and processing tasks discussed herein on an image received from another of Image Sources 120.
[00227] FIG. 10 illustrates a method of processing an image at least partially on Image Source 120 A, according to various embodiments of the invention. The methods illustrated by FIG. 10 can include a range of different processing steps performed on Image Source 120 A. For example, those steps involving image descriptors are optionally performed on Image Processing System 110.
[00228] In a Receive Image Step 1010 an image is received by Image Source 120A. The image may be received from a camera included in Image Source 120A, from Image Source 120B, from Network 115, from Image Processing System 110, from a wireless device, from a memory device, and/or the like. The received image is optionally one of a sequence of images that form a video.
[00229] In an Identify Features Step 1020, Image Processing Logic 960 is used to identify image features within the received image. As discussed elsewhere herein, methods of identifying image features are known in the art. Identify Features Step 1020 may apply one, two or more of these methods. The identification of features optionally includes a confidence level reflecting an estimated accuracy and/or completeness of the feature identification.
[00230] In an optional Send Features Step 1030 the image features identified in Identify Features Step 1020 are sent to Image Processing System 110. The features may be sent with or without the associated image and may be sent via Network 115. If Send Features Step 1030 is included in the method, the method optionally next proceeds to a Generate/Receive Tags Step 1070 in which Tags for the image are received from Image Processing System 110. Image Processing Logic 960 is optionally configured to perform Send Features Step 1030 based on a confidence level of the features calculated in Identify Features Step 1020. For example, if the confidence is below a threshold the step may be executed and both the image and the features sent.
[00231] In an optional Derive Descriptors Step 1040, Image Processing Logic 960 is used to derive image descriptors from the image features identified in Identify Features Step 1020. As discussed herein, a wide assortment of methods is known in the art for deriving image descriptors. In some embodiments, Derive Descriptors Step 1040 includes using more than one method. The derivation may include a confidence level reflecting an estimated accuracy and/or completeness of the descriptor derivation. The types and content of descriptors derived are typically dependent on the image recognition algorithm(s) used.
[00232] In an optional Send Descriptors Step 1050 the image descriptors derived in Derive Descriptors Step 1040 are sent to Image Processing System 110. The image descriptors may be sent with or without the associated image and may be sent via Network 115. If Send Descriptors Step 1050 is included in the method, the method optionally next proceeds to a Generate/Receive Tags Step 1070 in which tags for the image are received from Image Processing System 110. Image Processing Logic 960 is optionally configured to perform Send Descriptors Step 1050 based on a confidence level of the image descriptors derived in Derive Descriptors Step 1040. For example, if the confidence is below a threshold the step may be executed and both the image and the descriptors sent.
[00233] In an optional Compare Descriptors Step 1060, the one or more image descriptors derived in Derive Descriptors Step 1040 are compared with one or more image descriptors stored locally. As discussed elsewhere herein, these locally stored image descriptors are associated with image classes and/or image tags. The comparison may include calculation of a characteristic reflecting the quality of the match.
[00234] In some embodiments, both Send Descriptors Step 1050 and Compare Descriptors Step 1060 are performed. In this case processing of the image descriptors can occur both on Image Source 120A and Image Processing System 110. Likewise, in some embodiments both Send Features Step 1030 and Derive Descriptors Step 1040 are performed and the image features are processed on both systems/devices.
[00235] In an Assign/Receive Tags Step 1070 image tags characterizing the image are generated and/or received. For example, if the image, image features or image descriptors have been sent to Image Processing System 110, then corresponding tags may be received from Image Processing System 110 in Assign/Receive Tags Step 1070. If a match is found between the derived descriptors and the locally stored descriptors in Compare Descriptors Step 1060, then tags associated with the matched locally stored image descriptors are retrieved from local memory and assigned to the image. Tags may be both locally assigned and received for the same image. The image tags are optionally generated using image features and/or descriptors, e.g., without Image Processing System 110 receiving the actual image.
[00236] In some embodiments, Assign/Receive Tags Step 1070 includes assigning a classification to an image, sending the image and the classification to Image Processing System 110, and receiving corresponding tags back from Image Processing System 110. The tags may be identified using the methods illustrated in FIG. 4. The classification may be used by Image Processing System 110 to generate the tags using Automatic Identification System 152 and/or a human reviewer at Destination 125A.
[00237] The assigned and/or received tags, and/or other results, are provided in Provide Results Step 455, as discussed elsewhere herein.
[00238] FIG. 11 illustrates a method of processing an image based on image descriptors, according to various embodiments of the invention. This method is typically performed on one of Image Sources 120. In the illustrated embodiments, Steps 1010, 1020, 1040 and 1060 are performed as described elsewhere herein. In a Classify Image Step 1110 the image being processed is classified based on a match between one or more image descriptors derived in Derive Descriptors Step 1040 and image descriptors previously stored on the one of Image Sources 120. The class or classes assigned to the image is the class or classes associated with the matched image descriptors previously stored.
[00239] In a Send Step 1120 the image and the class or classes assigned to the image are sent to Image Processing System 110. The image is there processed as described elsewhere herein to produce image tags assigned to the image. The processing optionally includes use of the class or classes to select a human image reviewer or to assist in automatically tagging the image.
[00240] In a Receive Tags Step 1130 the tags assigned to the image are received by the one of Image Sources 120 on which Receive Image Step 1010 was performed. The tags are then presented in Provide Results Step 455.
[00241] FIG. 12 illustrates a method of processing an image using feedback, according to various embodiments of the invention. This method is optionally performed on Image Source 120A and includes several communications between Image Source 120A and Image Processing System 110 in order to improve tagging of an image. In a Provide Image Step 1210, an image is provided from Image Source 120A to Image Processing System 110. [00242] In a Receive 1st Response Step 1220 a first response is received from Image Processing System 110. This response may include one or more image tags. In a Provide Feedback Step 1230 feedback regarding the received image tags is provided from Image Source 120A to Image Processing System 110. This feedback is optionally manually entered by a human user of Image Source 120A and may include an upgrade request as discussed elsewhere herein. Feedback may include a correction to one or more of the received tags. For example, the feedback may include an indication that one of the tags is not representative of the image. The feedback may include a classification of the image.
[00243] In an optional Receive 2nd Response Step 1240 a second response is received from Image Processing System 110. The second response is typically generated using the feedback provided in Provide Feedback Step 1230. In one example, considering an image of a toy car, the first response includes the tag “car”, the feedback includes the term “toy”, and the second response includes the tag “Fisher-Price Superwagon.” The methods illustrated by FIG. 12 are optionally used to improve the accuracy of image tagging.
[00244] FIGs. 13 and 14 illustrate methods of providing image tags based on image descriptors, according to various embodiments of the invention. In FIG. 13 the image descriptors are used to generate image tags that are then communicated over a computing network to a source of the image descriptors. In FIG. 14 the image descriptors are used to determine a Destination 125 for an image. The methods illustrated in FIGs. 13 and 14 are optionally performed in conjunction with methods illustrated elsewhere herein. For example, the steps of these methods may be combined with those illustrated in FIG. 4.
[00245] Specifically, referring to FIG. 13, in a Receive Descriptors Step 1310 one or more image descriptors characterizing an image are received at Image Processing System 110. These image descriptors are optionally received without the associated image. Receiving only the descriptors typically requires less bandwidth than receiving the image. The image descriptors are optionally received from Image Source 120A via Network 115 and generated using the methods illustrated in FIGs. 10 or 11.
[00246] In a Compare Descriptors Step 1320 the received image descriptors are compared to one or more image descriptors previously stored at Image Processing System 110, e.g., stored in Memory 135. This comparison is made to determine if any of the received descriptors match the stored descriptors. The stored descriptors are stored in association with one or more image tags and/or classifications. For example, one set of stored descriptors may be associated with the image tags “oak tree.”
[00247] In a Retrieve Tags Step 1330 one or more image tags are retrieved responsive to a match between the received descriptors and the stored descriptors. The retrieved image tags are those associated with the matched set.
[00248] In a Provide Tags Step 1340 the retrieved image tags are provided back to the source of the received descriptors, e.g., to Image Source 120A. They may there be presented to a user or otherwise processed as described elsewhere herein.
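The descriptor comparison and tag retrieval of FIG. 13 can be illustrated with a minimal sketch. The sketch below is for illustration only and is not the disclosed implementation; the DescriptorIndex class, the cosine-similarity measure, and the 0.9 match threshold are assumptions chosen for clarity.

    # Illustrative sketch: matching received image descriptors against a stored
    # descriptor/tag index using cosine similarity. DescriptorIndex and
    # SIMILARITY_THRESHOLD are hypothetical names, not part of this disclosure.
    import math

    SIMILARITY_THRESHOLD = 0.9  # assumed match threshold

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    class DescriptorIndex:
        """Stores descriptor vectors in association with image tags."""
        def __init__(self):
            self._entries = []  # list of (descriptor, tags) pairs

        def add(self, descriptor, tags):
            self._entries.append((descriptor, tags))

        def retrieve_tags(self, received_descriptor):
            """Return tags for the best-matching stored descriptor, if any."""
            best_tags, best_score = None, SIMILARITY_THRESHOLD
            for descriptor, tags in self._entries:
                score = cosine_similarity(received_descriptor, descriptor)
                if score >= best_score:
                    best_tags, best_score = tags, score
            return best_tags  # None means no match; fall back to full image processing

    # Example: an "oak tree" descriptor stored earlier is matched by a similar query.
    index = DescriptorIndex()
    index.add([0.8, 0.1, 0.3], ["oak tree"])
    print(index.retrieve_tags([0.79, 0.12, 0.29]))  # -> ['oak tree']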
[00249] FIG. 14 illustrates methods in which an image and data characterizing the image are processed at Image Processing System 110. In a Receive Image & Data Step 1410 the image and the data characterizing the image are received at Image Processing System 110. The data characterizing the image can include, for example, a classification of the image or image descriptors characterizing the image. The image and characterizing data are optionally received from Image Source 120A. Receive Image & Data Step 1410 is optionally an embodiment of Receive Image Step 410 and Receive Source Data Step 420.
[00250] In a Determine Destination Step 1420 a destination for the image is determined based on the data characterizing the image. The destination may be one of Destinations 125 and/or Automatic Identification System 152. For example, if the data characterizing the image includes a specific classification, then the determined destination may be one of Destinations 125 associated with a human image reviewer having expertise in that classification. Determine Destination Step 1420 is optionally an embodiment of Determine Destination Step 465.
[00251] In a Post Image Step 1430 the image, and optionally the classification, are communicated to the determined destination. In a Receive Tags Step 1440 one or more image tags are received, the image tags being based on the image and being selected to characterize the image. In a Provide Tags Step 1340 the image tags are provided to the source of the image, e.g., Image Source 120A. Post Image Step 1430 is optionally an embodiment of Post Image Step 470.
[00252] FIG. 15 illustrates methods of prioritizing image tagging, according to various embodiments of the invention. In these methods Image Ranker 190 is used to assign a priority to an image and the priority is used to determine how, if at all, the image is tagged. In Receive Image Step 410 an image is received at Image Processing System 110 as discussed elsewhere herein. The image may be from one of Image Sources 120, and may be received by crawling webpages for images. In some embodiments one or more of Image Sources 120 include logic configured to crawl websites and retrieve images from these websites. Information received along with the image may include data regarding a webpage from which the image was retrieved. For example, the image may be received along with text and metadata from the webpage, data indicating how often the webpage is loaded (viewed), a URL of the webpage, and/or any other data on which image priority may be determined as discussed elsewhere herein.
[00253] In an Assign Priority Step 1520, Image Ranker 190 is used to automatically assign a priority to the received image. The priority is optionally represented by a numerical value from 1-100, by a letter grade, or the like. Priority optionally implies an (ordered) ranking of images. As described elsewhere herein, the priority may be determined based on a wide variety of factors.
[00254] In a Determine Processing Step 1530 a method of tagging (processing) the image is determined. The determination is based on the assigned priority of the image. In some embodiments, images with lowest priority are not processed (tagged) at all. The methods of tagging include automated tagging and/or manual tagging by a human reviewer, as described elsewhere herein.
[00255] In an optional Automatic Tagging Step 1540 the image is tagged using Automatic Identification System 152. Automatic Tagging Step 1540 is optional in embodiments where the method of tagging determined in Determine Processing Step 1530 does not include use of Automatic Identification System 152. Automatic Tagging Step 1540 is optionally performed prior to Assign Priority Step 1520. For example, an image may be tagged using Automatic Identification System 152, and a confidence level for the automatically generated tags may then be used in Assign Priority Step 1520 to determine a priority for manual (human) tagging. If the confidence of the automatically generated tags is high then the priority for manual tagging may be set low, and if the confidence is relatively low then the priority for manual tagging may be set relatively high.
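As an illustration of how a confidence level from Automatic Identification System 152 might feed Assign Priority Step 1520, the following sketch maps automatic-tag confidence (and, optionally, page traffic) to a 1-100 manual-tagging priority. The weighting constants below are assumptions, not values disclosed herein.

    # Illustrative sketch: low automatic-tag confidence and high page traffic
    # raise the priority for manual (human) tagging. The 70/30 weighting and the
    # traffic normalization are illustrative assumptions.
    def manual_tagging_priority(auto_tag_confidence, page_views_per_day=0):
        confidence_term = (1.0 - auto_tag_confidence) * 70          # up to 70 points
        traffic_term = min(page_views_per_day / 1000.0, 1.0) * 30   # up to 30 points
        return max(1, min(100, round(confidence_term + traffic_term)))

    print(manual_tagging_priority(0.95, page_views_per_day=50))    # low priority (5)
    print(manual_tagging_priority(0.30, page_views_per_day=5000))  # high priority (79)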
[00256] In an optional Manual Tagging Step 1550 the image is sent to one of Destinations 125 for tagging by a human reviewer. The image may be sent with tags generated using Automatic Identification System 152 and/or a variety of other information as described elsewhere herein. Manual Tagging Step 1550 may include any of the steps illustrated by FIGs. 6-8.
[00257] In an optional Auction Tag Step 1560 an advertisement is assigned to the image for display on a webpage. This webpage is optionally the webpage from which the image was obtained in Receive Image Step 410. Auction Tag Step 1560 is optionally performed in real time as a request for the webpage is received. At that time, the tag(s) assigned to the image can be auctioned off to the party willing to provide the greatest consideration for placing an advertisement over or beside the image. Auction Tag Step 1560 is optionally performed using Advertising System 180 and the auction process may be managed by a third party, such as Google’s AdSense®.
[00258] In an optional Retag Step 1570, an image is retagged. Retag Step 1570 may include an analysis of how often advertisement(s) assigned to the image are clicked as compared to an expected click rate. For example, if advertisements assigned to an image based on a first tagging are not clicked on at an expected rate, then the tags may not be an optimal representation of the image. The image may be retagged in an attempt to improve the click rate of assigned advertisements. Retag Step 1570 may include any of the tagging methods disclosed herein, e.g., those methods discussed in relation to FIGs. 6-8 and 15. Retag Step 1570 may use the knowledge that the tags resulting from the first tagging were not optimal.
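A minimal sketch of the retagging decision of Retag Step 1570 follows, assuming a hypothetical tolerance below which the observed click-through rate is taken to indicate suboptimal tags.

    # Illustrative sketch: decide whether an image should be retagged when
    # advertisements assigned from its first tagging underperform an expected
    # click-through rate. The 0.5 tolerance is an assumption.
    def should_retag(clicks, impressions, expected_ctr, tolerance=0.5):
        """Retag when the observed CTR falls below a fraction of the expected CTR."""
        if impressions == 0:
            return False  # not enough data to judge the first tagging
        observed_ctr = clicks / impressions
        return observed_ctr < expected_ctr * tolerance

    print(should_retag(clicks=2, impressions=1000, expected_ctr=0.01))  # True -> retag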
[00259] The methods illustrated by FIG. 15 may also be applied to image sequences, e.g., video. The image sequence may be presented in a browser or using a variety of alternative applications. For example, video may be provided to a member of Image Sources 120 from a website such as youtube.com or from streaming services such as Netflix, Comcast cable television, DirecTV, Roku or Hulu. Factors used to determine if an image within a video should be manually tagged include: how often the video is viewed and/or the estimated value of expected tags. Expected tags may be indicated by an automatic review of the image using Automatic Identification System 152, dialog within the video, text accompanying the video (e.g., a description, caption or title), and/or the like. The advertisement may be a video appended at the beginning or end of the image sequence, or spliced within the image sequence. Thus, the advertisement and video can be presented together in association. The advertisement may include an overlay placed over part of the image sequence, typically a part including the tagged image. [00260] Client-side image analysis - An exemplary embodiment.
[00261] For several decades, machine-readable inventory has been critical in automating processes in retail, warehousing, industrial, surveillance, general Internet search, and other applications where objects must be cataloged or otherwise identified. However, these processes have traditionally been limited to machine-readable codes embedded in or otherwise attached to the objects beforehand. There are many cases in which the machine-readable label process fails - a label may be accidentally removed, or objects that have never been labeled may require identification. In recent years, the retail industry has made an effort to streamline checkout processes, replacing automated stations with fully automated purchase processes by way of a generalized video monitoring system. In various embodiments the systems illustrated in FIG. 1, including Image Processing System 110 and/or Automatic Identification System 152, are configured to overcome the limitations of machine-readable labels by observing salient object features sufficient to identify and confirm an observed object, a collection of objects, or to comprehend the actions describing a story in which a collection of objects interacts. For example, Automatic Identification System 152 and/or Image Processing System 110 may be used to identify an object without use of a machine-readable label or to confirm an identification made using a machine-readable label. Furthermore, this new type of analysis can be applied incrementally to legacy systems, offering quality control to traditionally machine-read objects. As an example, Image Processing System 110 may be deployed into self-checkout kiosks to visually confirm that an object matches a read barcode.
[00262] In various embodiments Automatic Identification System 152, and optionally any other elements of Image Processing System 110, may be embodied on a mobile device. For example, Automatic Identification System 152 may be disposed on a smartphone, in a vehicle, on a camera, etc. In this way the processing of images can be performed on the same device used to collect the images. Alternatively, Automatic Identification System 152, and optionally any other elements of Image Processing System 110, may be disposed on a device locally connected to a device configured to capture images. For example, Automatic Identification System 152 may be disposed on a network hub connected to a set of security cameras. The connections between the network hub and the cameras may be wired or may use a local wireless connection such as WiFi, Bluetooth, and/or the like. In another example, Automatic Identification System 152 may be disposed on a smartphone locally connected to smart glasses (e.g., AR/VR glasses) which are optionally configured to both collect images and/or display image tags generated by Automatic Identification System 152. The network hub or smartphone may be connected to other devices using long range communications such as Network 115 (e.g., cellular communications and the internet).
[00263] Automatic Identification System 152, and optionally any other elements of Image Processing System 110, are optionally disposed in Internet-of-Things (IoT) devices. Such IoT devices include, for example, surveillance cameras, smartphones, point of sale devices, barcode readers, robotics, drones, appliances, door locks, civil/traffic management systems, and/or automobiles. One advantage of including Automatic Identification System 152 on such devices is that the ability to process images gives these devices the intelligence required to comprehend their environment, without having to communicate all the processed images to a remote server. In various embodiments, this “edge” processing of images on IoT devices can reduce the bandwidth (relative to remote server processing) required to understand the images by at least 5, 10, 20, 50 or 100 times. Using Automatic Identification System 152, and optionally other elements of Image Processing System 110, each IoT device can participate in a clustered learning environment, whereby the devices independently gather salient data about their individual environments and contribute to a shared, learned understanding as the data is aggregated through parent nodes in the network. These parent nodes, which may include a remote server and/or an instance of Image Processing System 110, are configured to observe differences in the visual features, or other intermediate neural network data, being received from edge nodes and determine salience, interest, and other metrics that are optionally used to further train a neural network of Automatic Identification System 152, or perform some global image analysis. This further training results in updates that are then delivered to the edge nodes (e.g., IoT devices including Automatic Identification System 152), thereby increasing the overall level of understanding and intelligence in the network. Such a network may include 2, 3 or more layers. For example, in some embodiments, IoT devices (edge nodes) include a set of cameras, intermediate nodes include a set of network hubs, and a parent node includes an image processing system, such as Image Processing System 110.
[00264] FIG. 16 illustrates an Image Analysis System 1700 according to various embodiments of the invention. Image Analysis System 1700 is optionally an embodiment of the system(s) illustrated in FIG. 1 and may include Automatic Identification System 152, Advertising System 180, Image Sources 120, and/or any practical combination of elements included in Image Processing System 110. Image Analysis System 1700 optionally includes Training Server 1720 and optionally more than one Computing Device 1710. Computing Devices 1710 are optionally embodiments of Image Source 120A. As discussed herein, Network 115 transmits data bidirectionally through a wired or wireless network between one or more of Computing Device 1710 and a Training Server 1720. In some embodiments, Image Analysis System 1700 includes one or more of Advertising System 180 or Inventory Server 1735. In other embodiments, Network 115 may optionally transmit data bidirectionally between Training Server 1720, Advertising System 180, and/or Inventory Server 1735.
[00265] Computing Devices 1710 include Visual Processing Logic 1747 configured to process images or image sequences (and optionally 3D information). Visual Processing Logic 1747 is typically an embodiment of Automatic Identification System 152 and may include other elements of Image Processing System 110. For example, Visual Processing Logic 1747 may include an embodiment of Image Ranker 190 and/or Destination Logic 160 configured to select images based on rank and/or to select a destination to send those images. As noted elsewhere herein, only a fraction of the images processed by Visual Processing Logic 1747 on Computing Devices 1710 may be communicated elsewhere. Those images that are communicated may be selected for communication based on the actions and/or objects identified in the images by Visual Processing Logic 1747. The embodiments of Visual Processing Logic 1747 disposed on Computing Device 1710 are optionally optimized with regard to their processing hardware and/or memory requirements. For example, when a neural network is prepared for a lightweight footprint, the vocabulary may be pruned and optimized for the application’s domain, the model may be quantized, or the nodes pruned. Visual Processing Logic 1747 may include a downloadable application (stored on a non-transient computer readable medium) configured to execute on a mobile device, may be embedded in hardware, and/or may include computing instructions as firmware on a logic circuit.
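The footprint reductions mentioned above (vocabulary pruning and model quantization) can be sketched in simplified form. The functions below are illustrative only; a production deployment would typically use a framework's quantization tooling rather than this hand-rolled symmetric 8-bit scheme.

    # Illustrative sketch: prune the tag vocabulary to an application domain and
    # quantize floating-point weights to 8-bit integers with one scale factor.
    def prune_vocabulary(vocabulary, domain_terms):
        """Keep only tags relevant to the application's domain."""
        return [word for word in vocabulary if word in domain_terms]

    def quantize_weights(weights):
        """Map float weights to int8 values using a single symmetric scale."""
        scale = max(abs(w) for w in weights) / 127.0 or 1.0
        return [round(w / scale) for w in weights], scale

    vocab = prune_vocabulary(["sofa", "oak tree", "refrigerator"], {"sofa", "refrigerator"})
    quantized, scale = quantize_weights([0.42, -1.27, 0.003])
    print(vocab, quantized, scale)  # smaller vocabulary, int8 weights, shared scale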
[00266] Image Analysis System 1700 optionally includes a Display 1740, which is configured to present image tags to the user. In various embodiments, Display 1740 is configured to provide an interface by which to select the Input Device 1745 (e.g., a Camera) field of view and/or focus. In some embodiments, Display 1740 is configured to provide a visual interface to allow for the selection of object(s), actions, or a series of actions comprising a story for which to apply Visual Processing Logic 1747. In other embodiments, Display 1740 is configured to allow a selection of items in the Input Device 1745 field of view for which to apply the Visual Processing Logic 1747. In other embodiments, Display 1740 is configured to display surface and depth data from Input Device 1745, allowing the user to differentiate between foreground and background objects or other scenery features for which to apply the Visual Processing Logic 1747. Display 1740 may be disposed on Computing Device 1710 and/or may be included in a Monitoring System 1781 configured for remote control of Computing Device 1710, for example, a drone or remotely controlled camera.
[00267] Image Analysis System 1700 includes Input Device 1745, which comprises one or more visual or spatial sensors. Input Device 1745 is optionally an embodiment of Image Source 120A or included in an embodiment of Image Source 120A. In various embodiments, Input Device 1745 may comprise a single visual spectrum camera, or optionally an infrared spectrum camera, or both visual and infrared cameras. In other embodiments, Input Device 1745 may comprise both a primary camera and a secondary camera, where the secondary camera is optionally configured to compute a depth map and other spatial data of the scene. In other embodiments, Input Device 1745 may comprise LIDAR, which is optionally configured to gather depth, surface, contour, and other spatial data from the scene.
[00268] In some embodiments, Input Device 1745 includes both Image Source 120A (e.g., a camera) and a barcode reader, where Image Source 120A is configured to generate images of an object associated with a barcode. The barcode reader may include a scanning laser or may be configured to interpret a barcode based on a captured image. In these embodiments, Visual Processing Logic 1747 may be disposed in the same device as Input Device 1745, or may be disposed in a device locally connected to Input Device 1745. For example, Visual Processing Logic 1747 may be disposed in a local inventory system or point of sale system, with images and barcodes communicated from Input Device 1745 to Visual Processing Logic 1747 via a wire or short-range wireless signals. Visual Processing Logic 1747 is optionally configured to both detect and interpret a barcode and also tag other objects within an image. The combined barcode interpretation and image tagging are optionally performed on a same image, and the resulting image tags may be compared to the barcode interpretation to confirm accuracy of the barcode read and confirm that the barcode is properly associated with an object identified in the image.
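A minimal sketch of the barcode/tag consistency check described above follows, assuming a hypothetical product catalog that maps barcodes to expected tag words.

    # Illustrative sketch: confirm that a barcode read is consistent with the tags
    # generated for the same image. The catalog contents are hypothetical.
    PRODUCT_CATALOG = {"012345678905": {"refrigerator", "stainless", "steel"}}

    def barcode_matches_image(barcode, image_tags, minimum_overlap=1):
        """Return True when the tagged object is plausibly the scanned product."""
        expected_words = PRODUCT_CATALOG.get(barcode, set())
        tag_words = {word.lower() for tag in image_tags for word in tag.split()}
        return len(expected_words & tag_words) >= minimum_overlap

    print(barcode_matches_image("012345678905",
                                ["3 Door French Door Stainless Steel Refrigerator"]))  # True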
[00269] In some embodiments Display 1740 and/or Input Device 1745 is part of a wearable device while other parts of Image Analysis System 1700 are disposed in a separate mobile device. As is discussed elsewhere herein, a lightweight wearable device may communicate images to a local mobile device on which image analysis (tagging/captioning) is performed. Such embodiments may include, for example, smart glasses and a smartphone, wherein the smart glasses include the Display 1740 and Input Device 1745 and the smartphone tags image contents locally using Visual Processing Logic 1747. Communications between the smart glasses (e.g., AR/VR system) and the smartphone may occur using a wired connection or short-range radio protocols such as Bluetooth or WiFi, while communication between the smartphone and other devices via Network 115 is accomplished using cellular protocols and/or Internet protocols. This allows an Augmented Reality and/or a Virtual Reality (AR/VR) system to receive image tags (e.g., captions) using the processing power of the smartphone and without sending all of the images to a remote device, e.g., to a server via the internet. By utilizing a local smartphone (or other mobile device), the AR/VR system can be embodied in a device of lower processing and/or power requirements (relative to the smartphone). The AR/VR system can, thus, be embodied in lightweight, stylish glasses or another wearable device.
[00270] Image Analysis System 1700 is optionally configured with a keyboard or other input device. In various embodiments the keyboard provides a mechanism by which to provide corrective feedback on the output of Visual Processing Logic 1747. The keyboard may be configured virtually as part of Display 1740 or as a separate physical keyboard attached to Computing Device 1710.
[00271] Image Analysis System 1700 includes Communication Circuit 1750, which is optionally enclosed in Computing Device 1710 and configured to manage network traffic, signal flow, and data between Display 1740 and Microprocessor 1790, and one or more of Input Device 1745. In other embodiments, Image Analysis System 1700 includes Communication Circuit 1750, which is both external and internal to Computing Device 1710 and is configured to manage network traffic, signal flow, and data between Display 1740 and Microprocessor 1790, and one or more of Input Device 1745. In an exemplary embodiment, Image Analysis System 1700 is installed in a municipality in one or more processing nodes, each of which has at least 25, 50, 100 or more connected cameras. In this embodiment, image processing at the edge nodes results in a reduction in the amount of data communicated by Communication Circuit 1750. Specifically, instead of communicating every image or every processed image, only the most salient data is communicated from each camera or network hub local to the cameras (i.e., alerts when a particular action or series of actions comprising a story have occurred), which is thereafter transmitted to a central monitoring station. A typical MPEG-4 video stream requires between 300 kbps and 5 Mbps to transmit video data over a network, and by reducing the data to only the most salient information, an average installation may save several orders of magnitude in bandwidth costs, utilizing bandwidth only during interesting events (i.e., from 36 GB to 0.3 GB or less per day, depending on activity). Communication Circuit 1750 optionally includes embodiments of Network 115.
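The bandwidth figures above can be checked with simple arithmetic; the sketch below assumes a 3.3 Mbps stream and that roughly 1% of each day contains salient events, both illustrative values rather than disclosed parameters.

    # Illustrative sketch: rough daily bandwidth for an always-on video stream
    # versus transmitting only during salient events.
    SECONDS_PER_DAY = 24 * 60 * 60

    def daily_gigabytes(bitrate_mbps, active_fraction=1.0):
        bits = bitrate_mbps * 1_000_000 * SECONDS_PER_DAY * active_fraction
        return bits / 8 / 1_000_000_000  # convert bits to gigabytes

    continuous = daily_gigabytes(3.3)          # roughly 36 GB/day per camera
    salient_only = daily_gigabytes(3.3, 0.01)  # roughly 0.36 GB/day if ~1% is salient
    print(round(continuous, 1), round(salient_only, 2))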
[00272] Image Analysis System 1700 includes Microprocessor 1790 which is configured to execute Visual Processing Logic 1747 and/or Device Update Logic 1729. In some embodiments, Microprocessor 1790 is connected to Memory 1755 and both are contained within Computing Device 1710, sharing both a common power source and data communication bus. In some embodiments, Microprocessor 1790 is configured to execute any combination of Visual Processing Logic 1747, Inventory Logic 1775, and/or Alert Logic 1780 (discussed elsewhere herein). Microprocessor 1790 is optionally an embodiment of Processor 140.
[00273] Memory 1755 includes, for example, Application Memory 1760, Execution Memory 1765, and Tag Memory 1770. Application Memory 1760 includes non-transient memory configured to store computing instructions, firmware, and/or hardware. Application Memory 1760 is configured to store any combination of Inventory Logic 1775, Device Update Logic 1729, Visual Processing Logic 1747, and/or Alert Logic 1780. For example, Application Memory 1760 may be configured to store a neural network of Visual Processing Logic 1747, associated neural network control logic, and neural network output interpretation logic. Execution Memory 1765 is configured to store the executing application code for Microprocessor 1790, and in some embodiments may be optionally connected with a specialized data bus to Microprocessor 1790. Tag Memory 1770 is configured to store vocabulary (words used for tags or captions) for the neural network interpretation logic. In some embodiments, Tag Memory 1770 may be updated remotely by Device Update Logic 1729. Neural network interpretation logic (typically part of Visual Processing Logic 1747) is logic configured to take an output of a neural network of Visual Processing Logic 1747 and convert the output into a phrase (e.g., image caption) which optionally includes both identification of objects within one or more images and relationships between those objects. [00274] Analysis System 100 optionally includes Device Update Logic 1729, which is configured to receive full or partial updates to Visual Processing Logic 1747 and/or the vocabulary stored in Tag Memory 1770, from Training Server 1720. For example, in various embodiments, Device Update Logic 1729 is configured to schedule and apply an update to the Visual Processing Logic 1747 from Training Server 1720. In some embodiments, a scheduled update may include a full or partial update of either Visual Processing Logic 1747, including the neural network of Visual Processing Logic 1747, and/or Tag Memory 1770.
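As an illustration of the vocabulary-based interpretation logic described in paragraph [00273], the sketch below decodes hypothetical neural network output indices into a caption using a vocabulary of the kind stored in Tag Memory 1770. The vocabulary, token scheme, and decoding loop are assumptions for illustration only.

    # Illustrative sketch: map network output token indices to words and assemble
    # a caption phrase. The vocabulary contents are hypothetical.
    TAG_VOCABULARY = ["<end>", "a", "toy", "car", "on", "table"]  # stored in Tag Memory

    def interpret_output(token_indices):
        """Convert network output indices into a caption phrase."""
        words = []
        for index in token_indices:
            word = TAG_VOCABULARY[index]
            if word == "<end>":
                break
            words.append(word)
        return " ".join(words)

    print(interpret_output([1, 2, 3, 4, 1, 5, 0]))  # "a toy car on a table"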
[00275] In cases where Image Analysis System 1700 is deployed in inventory management, warehousing, robotics, archival, retail, and other industries whereby material objects are retrieved by natural visual representation (i.e., objects without a predetermined computer code such as a barcode, QR code, or any other coded label), various embodiments of Image Analysis System 1700 optionally include Inventory Logic 1775, which is configured to retain and match various objects by inventory record through Visual Processing Logic 1747. As noted elsewhere herein, Image Analysis System 1700 may be used in combination with various types of barcode (e.g., QR code) readers.
[00276] In other embodiments of Image Analysis System 1700, optional Alert Logic 1780 is configured to monitor the output of Visual Processing Logic 1747 for various lists of trigger words, actions, and/or concepts, for example, a tag “gun” or a relationship “hitting.” When a trigger word is detected Alert Logic 1780 may be configured to transmit an alert indicating the occurrence of such trigger words, actions and/or concepts (and optionally related images) through Network 115 to other systems outside Image Analysis System 1700. In an exemplary embodiment of Alert Logic 1780, multiple surveillance cameras, or other Input Devices 1745, may be connected throughout a city to several Computing Device 1710 edge nodes, whereby each Computing Device 1710 is responsible for one or more locally generated camera video feeds. At Computing Device 1710 the images from cameras are processed using Visual Processing Logic 1747, and any triggers resulting from this processing result in Alert Logic 1780 sending an alert to a city-wide central monitoring station, such as an Office of Emergency Management (OEM) Monitoring Center, but only when such actions or sequences of actions exist in the video feeds. Such a configuration may also be used in residential or commercial surveillance and security systems, whereby a home’s Computing Device 1710, e.g., a residential security system control panel, may be configured to create curated image and/or video feeds that only include triggering content, reducing a homeowner’s concern over the privacy of a camera until such time as images indicating that human life or property is at risk are processed.
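A minimal sketch of the trigger monitoring performed by Alert Logic 1780 follows, with a hypothetical trigger list and alert record format assumed for illustration.

    # Illustrative sketch: scan caption output for trigger words or relationships
    # and emit an alert record only when something salient is found.
    TRIGGER_TERMS = {"gun", "hitting", "fire"}

    def check_for_alert(caption, camera_id):
        """Return an alert record when a caption contains a trigger term."""
        words = set(caption.lower().split())
        triggers = words & TRIGGER_TERMS
        if triggers:
            return {"camera": camera_id, "triggers": sorted(triggers), "caption": caption}
        return None  # nothing salient; no bandwidth is spent on this frame

    print(check_for_alert("person hitting another person near a parked car", "cam-17"))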
[00277] Training Server 1720 is configured to receive image and video streams, image features, and/or other intermediate neural network data, and one or more image tags via Network 115 from one or more Computing Device 1710. For example, one or more images may be captured by a wearable camera or augmented reality device (e.g., headset) and the images communicated wirelessly to a nearby smartphone for processing. The smartphone, or other Computing Device 1710, may then communicate a fraction of the captured and processed images to Training Server 1720 and/or Image Processing System 110, e.g., less than 1/10th, 1/50th, 1/100th or 1/1000th of the captured images, for the purpose of quality control and/or training. This approach (a smartphone intermediary) is useful when the capture device is a lightweight pair of smart glasses or other device lacking the processing capacity of, or having less processing capacity than, the smartphone (e.g., iPhone 11). However, the use of an intermediary is not required. The fraction of captured and processed images may be sent directly from any instance of Computing Device 1710. As only a fraction of images are transmitted to either Image Processing System 110 and/or Training Server 1720, the required bandwidth and server processing power can be substantially lower than if more images (e.g., all processed images) were transmitted.
[00278] In various embodiments, Training Server 1720 contains Confirmation Logic 1724 which is configured to analyze corrective and other feedback data from one or more Computing Device 1710.
Data received by Confirmation Logic 1724 are optionally compared through a configurable threshold in quorum, salience, disagreement, general differential, cosine similarity, Euclidean distance, or other comparative analysis across one or more input data. In other embodiments, Confirmation Logic 1724 may be configured to receive corrective data through direct and explicit user feedback or indirect user behavior data. An exemplary embodiment is configured to monitor user interaction in the form of acceptable user behaviors, such as a purchase or successful advertisement interaction (click, view, lead, etc.), which serves as one or more positive confirmation steps in validating neural network success. Such comprehensible input may also be optionally configured in the inverse, by reducing the weight of the corresponding neural network results. In some embodiments, Confirmation Logic 1724 is configured to compare analyses of several images including the same object or objects from different views such that the results of the analyses can be compared. Training Server 1720 may include any combination of the elements of Image Processing System 110 and/or Visual Processing Logic 1747.
[00279] Training Server 1720 is optionally configured to store a master copy of a neural network. This master neural network serves as a reference for neural networks deployed in one or more Computing Device 1710. NN Training Logic 1726 receives corrective adjustment data from Confirmation Logic 1724, which is then applied to a master copy of the neural network weights. In other embodiments, a partial neural network is stored in Training Server 1720. In these embodiments, neural network data stored in Training Server 1720 is then used to compute a delta through Update Logic 1728, which then packages and distributes either a full or partial update through Network 115 to one or more Computing Device 1710.
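The packaging of a full or partial update by Update Logic 1728 can be sketched as a comparison between the master weights and the weights last deployed to an edge device. The layer names and change threshold below are assumptions, not disclosed values.

    # Illustrative sketch: compute a delta between master neural network weights
    # and deployed weights, packaging only the layers that changed.
    def compute_partial_update(master_weights, deployed_weights, threshold=1e-6):
        """Return only the layers whose weights differ beyond the threshold."""
        update = {}
        for layer, weights in master_weights.items():
            old = deployed_weights.get(layer, [])
            if len(old) != len(weights) or any(
                    abs(a - b) > threshold for a, b in zip(weights, old)):
                update[layer] = weights
        return update

    master = {"conv1": [0.11, -0.32], "fc": [0.90, 0.05]}
    deployed = {"conv1": [0.11, -0.32], "fc": [0.80, 0.05]}
    print(compute_partial_update(master, deployed))  # only "fc" is sent to the edge node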
[00280] In various embodiments, Image Analysis System 1700 is configured to communicate with or include Advertising System 180. Selection Logic 1733 is configured to receive a request through Network 115 from one or more Computing Device 1710, the request containing one or more image tags, image captions, image features, or other intermediate neural network data. Selection Logic 1733 is configured to utilize received image tags to retrieve one or more advertisements from Advertisement Storage 1737. In other embodiments, Selection Logic 1733 may be optionally configured to compare image features or other intermediate neural network data for selection of advertisements. In an exemplary embodiment, hiking advertisements are configured to match similar or related image tags, such as “hiking boots”, or a cluster of image features, or other intermediate neural network data, belonging to a particular cohort of users interested in “hiking boots” or similar topics.
[00281] In various embodiments, Image Analysis System 1700 is optionally configured with Inventory Server 1735 including Inventory Logic 1775. Inventory Logic 1775 can include any inventory management system configured to receive object identities, such as object identities determined by Visual Processing Logic 1747 and optionally confirmed by a barcode reading. In an exemplary embodiment, a warehouse inventory management system is configured to select one or more products matching or similar to image tags “3 Door French Door Stainless Steel Refrigerator” received from Computing Device 1710 when Input Device 1745 captures the visual, surface, or other spatial data from an object or scene and such data has been interpreted by Visual Processing Logic 1747. This approach may be used for tracking inventory and/or sales.
[00282] Smart wearable devices have increased in popularity in recent years. As described herein, it is advantageous to have an intermediary device perform some computations on behalf of a wearable device. For example, a smartphone may perform image analysis on behalf of a smartwatch or smart glasses using a wired or local wireless connection. The smartphone is an “intermediary” device because it may also be connected to a remote server, i.e., via the internet.
[00283] In a detailed example, a wearable device such as a watch, headset, clothing, pendant, glasses, bodycam, pin, etc., includes a camera configured to capture images. These images are communicated to a mobile device such as a smartphone or vehicle, etc. On the mobile device images are processed and the results of the processing are then communicated to the wearable device and/or a remote server. The wearable device may be an example of Input Device 1745 configured to record the images. These results can include identification of subject matter within the images, e.g., image tags. A first neural network is used on the mobile device to make the identification based on processing of the image. On the wearable device the identification can be communicated to a user either in the form of text or audio. In addition, a subset of the images may be communicated from the mobile device to a remote server, such as Image Processing System 110 and/or Automatic Identification System 152. The remote server is configured for scoring the accuracy of the identification. For example, the server may provide a score representing an accuracy of an image tag determined using Visual Processing Logic 1747. The determination of a score can be done manually (by a person) and/or can be done using a second neural network that has greater processing power and/or is better trained relative to the first neural network. In some embodiments, the server, e.g., Training Server 1720, is configured to send an update to the first neural network to the mobile device, the update being based on images received from multiple mobile devices, e.g., multiple instances of Computing Device 1710.
[00284] In various embodiments, a mobile device acts as an intermediary local to an image capture device, where there is more communication of images between the mobile device and the image capture device relative to communication of images between the mobile device and a server. For example, a smartphone intermediary may support a VR/AR headset having less processing power than the smartphone. The server receives a fraction of the images and provides quality control, neural network updates, and/or other information such as advertisements.
[00285] FIG. 17 illustrates methods of processing an image on a client device according to various embodiments of the invention. These methods are optionally performed using Image Analysis System 1700 and/or Image Processing System 110.
[00286] In a Capture Step 1810, image or video content is captured by Input Device 1745 and encoded into a data file or stream. Image or video content may be displayed via Display 1740 to allow an operator of Analysis System 100 to control, focus, or otherwise adjust Input Device 1745, or to select an object or objects for identification.
[00287] In a Process Step 1815, image and video data from Capture Step 1810 are transmitted to Visual Processing Logic 1747. In this step, data are optionally pre-processed to align dimension and format with those acceptable to a first neural network of Visual Processing Logic 1747 and then forwarded through the first and optionally more neural networks for further processing. Process Step 1815 includes, for example, processing the image or video stream on the mobile computing device using a microprocessor and a visual processing application, the processing including generation of one or more image tags characterizing an identity of a three-dimensional object, an action within the image, and/or a series of actions comprising a story. The visual processing application optionally includes a neural network stored on the mobile computing device. [00288] In various embodiments, image tags produced in Process Step 1815 are sent to one of several optional servers in a Send Tags Step 1820 in order to receive product inventory, ad inventory, or various data further describing the object or scene captured during the Capture Step 1810. In other embodiments, Inventory Logic 1775 and/or Alert Logic 1780 may be invoked to determine whether image tags are salient or interesting enough for any downstream services, such as Advertising System 180 or Inventory Server 1735. As an exemplary embodiment, when a user takes a picture of a consumer product (with or without a barcode or other machine-readable label) like a piece of furniture and image tags are produced similar to “black suede leather 3-seat sectional sofa,” Send Tags Step 1820 is invoked, sending image tags to Inventory Server 1735, which then retrieves product inventory to fulfill the user’s search.
In other various embodiments, a camera continuously captures scene data through Input Device 1745 and/or Image Source 120A and, through Process Step 1815, produces a stream of image tags, which are then processed via Alert Logic 1780 to determine if the tags are interesting to a number of downstream services or devices (SMS or other message delivery systems, emergency services, security monitoring, etc.).
[00289] In an optional Receive Ad Step 1825 one or more advertisements are received from the advertisement inventory of Advertising System 180, the advertisement(s) being selected based on the tags sent as a result of Send Tags Step 1820. In an exemplary embodiment, when a user takes a picture of a consumer product (without a barcode or other machine-readable label) like a piece of furniture and image tags are produced similar to “black suede leather 3-seat sectional sofa,” Send Tags Step 1820 is invoked, sending image tags to Advertising System 180.
[00290] In an optional Display Step 1830, image tag data generated using Visual Processing Logic 1747 are combined and exhibited to a user, optionally along with an advertisement obtained from Advertising System 180. Data displayed to the user may optionally include the image tags as generated by Visual Processing Logic 1747, or an interpretation of the image tags as they relate to the application of Image Processing System 110. For example, an interpretation may be “Inventory Confirmed” or “Inspection Passed.” [00291] Image tags produced in Process Step 1815 are optionally compared with validation data of similar visual content. If these data are determined to be salient or interesting through analysis, or on random/periodic selection, a Send Image Step 1835 is invoked. In Send Image Step 1835 Communication Circuit 1750 is used to transmit visual data, image tags, image features, and/or intermediate neural network data to a remote server such as Image Processing System 110. For example, in a periodic selection, every 100th image is sent for quality control and/or training purposes.
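A minimal sketch of the periodic selection mentioned above (e.g., every 100th image sent for quality control or training), with an optional salience override, follows; the period and the salience input are illustrative assumptions.

    # Illustrative sketch: forward only a small fraction of processed images to
    # the remote server - every Nth image, plus any image flagged as salient.
    SEND_EVERY_N = 100

    def should_send(image_counter, is_salient=False):
        """Send every Nth image, plus salient ones."""
        return is_salient or image_counter % SEND_EVERY_N == 0

    sent = sum(1 for i in range(1, 10_001) if should_send(i))
    print(sent, "of 10000 images sent")  # roughly 1/100th of processed images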
[00292] In an optional Receive Update Step 1840, Computing Device 1710 receives either a full or partial update of Visual Processing Logic 1747 from Training Server 1720 through Network 115 and applies the update in either or both of Application Memory 1760 and/or Tag Memory 1770. In an exemplary embodiment, one or more Computing Device 1710 capture visual data related to a traffic collision, process these data into image tags, and send the tags and other intermediate neural network data via Network 115 to Training Server 1720. Training Server 1720 then compares differentials related to salience and uniqueness, and optionally also accepts any corrective input from field experts. In Receive Update Step 1840, Training Server 1720 then determines an update to the neural network which is sent back to Computing Device 1710 via Network 115. At Computing Device 1710, Visual Processing Logic 1747 is then replaced or partially updated based on the received update.
[00293] In various embodiments, the methods illustrated by FIG. 17 include: sending the image, video stream, depth data, or surface data to the remote server and receiving an update to the neural network, the update being based on a correctness of the one or more image tags in characterizing the identity of the object or action within the image. The image, video stream, depth data, or surface data is optionally one of a set of similar type (whether image, video, etc) comprising a configurable quorum for which associated or corrective tags are communicated to the remote server. The image, video stream, depth data, or surface data optionally contains a determinable novelty threshold between one of a set of images, video stream, depth data, or surface data sources, for which associated or corrective tags are communicated to the remote server. The image, video stream, depth data, or surface data optionally contain a configurable novelty threshold between previously trained inputs, for which associated or corrective tags are communicated to the remote server. The image, video stream, depth data, or surface data optionally contain a configurable salience threshold between previously recorded input data, for which associated or corrective tags are communicated to the remote server.
[00294] FIG. 18 illustrates methods of providing information to an AR/VR device, according to various embodiments of the invention. In these methods, the possibly small footprint (memory and/or processing power requirements) of Visual Processing Logic 1747 is used to provide image processing locally, as opposed to at a distant server, for example, on a wearable, on a mobile device, on a handheld device, in a vehicle, and/or at a local network hub. The image processing may occur on the same device used to capture images or on a nearby device. For example, a body cam, smart glasses, AR/VR headset or heads-up display, vehicle camera, drone, and/or the like, may be used to both capture images and process the captured images to generate image tags. Alternatively, any of the above devices may be used to capture images and a local device (intermediary) such as a smartphone, vehicle computer, wearable computer, drone controller, personal assistant device (e.g., Amazon Alexa), network hub, and/or the like may be used to process the captured images to generate image tags. Optionally, the image tags (or associated information) are then communicated back to the device used to capture the images. A fraction of the processed images is typically also provided to a remote server for the purposes of quality control, training, etc. The mode of communication between the local devices and remote servers is optionally different, e.g., using the Internet or cellular networks versus using wired or short-range radio frequency communication standards. The use of an intermediary is optional.
[00295] In a Capture Step 1810, one or more images are captured using Input Device 1745 or Image Source 120A. The captured images may include a sequence of images embodied in a video standard. Capture Step 1810 can include capture of images and/or 3D spatial information, and may be accomplished using any of the image capture devices discussed herein.
[00296] In a Transmit Step 1920, the captured images are transmitted to a local device including Visual Processing Logic 1747, e.g., to Computing Device 1710. This transmission can occur using a wired connection or short-range radio frequency protocols. In a specific example, images captured using an AR/VR headset or smart glasses are communicated to a smartphone or wearable device. In other embodiments, images captured using vehicle navigation cameras and/or LIDAR are communicated to a vehicle computer.
[00297] In a Process Step 1815, the communicated images are processed using Visual Processing Logic 1747 and/or Image Processing System 110 to generate image tags characterizing actions, objects, and/or relationships between objects within the communicated images.
[00298] In a Transmit Tags Step 1930, the image tags generated in Process Step 1815 are transmitted back to the device that captured the images or to a different local device. For example, back to the smart glasses or AR/VR device. In another example, images may be captured by vehicle navigation cameras, communicated to a vehicle computing device including Visual Processing Logic 1747, and the resulting tags then transmitted to an augmented reality heads-up display positioned for viewing by a driver.
[00299] In an optional Display Step 1830, the image tags (or derivative thereof) are displayed to a user. They may be displayed on an AR/VR device, on a heads-up display, and/or the like. An example of a derivative of the tags can include an indication “improper license plate” derived from tags indicating a vehicle make and model and tags indicating a license plate number - where the license plate does not match the vehicle or is expired.
[00300] In a Send Image Step 1835, a fraction of the images processed in Process Step 1815 are sent from the device including Visual Processing Logic 1747 to a separate device such as Training Server 1720 and/or Image Processing System 110. As noted elsewhere herein, the fraction sent may be less than 1/10th, 1/100th or 1/1000th of the total images processed. Send Image Step 1835 may be performed using a communication standard having longer range than Transmit Step 1920, for example, cellular protocols and/or the internet.
[00301] Send Image Step 1835 is optionally followed by Receive Update Step 1840, as discussed elsewhere herein. [00302] Example applications for use of the image tagging systems and/or methods discussed herein include: Marketing (what does the user look at - eye tracking w/ heartrate/sweat detection to measure excitement, provide advertisements based on length of view or eye focus); Describe a scene (accessibility for vision impaired); Estimating weight/sizes (recognize a couch and estimate its weight, moving & storage); Estimate work to be done (automated measurements based on images, recognizing surfaces such as floor, corners and walls, etc., contractor) - measurements; Detect violations/issues/problems (inspection); Medical diagnostics and support (surgeon in ER or via remote diagnostics, viewing something on one’s own body); Anomaly detection (for example, identify license plates that don’t match a car make/model/year, identify license plates of vehicles that have not been registered); Aeronautics (pilot support - process visual field - safety/hazard detection); Motorcycle/Bicycle helmet (safety/hazard detection); Contact tracing (facial detection); Business/Networking/Political support (facial detection); Public Safety (facial / anomaly / illegal activity detection); and Industrial warehousing (automatically identify items on a pick list in an AR device, label items and inventory / picker).
[00303] Various embodiments of the invention include use of Image Processing System 110 to facilitate autonomous operations, such as autonomous flight. Such embodiments can include use of any combination of the systems and methods disclosed elsewhere herein. For example, Computing Device 1710 may be included in an autonomous or semi-autonomous drone or other autonomous or semi-autonomous vehicle. Such a drone or other vehicle may operate in conditions wherein limitations on direct radio communications prevent optimal remotely piloted operation.
[00304] Various embodiments of the invention include Navigation Logic 188 configured to navigate a vehicle based at least in part on image analysis (e.g., the descriptive tagging) performed using Image Processing System 110 and/or Computing Device 1710. The descriptive nature of the output of Image Processing System 110 and/or Computing Device 1710 can facilitate the resulting navigation. In Computing Device 1710, image analysis on a small mobile device, such as a drone, can enable navigation normally only found in larger devices. In various embodiments, Computing Device 1710 may weigh less than 250, 200 or 170 grams. In various embodiments, Visual Processing Logic 1747 is executed using an instance of Microprocessor 1790 having less than 70, 50, 40 or 30 GFLOPs. In various embodiments, Visual Processing Logic 1747 includes an image processing model less than 1 GB, 600 MB or 450 MB in size. In various embodiments, Visual Processing Logic 1747 is configured to process images having an image size of 224x224, 331x331 or 448x448, or any size therebetween, or more.
[00305] Image Processing System 110 and/or Computing Device 1710 may be included in a vehicle wherein radio or optical communications are intermittent or undesirable. Intermittent communications can include situations where radio or direct laser communications are unreliable, such as where they may be subject to interference or jamming or lack of line-of-sight. Undesirable communications can include situations in which communications are subject to monitoring, spoofing, or detection of a party involved in the communications, for example, where one or more senders in the communications do not wish to be located. In these embodiments, vehicle operation (e.g., drone flight) may fall over from management by a remote pilot to management dependent on Image Processing System 110 and/or Computing Device 1710. Specifically, some embodiments of the invention include Fall-Over Logic 187, configured to determine when Image Processing System 110 and/or Computing Device 1710 are partially and/or fully responsible for vehicle operation (e.g., navigation), optionally as compared to full and/or real-time remote human vehicle operation.
[00306] “Fall-over” as used herein is intended to include full or partial exchange of responsibility from a first entity to a second entity, when the first entity is compromised. In some embodiments, fall-over is between different vehicles, for example, between different drones. This may occur when, 1) operation is switched from being dependent on one source to another source, or 2) operation is switched from being dependent on one source to multiple sources. In an illustrative example, fall-over from a remote operator to Navigation Logic 188 may occur when a radio signal is jammed or otherwise lost.
[00307] In some embodiments, Image Processing System 110 and/or Computing Device 1710 are configured to receive and process images from several mobile devices working in concert. For example, a group of two or more drones may each obtain images of an area and/or object. These images can be used by Computing Device 1710 to confirm identity of the object and/or action within the images. Visual Processing Logic 1747 may use the multiple images to produce a single descriptive tag or may generate a descriptive tag for each image and then compare the tags produced to confirm their accuracy.
[00308] In some embodiments, Image Processing System 110 and/or Computing Device 1710 include Targeting Logic 189 configured to use the descriptive tags generated by image processing (e.g., by Visual Processing Logic 1747) to identify and/or track targets. As described herein with respect to Navigation Logic 188, Targeting Logic 189 may be subject to fall-over (e.g., under the control of Fall-Over Logic 187) from a remote human operator to operation at least partially dependent on outputs of Image Processing System 110 and/or Computing Device 1710; and Targeting Logic 189 may also make use of processing of images obtained from multiple vehicles working in concert. In a specific example, Computing Device 1710 is configured to use images obtained by one or more drones to identify an object (e.g., another drone, aircraft, flying object, or ground-based vehicle). The identification may be used to avoid, track, navigate to, follow, target, locate, and/or otherwise interact with the object. For example, identification of a missing person may be used to generate and communicate a location of that person, while following the person. In another example, images generated by one or more drones may be used to identify another drone and navigate to, target, fire on, and/or follow the other drone. [00309] In some embodiments, two or more of Computing Device 1710 are configured to form a peer-to-peer network, optionally with each in communication with the others and each including Visual Processing Logic 1747. In these embodiments, the Computing Devices 1710 may communicate via radio and/or optical signals. Each may capture images of an object and/or area, and optionally each is capable of processing the captured images. The processed images include views of the object and/or area from different viewpoints. As noted elsewhere herein, the images from different Computing Devices 1710 may be used for confirmation of descriptive tags, and/or two or more images may be processed as a group by one or more instances of Visual Processing Logic 1747.
[00310] In a specific example, Computing Device 1710 is included in a drone configured to be piloted by a remote user via a wireless signal. The drone includes Fall-Over Logic 187 configured to switch between piloting by the remote user and piloting using Navigation Logic 188, the Navigation Logic 188 being responsive to the output of Visual Processing Logic 1747. Fall-Over Logic 187 is configured to make this switch responsive to loss and/or restoration of the wireless signal. In some embodiments, Fall-Over Logic 187 is configured to switch between these two piloting modes based on a schedule and/or a detection event. For example, where it is desirable to reduce or minimize radio communications, the drone may be configured to spend a majority of flight time in radio silence (e.g., at least 50, 70, 90, 95 or 98% of the time). During these times of radio silence, Navigation Logic 188 and Visual Processing Logic 1747 are used for navigation. During the remaining times of radio communication with a remote user, piloting (and/or target acquisition) is optionally performed by the remote user. In another example, a switch between autonomous piloting and remote piloting is triggered by detection of a target object, as determined by Visual Processing Logic 1747. The target may be, for example, a person, a specific person, a vehicle, a license plate, an animal, an object of a predetermined color, detection of an object of a specific type (e.g., a vehicle model, another drone, a tracked vehicle, a vehicle greater than a certain size, a weapon, a flying object, a drone, a helicopter, a jet, a boat/ship, ... ), detection of a moving vehicle, a type of movement (e.g., flying, running, crawling, shooting, burning, ... ), detection of a heat signature, and/or the like. Once a target is detected, as determined by a descriptive tag produced by Visual Processing Logic 1747, Fall-Over Logic 187 may be configured to automatically switch between piloting modes, for example, switching navigation from an autonomous mode to a remote human pilot. Fall-Over Logic 187 may be configured to activate fall-over before or after target acquisition. Fall-Over Logic 187 may be configured to activate fall-over based on detection of a jamming signal.
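A minimal sketch of a fall-over decision of the kind described for Fall-Over Logic 187 follows, assuming hypothetical mode names and using radio-link health, jamming detection, and target detection as inputs. This is illustrative only and not the disclosed logic.

    # Illustrative sketch: choose between remote piloting and autonomous
    # navigation based on link health and target detection reported by the
    # visual processing output. Mode names are assumptions.
    REMOTE, AUTONOMOUS = "remote", "autonomous"

    def next_mode(current_mode, radio_link_ok, jamming_detected, target_detected):
        if jamming_detected or not radio_link_ok:
            return AUTONOMOUS                  # link compromised: fall over
        if current_mode == AUTONOMOUS and target_detected:
            return REMOTE                      # hand back for target confirmation
        return current_mode

    mode = REMOTE
    mode = next_mode(mode, radio_link_ok=False, jamming_detected=False, target_detected=False)
    print(mode)  # autonomous
    mode = next_mode(mode, radio_link_ok=True, jamming_detected=False, target_detected=True)
    print(mode)  # remote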
[00311] In some embodiments, Image Processing System 110 and/or Computing Device 1710 are disposed within a “mother” vehicle configured to carry one or more sub-vehicles. The sub-vehicles may or may not include additional instances of Image Processing System 110 and/or Computing Device 1710. For example, in some embodiments a drone is configured to carry at least 1, 2, 4, 5 or 8 smaller drones. When a target is detected, as determined by processing one or more images using Visual Processing Logic 1747, the mother drone is configured to release a smaller drone, which may include another instance of Computing Device 1710 and/or Input Device 1745. The smaller drone may be guided to the target by its own instance of Visual Processing Logic 1747, by the Visual Processing Logic 1747 of the mother drone, or both. The smaller drone may be replaced by a missile, optionally including Visual Processing Logic 1747. The mother drone may be configured to loiter in an area until a target is detected, optionally switch to remote pilot navigation to confirm the detected target, and then release the smaller drones to attack the target. In this case, images from the smaller drones are optionally used to confirm and/or track the target, the images being processed on the smaller drones or the mother drone.
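The loiter/confirm/release behavior of the mother drone might be sketched as a small state machine. The state names, the placeholder tag "target", and the function signature below are illustrative assumptions for this example only.

```python
def mother_drone_step(state, image_tags, operator_confirmed, sub_drones_available):
    """One decision step for the 'mother' drone described above.

    States: 'loiter' (autonomous search), 'confirm' (remote pilot verifies the
    detection), 'release' (a smaller drone is deployed toward the target).
    The tag 'target' stands in for whatever reference tag the mission uses.
    """
    if state == "loiter" and "target" in image_tags:
        return "confirm"                       # switch to remote pilot for verification
    if state == "confirm" and operator_confirmed and sub_drones_available > 0:
        return "release"                       # deploy one of the carried sub-drones
    if state == "confirm" and not operator_confirmed:
        return "loiter"                        # false alarm: resume loitering
    return state

state = "loiter"
state = mother_drone_step(state, ["vehicle", "target"], False, 4)   # -> 'confirm'
state = mother_drone_step(state, ["vehicle", "target"], True, 4)    # -> 'release'
print(state)
```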
[00312] In some embodiments, Image Processing System 110 includes multiple devices, e.g., multiple instances of Computing Device 1710. In these embodiments, image analysis (in any of the embodiments described herein) is distributed among the multiple devices, and the image analysis may be based on data, e.g., images, received from the multiple devices. Further, the output of each instance of Image Processing System 110, Image Source 120, and/or Computing Device 1710 may be evaluated at a subset of these devices. For example, a system may include multiple flying vehicles, of which one, two, or more include Visual Processing Logic 1747, while further vehicles include cameras configured to capture images to be communicated to those one, two, or more vehicles for processing.
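One possible way to distribute captured frames from camera-only vehicles to the subset of vehicles carrying Visual Processing Logic 1747 is a simple round-robin assignment, sketched below. The vehicle identifiers and frame labels are assumptions made for illustration.

```python
import itertools

def route_images(frames_by_vehicle, processing_vehicles):
    """Assign captured frames to the subset of vehicles able to run the
    neural network, in round-robin fashion.

    frames_by_vehicle: dict mapping a capture-only vehicle id to its frames.
    processing_vehicles: ids of vehicles that carry visual processing logic.
    Returns a dict mapping each processing vehicle to the frames it analyses.
    """
    assignments = {v: [] for v in processing_vehicles}
    targets = itertools.cycle(processing_vehicles)
    for vehicle, frames in frames_by_vehicle.items():
        for frame in frames:
            assignments[next(targets)].append((vehicle, frame))
    return assignments

captured = {"cam_drone_1": ["f1", "f2"], "cam_drone_2": ["f3"]}
print(route_images(captured, ["proc_drone_a", "proc_drone_b"]))
```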
Inter-vehicle coordination.
[00313] Various embodiments of the invention include methods of operating a remote vehicle, the methods comprising the following steps. 1) Guiding the remote vehicle using instructions received via a radio signal from a remote operator. The radio signal is optionally operated using frequency hopping and/or a burst mode. 2) Detecting an event requiring fall-over (a “fall-over event”) from control by the remote operator (a “remote mode”) to control by Navigation Logic 188 supported by Visual Processing Logic 1747 (an “autonomous mode”). The event may include any of the events discussed elsewhere herein, such as jamming of the radio signal or detection of a target. 3) Falling over from the remote mode to the autonomous mode. 4) Navigating the remote vehicle in the autonomous mode based on image processing by Visual Processing Logic 1747. 5) Optionally receiving from the remote operator a confirmation of the target.
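The numbered steps above could be exercised, purely for illustration, by the following scripted control loop. The event strings are assumptions made for this sketch and do not correspond to terms defined in the disclosure.

```python
def operate_remote_vehicle(events):
    """Illustrative walk-through of the numbered steps in paragraph [00313].

    `events` is a scripted sequence standing in for what the vehicle
    experiences on each control cycle.
    """
    mode = "remote"
    log = []
    for event in events:
        if mode == "remote":
            if event in ("jamming", "link_lost", "target_detected"):   # step 2: fall-over event
                mode = "autonomous"                                     # step 3: fall over
                log.append(f"fall-over on {event}")
            else:
                log.append(f"apply operator command: {event}")          # step 1: operator guidance
        else:
            if event == "operator_confirms_target":                     # step 5: operator confirmation
                mode = "remote"
                log.append("target confirmed, control returned to operator")
            else:
                log.append(f"navigate from image tags: {event}")        # step 4: tag-based navigation
    return log

for line in operate_remote_vehicle(
        ["turn_left", "jamming", "tags:road,vehicle", "operator_confirms_target"]):
    print(line)
```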
[00314] In various embodiments, Computing Device 1710 can include any combination of Fall-Over Logic 187, Navigation Logic 188, and/or Targeting Logic 189. Computing Device 1710 may be in communication with a remote Monitoring System 1781, on which Display 1740 is optionally located, and which is configured to allow a human or AI operator to navigate the vehicle in the remote mode.
[00315] Several embodiments are specifically illustrated and/or described herein. However, it will be appreciated that modifications and variations are covered by the above teachings and within the scope of the appended claims without departing from the spirit and intended scope thereof. For example, the images discussed herein are optionally part of a video sequence of a video. Human image reviewers may provide image tags at Destinations 125 using audio input. The audio input can be converted to text in real-time using audio-to-text conversion logic disposed on Destinations 125 and/or Image Processing System 110. Image tags are optionally processed by spellcheck logic. The “smartphone” discussed herein may be replaced by other (optionally mobile) devices, such as a police or emergency services radio, a tablet computer, a home/commercial security system, a vehicle computing system, a bodycam system, a vehicle camera system, a sentry system, a border control system, a drone, and/or the like, any of which may include Computing Device 1710. As used herein, the term “real-time” means without unnecessary delay such that a user can easily wait for completion. The systems and methods described herein are optionally used to tag audio content, such as music or dialog. This audio content may be part of a video or otherwise associated with an image. In some embodiments, audio content is automatically converted to text and this text is used to assist in manually or automatically tagging an image. Text generated from audio content may be used in manners similar to those described herein for text found on a webpage including an image, to assist in tagging the image.
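As a sketch of how dictated reviewer input and the spellcheck logic mentioned above might combine, the following example snaps each transcribed word to the closest tag in an assumed vocabulary. The vocabulary, similarity threshold, and function name are illustrative assumptions, and no particular speech-recognition backend is assumed for producing the transcript.

```python
import difflib

KNOWN_TAGS = {"vehicle", "person", "drone", "fire", "running", "weapon"}

def speech_to_tags(transcript):
    """Turn a spoken review transcript into normalized image tags.

    The transcript itself would come from an audio-to-text step (any
    speech-recognition backend could be used; none is assumed here).
    Simple spell-check: snap each word to the closest known tag, mirroring
    the spellcheck logic mentioned above.
    """
    tags = []
    for word in transcript.lower().split():
        match = difflib.get_close_matches(word, KNOWN_TAGS, n=1, cutoff=0.8)
        if match:
            tags.append(match[0])
    return tags

# A reviewer at a Destination 125 dictates a description containing a small typo.
print(speech_to_tags("persn running near a vehicle"))  # ['person', 'running', 'vehicle']
```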
[00316] The embodiments discussed herein are illustrative of the present invention. As these embodiments of the present invention are described with reference to illustrations, various modifications or adaptations of the methods and/or specific structures described may become apparent to those skilled in the art. All such modifications, adaptations, or variations that rely upon the teachings of the present invention, and through which these teachings have advanced the art, are considered to be within the spirit and scope of the present invention. Hence, these descriptions and drawings should not be considered in a limiting sense, as it is understood that the present invention is in no way limited to only the embodiments illustrated.
[00317] Computing systems referred to herein (e.g., Image Processing System 110, Image Sources 120, and Destinations 125) can comprise an integrated circuit, a microprocessor, a personal computer, a server, a distributed computing system, a communication device, a network device, or the like, and various combinations of the same. A computing system may also comprise volatile and/or non-volatile memory such as random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), magnetic media, optical media, nano-media, a hard drive, a compact disk, a digital versatile disc (DVD), and/or other devices configured for storing analog or digital information, such as in a database. The various examples of logic or “applications” noted above can comprise hardware, firmware, or software stored on a non-transient computer-readable medium, or combinations thereof. This logic may be implemented in an electronic device, e.g., a circuit, to produce a special purpose computing system. Computer-implemented steps of the methods noted herein can comprise a set of instructions stored on a computer-readable medium that when executed cause the computing system to perform the steps. A computing system programmed to perform particular functions pursuant to instructions from program software is a special purpose computing system for performing those particular functions. Data that is manipulated by a special purpose computing system while performing those particular functions is at least electronically saved in buffers of the computing system, physically changing the special purpose computing system from one state to the next with each change to the stored data.

Claims

CLAIMS

What is claimed is:
1. A portable computing device comprising: a display configured to present a user interface and to display an advertisement; a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network and logic configured to generate image tags, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream, the neural network and the camera sharing a same power supply; execution memory configured for execution of the visual processing application; wireless communication circuits configured to send the image tags generated by the visual processing application to a remote advertising server and to receive the advertisement from the advertising server, the advertisement being selected based on the image tags; and a microprocessor configured to execute the visual processing application to generate the image tags on the portable computing device.
2. An inventory management device comprising: a display configured to present a user interface and to display user messages; a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network optionally requiring less than 250MB of the application memory and being configured to generate image tags, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream, the neural network and the camera both being disposed within a housing of the inventory management device; inventory logic configured to generate inventory data based on the image tags; a wireless communication circuit configured to send inventory data to a remote server; execution memory configured for execution of the visual processing application; and a microprocessor configured to execute the visual processing application to generate the image tags on the inventory management device.
3. A detection system configured to detect an object, an event or a series of events comprising a story, the system comprising: a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network and being configured to generate one or more image tags by processing the image, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream, the neural network and the camera sharing a same power supply or the neural network being within a central processing device with one or more connected cameras; execution memory configured for execution of the visual processing application; tag memory configured to store reference tags; alert logic configured to send an alert to a remote server in response to one of the one or more image tags matching one of the reference tags; and a wireless communication circuit configured to send the alert and the image tags generated by the visual processing application to the remote server.
4. A flyable drone comprising: a camera configured to capture an image or video stream; application memory configured to store a visual processing application, the visual processing application including a neural network and logic configured to generate image tags, the image tags characterizing an identity of a three-dimensional object or an action within the image or video stream; execution memory configured for execution of the visual processing application; wireless communication circuits configured to send the image tags generated by the visual processing application (and/or the image) to a remote monitoring system and to receive real-time navigation instructions from the remote monitoring system; navigation logic configured to navigate the drone in an autonomous navigation mode independent of the real-time navigation instructions from the remote monitoring system; fall-over logic configured to switch between the autonomous navigation mode and a remote mode in which navigation of the drone is based on the real-time navigation instructions from the remote monitoring system, the switch being responsive to a fall-over event; optional targeting logic configured to target an object identified in the image or video stream by the visual processing application; and a microprocessor configured to execute the visual processing application to generate the image tags on the flyable drone.
5. A method of navigating a remote vehicle, the method comprising: guiding the vehicle using instructions received via a radio signal from a remote operator, in a “remote mode”; detecting a fall-over event; falling-over from the remote mode to an autonomous mode, navigation in the autonomous mode being responsive to images generated by a camera on the remote vehicle and processed to generate descriptive image tags using visual processing logic on the remote vehicle; and navigating in the autonomous mode based on the descriptive image tags.
6. A method of navigating a drone, the method comprising: navigating the drone in an autonomous mode in which radio communications (sending and/or receiving) are minimized; detecting a fall-over event, wherein the fall-over event optionally includes detection of a target object; and using fall-over logic to switch from the autonomous mode to a remote mode of navigation, the remote mode of navigation including navigation responsive to commands received from a remote operator via radio signals.
7. The system or method of any one of the preceding claims, wherein the reference image tags include one or more of “fire,” “vehicle,” “person,” “throwing,” “party,” “breaking,” “gun,” “paint,” “taking,” “hitting,” “falling,” “tripping,” “attacking,” “driving,” “walking,” “running,” “gathering,” “jumping,” “stabbing,” “shooting,” “chasing,” or any other words optionally configured to trigger alert logic.
8. The system or method of any one of the preceding claims, wherein the visual processing application is further configured to send a fraction but not all of the processed images to the remote server.
9. The system or method of any one of the preceding claims, wherein the visual processing application is further configured to send image features or other intermediate neural network data (a.k.a. latent vectors) to the remote server and to optionally receive a partial or full neural network update based on update logic.
10. The system or method of any one of the preceding claims, wherein the visual processing application is configured to generate the one or more image tags by processing the image or video stream without communicating the image or video stream over a network connection.
11. The system or method of any one of the preceding claims, wherein the visual processing application is configured to generate the one or more image tags by processing the image or video stream without communicating the image or video stream over a network connection.
12. The system or method of any one of the preceding claims, wherein the visual processing application is configured to generate the one or more image tags by processing the image or video stream without communicating the image or video stream over a network connection.
13. The system or method of any one of the preceding claims, wherein the visual processing application is configured to receive a partial or full neural network update based on update logic that is based on the fraction of the processed images.
14. The system or method of any one of the preceding claims, further comprising a secondary camera configured to capture depth or surface data.
15. The system or method of any one of the preceding claims, further comprising LIDAR configured to capture depth data.
16. The system or method of any one of the preceding claims, wherein the visual processing application is configured to identify gestures within videos.
17. The system or method of any one of the preceding claims, wherein the visual processing application requires less than 250MB of the application memory.
18. The system or method of any one of the preceding claims, wherein the wireless communication circuit is further configured to send the image to the remote server when the one or more image tags match one of the reference tags.
19. The system or method of any one of the preceding claims, wherein the reference image tags include one or more of “drone,” “launch,” “fire,” “vehicle,” “person,” “throwing,” “party,” “breaking,” “gun,” “paint,” “taking,” “hitting,” “falling,” “tripping,” “attacking,” “driving,” “walking,” “running,” “gathering,” “jumping,” “stabbing,” “shooting,” “chasing,” or any other words optionally configured to trigger alert logic.

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/372,675 US20240013237A1 (en) 2013-05-01 2023-09-25 Mobile Client Based Image Analysis

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163165054P 2021-03-23 2021-03-23
US63/165,054 2021-03-23
US17/399,024 2021-08-10
US17/399,024 US20220207274A1 (en) 2013-05-01 2021-08-10 Client Based Image Analysis

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/399,024 Continuation-In-Part US20220207274A1 (en) 2013-05-01 2021-08-10 Client Based Image Analysis

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/372,675 Continuation-In-Part US20240013237A1 (en) 2013-05-01 2023-09-25 Mobile Client Based Image Analysis

Publications (1)

Publication Number Publication Date
WO2022204296A1 true WO2022204296A1 (en) 2022-09-29

Family

ID=83397848

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/021570 WO2022204296A1 (en) 2013-05-01 2022-03-23 Mobile client based image analysis

Country Status (1)

Country Link
WO (1) WO2022204296A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180012109A1 (en) * 2015-12-31 2018-01-11 Dropbox, Inc. Generating and utilizing normalized scores for classifying digital objects
US20190089770A1 (en) * 2015-12-15 2019-03-21 Oath Inc. Computerized system and method for determining and communicating media content to a user based on a physical location of the user
US20190258683A1 (en) * 2013-05-01 2019-08-22 Cloudsight, Inc. Image Directed Search
US20200034887A1 (en) * 2017-07-26 2020-01-30 Adhark, Inc. Systems and methods for automating content design transformations based on user preference and activity data
US20200250465A1 (en) * 2016-02-12 2020-08-06 Adobe Inc. Accurate tag relevance prediction for image search

Similar Documents

Publication Publication Date Title
US9959467B2 (en) Image processing client
US20220207274A1 (en) Client Based Image Analysis
US9830522B2 (en) Image processing including object selection
US10223454B2 (en) Image directed search
RU2729956C2 (en) Detecting objects from visual search requests
US20210168442A1 (en) Computerized system and method for automatically detecting and rendering highlights from streaming videos
US9575995B2 (en) Image processing methods
US10825048B2 (en) Image processing methods
US11087182B1 (en) Image processing including streaming image output
CA2885880C (en) Image processing including object selection
US20160342624A1 (en) Image Tagging System
CA2885858A1 (en) Image tagging system
CA2885879C (en) Image processing methods
CA2885874A1 (en) Image processing system including image priority
CN106164959A (en) Behavior affair system and correlation technique
US9639867B2 (en) Image processing system including image priority
US20150220569A1 (en) Priority Based Image Processing Methods
US20240013237A1 (en) Mobile Client Based Image Analysis
CA2885835A1 (en) Image processing server
WO2022204296A1 (en) Mobile client based image analysis
CA2885852A1 (en) Image processing client
CA2885863C (en) Priority based image processing methods
CA2932964A1 (en) Image directed search
CN105045793A (en) Image processing client

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22776579

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22776579

Country of ref document: EP

Kind code of ref document: A1