US20070162761A1 - Methods and Systems to Help Detect Identity Fraud - Google Patents
- Publication number
- US20070162761A1 (application US11/613,891)
- Authority
- US
- United States
- Prior art keywords
- applicant
- data
- question
- information
- questions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/683—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/48—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/489—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using time information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/048—Fuzzy inferencing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/18—Legal services; Handling legal documents
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
- G06Q50/265—Personal security, identity or safety
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/32—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
- H04L9/3247—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials involving digital signatures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
Definitions
- the technology detailed herein generally relates to methods and systems to aid in verifying a person's identity, e.g., in connection with applying for an identity document (such as a passport or driver's license), or in connection with qualifying to enter a secured area (such as at an airport).
- an identity document such as a passport or driver's license
- qualifying to enter a secured area such as at an airport
- the assignee's application Ser. No. 11/132,724 notes that some parts of the applicant enrollment process can be performed from the applicant's home.
- a state Department of Motor Vehicles (DMV) may have a web site through which an applicant for a driver's license can enter their name, address, birth date, hair color, organ donor preferences, and other background information. Scans of breeder documents that the applicant intends to present (e.g., birth certificate and passport) can also be submitted from home. In some systems the applicant may even be allowed to submit a proposed portrait photograph for printing on their license.
- This data-entry web session can conclude by allowing the applicant to schedule an appointment to visit a nearby DMV office to complete the enrollment and license issuance process.
- the DMV can undertake more thorough vetting of an applicant's identity than if they simply appear at the DMV office.
- Such vetting generally involves researching the applicant and his/her purported identity, and checking any breeder document data, to make sure that nothing appears amiss.
- the DMV may check third party databases, such as credit bureaus, telephone directories, social security databases, etc., to verify that the information submitted by the applicant, and the information represented by the breeder documents, is consistent with data maintained by these third parties.
- Any portrait photograph submitted by the applicant can likewise be checked against an archive of previous driver license images to determine whether a person of similar appearance has already been issued a driver license. If these checks give any ground for suspicion, the DMV can contact the applicant to solicit further information. If issues are not satisfactorily addressed prior to the appointment, the appointment may be canceled.
- Application Ser. No. 10/979,770 details how a risk score may be generated, to give an indication of the relative possibility of fraud associated with a given applicant (e.g., by considering past fraud experiences correlated with different types of breeder documents). Other risk scoring techniques are known in the art.
- US20030052768 a trusted-traveler card system, in which trust is scored based on various factors, including how long the traveler has lived at a given address
- US20030099379 and US20030115459 variant ID card attributes are sensed, and facial features and third party databases are checked, to yield a score by which confidence in a person's asserted identity is assessed
- US20030216988 the validity of an applicant's telephone number is checked, and combined with other indicators, to produce a risk score for a proposed transaction
- US20040059953 a transportation worker identity card system in which various personal data is collected from the applicant and checked against third party databases, to determine if confidence in the asserted identity exceeds a threshold
- US20040064415 a traveler's identity is checked by reference to various public and private databases—which may include birth certificate data, social security number data, and INS records—and a resulting score is produced
- US20040153663 in processing a change-of-address
- U.S. Pat. No. 6,513,018 details empirically-based scoring technologies employed by Fair, Isaac and Company in computing credit scores. Particularly detailed are arrangements by which different applicant characteristics are either positively- or negatively-correlated with certain performance results, and methods for determining and applying such correlative information.
- U.S. Pat. No. 6,597,775 details Fair, Isaac's extrapolation and improvement of such methodologies to predictive modeling, and mitigation, of telecommunications fraud.
- Some identity-verification systems employ multi-factor approaches.
- An example is US20050154924, which validates a user based on a collection of user-provided factors, such as a combination of ‘what the user knows’ (e.g., knowledge-based data), ‘who the user is’ (e.g., biometric-based data), ‘what the user possesses’ (e.g., token-based data), ‘where the user is’ (i.e., location-based data), and when the user is seeking validation (i.e., time-based data).
- ‘what the user knows’ e.g., knowledge-based data
- ‘who the user is’ e.g., biometric-based data
- ‘what the user possesses’ e.g., token-based data
- ‘where the user is’ i.e., location-based data
- ‘when the user is seeking validation’ i.e., time-based data
- Biometrics can be useful in checking identity. But biometrics can only check for a match between an earlier-collected and presently-collected set of data. Unless there is confidence about the identity of the person from whom the earlier biometric data was collected, such technologies are of limited utility.
- FIGS. 1-11 illustrate information that can be used in testing an applicant to help determine relative confidence in the applicant's asserted identity.
- profile data about an applicant is collected in an XML-based data structure, based on a collection of standardized tags.
- Part of an illustrative collection of data for an applicant may be as follows
- Such a collection of data can be seeded by information provided by the applicant, e.g., name, address, phone number, and social security number.
- the collection can then be supplemented by further information obtained from public sources (e.g., the web and public databases), as well as private data collections.
- credit agency databases typically store prior addresses for myriad people, in addition to their current addresses. Based on address data, a wealth of additional information can be obtained.
- Various public web sites can provide corresponding latitude/longitude information.
- Online municipal tax databases can be queried to obtain information about the home (three bedrooms, two full baths, cedar shake roof, etc.).
- Third party commercial databases can provide statistical demographics about the neighborhood (e.g., average income, age distribution, percentage of renters, etc.). All of this information can be added to the profile (or accessed online, as needed).
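The patent's illustrative XML collection is not reproduced on this page. As a rough sketch of what such a tagged profile might look like, the following builds one with hypothetical tag names and made-up supplemental values (the patent calls only for "standardized tags" without fixing a schema; only the street address is drawn from the text):

```python
import xml.etree.ElementTree as ET

# Hypothetical tag vocabulary; the patent does not specify a schema.
profile = ET.Element("ApplicantProfile")
ET.SubElement(profile, "Name").text = "Jane Q. Applicant"

addr = ET.SubElement(profile, "CurrentAddress")
ET.SubElement(addr, "Street").text = "6200 SW Tower Way"
ET.SubElement(addr, "City").text = "Portland"
ET.SubElement(addr, "State").text = "OR"

# Supplemental data merged in from public sources, as the text
# describes (coordinates and demographics are invented values):
geo = ET.SubElement(addr, "GeoCode")
ET.SubElement(geo, "Latitude").text = "45.48"
ET.SubElement(geo, "Longitude").text = "-122.74"

demo = ET.SubElement(profile, "NeighborhoodDemographics")
ET.SubElement(demo, "PercentRenters").text = "34"

xml_text = ET.tostring(profile, encoding="unicode")
```

Seeded fields come from the applicant; the supplemental elements would be appended as each public-source lookup completes.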
- Knowing alleged facts about the applicant allows the DMV to pose questions that help establish confidence in the applicant's asserted identity.
- the questions are generally of a sort that can be correctly answered by the applicant—if the facts collected concerning the applicant are correct, but would be more difficult to answer otherwise.
- the web increasingly offers a rich trove of geospatial data, such as maps, aerial imagery, and curbside imagery, which is readily accessible by street addresses or other location data.
- the maps are of different types, e.g., street maps, zoning maps, topographic maps, utility (e.g., water, sewer, electrical) maps, hazard maps (e.g., flood plain, earthquake), etc. (The portlandmaps.com site referenced above includes all these categories of information.)
- Such resources offer a rich source of information about which an applicant can be questioned—based on current and/or prior residence addresses. Consider, e.g., the following questions which may be posed to an applicant who asserts his address is 6200 SW Tower Way, Portland, Oreg.:
- the map about which the applicant is questioned would be presented on an electronic display device, e.g., on a testing kiosk at a DMV office.
- the question could be posed to the applicant at a remote terminal (e.g., at the applicant's home)—provided certain safeguards are put in place to prevent the applicant from researching the answer. (E.g., the applicant would have a limited period of time after the presentation of each question to provide the answer.)
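The time-limit safeguard for remote testing can be sketched as follows; the function name and the default 20-second limit are assumptions, since the patent specifies no particular interval:

```python
import time

def ask_timed_question(question, get_answer, limit_seconds=20.0):
    """Accept an answer only if it arrives within the time limit.

    Returns the answer, or None if the applicant took too long --
    long enough, perhaps, to have researched the answer remotely.
    A timed-out question would be treated as unanswered rather than
    wrong.
    """
    start = time.monotonic()
    answer = get_answer(question)  # e.g., read from the remote terminal
    if time.monotonic() - start > limit_seconds:
        return None
    return answer
```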
- Source material for the foregoing queries can come from commercial providers (e.g., Google Earth, Terraserver.com, Yahoo!, MapQuest, etc.), or from government databases (national, state, municipal, etc.)
- the overlay of “A” “B” “C” markings, etc., can be provided—as desired—by the DMV or other entity.
- Amazon's Yellow Pages search and its A9 search engine.
- Amazon has compiled a library of curbside-scene images, accessible by street address or business name.
- Various public taxation districts also publish such information. Again, such imagery can be used to test an applicant's familiarity with the neighborhood in which she claims to reside.
- a mapping database can be used to identify the fastest route between the stated address and one of the towns.
- a major intersection along this route (preferably near the applicant's residence) can be identified.
- a curbside imagery database (e.g., Amazon Yellow Pages A9) can then be queried to obtain a scene from this intersection.
- This image, or an unrelated image, can be presented to the applicant with a question such as:
- Microsoft's Windows Live Local service (aka Virtual Earth) is yet another source of geographic image/map data against which the knowledge of an applicant may be tested.
- Microsoft's service offers “Bird's eye” (oblique), in addition to “aerial” (straight down), photographic imagery of many city locations.
- FIG. 6 shows a “Bird's eye” view of part of the campus of the Georgia Institute of Technology, obtained through the Microsoft service. Such imagery can be used in the foregoing examples.
- Portland's “Tri-Met” transit system offers a web-based service (at the domain trimet.org) by which users can specify their start points and desired destinations, and the service identifies buses and trains, and their schedules, that the user can take to commute between the locations.
- the system's web services also offer local maps, showing which bus routes travel along which streets, and the location of bus stops. A user can specify an intersection, and obtain a map of bus routes near that intersection. Such a map, and the associated user interface, is shown in FIG. 7 .
- Such transit system information can be used to assess an applicant's knowledge about their alleged residence. An applicant may be asked, for example:
- the DMV may sometimes ask questions expecting that the applicant will not know the answer. For example, a person living in the neighborhood depicted in the bus map of FIG. 7 may be asked to name one or more bus routes that travel along SW Beaverton-Hillsdale Highway (e.g., 54, 56, 61 and 92). If the applicant cannot answer this question, he is not faulted; the answer is likely unfamiliar to many neighborhood residents. If, however, the applicant can answer this difficult question correctly, such correct answer may help his score more than a correct answer to a more routine question. (The ability to identify two or more of the buses along this route could boost his score still further.) Thus, answers to different questions may be weighted differently in scoring confidence in the applicant's asserted identity.
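The differential weighting described above might be aggregated along these lines; the scoring rule and weights are illustrative assumptions, not taken from the patent:

```python
def score_answers(results):
    """Aggregate a confidence score from answered questions.

    Each result is (correct, weight, penalize_miss). Hard questions
    carry penalize_miss=False: a correct answer adds its (larger)
    weight, but a miss costs nothing, matching the bus-route example.
    """
    score = 0.0
    for correct, weight, penalize_miss in results:
        if correct:
            score += weight
        elif penalize_miss:
            score -= weight
    return score

# Routine question answered correctly; hard question missed, no penalty:
baseline = score_answers([(True, 1.0, True), (False, 3.0, False)])
# Same applicant, but the hard bus-route question answered correctly:
boosted = score_answers([(True, 1.0, True), (True, 3.0, False)])
```

Here `boosted` exceeds `baseline` by the hard question's full weight, while the applicant who misses it is no worse off than before.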
- SW Beaverton-Hillsdale Highway e.g., 54, 56, 61 and 92.
- Google offers a Transit Trip Planner service (google.com/transit) that seeks to standardize delivery of mass transit schedule and route information across many cities' transportation services. It, too, may be used as a fact resource in compiling challenge questions for applicants.
- Another knowledge domain comprises facts an applicant may be expected to know by reason of their education. For example:
- information relating to high schools and colleges, including size and faculty, can be obtained from web sites maintained by the respective schools, and also from databases maintained by independent providers, such as classmates.com.
- Some of this information can be garnered from independent human searchers, e.g., using Amazon's Mechanical Turk service. Amazon's Turk web site explains:
- automated processes e.g., using more traditional artificial intelligence techniques, can be applied to generate and check questions from available online resources, such as the maps and databases noted above, given profile data on the applicant (e.g., current and prior residence addresses, education, city where attended high school, etc.).
- Still another knowledge domain comprises facts the applicant may be expected to know by reason of their age. For example:
- Another factor that can help confirm identity is an on-line account. Many individuals have accounts with on-line entities. Examples include Amazon, PayPal, EBay, Orbitz, Expedia, NetFlix, etc. Such accounts can provide verification information—either with the companies' cooperation, or without.
- Amazon's web site allows registered users to sign in and display historical information, including a list of all orders ever placed with the company (at least back to 1997, as of this writing). For each order, Amazon provides information including the address to which it was shipped, the shipping date, and the billing address.
- a DMV clerk may direct a web browser on a DMV computer to the Amazon Sign-In page.
- the applicant can then type his/her account information (e.g., email address and password), and navigate from the resulting “My Account” page to select “View by Item—all items ordered” ( FIG. 9 ).
- a report like that shown in FIG. 10 is produced, listing all items ordered through that account. Clicking on an order number generates a screen like that shown in FIG. 11 , showing both the address to which the item was shipped, as well as the billing address for the credit card used. If this address information is consistent with the address given by the applicant, this tends to confirm the applicant's credibility.
- EBay and other sites provide various mechanisms by which members are critiqued by their peers or other users. EBay thus reports a feedback score, indicating the percentage of unique users who have scored their transactions with a particular member as favorable.
- member “mgmill” has a feedback score of 100%: 2773 members left positive feedback; none left negative feedback. (A 100% score based on thousands of feedbacks is naturally more meaningful than a 100% score based on a dozen feedbacks.)
- Such peer/user rankings serve as an additional factor on which an applicant can be scored (albeit one that might be given relatively little weight—absent some demonstrated correlation between EBay feedback scores, and the metric that the agency is trying to assess).
- Expedia, Orbitz, and other online travel companies have histories that a user can access giving information useful in verification. From the Expedia home page, for example, a user can click on “My Account” and sign in by entering a login name and password. An Account Overview page is then produced. This page lists the account holder and others for whom that person has made travel arrangements. Each name is hyperlinked to a further screen with information about that traveler, such as home phone number, work phone number, cell phone number, passport number and country, frequent flier numbers, etc. At the bottom of the screen there is a listing of credit cards associated with the account—including billing addresses and telephone numbers. Again, all such information can be accessed—with the cooperation of the applicant—and factored into a possibility-of-fraud assessment for that applicant.
- PayPal offers further data useful in checking a person.
- PayPal currently has over 80 million accounts.
- going to the PayPal home page yields a screen on which the applicant can enter their email address and PayPal password.
- the user can navigate to the “Profile” page.
- a great wealth of information useful in corroborating an applicant is available, e.g., links to email, street address, telephone, credit card accounts registered with PayPal, bank accounts registered with PayPal, etc. Clicking on the credit card account link produces a screen giving the last digits of the credit card number, and the credit card billing address. Clicking on the “History” tab allows historical transactions to be reviewed.
- histories going back several years are available. As before, a long-tenured account may be taken as a favorable factor in assessing an applicant.
- ID verification can be augmented by allowing the applicant to demonstrate knowledge of accounts with which the applicant is associated. Knowledge of passwords, order history, and consistency between shipping addresses and applicant's asserted address, all tend to boost a confidence score.
- Still richer data may be obtained by establishing a commercial relationship with companies having accounts with large numbers of consumers, so that additional information—not generally available—might also be made available.
- Such techniques serve to enlarge the “neighborhood” of those who can vouch for an individual, to encompass entities involved in commercial transactions.
- Still another source from which question data can be derived is information provided by prior applicants. Applicants can be queried for information (questions and answers, or facts) that would most likely be known to other applicants having a similar background qualification, but would likely be unknown to others.
- a person who is a lawyer in Portland, Oreg. may offer the question, “Who is the old federal courthouse named after?” and offer the answer: “Gus Solomon.”
- a bus driver in the same city may offer “Where is the bus garage on the west side of the Willamette River? Answer: Merlo Street.”
- a teacher at Hayhurst Elementary School in Portland may offer “To what middle school do Hayhurst students go? Answer: Robert Gray Middle School.”
- a nineteen year old student at the University of Portland may offer, “What live music club is closest to school? Answer: Portsmouth Club.”
- a sixty year old faculty member at the same school may offer, “What is the Catholic order with which the University is affiliated? Answer: Holy Cross.”
- questions should not be selected for presentation to an applicant based on just a single factor (e.g., a connection to the University of Portland, or a residence on S.W. Dakota Street). Rather, the questions may be assigned based on two or more factors (e.g., age plus school affiliation; profession plus address; ethnicity plus location where grew up; etc.).
- the questions can each be located based on two or more classification data that help identify applicants who are likely to be able to answer such questions.
- classification data includes geographic location to which question relates, professional expertise (e.g., law) useful in answering question, age of persons most likely to be familiar with question, education of persons most likely to be familiar with questions, etc.
- the applicant can be similarly located in this vector space, again based on factors such as noted above.
- the questions nearest the applicant in this vector space are thus those that most closely match that applicant's background and other credentials, and are thus most likely to be useful in confirming that the applicant's background/credentials are as he or she states.
- Classification data can also be based on the proposer's attributes. For example, if a Portland, Oreg. lawyer, aged 60, proposed the question about the name of the old courthouse, then that question might be stored with such proposer attributes (e.g., lawyer, Oregon, age 60) as classification data.
- Classification data may be of the same type, or may even be drawn from the same vocabulary, as applicant attributes, but this need not be the case.
- question data from which questions are drawn is stored in a large database, each record tagged with plural of the classification data noted earlier. Proximity between plural of the question data, and the particular applicant is then determined, so as to identify a subset of the original question space from which questions might usefully be posed. The method can then select questions from this subset (e.g., randomly) for presentation to the applicant.
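The selection procedure just described can be sketched in Python. The question records, tag vocabulary, and the two-shared-factor bar are illustrative assumptions, not from the source; a real system would use a richer vector-space distance:

```python
import random

# Hypothetical question records, each tagged with plural classification data
QUESTIONS = [
    {"q": "Who is the old federal courthouse named after?",
     "a": "Gus Solomon",
     "tags": {"portland", "lawyer", "age:50+"}},
    {"q": "Where is the bus garage on the west side of the Willamette River?",
     "a": "Merlo Street",
     "tags": {"portland", "bus-driver"}},
    {"q": "What is the Catholic order with which the University is affiliated?",
     "a": "Holy Cross",
     "tags": {"portland", "univ-of-portland", "age:50+"}},
]

def proximity(applicant_tags, question_tags):
    # crude proximity measure: count of shared classification factors
    return len(applicant_tags & question_tags)

def candidate_questions(applicant_tags, min_shared=2):
    # keep only questions matching on two or more factors, per the text
    return [rec for rec in QUESTIONS
            if proximity(applicant_tags, rec["tags"]) >= min_shared]

def pick_question(applicant_tags):
    # choose randomly from the nearby subset for presentation
    pool = candidate_questions(applicant_tags)
    return random.choice(pool) if pool else None
```

An applicant tagged {"portland", "lawyer", "age:50+"} would draw from the courthouse and university questions, but not the bus-garage question (only one shared factor).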
- new question information collected from applicants can be vetted prior to use in verifying further applicants.
- Such vetting can be performed by DMV personnel, or by contractors (“professional vetting,” which may include the Mechanical Turk service). Computer verification may be employed in some instances.
- the vetting may be performed by presenting such questions/answers to new applicants, on a trial basis. Answers to such questions would likely not be included in a substantive assessment as to the new applicant's identity or risk. Rather, such answers would be used to judge whether the question/answer is a useful tool.
- each proposed new question might be posed to five new applicants, each of whom appears to have a background that would allow them to correctly answer same. If at least three of them (or, alternatively, if at least four of them, or if all of them) give the answer offered by the proposer of the question, then the question can be moved to the pool of real questions—and used to substantively assess applicants.
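The trial-vetting rule above can be expressed as a small function. The case-insensitive exact string comparison is an assumed simplification; real answer matching would tolerate fuzzier variants:

```python
def promote_question(proposer_answer, trial_answers, required=3):
    """Return True if enough trial applicants gave the proposer's answer
    that the question graduates to the pool of real questions."""
    matches = sum(1 for a in trial_answers
                  if a.strip().lower() == proposer_answer.strip().lower())
    return matches >= required

# e.g., four of five hypothetical trial applicants agree with the proposer:
trial = ["Gus Solomon", "gus solomon", "Gus Solomon", "Hatfield", "Gus Solomon"]
```

With the default 3-of-5 bar this question would be promoted; under the stricter all-of-5 variant it would not.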
- Combinations of professional and applicant vetting can also be used. For example, a candidate new question may be posed to five new applicants on a trial basis—as outlined above. If a sufficient number answer correctly, it can be further vetted by a professional. Or, such order may be reversed.
- Tests of the sort detailed above might be posed routinely to applicants. If an applicant does not demonstrate knowledge of an expected sort, this can trigger further scrutiny of other aspects of the applicant's application. For example, a DMV agent might respond by more closely inspecting breeder documents presented by the dubious applicant, or by requesting additional information not required from routine applicants.
- the checks presented above might be posed only if an applicant's profile otherwise suggests further checking is prudent (e.g., if a risk score based on the ensemble of presented breeder documents exceeds a threshold).
- an incorrect answer isn't fatal to applicant verification. There may be many credible explanations for an incorrect answer. Rather, answers are typically used to adjust an aggregate score—with no one question being determinative.
- Applicants' technology detailed herein may be regarded as a knowledge-based system—utilizing a knowledge base of information to perform or facilitate an applicant-verification process.
- Earlier work in such knowledge-based systems is shown, e.g., by U.S. Pat. Nos. 6,968,328, 6,965,889 and 6,944,604.
- the Mechanical Turk service can be widely used in gathering and vetting information used in the foregoing.
- the Mechanical Turk service can be used in conjunction with the systems detailed in the earlier-referenced patent documents to facilitate some of the operations required by those systems, e.g., making judgments and undertaking tasks that computers are ill-suited to perform—such as performing fuzzy matching, applying common-sense knowledge, and interpreting documents.
- the names “Bill,” “Will,” “Wm.,” and the like all can be acceptable matches to the name “William;” likewise, address lines such as “Apartment 3A” and “Apt. 3A” can refer to the same residence.
- the Turk service can be used to harvest public records data that can be used in verification operations.
- a number of such applications of the Mechanical Turk service to the arrangements in the cited documents are within the capabilities of the artisan from the teachings herein. Appendix A details some further inventive uses of the Mechanical Turk service, and similar “crowdsourcing” technologies.
- a passport control checkpoint in an airport, where a government official inspects passports of travelers.
- the passport is swiped or scanned by a passport reader, the official is presented with a screen of information pertaining to the person.
- This same screen, or another, can be used to present one or more verification checks like those detailed above, e.g., showing a map of the passport holder's neighborhood, and requesting the traveler to identify her home.
- the Mechanical Turk service (detailed in Appendix B) may be regarded as a structured implementation of a technology commonly termed “crowdsourcing”—employing a group of outsiders to perform a task. Wikipedia explains:
- One use of the Mechanical Turk service is in connection with computationally difficult tasks, such as identification of audio, video and imagery content. These tasks are sometimes addressed by so-called “fingerprint” technology, which seeks to generate a “robust hash” of content (e.g., distilling a digital file of the content down to perceptually relevant features), and then compare the thus-obtained fingerprint against a database of reference fingerprints computed from known pieces of content, to identify a “best” match.
- Such technology is detailed, e.g., in Haitsma, et al, “A Highly Robust Audio Fingerprinting System,” Proc.
- a particular example of such technology is in facial recognition—matching an unknown face to a reference database of facial images. Again, each of the faces is distilled down to a characteristic set of features, and a match is sought between an unknown feature set, and feature sets corresponding to reference images.
- the feature set may comprise eigenvectors or shape primitives.
- Patent documents particularly concerned with such technology include US20020031253, U.S. Pat. No. 6,292,575, U.S. Pat. No. 6,301,370, U.S. Pat. No. 6,430,306, U.S. Pat. No. 6,466,695, and U.S. Pat. No. 6,563,950.
- one approach is to prune the database—identifying excerpts thereof that are believed to be relatively likely to have a match, and limiting the search to those excerpts (or, similarly, identifying excerpts that are believed relatively unlikely to have a match, and not searching those excerpts).
- Such content identification systems can be improved by injecting a human into the process—by the Mechanical Turk service or similar systems.
- the content identification system makes an assessment of the results of its search, e.g., by a score.
- a score of 100 may correspond to a perfect match between the unknown fingerprint and a reference fingerprint.
- Lower scores may correspond to successively less correspondence.
- If the top match score is below a lower threshold S x (perhaps 60), the system may decide that there is no suitable match, and a “no-match” result is returned, with no identification made.
- If the top match score is above an upper threshold S y (perhaps 70), the system may be sufficiently confident of the result that no human intervention is necessary.
- For scores between S x and S y , the system may make a call through the Mechanical Turk service for assistance.
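The three-way disposition can be sketched as follows; the threshold values are the text's own examples, and the return labels are invented for illustration:

```python
S_X = 60  # below this, declare "no match"
S_Y = 70  # above this, accept the automated identification

def route(score):
    """Three-way disposition of a fingerprint match score, per the
    thresholds discussed above."""
    if score < S_X:
        return "no-match"
    if score > S_Y:
        return "auto-accept"
    return "refer-to-human"  # ambiguous band: call Mechanical Turk
```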
- the Mechanical Turk can be presented the unknown content (or an excerpt thereof), and some reference content, and asked to make a comparison.
- the reference content may be stored in the fingerprint database, or may be readily obtainable through use of a link stored in the reference database.
- the requested comparison can take different forms.
- the service can be asked simply whether two items appear to match. Or it can be asked to identify the best of several possible matches (or indicate that none appears to match). Or it can be asked to give a relative match score (e.g., 0-100) between the unknown content and one or more items reference content.
- a query is referred to several different humans (e.g., 2-50) through the Mechanical Turk service, and the returned results are examined for consensus on a particular answer.
- In queries of certain types (e.g., does Content A match Content B? Or is Content A a better match to Content C?), if a threshold of consensus (e.g., 51%, 75%, 90%, 100%) is met on a particular answer, that answer may be taken as correct.
- the scores returned from plural such calls may be combined to yield a net result.
- the high and/or low and/or outlier scores may be disregarded in computing the net result; weighting can sometimes be employed, as noted below.
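One simple instance of this outlier handling is a trimmed mean: discard the single highest and lowest scores before averaging. This is one of the options the text mentions, sketched without the optional weighting:

```python
def net_score(scores):
    """Combine scores from plural Mechanical Turk calls into a net result,
    discarding the single highest and lowest scores before averaging."""
    if len(scores) <= 2:
        # too few calls to trim; plain average
        return sum(scores) / len(scores)
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)
```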
- the data returned from the Mechanical Turk calls may serve as a biasing factor, e.g., pushing an algorithmically determined output one way or another, to yield a final answer (e.g., a net score).
- the data returned from the Mechanical Turk calls may be treated as a definitive answer—with results from preceding processes disregarded.
- the database search may reveal several candidate matches, all with comparable scores (which may be above the threshold S y ). Again, one or more calls to the Mechanical Turk service may be invoked to decide which match is the best, from a subjective human standpoint.
- the Mechanical Turk service can be invoked even in situations where the original confidence score is below the threshold, S x , which is normally taken as indicating “no match.” Thus, the service can be employed to effectively reduce this threshold—continuing to search for potential matches when the rote database search does not yield any results that appear reliable.
- a database may be organized with several partitions (physical or logical), each containing information of a different class.
- the data may be segregated by subject gender (i.e., male facial portraits, female facial portraits), and/or by age (15-40, 30-65, 55 and higher—data may sometimes be indexed in two or more classifications), etc.
- the data may be segregated by topical classification (e.g., portrait, sports, news, landscape).
- the data may be segregated by type (spoken word, music, other).
- Each classification in turn, can be further segregated (e.g., “music” may be divided into classical, country, rock, other). And these can be further segregated (e.g., “rock” may be classified by genre, such as soft rock, hard rock, Southern rock; by artist, e.g., Beatles, Rolling Stones, etc).
- a call to the Mechanical Turk can be made, passing the unknown content object (or an excerpt thereof) to a human reviewer, soliciting advice on classification.
- the human can indicate the apparent class to which the object belongs (e.g., is this a male or female face? Is this music classical, country, rock, or other?). Or, the human can indicate one or more classes to which the object does not belong.
- the system can focus the database search where a correct match—if any—is more likely to be found (or avoid searching in unproductive database excerpts). This focusing can be done at different times. In one scenario it is done after a rote search is completed, in which the search results yield matches below the desired confidence level of S y . If the database search space is thereafter restricted by application of human judgment, the search can be conducted again in the limited search space. A more thorough search can be undertaken in the indicated subset(s) of the database. Since a smaller excerpt is being searched, a looser criterion for a “match” might be employed, since the likelihood of false-positive matches is diminished. Thus, for example, the desired confidence level S y might be reduced from 70 to 65. Or the threshold S x at which “no match” is concluded may be reduced from 60 to 55. Alternatively, the focusing can be done before any rote searching is attempted.
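A pruned search of this sort can be sketched as follows. The partitioned database, the toy bit-string similarity, and the record fields are all invented for illustration:

```python
def match_score(a, b):
    # toy similarity: percent of positions where two equal-length
    # fingerprint bit strings agree
    return 100 * sum(x == y for x, y in zip(a, b)) // len(a)

# hypothetical partitioned reference database
DB = [
    {"class": "rock", "fp": "1011", "title": "Song R"},
    {"class": "classical", "fp": "0000", "title": "Song C"},
]

def pruned_search(query_fp, classes, s_y=70):
    """Search only the partitions a human reviewer indicated; because the
    space is smaller, the acceptance threshold s_y may be relaxed
    (e.g., from 70 to 65), as discussed above."""
    best, best_score = None, -1
    for rec in DB:
        if rec["class"] not in classes:
            continue  # pruned away by human judgment
        s = match_score(query_fp, rec["fp"])
        if s > best_score:
            best, best_score = rec, s
    return best if best_score >= s_y else None
```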
- the result of such a human-focused search may reveal one or more candidate matches.
- the Mechanical Turk service may be called a second time, to vet the candidate matches—in the manner discussed above. This is one of several cases in which it may be desirable to cascade Mechanical Turk calls—the subsequent calls benefiting from the former.
- the first Mechanical Turk call aids in pruning the database for subsequent search.
- the second call aids in assessing the results of that subsequent search.
- Mechanical Turk calls of the same sort can be cascaded.
- the Mechanical Turk first may be called to identify audio as music/speech/other.
- a second call may identify music (identified per the first call) as classical/country/rock/other.
- a third call may identify rock (identified per the second call) as Beatles/Rolling Stones/etc.
- iterative calling of a crowdsourcing service a subjective judgment can be made that would be very difficult to achieve otherwise.
- human reviewers are pre-qualified as knowledgeable in a specific domain (e.g., relatively expert in recognizing Beatles music). This qualification can be established by an online examination, which reviewers are invited to take to enable them to take on specific tasks (often at an increased rate of pay). Some queries may be routed only to individuals that are pre-qualified in a particular knowledge domain. In the cascaded example just given, for example, the third call might be routed to one or more users with demonstrated expertise with the Beatles (and, optionally, to one or more users with demonstrated expertise with the Rolling Stones, etc). A positive identification of the unknown content as sounding like the Beatles would be given more relative weight if coming from a human qualified in this knowledge domain.
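The extra weight given to qualified reviewers can be sketched as a weighted tally. The 2:1 expert weighting is an illustrative choice, not from the source:

```python
def weighted_consensus(votes, expert_weight=2.0):
    """Tally reviewers' identifications, giving pre-qualified domain
    experts extra weight.

    votes: list of (answer, is_qualified) tuples.
    Returns the answer with the greatest total weight.
    """
    tally = {}
    for answer, is_qualified in votes:
        w = expert_weight if is_qualified else 1.0
        tally[answer] = tally.get(answer, 0.0) + w
    return max(tally, key=tally.get)
```

Here a single Beatles-qualified reviewer outweighs one unqualified dissenter, and tips an otherwise split vote.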
- Calls to the Mechanical Turk service may request the human to provide metadata relevant to any content reviewed. This can include supposed artist(s), genre, title, subject, date, etc.
- This information (which may be ancillary to a main request, or may comprise the entirety of the request) can be entered into a database. For example, it can be entered into a fingerprint database—in association with the content reviewed by the human.
- data gleaned from Mechanical Turk calls are entered into the database, and employed to enrich its data—and enrich information that can be later mined from the database. For example, if unknown content X has a fingerprint F x , and through the Mechanical Turk service it is determined that this content is a match to reference content Y, with fingerprint F y , then a corresponding notation can be added to the database, so that a later query on fingerprint F x , (or close variants thereof) will indicate a match to content Y. (E.g., a lookup table initially indexed with a hash of the fingerprint F x will point to the database record for content Y.)
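A minimal sketch of this enrichment, with the class name and identifiers invented for illustration: once a human confirms that fingerprint F x matches content Y, the association is stored so later queries on F x resolve directly.

```python
class FingerprintDB:
    """Minimal sketch of enriching a fingerprint database with aliases
    learned from human (Mechanical Turk) identifications."""

    def __init__(self):
        self.records = {}  # fingerprint -> content identifier

    def add_reference(self, fp, content_id):
        # reference fingerprint computed from known content
        self.records[fp] = content_id

    def add_alias(self, new_fp, content_id):
        # a reviewer confirmed new_fp matches content_id; store the
        # association so a later query on new_fp resolves directly
        self.records[new_fp] = content_id

    def lookup(self, fp):
        return self.records.get(fp)
```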
- Calls to outsourcing engines involve a time lag before results are returned.
- the calling system can generally cope, or be adapted to cope, with such lags.
- Consider a social networking site such as YouTube (now owned by Google) that distributes “user generated content” (e.g., video files), and employs fingerprinting to recognize media content that should not be distributed.
- the site may check a video file at the time of its uploading with a fingerprint recognition system (e.g., of the sort offered by Audible Magic, or Gracenote). If no clear match is identified, the video may be indexed and stored on YouTube's servers, available for public downloading. Meanwhile, the content can be queued for review by one or more crowdsource reviewers. They may recognize it as a clip from the old TV sitcom “I Love Lucy”—perhaps digitally rotated 3 degrees to avoid fingerprint detection. This tentative identification is returned to YouTube from the API call.
- YouTube can check the returning metadata against a title list of works that should not be distributed (e.g., per the request of copyright owners), and may discover that “I Love Lucy” clips should not be distributed. It can then remove the content from public distribution. (This generally follows a double-check of the identification by a YouTube employee.) Additionally, the fingerprint database can be updated with the fingerprint of the rotated version of the I Love Lucy clip, allowing it to be immediately recognized the next time it is encountered.
- the delivery can be interrupted.
- An explanatory message can be provided to the user (e.g., a splash screen presented at the interruption point in the video).
- Rotating a video by a few degrees is one of several hacks that can defeat fingerprint identification. (It is axiomatic that introduction of any new content protection technology draws hacker scrutiny. Familiar examples include attacks against Macrovision protection for VHS tapes, and against CSS protection for packaged DVD discs.) If fingerprinting is employed in content protection applications, such as in social networking sites (as outlined above) or peer-to-peer networks, its vulnerability to attack will eventually be determined and exploited.
- a well known fingerprinting algorithm operates by repeatedly analyzing the frequency content of a short excerpt of an audio track (e.g., 0.4 seconds). The method determines the relative energy of this excerpt within 33 narrow frequency bands that logarithmically span the range 300 Hz-2000 Hz. A corresponding 32-bit identifier is then generated from the resulting data.
- a frequency band corresponds to a data bit “1” if its energy level is larger than that of the band above, and a “0” if its energy level is lower.
- Such a 32 bit identifier is computed every hundredth of a second or so, for the immediately preceding 0.4 second excerpt of the audio track, resulting in a large number of “fingerprints.”
- This series of characteristic fingerprints can be stored in a database entry associated with the track, or only a subset may be stored (e.g., every fourth fingerprint).
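The across-band bit rule can be sketched directly. Note this is a simplification: the published algorithm also differences energies across successive time frames, which this sketch omits, keeping only the band-to-band comparison described above.

```python
def subfingerprint(energies):
    """Derive a 32-bit identifier from 33 band energies: bit i is '1'
    if band i holds more energy than the adjacent band i+1, else '0'."""
    assert len(energies) == 33
    bits = 0
    for i in range(32):
        bits = (bits << 1) | (1 if energies[i] > energies[i + 1] else 0)
    return bits
```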
- the algorithm can use average luminances of blocks into which the image is divided as the key perceptual features.
- a fingerprint can be defined by determining whether the luminance in each block is larger or smaller than the luminance of the preceding block.
- the reader may be familiar with different loudness profiles selectable on car radios, e.g., Jazz, Talk, Rock, etc. Each applies a different frequency equalization profile to the audio, e.g., making bass notes louder if the Rock setting is selected, and quieter if the Talk setting is selected, etc. The difference is often quite audible when switching between different settings.
- the listener is generally unaware of which loudness profile is being employed. That is, without the ability to switch between different profiles, the frequency equalization imposed by a particular loudness profile is typically not noticed by a listener.
- the different loudness profiles yield different fingerprints.
- the 300 Hz energy in a particular 0.4 second excerpt may be greater than the 318 Hz energy.
- the situation may be reversed. This change prompts a change in the leading bit of the fingerprint.
- Audio multiband compression is a form of processing that is commonly employed by broadcasters to increase the apparent loudness of their signal (most especially commercials).
- Such tools operate by reducing the dynamic range of a soundtrack—increasing the loudness of quiet passages on a band-by-band basis, to thereby achieve a higher average signal level. Again, this processing of the audio changes its fingerprint, yet is generally not objectionable to the listeners.
- Some formal attacks are based on psychoacoustic masking. This is the phenomenon by which, e.g., a loud sound at one instant (e.g., a drum beat) obscures a listener's ability to perceive a quieter sound at a later instant. Or the phenomenon by which a loud sound at one frequency (e.g., 338 Hz) obscures a listener's ability to perceive a quieter sound at a nearby frequency (e.g., 358 Hz) at the same instant. Research in this field goes back decades. (Modern watermarking software employs psychoacoustic masking in an advantageous way, to help hide extra data in audio and video content.)
- the algorithm detailed above would generate a fingerprint of {011 . . . } from this data (i.e., 69 is less than 71, so the first bit is ‘0’; 71 is greater than 70, so the second bit is ‘1’; 70 is greater than 68, so the third bit is ‘1’).
- the fingerprint is now {101 . . . }.
- Two of the three illustrated fingerprint bits have been changed. Yet the change to the audio excerpt is essentially inaudible.
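The bit-flip can be verified numerically. The original band energies (69, 71, 70, 68) come from the example above; the post-attack energies are hypothetical values chosen to reproduce the {101 . . . } result of a small, masked perturbation:

```python
def fingerprint_bits(energies):
    # '1' when a band's energy exceeds the next band's, else '0'
    return ''.join('1' if a > b else '0'
                   for a, b in zip(energies, energies[1:]))

original = [69, 71, 70, 68]  # band energies from the example above
attacked = [72, 70, 71, 68]  # hypothetical post-attack band energies
```

`fingerprint_bits(original)` yields '011' while `fingerprint_bits(attacked)` yields '101': two of three bits flipped by a change that would be essentially inaudible.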
- the exemplary fingerprinting technique noted above (which is understood to be the basis for Gracenote's commercial implementation, MusicID, built from technology licensed from Philips) is not unique in being vulnerable to various attacks. All fingerprinting techniques (including the recently announced MediaHedge, as well as CopySense and RepliCheck) are similarly believed to have vulnerabilities that can be exploited by hackers. (A quandary for potential adopters is that susceptibility of different techniques to different attacks has not been a focus of academic attention.)
Abstract
Description
- This application claims priority benefit to provisional application 60/753,652, filed Dec. 23, 2005.
- The subject matter herein is generally related to that in various of the assignee's other patent applications, including Ser. No. 10/723,240, filed Nov. 26, 2003 (published as US20040213437); Ser. No. 10/979,770, filed Nov. 1, 2004; Ser. Nos. 10/ and 11/132,724, filed May 18, 2005 (published as US20050288952).
- The technology detailed herein generally relates to methods and systems to aid in verifying a person's identity, e.g., in connection with applying for an identity document (such as a passport or driver's license), or in connection with qualifying to enter a secured area (such as at an airport).
- Traditionally, applicants for identity documents have been required to present only a few items of collateral identification, such as a birth certificate, a social security card, and/or a student body ID card. (Such collateral documents are sometimes termed “breeder documents;” a fuller list of commonly-accepted breeder documents is detailed in application Ser. No. 10/979,770). With the proliferation of low-cost and high-quality scanning and printing technologies, as well as simple image editing software, such breeder documents have become easier to counterfeit. Thus, there is a need for techniques by which the identity of an applicant can more reliably be determined.
- The present assignee's application Ser. No. 10/979,770 notes that the risk of identity fraud in the issuance of ID documents varies, with some types of breeder documents being more reliable in establishing a person's identity (e.g., US passports) than other types of breeder documents (e.g., student body ID cards). Data on the incidence of discovered fraud can be collected, and correlated back to the types of breeder documents submitted in each case, e.g., using factor analysis. This historical data permits a risk score to be generated for each new applicant, based on the particular types of breeder documents he or she presents. Applicants with relatively high breeder document risk scores can be scrutinized relatively more closely than applicants with relatively low risk scores. Such techniques allow security personnel to focus their efforts where they will do the most good.
- The assignee's application Ser. No. 11/132,724 (published as US20050288952) notes that some parts of the applicant enrollment process can be performed from the applicant's home. A state Department of Motor Vehicles (DMV), for example, may have a web site through which an applicant for a driver's license can enter their name, address, birth date, hair color, organ donor preferences, and other background information. Scans of breeder documents that the applicant intends to present (e.g., birth certificate and passport) can also be submitted from home. In some systems the applicant may even be allowed to submit a proposed portrait photograph for printing on their license. This data-entry web session can conclude by allowing the applicant to schedule an appointment to visit a nearby DMV office to complete the enrollment and license issuance process.
- By receiving this applicant information in advance, the DMV can undertake more thorough vetting of an applicant's identity than if they simply appear at the DMV office. Such vetting generally involves researching the applicant and his/her purported identity, and checking any breeder document data, to make sure that nothing appears amiss. For example, the DMV may check third party databases, such as credit bureaus, telephone directories, social security databases, etc., to verify that the information submitted by the applicant, and the information represented by the breeder documents, is consistent with data maintained by these third parties. Any portrait photograph submitted by the applicant can likewise be checked against an archive of previous driver license images to determine whether a person of similar appearance has already been issued a driver license. If these checks give any ground for suspicion, the DMV can contact the applicant to solicit further information. If issues are not satisfactorily addressed prior to the appointment, the appointment may be canceled.
- The assignee's published application US20040213437 details technologies by which the photograph of a new applicant for a driver license can be checked against a database of photographs on previously-issued driver licenses. If a suspected match is found, the circumstances can be investigated to determine whether the applicant may be engaged in fraud.
- Application Ser. No. 10/979,770 details how a risk score may be generated, to give an indication of the relative possibility of fraud associated with a given applicant (e.g., by considering past fraud experiences correlated with different types of breeder documents). Other risk scoring techniques are known in the art. Examples are shown in published applications and patents such as US20030052768 (a trusted-traveler card system, in which trust is scored based on various factors, including how long the traveler has lived at a given address); US20030099379 and US20030115459 (various ID card attributes are sensed, and facial features and third party databases are checked, to yield a score by which confidence in a person's asserted identity is assessed); US20030216988 (the validity of an applicant's telephone number is checked, and combined with other indicators, to produce a risk score for a proposed transaction); US20040059953 (a transportation worker identity card system in which various personal data is collected from the applicant and checked against third party databases, to determine if confidence in the asserted identity exceeds a threshold); US20040064415 (a traveler's identity is checked by reference to various public and private databases—which may include birth certificate data, social security number data, and INS records—and a resulting score is produced); US20040153663 (in processing a change-of-address request, a credit card company compares demographics of the applicant's previous and new neighborhoods, e.g., average income, average net worth, percentage of renters, etc., looking for unexpected disparities; a polynomial equation is applied to compute an associated risk score); US20040230527 (before completing a wired money transfer, circumstances of the transfer are compared against historical data and referred to third party evaluation services, to generate a score indicative of the risk of charge-back); US20040245330 (before completing a financial
transaction, various parameters are considered and contribute—positively or negatively—to a net score, which is used to determine whether the transaction should proceed); US20050039057 (during enrollment, a person is questioned re opinions and trivial facts, e.g., “I carry my car keys in my (a) pocket; (b) purse; (c) briefcase; (d) backpack”, “The phone number of a childhood friend is XXX-YYY-ZZZZ,” and the given answers are stored in a database; when the person's identity is later to be checked, a random subset of these questions is posed, some with different weightings, until a given confidence of identity is met); US20050132235 and US20050171851 (a user's speech is biometrically analyzed to determine degree of match against earlier-captured voice data, to yield an identification factor; this can be combined with other checks to generate a score indicating confidence in the speaker's identity); and U.S. Pat. No. 5679938 and U.S. Pat. No. 5679940 (different attributes of a financial check and an associated transaction are weighted to compute a score indicating degree of confidence that the check will be honored).
- U.S. Pat. No. 6,513,018 details empirically-based scoring technologies employed by Fair, Isaac and Company in computing credit scores. Particularly detailed are arrangements by which different applicant characteristics are either positively- or negatively-correlated with certain performance results, and methods for determining and applying such correlative information. U.S. Pat. No. 6,597,775 details Fair, Isaac's extrapolation and improvement of such methodologies to predictive modeling, and mitigation, of telecommunications fraud.
- Many ID verification systems rely on challenge-response testing concerning information that could become known to an imposter, e.g., by stealing a person's wallet or purse, or by removing mail from a mailbox. This gives rise to systems based on “out-of-wallet” information—the most common of which is “mother's maiden name.” More elaborate “out-of-wallet” systems for confirming identity are detailed, e.g., in some of the patents and publications referenced above, as well as in patent publications such as: US20040189441 (which checks identity by testing applicant's knowledge of inherent attributes—such as mother's maiden name, as well as voluntary attributes—such as favorite color; the techniques provide some error tolerance in assessing answers, e.g., answer of “Smith” vs. expected answer of “Mr. Smith”); and US20040205030 (provides a lengthy catalog of “out-of-wallet” information that may be used in confirming applicant identity, e.g., name of a signatory on a particular document, alimony payments, surgical records, medication currently prescribed, judicial records, etc.).
- Some identity-verification systems employ multi-factor approaches. An example is US20050154924, which validates a user based on a collection of user-provided factors, such as a combination of ‘what the user knows’ (e.g., knowledge-based data), ‘who the user is’ (e.g., biometric-based data), ‘what the user possesses’ (e.g., token-based data), ‘where the user is’ (i.e., location-based data), and when the user is seeking validation (i.e., time-based data).
- Biometrics can be useful in checking identity. But biometrics can only check for a match between an earlier-collected and presently-collected set of data. Unless there is confidence about the identity of the person from whom the earlier biometric data was collected, such technologies are of limited utility.
- FIGS. 1-11 illustrate information that can be used in testing an applicant to help determine relative confidence in the applicant's asserted identity.
- For expository convenience, most of the following detailed description focuses on one particular application of applicants' technology: verifying the identity of an applicant for a driver license. It will be recognized that such technologies can likewise be employed to help verify the identity of persons in myriad other contexts.
- The reader is presumed to be familiar with driver license issuance systems and procedures. (The commonly-owned patent applications identified above provide useful information in this regard.)
- In one arrangement, profile data about an applicant is collected in an XML-based data structure, based on a collection of standardized tags. Part of an illustrative collection of data for an applicant may be as follows:
- <BIRTHDATE> Dec. 19, 1945
- <CURRENT_RES_ADDRESS> 6299 SW Tower Way, Portland, Oreg. 97221
- <PRIOR_RES_ADDRESS_1> 3609 SW Admiral St., Portland, Oreg. 97221
- <PRIOR_RES_ADDRESS_2> 5544 SW 152nd Ave., Portland, Oreg. 97226
- <HIGH_SCHOOL_1> Summit High School, Summit, N.J.
- <HIGH_SCHOOL_2> Wilson High School, Portland, Oreg.
- <COLLEGE_1> Washington State University, Pullman, Wash.
- <COLLEGE_2> University of Waterloo, Ontario, Canada
- <COLLEGE_STUDY_1> Geology
- <OCCUPATION_1> Professor
- <OCCUPATION_2> School of Earth Sciences
- <OCCUPATION_3> University of Portland
- <CITIZEN> USA
- Such a collection of data can be seeded by information provided by the applicant, e.g., name, address, phone number, and social security number. The collection can then be supplemented by further information obtained from public sources (e.g., the web and public databases), as well as private data collections.
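The seed-then-supplement step just described can be sketched as follows. This is a minimal illustration, assuming a simple merge in which a mined value overwrites an applicant-provided value for the same tag; the tag vocabulary follows the illustrative excerpt above, and the root element name is an invented placeholder.

```python
import xml.etree.ElementTree as ET

def build_profile(seed, supplemental):
    """Assemble an applicant profile as an XML element tree.

    `seed` holds fields provided by the applicant; `supplemental` holds
    fields mined from public/commercial sources. If both supply the same
    tag, the mined value wins (an assumption for this sketch).
    """
    profile = ET.Element("APPLICANT_PROFILE")  # hypothetical root tag
    for tag, value in {**seed, **supplemental}.items():
        ET.SubElement(profile, tag).text = value
    return profile

seed = {"BIRTHDATE": "Dec. 19, 1945",
        "CURRENT_RES_ADDRESS": "6299 SW Tower Way, Portland, Oreg. 97221"}
mined = {"PRIOR_RES_ADDRESS_1": "3609 SW Admiral St., Portland, Oreg. 97221",
         "OCCUPATION_1": "Professor"}
profile = build_profile(seed, mined)
```

In practice the supplemental fields would be fetched from the credit-agency, tax, and demographic sources described below, rather than supplied inline.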
- For example, credit agency databases (e.g., Experian, Equifax and Transunion) typically store prior addresses for myriad people, in addition to their current addresses. Based on address data, a wealth of additional information can be obtained. Various public web sites, for example, can provide corresponding latitude/longitude information. Online municipal tax databases can be queried to obtain information about the home (three bedrooms, two full baths, cedar shake roof, etc.). Third party commercial databases can provide statistical demographics about the neighborhood (e.g., average income, age distribution, percentage of renters, etc.). All of this information can be added to the profile (or accessed online, as needed).
- Knowing alleged facts about the applicant allows the DMV to pose questions that help establish confidence in the applicant's asserted identity. The questions are generally of a sort that can be correctly answered by the applicant—if the facts collected concerning the applicant are correct, but would be more difficult to answer otherwise.
- One knowledge domain with which every applicant should be familiar is facts about their residence. Given a residence address, online tools can be used to mine substantial information about the building, its construction, its features, its age, etc., as well as about the surrounding neighborhood.
- Consider an applicant who asserts that his or her residence address is 6200 SW Tower Way, Portland, Oreg. By providing this address to the web page at portlandmaps.com or zillow.com, a wealth of information—parts of which are shown in FIGS. 1A and 1B—can be obtained. Questions can be posed to the applicant based on the facts provided from this web resource. For example:
- 1. One
- 2. One and a half
- 3. Two
- 4. Two and a half
- 5. Three
- 6. Three and a half
- When was 6200 SW Tower Way built?
- 1. Prior to 1940
- 2. Between 1940 and 1990
- 3. Between 1991 and 2000
- 4. After 2001
- What type of heating system does 6200 SW Tower Way have?
- 1. Forced air
- 2. Radiant floor heat
- 3. Baseboard hot water
If the applicant identifies himself as the owner of the property (rather than a renter), then the applicant might be further asked:
- In what year did you purchase the property?
- 1. 1996
- 2. 2000
- 3. 2002
- 4. 2004
- 5. 2005
- What was the purchase price for the property?
- 1. Less than $180,000
- 2. Between $180,000 and $220,000
- 3. Between $220,001 and $260,000
- 4. Between $260,001 and $300,000
- 5. More than $300,000
Each of these answers can be readily checked from the information available on-line, and depicted in FIGS. 1A and 1B.
- If the applicant answers each of these questions correctly, it lends credence to their assertion that their residence is 6200 SW Tower Way. If the applicant answers most of the questions incorrectly, it raises a serious doubt as to at least their residence address. (Other information provided by such applicant may also be called into doubt.)
- The web increasingly offers a rich trove of geospatial data, such as maps, aerial imagery, and curbside imagery, which is readily accessible by street addresses or other location data. The maps are of different types, e.g., street maps, zoning maps, topographic maps, utility (e.g., water, sewer, electrical) maps, hazard maps (e.g., flood plain, earthquake), etc. (The portlandmaps.com site referenced above includes all these categories of information.) Such resources offer a rich source of information about which an applicant can be questioned—based on current and/or prior residence addresses. Consider, e.g., the following questions which may be posed to an applicant who asserts his address is 6200 SW Tower Way, Portland, Oreg.:
- Refer to the map shown in FIG. 2 (which includes your residence address) for the following questions.
- What is the name of the park (“E”)?
- 1. Albert Kelly Park
- 2. April Hill Park
- 3. Custer Park
- 4. Dickinson Park
- 5. Gabriel Park
- 6. Hillsdale Park
- 7. Pendleton Park
- 8. None of the above
- What is the name of the store located at the intersection of SW Vermont St and SW 45th Avenue?
- 1. 7-11
- 2. Express Mart
- 3. Jackpot Food Mart
- 4. Plaid Pantry
- 5. Swan Mart
- 6. Uptown Market
- Is that store located at corner A, B or C?
- 1. A
- 2. B
- 3. C
- Is there a traffic light, or a four-way stop, at the intersection of SW Vermont St and SW 45th Avenue?
- 1. Traffic light
- 2. Four-way stop
- Is location D uphill from location E?
- 1. Yes
- 2. No
- Wilson High School is off the map. In what direction, from the map, is it?
- 1. Off the top side
- 2. Off the right side
- 3. Off the bottom side
- 4. Off the left side
Again, an applicant who truly lives at 6200 SW Tower Way would have little difficulty with such questions. However, an imposter would fare poorly.
- Typically, the map about which the applicant is questioned would be presented on an electronic display device, e.g., on a testing kiosk at a DMV office. Alternatively, the question could be posed to the applicant at a remote terminal (e.g., at the applicant's home)—provided certain safeguards are put in place to prevent the applicant from researching the answer. (E.g., the applicant would have a limited period of time after the presentation of each question to provide the answer.)
- Similarly, consider the following questions, which may be posed to an applicant, and which proceed with reference to two aerial photographs:
- Refer to the aerial photograph shown in FIG. 3 (which includes your residence address) for the following questions.
- Which of the buildings is your residence (A-V)?
- What is the last name of one of your neighbors?
- In what building does that neighbor live (A-V)?
- What is the name of the street labeled W?
- 1. SW Tower Way
- 2. SW Dakota St
- 3. SW Idaho St
- 4. SW Caldew Dr
- If you drove the street labeled X from the top of the map towards the bottom, would you be going uphill or downhill?
- 1. Uphill
- 2. Downhill
- FIG. 4 is an aerial photograph showing one of the intersections nearest your residence (about 0.4 miles away). In the map:
- What business occupies building Y?
- 1. Multnomah Community Center
- 2. OHSU Clinic
- 3. Big 5 Sports
- 4. Tursi's Soccer Store
- What business occupies building Z?
- 1. Gerber Labs
- 2. Marquam Gymnasium
- 3. Southwest Community Center
- 4. Whole Foods Market
- It will be recognized that many such questions follow a standard form (e.g., a template) that can be recalled and customized by an automated process that inserts data unique to that particular neighborhood.
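One way such a template mechanism might look, as a sketch: the template texts and the profile field names are assumptions, and a real system would also attach the correct answer and distractors fetched from the map and tax databases described above.

```python
# Each template declares the profile fields it needs; a template is only
# instantiated when the applicant's profile supplies all of them.
TEMPLATES = [
    {"text": "How many bathrooms does {address} have?", "fields": ("address",)},
    {"text": "When was {address} built?", "fields": ("address",)},
    {"text": "What is the name of the park nearest {address}?", "fields": ("address",)},
]

def instantiate(template, profile):
    """Customize one standard-form question with data unique to this applicant."""
    if not all(f in profile for f in template["fields"]):
        return None
    return template["text"].format(**profile)

profile = {"address": "6200 SW Tower Way"}
questions = [q for t in TEMPLATES
             if (q := instantiate(t, profile)) is not None]
```

An automated process would run such instantiation over a large template library, skipping any template whose required data is missing from the applicant's profile.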
- (In the examples given above, the maps/photographs used in some of the questions give away answers to other questions. Naturally, in practical application, this would be avoided.)
- Source material for the foregoing queries can come from commercial providers (e.g., Google Earth, Terraserver.com, Yahoo!, MapQuest, etc.), or from government databases (national, state, municipal, etc.). The overlay of “A” “B” “C” markings, etc., can be provided—as desired—by the DMV or other entity.
- Yet another source of information useful in quizzing applicants is Amazon's Yellow Pages search, and its A9 search engine. Amazon has compiled a library of curbside-scene images, accessible by street address or business name. Various public taxation districts also publish such information. Again, such imagery can be used to test an applicant's familiarity with the neighborhood in which she claims to reside.
- To illustrate, imagine that the applicant gives an address that is between two town centers, e.g., Portland and Beaverton. A mapping database can be used to identify the fastest route between the stated address and one of the towns. A major intersection along this route (preferably near the applicant's residence) can be identified. A curbside imagery database (e.g., Amazon Yellow Pages A9) can then be queried to obtain a scene from this intersection. This image, or an unrelated image, can be presented to the applicant with a question such as:
- Referring to FIG. 5, is this scene:
- 1. Between your residence and Beaverton?
- 2. Between your residence and Portland? or
- 3. Not familiar to you.
- Microsoft's Windows Live Local service (aka Virtual Earth) is yet another source of geographic image/map data against which the knowledge of an applicant may be tested. Microsoft's service offers “Bird's eye” (oblique), in addition to “aerial” (straight down), photographic imagery of many city locations.
- FIG. 6, for example, shows a “Bird's eye” view of part of the campus of the Georgia Institute of Technology, obtained through the Microsoft service. Such imagery can be used in the foregoing examples.
- Another class of resources that might be tapped, in this example as in others, is online services provided by certain municipal bus services. Portland's “Tri-Met” transit system, for example, offers a web-based service (at the domain trimet.org) by which users can specify their start points and desired destinations, and the service identifies the buses and trains, and their schedules, that the user can take to commute between the locations. The system's web services also offer local maps, showing which bus routes travel along which streets, and the location of bus stops. A user can specify an intersection, and obtain a map of bus routes near that intersection. Such a map, and the associated user interface, is shown in FIG. 7.
- Such transit system information can be used to assess an applicant's knowledge about their alleged residence. An applicant may be asked, for example:
-
- S.W. Dakota Street is near your home. Do mass transit buses run down that street?
- 1. Yes.
- 2. No.
- The DMV may sometimes ask questions expecting that the applicant will not know the answer. For example, a person living in the neighborhood depicted in the bus map of FIG. 7 may be asked to name one or more bus routes that travel SW Beaverton-Hillsdale Highway (e.g., 54, 56, 61 and 92). If the applicant cannot answer this question, he is not faulted; the answer is likely unfamiliar to many neighborhood residents. If, however, the applicant can answer this difficult question correctly, such a correct answer may help his score more than a correct answer to a more routine question. (The ability to identify two or more of the buses along this route could boost his score still further.) Thus, answers to different questions may be weighted differently in scoring confidence in the applicant's asserted identity.
- Google offers a Transit Trip Planner service (google.com/transit) that seeks to standardize delivery of mass transit schedule and route information across many cities' transportation services. It, too, may be used as a fact resource in compiling challenge questions for applicants.
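The differential weighting of answers might be scored along these lines. The rule that a miss on a difficult question is forgiven follows the text; the numeric weights are illustrative assumptions.

```python
def score_answers(results):
    """Aggregate a confidence score from answer records.

    Each record is (correct, weight, penalize_miss): a correct answer
    adds its weight; an incorrect answer subtracts the weight only when
    penalize_miss is True, so hard questions are not penalized on a miss.
    """
    score = 0.0
    for correct, weight, penalize_miss in results:
        if correct:
            score += weight
        elif penalize_miss:
            score -= weight
    return score

# Two routine questions (weight 1.0, miss penalized) and one difficult
# bus-route question (weight 3.0, miss forgiven):
results = [(True, 1.0, True), (False, 3.0, False), (True, 1.0, True)]
```

With these records the applicant scores 2.0: the forgiven miss on the hard question costs nothing, while a correct answer to it would have added 3.0.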
- The neighborhood-based questions noted earlier are just a start. Others may include the following:
-
- Leaving your home, what street intersection do you first encounter? What if you go the opposite way?
- What cable TV service provider serves your house/neighborhood? (DSL provider? Electric utility company?)
- What big mall is closest to your home? (Its anchor tenants?) What supermarket(s)?
- Which town(s) adjoin the town where you live/work?
- What bus route is close to your home/work?
- What is the name of the local mass transit system?
- Does your side of your home street have a curb? A sidewalk?
- Are power poles/lines on side of street where you live? (Or are they buried in your neighborhood?)
- On what street is the Post Office in your community?
- What school is near your home?
- What intersection with traffic light is close to your home?
- Name/number of the thoroughfare that links your town to the adjoining town of [X]?
- On what day(s) does garbage get picked up from your home?
- In what year did you purchase (or mortgage, or refinance) home? If not a homeowner, what is name of landlord/agency to whom you pay rent?
- Name a person who lives next door.
- The foregoing discussion has focused on a single knowledge domain—information that an applicant may be expected to know by reason of their residence address. Many other knowledge domains exist. One, for example, is information an applicant may be expected to know by reason of their employment. A few examples of facts on which an applicant may be quizzed, based on their employment, follow:
-
- If a truck driver
- Validity period of commercial driving license?
- Meaning of acronym GVW?
- Location of highway permit offices?
- Location of highway weigh stations?
- Weight restrictions on major roads/bridges?
- Phone number of company's freight dispatcher?
- Tolls on local roads?
- If a lawyer
- Names of local judges?
- Web site of county bar association?
- Knowledge of CLE requirements (e.g., how often must you report; how many hours/year required)?
- Town where state bar association has its headquarters?
- It will be recognized that many of these questions are not strictly based on employment, but also depend on location of employment. A truck driver in Portland, for example, may be able to name seven bridges in Portland, but none in Seattle. Generally speaking, the more closely tailored the questions are to the applicant profile (e.g., residence/employment/age/etc), the more useful they will be in establishing (or refuting) confidence in the asserted identity.
- Another knowledge domain comprises facts an applicant may be expected to know by reason of their education. For example:
-
- If a HS student or graduate
- Name of principal of HS where/when graduated (or teacher)?
- Size of HS graduating class (in tiered ranges)?
- Sports team name/mascot?
- If a college graduate
- (Above questions for HS, adapted to college)
- Town where college is located?
- Zip code of that town?
- Is the school on a schedule of 2 semesters or 3 quarters per year?
- Name of a dorm on campus?
- Name of a professor?
- If degreed in pharmacology
- Generic name for active ingredient in Benadryl?
- Name of company that is successor to A.H. Robins?
- City and/or state where Merck is headquartered?
- Name of college where received pharmacology degree?
- If a lawyer
- Name of an early Supreme Court justice?
- Meaning of force majeure?
- Name 3 traditionally-required first-year courses?
- Some of this information may be readily mined from public databases. For example, the XML profile excerpted above specified that the applicant attended Wilson High School in Portland, Oreg. Querying Google with the input “Wilson high school portland mascot” gives, as the first search result, a “hit” from the online encyclopedia Wikipedia revealing that the team mascot is the “Trojans.” See FIG. 8.
- Some of this information may not appear as the first hit of a Google search, but can be researched with little effort. For example, information relating to high schools and colleges, including size and faculty (currently and historically), can be obtained from web sites maintained by the respective high schools and universities, and also from databases maintained by independent providers, such as classmates.com.
- Some of this information can be garnered from independent human searchers, e.g., using Amazon's Mechanical Turk service. Amazon's Turk web site explains:
-
- Amazon Mechanical Turk provides a web services API for computers to integrate Artificial Artificial Intelligence directly into their processing by making requests of humans. Developers use the Amazon Mechanical Turk web services API to submit tasks to the Amazon Mechanical Turk web site, approve completed tasks, and incorporate the answers into their software applications. To the application, the transaction looks very much like any remote procedure call—the application sends the request, and the service returns the results. In reality, a network of humans fuels this Artificial Intelligence by coming to the web site, searching for and completing tasks, and receiving payment for their work.
- All software developers need to do is write normal code. The pseudo code below illustrates how simple this can be.
    read(photo);
    photoContainsHuman = callMechanicalTurk(photo);
    if (photoContainsHuman == TRUE) {
        acceptPhoto;
    } else {
        rejectPhoto;
    }
- More information about Amazon's Mechanical Turk service is provided in Appendix B (Amazon Mechanical Turk Developer Guide, 2006, 165 pp., API Version 10-31-2006).
- Alternatively, automated processes, e.g., using more traditional artificial intelligence techniques, can be applied to generate and check questions from available online resources, such as the maps and databases noted above, given profile data on the applicant (e.g., current and prior residence addresses, education, city where attended high school, etc.).
- There are many other domains of knowledge about which an applicant can be questioned. One is facts the applicant may be expected to know by reason of who they know. For example, they may be asked to provide four memorized telephone numbers, and name the person to which each corresponds. Another, as noted earlier, is names of neighbors.
- Still another knowledge domain comprises facts the applicant may be expected to know by reason of their age. For example:
-
- Who was Nixon's VP?
- In what state did Rosa Parks become famous?
- What was Muhammad Ali's real name?
- Who did Elizabeth Taylor marry more than once?
- Who did Brad Pitt recently break up with?
- What was Dustin Hoffman's first famous role?
- How many stars did the US flag have during WWII?
- What animal is on the back of old nickels?
- Yet another knowledge domain is facts the applicant may be expected to know by reason of activities they earlier performed. For example, readily accessible databases reveal answers to the following questions on which an applicant may be tested:
-
- Did you cast a ballot in the November, 2004, election? How about the presidential race in 2000? In what county/state?
- Did you donate money to the electoral campaign of any official? Who?
- Have you had a motor vehicle violation in past 5 years? (In what state?)
- Have you ever been summoned to jury duty?
- While quizzing an applicant, some questions may be asked to compile data that can be a source for questions in future checking—either for the applicant himself/herself, or for another person. Examples include:
-
- Name your first pet/your favorite teacher/your favorite frequent flier account number/your first telephone number/your earliest-remembered zip code.
- Name a non-family member with whom you've lived (e.g., a college roommate); name a former neighbor; name a person who sits near you at work.
- Name a person who was with you when you learned of the 9/11 attack, and where you were when you heard the news.
- Another factor that can help confirm identity is an on-line account. Many individuals have accounts with on-line entities. Examples include Amazon, PayPal, EBay, Orbitz, Expedia, NetFlix, etc. Such accounts can provide verification information—either with the companies' cooperation, or without.
- Consider Amazon. Its web site allows registered users to sign in and display historical information, including a list of all orders ever placed with the company (at least back to 1997, as of this writing). For each order, Amazon provides information including the address to which it was shipped, the shipping date, and the billing address.
- As part of a verification protocol, a DMV clerk may direct a web browser on a DMV computer to the Amazon Sign-In page. The applicant can then type his/her account information (e.g., email address and password), and navigate from the resulting “My Account” page to select “View by Item—all items ordered” (FIG. 9). A report like that shown in FIG. 10 is produced, listing all items ordered through that account. Clicking on an order number generates a screen like that shown in FIG. 11, showing both the address to which the item was shipped, as well as the billing address for the credit card used. If this address information is consistent with the address given by the applicant, this tends to confirm the applicant's credibility. In embodiments in which such factors influence an applicant's score, this would tend to increase that score. Naturally, the further back in time the applicant can demonstrate residence at the current address, the more the applicant's score might be increased. (Evidence that Amazon shipped a book to the stated address a month ago would be less assuring than evidence that Amazon shipped a book to that address a year, or five years, ago.)
- Other on-line services can also be used for this purpose. EBay reports the date on which people joined and became members, and their location (e.g., “Member since Apr. 18, 1998; Location: United States”). Again, a demonstrable history with such an online vendor can be factored into the analysis of assessing whether the applicant is who they purport to be.
- Moreover, EBay and other sites provide various mechanisms by which members are critiqued by their peers or other users. EBay thus reports a feedback score, indicating the percentage of unique users who have scored their transactions with a particular member as favorable. In one example, member “mgmill” has a feedback score of 100%: 2773 members left positive feedback; none left negative feedback. (A 100% score based on thousands of feedbacks is naturally more meaningful than a 100% score based on a dozen feedbacks.) Such peer/user rankings serve as an additional factor on which an applicant can be scored (albeit one that might be given relatively little weight—absent some demonstrated correlation between EBay feedback scores and the metric that the agency is trying to assess).
- Expedia, Orbitz, and other online travel companies maintain histories that a user can access, giving information useful in verification. From the Expedia home page, for example, a user can click on “My Account” and sign in by entering a login name and password. An Account Overview page is then produced. This page lists the account holder and others for whom that person has made travel arrangements. Each name is hyperlinked to a further screen with information about that traveler, such as home phone number, work phone number, cell phone number, passport number and country, frequent flier numbers, etc. At the bottom of the screen there is a listing of credit cards associated with the account—including billing addresses and telephone numbers. Again, all such information can be accessed—with the cooperation of the applicant—and factored into a possibility-of-fraud assessment for that applicant.
- PayPal offers further data useful in checking a person. (PayPal currently has over 80 million accounts.) Again, going to the PayPal home page yields a screen on which the applicant can enter their email address and PayPal password. From the resulting “Overview” screen, the user can navigate to the “Profile” page. Here again, a great wealth of information useful in corroborating an applicant is available, e.g., links to email, street address, telephone, credit card accounts registered with PayPal, bank accounts registered with PayPal, etc. Clicking on the credit card account link produces a screen giving the last digits of the credit card number, and the credit card billing address. Clicking on the “History” tab allows historical transactions to be reviewed. Like Amazon, histories going back several years are available. As before, a long-tenured account may be taken as a favorable factor in assessing an applicant.
- Again, for any particular PayPal transaction, further details can be retrieved (e.g., by clicking the “Details” link). These details include the address to which the purchased item was to be sent, and information about the credit card or bank account that was debited. Again, this information can be collected as part of the verification process, and checked for consistency with other information about the applicant.
- By such techniques, ID verification can be augmented by allowing the applicant to demonstrate knowledge of accounts with which the applicant is associated. Knowledge of passwords, order history, and consistency between shipping addresses and applicant's asserted address, all tend to boost a confidence score.
- Still richer data may be obtained by establishing a commercial relationship with companies having accounts with large numbers of consumers, so that additional information—not generally available—might also be made available.
- (Naturally, safeguards may be put into place to assure privacy of the password and other data entered into the computer by the applicant to access such information, as well as of the account information thereby revealed.)
- Such techniques serve to enlarge the “neighborhood” of those who can vouch for an individual, to encompass entities involved in commercial transactions.
- Still another source from which question data can be derived is information provided by prior applicants. Applicants can be queried for information (questions and answers, or facts) that would most likely be known to other applicants having a similar background qualification, but would likely be unknown to others. Thus, a person who is a lawyer in Portland, Oreg., may offer the question, “Who is the old federal courthouse named after?” and offer the answer: “Gus Solomon.” Likewise, a bus driver in the same city may offer “Where is the bus garage on the west side of the Willamette River? Answer: Merlo Street.” A teacher at Hayhurst Elementary School in Portland may offer “To what middle school do Hayhurst students go? Answer: Robert Gray Middle School.”
- A nineteen year old student at the University of Portland may offer, “What live music club is closest to school? Answer: Portsmouth Club.” A sixty year old faculty member at the same school may offer, “What is the Catholic order with which the University is affiliated? Answer: Holy Cross.”
- It will be recognized that the nineteen year old student at the University of Portland may be less likely to know the answer to the latter question, and the sixty year old faculty member may be less likely to know the answer to the former question. Thus, in many cases, questions should not be selected for presentation to an applicant based on just a single factor (e.g., a connection to the University of Portland, or a residence on S.W. Dakota Street). Rather, the questions may be assigned based on two or more factors (e.g., age plus school affiliation; profession plus address; ethnicity plus location where grew up; etc.).
- One way of conceptualizing this is as a multidimensional vector space in which both questions and applicants can be located.
- The questions can each be located based on two or more classification data that help identify applicants who are likely to be able to answer such questions. Examples of possible classification data includes geographic location to which question relates, professional expertise (e.g., law) useful in answering question, age of persons most likely to be familiar with question, education of persons most likely to be familiar with questions, etc.
- The applicant can be similarly located in this vector space, again based on factors such as noted above.
- The questions nearest the applicant in this vector space are thus those that most closely match that applicant's background and other credentials, and are thus most likely to be useful in confirming that the applicant's background/credentials are as he or she states.
- In cases where candidate questions are proposed by applicants, such questions can be tagged with classification data based on the proposer's attributes. E.g., if a Portland, Oreg. lawyer, aged 60, proposed the question about the name of the old courthouse, then that question might be stored with such applicant attributes (e.g., lawyer, Oregon, age 60) as classification data. (Classification data may be of the same type, or may even be drawn from the same vocabulary, as applicant attributes, but this need not be the case.)
- In one arrangement, question data from which questions are drawn is stored in a large database, each record tagged with plural of the classification data noted earlier. Proximity between plural of the question data, and the particular applicant is then determined, so as to identify a subset of the original question space from which questions might usefully be posed. The method can then select questions from this subset (e.g., randomly) for presentation to the applicant.
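The proximity-then-random-selection arrangement can be sketched as below. For simplicity every classification dimension here is numeric; categorical attributes such as profession or geography would first need an encoding, and the dimension names and values are illustrative assumptions.

```python
import math
import random

def distance(applicant, question_loc, dims):
    """Euclidean distance between an applicant and a question record in
    the shared classification vector space."""
    return math.sqrt(sum((applicant[d] - question_loc[d]) ** 2 for d in dims))

def select_questions(applicant, question_db, dims, subset_size, k, rng=None):
    """Identify the subset of question data nearest the applicant, then
    draw k questions at random from that subset."""
    rng = rng or random.Random()
    nearest = sorted(question_db,
                     key=lambda q: distance(applicant, q["loc"], dims))
    return rng.sample(nearest[:subset_size], k)

dims = ("age", "years_at_address")          # hypothetical dimensions
applicant = {"age": 60, "years_at_address": 20}
question_db = [
    {"id": "courthouse", "loc": {"age": 60, "years_at_address": 18}},
    {"id": "music-club", "loc": {"age": 19, "years_at_address": 1}},
    {"id": "bus-garage", "loc": {"age": 55, "years_at_address": 25}},
]
picked = select_questions(applicant, question_db, dims, subset_size=2, k=1)
```

For the 60-year-old applicant, the "music-club" question (located near 19-year-olds) falls outside the nearest subset, so only the other two questions can be drawn.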
- It should be noted that question data need not be a single query to which there is a single answer. Question data can span a larger body of information, e.g., names of familiar Nobel laureates in physics from 1910-1935, and can provide fodder for many different, but related, questions.
- In some arrangements, new question information collected from applicants can be vetted prior to use in verifying further applicants. Such vetting can be performed by DMV personnel, or by contractors (“professional vetting,” which may include the Mechanical Turk service). Computer verification may be employed in some instances.
- Sometimes, the vetting may be performed by presenting such questions/answers to new applicants, on a trial basis. Answers to such questions would likely not be included in a substantive assessment as to the new applicant's identity or risk. Rather, such answers would be used to judge whether the question/answer is a useful tool.
- For example, each proposed new question might be posed to five new applicants, each of whom appears to have a background that would allow them to correctly answer same. If at least three of them (or, alternatively, if at least four of them, or if all of them) give the answer offered by the proposer of the question, then the question can be moved to the pool of real questions—and used to substantively assess applicants.
- (If the trial question is posed to an applicant who does not know the answer, and other circumstances give rise to uncertainty whether such applicant's identity is bona fide or fraudulent, then such data point may be disregarded in assessing whether the question is a useful tool. Conversely, if the question is answered correctly by a person whose identity is otherwise established to be fraudulent, then this suggests that the question is not a useful discriminator tool.)
- Combinations of professional and applicant vetting can also be used. For example, a candidate new question may be posed to five new applicants on a trial basis—as outlined above. If a sufficient number answer correctly, it can be further vetted by a professional. Or, such order may be reversed.
- Tests of the sort detailed above might be posed routinely to applicants. If an applicant does not demonstrate knowledge of an expected sort, this can trigger further scrutiny of other aspects of the applicant's application. For example, a DMV agent might respond by more closely inspecting breeder documents presented by the dubious applicant, or by requesting additional information not required from routine applicants.
- Alternatively, the checks presented above might be posed only if an applicant's profile otherwise suggests further checking is prudent (e.g., if a risk score based on the ensemble of presented breeder documents exceeds a threshold).
- In most embodiments, an incorrect answer isn't fatal to applicant verification. There may be many credible explanations for an incorrect answer. Rather, answers are typically used to adjust an aggregate score—with no one question being determinative.
- Applicants' technology detailed herein may be regarded as a knowledge-based system—utilizing a knowledge base of information to perform or facilitate an applicant-verification process. Earlier work in such knowledge-based systems is shown, e.g., by U.S. Pat. Nos. 6,968,328, 6,965,889 and 6,944,604.
- As referenced above, the Mechanical Turk service can be widely used in gathering and vetting information used in the foregoing. (Additionally, the Mechanical Turk service can be used in conjunction with the systems detailed in the earlier-referenced patent documents to facilitate some of the operations required by those systems, e.g., making judgments and undertaking tasks that computers are ill-suited to perform—such as performing fuzzy matching, applying common-sense knowledge, and interpreting documents. Thus, for example, such a system understands that the names “Bill,” “Will,” “Wm.,” and the like, all can be acceptable matches to the name “William;” likewise that address lines “Apartment 3A,” “Apt. 3A,” “Unit 3A,” and the like, all can be acceptable matches to the address line “Apt. 3-A,” etc. Similarly, the Turk service can be used to harvest public records data that can be used in verification operations. A number of such applications of the Mechanical Turk service to the arrangements in the cited documents are within the capabilities of the artisan from the teachings herein. Appendix A further details some further inventive uses of the Mechanical Turk service, and similar “crowdsourcing” technologies.)
- As noted, the arrangements detailed herein are not limited to verifying identity prior to issuance of identity documents; they have applicability in many other contexts.
- To give but one example, consider a passport control checkpoint in an airport, where a government official inspects passports of travelers. When the passport is swiped or scanned by a passport reader, the official is presented with a screen of information pertaining to the person. This same screen, or another, can be used to present one or more verification checks like those detailed above, e.g., showing a map of the passport holder's neighborhood, and requesting the traveler to identify her home.
- Naturally, in cases where the applicant data is received in advance of applicant testing, e.g., from a home web session, then additional time is available to prepare questions customized to that applicant.
- No particular methodology for generating a score has been detailed above, because same depends on the facts of particular cases. One technique for generating a numeric score is a polynomial approach (such as that detailed in US20040153663), where different factors are weighted differently, and summed to produce an aggregate score. Scores may be produced based just on the information collected through the procedures detailed in this detailed description, or the score can be based on a larger set of data, e.g., including factors on which scores are computed in the prior art.
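The weighted-sum polynomial approach can be sketched as follows. The factor names, values, and weights are purely illustrative assumptions, not values from the referenced document:

```python
def aggregate_score(factors, weights):
    """Weighted sum of individual verification factors: each factor value
    is multiplied by its weight and the products are summed."""
    return sum(weights[name] * value for name, value in factors.items())

# Hypothetical factor values (0.0-1.0) and weights:
factors = {"documents": 0.8, "questions": 0.6, "biometrics": 0.9}
weights = {"documents": 50, "questions": 30, "biometrics": 20}
score = aggregate_score(factors, weights)   # 0.8*50 + 0.6*30 + 0.9*20 = 76.0
```

Consistent with the earlier discussion, no single factor is determinative; each incorrect answer merely adjusts its factor's contribution to the aggregate.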
- Implementation of systems employing the foregoing principles is straightforward to artisans, e.g., using standard computer-, database-, software- and network-technology.
- To provide a comprehensive disclosure without unduly lengthening this specification, applicants incorporate-by-reference the documents referenced in this disclosure (including Appendices). It is expressly contemplated that the technologies, features and analytical methods detailed herein can be incorporated into the methods/systems detailed in such other documents. Moreover, the technologies, features, and analytical methods detailed in those documents can be incorporated into the methods/systems detailed herein. (It will be recognized that the brief synopses of prior documents provided above naturally do not reflect all of the features found in such disclosures.)
- In view of the wide variety of embodiments to which the principles and features discussed above can be applied, it should be apparent that the detailed embodiments are illustrative only and should not be taken as limiting the scope of the disclosed technology. Rather, we claim all such modifications as may come within the scope and spirit of the following claims and equivalents thereof.
- The Mechanical Turk service (detailed in Appendix B) may be regarded as a structured implementation of a technology commonly termed “crowdsourcing”—employing a group of outsiders to perform a task. Wikipedia explains:
- “Crowdsourcing” is a neologism for a business model that depends on work being done outside the traditional company walls: while outsourcing is typically performed by lower paid professionals, crowdsourcing relies on a combination of volunteers and low-paid amateurs who use their spare time to create content, solve problems, or even do corporate R&D. The term was coined by Wired magazine writer Jeff Howe and editor Mark Robinson in June 2006.
- Crowds targeted for crowdsourcing include garage scientists, amateur videographers, freelancers, photo enthusiasts, data companies, writers, smart mobs and the electronic herd.
- Overview
- While not a new idea, crowdsourcing is becoming mainstream. Open source projects are a form of crowdsourcing that has existed for years. People who may not know one another work together online to create complex software such as the Linux kernel, and the Firefox browser. In recent years internet technology has evolved to allow non-technical people to participate in online projects. Just as important, crowdsourcing presumes that a large number of enthusiasts can outperform a small group of experienced professionals.
- Advantages
- The main advantage of crowdsourcing is that innovative ideas can be explored at relatively little cost. Furthermore, it also helps reduce costs. For example, if customers reject a particular design, it can easily be scrapped. Though disappointing, this is far less expensive than developing high volumes of a product that no one wants. Crowdsourcing is also related to terms like Collective Customer Commitment (CCC) and Mass Customisation. Collective Customer Commitment (CCC) involves integrating customers into innovation processes. It helps companies exploit a pool of talent and ideas and it also helps firms avoid product flops. Mass Customisation is somewhat similar to collective customer commitment; however, it also helps companies avoid making risky decisions about what components to prefabricate and thus avoids spending for products which may not be marketable later.
- Types of Crowdsourced Work
- Steve Jackson Games maintains a network of MIB (Men In Black), who perform secondary jobs (mostly product representation) in exchange for free product. They run publicly or semi-publicly announced play-tests of all their major books and game systems, in exchange for credit and product. They maintain an active user community online, and have done so since the days of BBSes.
- Procter & Gamble employs more than 9000 scientists and researchers in corporate R&D and still have many problems they can't solve. They now post these on a website called InnoCentive, offering large cash rewards to more than 90,000 ‘solvers’ who make up this network of backyard scientists. P&G also works with NineSigma, YourEncore and Yet2.
- Amazon Mechanical Turk co-ordinates the use of human intelligence to perform tasks which computers are unable to do.
- YRUHRN used Amazon Mechanical Turk and other means of crowdsourcing to compile content for a book published just 30 days after the project was started.
- iStockphoto is a website with over 22,000 amateur photographers who upload and distribute stock photographs. Because it does not have the same margins as a professional outfit like Getty Images it is able to sell photos for a low price. It was recently purchased by Getty Images.
- Cambrian House applies a crowdsourcing model to identify and develop profitable software ideas. Using a simple voting model, they attempt to find sticky software ideas that can be developed using a combination of internal and crowdsourced skills and effort.
- A Swarm of Angels is a project to utilize a swarm of subscribers (Angels) to help fund, make, contribute, and distribute, a £1 million feature film using the Internet and all digital technologies. It aims to recruit earlier development community members with the right expertise into paid project members, film crew, and production staff.
- The Goldcorp Challenge is an example of how a traditional company in the mining industry used crowdsourcing to identify likely veins of gold on its Red Lake property. It was won by Fractal Graphics and Taylor-Wall and Associates of Australia but more importantly identified 110 drilling targets, 50% of which were new to the company.
- CafePress and Zazzle, customized products marketplaces for consumers to create apparel, posters, cards, stamps, and other products.
- Marketocracy, which isolates top stock market investors around the world in head-to-head competition so it can run real mutual funds around these soon-to-be-discovered investment super-stars.
- Threadless, an internet-based clothing retailer that sells t-shirts which have been designed by and rated by its users.
- Public Insight Journalism, a project at American Public Media to cover the news by tapping the collective and specific intelligence of the public. It gets the newsroom beyond the usual sources, and uncovers unexpected expertise, stories and new angles.
- External links and references
- The Rise of Crowdsourcing, Wired June 2006.
- Crowdsourcing: Consumers as Creators, BusinessWeek July 2006.
- One use of the Mechanical Turk service (or similar crowdsourcing engines) is in connection with computationally difficult tasks, such as identification of audio, video and imagery content. These tasks are sometimes addressed by so-called “fingerprint” technology, which seeks to generate a “robust hash” of content (e.g., distilling a digital file of the content down to perceptually relevant features), and then compare the thus-obtained fingerprint against a database of reference fingerprints computed from known pieces of content, to identify a “best” match. Such technology is detailed, e.g., in Haitsma, et al, “A Highly Robust Audio Fingerprinting System,” Proc. Intl Conf on Music Information Retrieval, 2002; Cano et al, “A Review of Audio Fingerprinting,” Journal of VLSI Signal Processing, 41, 271, 272, 2005; Kalker et al, “Robust Identification of Audio Using Watermarking and Fingerprinting,” in Multimedia Security Handbook, CRC Press, 2005, and in patent documents WO02/065782, US20060075237, US20050259819, and US20050141707.
- A particular example of such technology is in facial recognition—matching an unknown face to a reference database of facial images. Again, each of the faces is distilled down to a characteristic set of features, and a match is sought between an unknown feature set, and feature sets corresponding to reference images. (The feature set may comprise eigenvectors or shape primitives.) Patent documents particularly concerned with such technology include US20020031253, U.S. Pat. No. 6,292,575, U.S. Pat. No. 6,301,370, U.S. Pat. No. 6,430,306, U.S. Pat. No. 6,466,695, and U.S. Pat. No. 6,563,950.
- These are examples of technology that relies on “fuzzy” matching. The fingerprint derived from the unknown content often will not exactly match any of the reference fingerprints in the database. Thus, the database must be searched not just for the identical content fingerprint, but also for variants.
- Expanding the search to include variants hugely complicates—and slows—the database search task. To make the search tractable, one approach is to prune the database—identifying excerpts thereof that are believed to be relatively likely to have a match, and limiting the search to those excerpts (or, similarly, identifying excerpts that are believed relatively unlikely to have a match, and not searching those excerpts).
- The database search may locate several reference fingerprints that are similar to the fingerprint of the unknown content. The identification process then seeks to identify a “best” match, using various algorithms.
- Such content identification systems can be improved by injecting a human into the process—by the Mechanical Turk service or similar systems.
- In one particular arrangement, the content identification system makes an assessment of the results of its search, e.g., by a score. A score of 100 may correspond to a perfect match between the unknown fingerprint and a reference fingerprint. Lower scores may correspond to successively less correspondence. (At some lower score, Sx (perhaps 60), the system may decide that there is no suitable match, and a "no-match" result is returned, with no identification made.) Above some threshold score, Sy (perhaps 70), the system may be sufficiently confident of the result that no human intervention is necessary. At scores below Sy, the system may make a call through the Mechanical Turk service for assistance.
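The three-way dispatch just described can be sketched as follows. The threshold values are the illustrative figures from the text (Sx=60, Sy=70); the function name is an assumption:

```python
SX, SY = 60, 70   # illustrative thresholds; actual values are application-specific

def route(match_score):
    """Dispatch a fingerprint match score per the Sx/Sy thresholds."""
    if match_score >= SY:
        return "accept"         # confident match; no human intervention needed
    if match_score < SX:
        return "no-match"       # below Sx: no suitable match found
    return "human-review"       # ambiguous band: call the Mechanical Turk service
```

As noted later in the text, the "no-match" branch may itself sometimes be referred to human review, effectively lowering Sx.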
- The Mechanical Turk can be presented the unknown content (or an excerpt thereof), and some reference content, and asked to make a comparison. (The reference content may be stored in the fingerprint database, or may be readily obtainable through use of a link stored in the reference database.)
- A single item of reference content can be provided for comparison with the unknown content, or several items of reference content can be provided. (Again, excerpts may be used instead of the complete content objects. Depending on the application, the content might be processed before sending to the crowdsource engine, e.g., removing metadata (such as personally identifiable information: name, driver license number, etc.) that is printed on, or conveyed with, the file.)
- The requested comparison can take different forms. The service can be asked simply whether two items appear to match. Or it can be asked to identify the best of several possible matches (or indicate that none appears to match). Or it can be asked to give a relative match score (e.g., 0-100) between the unknown content and one or more items of reference content.
- In many embodiments, a query is referred to several different humans (e.g., 2-50) through the Mechanical Turk service, and the returned results are examined for consensus on a particular answer. In some queries (e.g., does Content A match Content B? Or is Content A a better match to Content C?), a “vote” may be taken. A threshold of consensus (e.g., 51%, 75%, 90%, 100%) may be required in order for the service response to be given weight in the final analysis. Likewise, in queries that ask the humans to provide a subjective score, the scores returned from plural such calls may be combined to yield a net result. (The high and/or low and/or outlier scores may be disregarded in computing the net result; weighting can sometimes be employed, as noted below.)
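The consensus and score-combining logic described above can be sketched in Python. This is a simplified illustration: the consensus threshold and the trim-one-high, trim-one-low rule are example choices from among the options the text mentions:

```python
from collections import Counter

def consensus(votes, threshold=0.75):
    """Return the majority answer if it reaches the consensus threshold;
    else None (the crowd response is given no weight in the final analysis)."""
    answer, count = Counter(votes).most_common(1)[0]
    return answer if count / len(votes) >= threshold else None

def net_score(scores):
    """Combine subjective 0-100 scores from plural reviewers, discarding
    the single high and single low outliers before averaging."""
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)
```

Per-reviewer weighting (e.g., for domain-qualified reviewers, discussed below) could be layered onto either function.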
- As suggested, the data returned from the Mechanical Turk calls may serve as a biasing factor, e.g., pushing an algorithmically determined output one way or another, to yield a final answer (e.g., a net score). Or the data returned from the Mechanical Turk calls may be treated as a definitive answer—with results from preceding processes disregarded.
- Sometimes the database search may reveal several candidate matches, all with comparable scores (which may be above the threshold Sy). Again, one or more calls to the Mechanical Turk service may be invoked to decide which match is the best, from a subjective human standpoint.
- Sometimes the Mechanical Turk service can be invoked even in situations where the original confidence score is below the threshold, Sx, which is normally taken as indicating “no match.” Thus, the service can be employed to effectively reduce this threshold—continuing to search for potential matches when the rote database search does not yield any results that appear reliable.
- The service can also be invoked to effect database pruning. For example, a database may be organized with several partitions (physical or logical), each containing information of a different class. In a facial recognition database, the data may be segregated by subject gender (i.e., male facial portraits, female facial portraits), and/or by age (15-40, 30-65, 55 and higher—data may sometimes be indexed in two or more classifications), etc. In an image database, the data may be segregated by topical classification (e.g., portrait, sports, news, landscape). In an audio database, the data may be segregated by type (spoken word, music, other). Each classification, in turn, can be further segregated (e.g., “music” may be divided into classical, country, rock, other). And these can be further segregated (e.g., “rock” may be classified by genre, such as soft rock, hard rock, Southern rock; by artist, e.g., Beatles, Rolling Stones, etc).
- A call to the Mechanical Turk can be made, passing the unknown content object (or an excerpt thereof) to a human reviewer, soliciting advice on classification. The human can indicate the apparent class to which the object belongs (e.g., is this a male or female face? Is this music classical, country, rock, or other?). Or, the human can indicate one or more classes to which the object does not belong.
- With such human advice (which, again, may involve several human reviewers, with a voting or scoring arrangement), the system can focus the database search where a correct match—if any—is more likely to be found (or avoid searching in unproductive database excerpts). This focusing can be done at different times. In one scenario it is done after a rote search is completed, in which the search results yield matches below the desired confidence level of Sy. If the database search space is thereafter restricted by application of human judgment, the search can be conducted again in the limited search space. A more thorough search can be undertaken in the indicated subset(s) of the database. Since a smaller excerpt is being searched, a looser criterion for a "match" might be employed, since the likelihood of false-positive matches is diminished. Thus, for example, the desired confidence level Sy might be reduced from 70 to 65. Or the threshold Sx at which "no match" is concluded may be reduced from 60 to 55. Alternatively, the focusing can be done before any rote searching is attempted.
- The result of such a human-focused search may reveal one or more candidate matches. The Mechanical Turk service may be called a second time, to vet the candidate matches—in the manner discussed above. This is one of several cases in which it may be desirable to cascade Mechanical Turk calls—the subsequent calls benefiting from the former.
- In the example just-given, the first Mechanical Turk call aids in pruning the database for subsequent search. The second call aids in assessing the results of that subsequent search. In other arrangements, Mechanical Turk calls of the same sort can be cascaded.
- For example, the Mechanical Turk first may be called to identify audio as music/speech/other. A second call may identify music (identified per the first call) as classical/country/rock/other. A third call may identify rock (identified per the second call) as Beatles/Rolling Stones/etc. Here, again, by iterative calling of a crowdsourcing service, a subjective judgment can be made that would be very difficult to achieve otherwise.
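The cascaded music/speech, then genre, then artist calls can be sketched as a walk down a classification tree, with one crowd query per level. The tree contents and the `ask` callback are illustrative stand-ins; in practice each query would be a Mechanical Turk task routed to (possibly pre-qualified) workers:

```python
def cascade_classify(clip, tree, ask):
    """Walk a classification tree, issuing one crowdsource query per level;
    each answer selects the subtree for the next, more specific query."""
    node, path = tree, []
    while isinstance(node, dict):
        choice = ask("Which class best fits this clip?", list(node), clip)
        path.append(choice)
        node = node[choice]
    return path

tree = {"music": {"classical": None, "country": None,
                  "rock": {"Beatles": None, "Rolling Stones": None}},
        "speech": None, "other": None}

# Simulated crowd answers, for illustration only:
answers = iter(["music", "rock", "Beatles"])
path = cascade_classify("unknown-clip", tree,
                        ask=lambda q, choices, clip: next(answers))
```

Each level could also apply the consensus logic described earlier, aggregating several workers' answers before descending.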
- In some arrangements, human reviewers are pre-qualified as knowledgeable in a specific domain (e.g., relatively expert in recognizing Beatles music). This qualification can be established by an online examination, which reviewers are invited to take to enable them to take on specific tasks (often at an increased rate of pay). Some queries may be routed only to individuals that are pre-qualified in a particular knowledge domain. In the cascaded example just given, for example, the third call might be routed to one or more users with demonstrated expertise with the Beatles (and, optionally, to one or more users with demonstrated expertise with the Rolling Stones, etc). A positive identification of the unknown content as sounding like the Beatles would be given more relative weight if coming from a human qualified in this knowledge domain. (Such weighting may be taken into account when aggregating results from plural human reviewers. For example, consider an unknown audio clip sent to six reviewers, two with expertise in the Beatles, two with expertise in the Rolling Stones, and two with expertise in the Grateful Dead. Assume the Beatles experts identify it as Beatles music, the Rolling Stones experts identify it as Grateful Dead music, and the Grateful Dead experts identify it as Rolling Stones music. Despite the fact that there are tie votes, and despite the fact that no selection earned a majority of the votes, the content identification service that made these calls and is provided with these results may logically conclude that the music is Beatles.)
- Calls to the Mechanical Turk service may request the human to provide metadata relevant to any content reviewed. This can include supposed artist(s), genre, title, subject, date, etc. This information (which may be ancillary to a main request, or may comprise the entirety of the request) can be entered into a database. For example, it can be entered into a fingerprint database—in association with the content reviewed by the human.
- Desirably, data gleaned from Mechanical Turk calls are entered into the database, and employed to enrich its data—and enrich information that can be later mined from the database. For example, if unknown content X has a fingerprint Fx, and through the Mechanical Turk service it is determined that this content is a match to reference content Y, with fingerprint Fy, then a corresponding notation can be added to the database, so that a later query on fingerprint Fx, (or close variants thereof) will indicate a match to content Y. (E.g., a lookup table initially indexed with a hash of the fingerprint Fx will point to the database record for content Y.)
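The enrichment step can be sketched as a lookup table keyed by a hash of the fingerprint, as the parenthetical suggests. The dictionary-and-`hash()` structure here is a simplification; a production system would use a persistent index tolerant of close fingerprint variants:

```python
fingerprint_index = {}   # hash of fingerprint -> matched content record

def enrich(index, unknown_fp, matched_content):
    """Record a Turk-confirmed match so that a later query on the same
    fingerprint resolves directly, without another human call."""
    index[hash(unknown_fp)] = matched_content

def lookup(index, fp):
    """Return the previously confirmed match for this fingerprint, if any."""
    return index.get(hash(fp))

enrich(fingerprint_index, "fingerprint-of-content-X", "content-Y")
```

Over time the database thus accumulates variant fingerprints (e.g., the rotated video discussed below) that all resolve to the same reference content.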
- Calls to outsourcing engines involve a time lag before results are returned. The calling system can generally cope, or be adapted to cope, with such lags.
- Consider a user social networking site such as YouTube (now owned by Google) that distributes "user generated content" (e.g., video files), and employs fingerprinting to recognize media content that should not be distributed. The site may check a video file at the time of its uploading with a fingerprint recognition system (e.g., of the sort offered by Audible Magic, or Gracenote). If no clear match is identified, the video may be indexed and stored on YouTube's servers, available for public downloading. Meanwhile, the content can be queued for review by one or more crowdsource reviewers. They may recognize it as a clip from the old TV sitcom "I Love Lucy"—perhaps digitally rotated 3 degrees to avoid fingerprint detection. This tentative identification is returned to YouTube from the API call. YouTube can check the returning metadata against a title list of works that should not be distributed (e.g., per the request of copyright owners), and may discover that "I Love Lucy" clips should not be distributed. It can then remove the content from public distribution. (This generally follows a double-check of the identification by a YouTube employee.) Additionally, the fingerprint database can be updated with the fingerprint of the rotated version of the I Love Lucy clip, allowing it to be immediately recognized the next time it is encountered.
- If the content is already being delivered to a user at the moment the determination is made (i.e., the determination that the content should not be distributed publicly), then the delivery can be interrupted. An explanatory message can be provided to the user (e.g., a splash screen presented at the interruption point in the video).
- Rotating a video by a few degrees is one of several hacks that can defeat fingerprint identification. (It is axiomatic that introduction of any new content protection technology draws hacker scrutiny. Familiar examples include attacks against Macrovision protection for VHS tapes, and against CSS protection for packaged DVD discs.) If fingerprinting is employed in content protection applications, such as in social networking sites (as outlined above) or peer-to-peer networks, its vulnerability to attack will eventually be determined and exploited.
- Each fingerprinting algorithm has particular weaknesses that can be exploited by hackers to defeat same. An example will help illustrate.
- A well known fingerprinting algorithm operates by repeatedly analyzing the frequency content of a short excerpt of an audio track (e.g., 0.4 seconds). The method determines the relative energy of this excerpt within 33 narrow frequency bands that logarithmically span the range 300 Hz-2000 Hz. A corresponding 32-bit identifier is then generated from the resulting data. In particular, a frequency band corresponds to a data bit "1" if its energy level is larger than that of the band above, and a "0" if its energy level is lower. (A more complex arrangement can also take variations over time into account, outputting a "1" only if the immediately preceding excerpt also met the same test, i.e., having a band energy greater than the band above.)
- Such a 32 bit identifier is computed every hundredth of a second or so, for the immediately preceding 0.4 second excerpt of the audio track, resulting in a large number of "fingerprints." This series of characteristic fingerprints can be stored in a database entry associated with the track, or only a subset may be stored (e.g., every fourth fingerprint).
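Assuming the 33 band energies for a 0.4 second excerpt have already been computed, the bit-derivation rule can be sketched as follows (the simple variant only, ignoring the temporal term; the function name is an assumption):

```python
def subfingerprint(energies):
    """Derive a 32-bit sub-fingerprint from 33 band energies: bit i is 1
    if band i has more energy than band i+1 (the band above), else 0."""
    assert len(energies) == 33
    bits = 0
    for i in range(32):
        bits = (bits << 1) | (1 if energies[i] > energies[i + 1] else 0)
    return bits
```

Running this every hundredth of a second over a track yields the stream of 32-bit sub-fingerprints described above.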
- When an unknown track is encountered, the same calculation process is repeated. The resulting set of data is then compared against data earlier stored in the database to try and identify a match. (As noted, various strategies can be employed to speed the search over a brute-force search technique, which yields unacceptable search times.)
- While the just-described technique is designed for audio identification, a similar arrangement can be used for video. Instead of energies in audio subbands, the algorithm can use average luminances of blocks into which the image is divided as the key perceptual features. Again, a fingerprint can be defined by determining whether the luminance in each block is larger or smaller than the luminance of the preceding block.
- While little has been written about attacks targeting fingerprinting systems, a casual examination of possible attack scenarios reveals several possibilities. A true hacker will probably see many more. Four simple approaches are discussed below.
- The reader may be familiar with different loudness profiles selectable on car radios, e.g., Jazz, Talk, Rock, etc. Each applies a different frequency equalization profile to the audio, e.g., making bass notes louder if the Rock setting is selected, and quieter if the Talk setting is selected, etc. The difference is often quite audible when switching between different settings.
- However, if the radio is simply turned on and tuned to different stations, the listener is generally unaware of which loudness profile is being employed. That is, without the ability to switch between different profiles, the frequency equalization imposed by a particular loudness profile is typically not noticed by a listener. The different loudness profiles, however, yield different fingerprints.
- For example, in the Rock setting, the 300 Hz energy in a particular 0.4 second excerpt may be greater than the 318 Hz energy. However, in the Talk setting, the situation may be reversed. This change prompts a change in the leading bit of the fingerprint.
- In practice, an attacker would probably apply loudness profiles more complex than those commonly available in car radios—increasing and decreasing the loudness at many different frequency bands (e.g., 32 different frequency bands). Significantly different fingerprints may thus be produced. Moreover, the loudness profile could change with time—further distancing the resulting fingerprint from the reference values stored in a database.
- Another process readily available to attackers is audio multiband compression, a form of processing that is commonly employed by broadcasters to increase the apparent loudness of their signal (most especially commercials). Such tools operate by reducing the dynamic range of a soundtrack—increasing the loudness of quiet passages on a band-by-band basis, to thereby achieve a higher average signal level. Again, this processing of the audio changes its fingerprint, yet is generally not objectionable to the listeners.
- The two examples given above are informal attacks—common signal processing techniques that yield, as side-effects, changes in audio fingerprints. Formal attacks (signal processing techniques that are optimized for the purpose of changing fingerprints) are numerous.
- Some formal attacks are based on psychoacoustic masking. This is the phenomenon by which, e.g., a loud sound at one instant (e.g., a drum beat) obscures a listener's ability to perceive a quieter sound at a later instant. Or the phenomenon by which a loud sound at one frequency (e.g., 338 Hz) obscures a listener's ability to perceive a quieter sound at a nearby frequency (e.g., 358 Hz) at the same instant. Research in this field goes back decades. (Modern watermarking software employs psychoacoustic masking in an advantageous way, to help hide extra data in audio and video content.)
- Hacking software, of course, can likewise examine a song's characteristics and identify the psychoacoustic masking opportunities it presents. Such software can then automatically make slight alterations in the song's frequency components in a way that a listener won't be able to note, yet in a way that will produce a different series of characteristic fingerprints. The processed song will be audibly indistinguishable from the original, but will not “match” any series of fingerprints in the database.
- Another formal attack targets fingerprint bit determinations that are near a threshold, and slightly adjusts the signal to swing the outcome the other way. Consider an audio excerpt that has the following respective energy levels (on a scale of 0-99), in the frequency bands indicated:
300 Hz | 318 Hz | 338 Hz | 358 Hz
---|---|---|---
69 | 71 | 70 | 68

- The algorithm detailed above would generate a fingerprint of {011 . . . } from this data (i.e., 69 is less than 71, so the first bit is ‘0’; 71 is greater than 70, so the second bit is ‘1’; 70 is greater than 68, so the third bit is ‘1’).
- Seeing that the energy levels are somewhat close, an attacker tool could slightly adjust the signal's spectral composition, so that the relative energy levels are as follows:
300 Hz | 318 Hz | 338 Hz | 358 Hz
---|---|---|---
70 (was 69) | 69 (was 71) | 70 | 68

- Instead of {011 . . . }, the fingerprint is now {101 . . . }. Two of the three illustrated fingerprint bits have been changed. Yet the change to the audio excerpt is essentially inaudible.
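The near-threshold attack above can be sketched in a few lines. This is a minimal illustration of the adjacent-band comparison rule described in the text (the function name and list representation are illustrative, not part of any actual fingerprinting product):

```python
# Each fingerprint bit compares the energies of two adjacent frequency bands:
# bit is 1 if the earlier band is louder, 0 otherwise (the rule used in the
# worked example above).

def fingerprint_bits(energies):
    """Return one bit per adjacent band pair."""
    return [1 if a > b else 0 for a, b in zip(energies, energies[1:])]

original = [69, 71, 70, 68]   # energies at 300, 318, 338, 358 Hz
tweaked  = [70, 69, 70, 68]   # attacker nudges the first two bands slightly

fingerprint_bits(original)    # [0, 1, 1] -> the {011...} of the text
fingerprint_bits(tweaked)     # [1, 0, 1] -> the {101...} after the attack
```

A one-unit change in two band energies, far below audibility, is enough to flip two of the three bits.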
- Other fingerprint hacking vulnerabilities arise from shortcuts employed in the database searching strategy—seeking to prune large segments of the data from further searching. For example, the system outlined above confines the large potential search space by assuming that there exists a 32 bit excerpt of the unknown song fingerprint that exactly matches (or matches with only one bit error) a 32 bit excerpt of fingerprint data in the reference database. The system looks at successive 32 bit excerpts from the unknown song fingerprint, and identifies all database fingerprints that include an excerpt presenting a very close match (i.e., 0 or 1 errors). A list of candidate song fingerprints is thereby identified that can be further checked to determine if any meets the looser match criteria generally used. (To allow non-exact fingerprint matches, the system generally allows up to 2047 bit errors in every 8192 bit block of fingerprint data.)
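The pruning strategy just described can be sketched as follows. This is an illustrative toy model, not Gracenote's implementation: fingerprints are shown as bit strings, and a reference song survives pruning only if some 32-bit excerpt of the query matches some 32-bit excerpt of the reference with at most one bit error (the looser 2047-in-8192 check applied to surviving candidates is omitted):

```python
def hamming(a, b):
    """Number of differing bits between two integers."""
    return bin(a ^ b).count("1")

def windows32(bits):
    """All 32-bit excerpts of a fingerprint bit string, as integers."""
    return [int(bits[i:i + 32], 2) for i in range(len(bits) - 31)]

def candidates(query_bits, reference_db):
    """Names of reference fingerprints sharing a 32-bit excerpt with
    at most one bit error -- everything else is pruned immediately."""
    q = windows32(query_bits)
    hits = []
    for name, ref_bits in reference_db.items():
        refs = windows32(ref_bits)
        if any(hamming(qw, rw) <= 1 for qw in q for rw in refs):
            hits.append(name)
    return hits
```

If an attacker forces at least two bit errors into every 32-bit window, `candidates` returns an empty list and the correct reference is never even reached by the looser final check.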
- The evident problem is: what if the correct “match” in the database has no 32 bit excerpt that corresponds—with just 1 or 0 bit errors—to a 32 bit excerpt from the unknown song? Such a correct match will never be found—it gets screened out at the outset.
- A hacker familiar with the system's principles will see that everything hinges on the assumption that a 32 bit string of fingerprint data will identically match (or match with only one bit error) a corresponding string in the reference database. Since these 32 bits are based on the strengths of 32 narrow frequency bands between 300 Hz and 2000 Hz, the spectrum of the content can readily be tweaked to violate this assumption, forcing a false-negative error. (E.g., notching out two of these narrow bands will force four bits of every 32 to a known state: two will go to zero, since these bands are lower in amplitude than the preceding bands, and two will go to one, since the following bands are higher in amplitude than these preceding, notched, bands. On average, half of these forced bits will be “wrong” compared to the untweaked music, leading to two bit errors, violating the assumption on which database pruning is based.)
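The effect of notching can be sketched with the same adjacent-band comparison rule used in the worked example earlier (a toy model with illustrative band energies, not the actual commercial algorithm): zeroing two interior bands pins every bit that compares against them, regardless of what the underlying content was.

```python
def bits(energies):
    """One bit per adjacent band pair: 1 if the earlier band is louder."""
    return [1 if a > b else 0 for a, b in zip(energies, energies[1:])]

def notch(energies, positions):
    """Simulate narrow notch filters by zeroing the given bands."""
    out = list(energies)
    for p in positions:
        out[p] = 0
    return out

# Two unrelated songs' band energies:
song_a = [50, 60, 55, 70, 65, 40]
song_b = [80, 20, 90, 30, 75, 85]

# Notch bands 1 and 3 in both.  The four bits touching the notched bands
# (bits 0-3) are forced to the same known pattern in both songs; only the
# last bit still reflects the original content.
bits(notch(song_a, [1, 3]))   # [1, 0, 1, 0, 1]
bits(notch(song_b, [1, 3]))   # [1, 0, 1, 0, 0]
```

Forced bits that disagree with the untweaked song's fingerprint become guaranteed bit errors in every 32-bit window, which is exactly what defeats the pruning assumption.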
- Attacks like the foregoing require a bit of effort. However, once an attacker makes the effort, the resulting hack can be spread quickly and widely.
- The exemplary fingerprinting technique noted above (which is understood to be the basis for Gracenote's commercial implementation, MusicID, built from technology licensed from Philips) is not unique in being vulnerable to various attacks. All fingerprinting techniques (including the recently announced MediaHedge, as well as CopySense and RepliCheck) are similarly believed to have vulnerabilities that can be exploited by hackers. (A quandary for potential adopters is that susceptibility of different techniques to different attacks has not been a focus of academic attention.)
- It will be recognized that crowdsourcing can help mitigate the vulnerabilities and uncertainties that are inherent in fingerprinting systems. Despite a “no-match” returned from the fingerprint-based content identification system (based on its rote search of the database for a fingerprint that matches that of the altered content), the techniques detailed herein allow human judgment to take a “second look.” Such techniques can identify content that has been altered to avoid its correct identification by fingerprint techniques. (Again, once such identification is made, corresponding information is desirably entered into the database to facilitate identification of the altered content next time.)
- It will be recognized that the “crowdsourcing” methodologies detailed above also have applicability to other tasks involved in the arrangements detailed in the specification, including all the documents incorporated by reference.
Claims (17)
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/613,891 US20070162761A1 (en) | 2005-12-23 | 2006-12-20 | Methods and Systems to Help Detect Identity Fraud |
US12/114,612 US8341412B2 (en) | 2005-12-23 | 2008-05-02 | Methods for identifying audio or video content |
US13/355,240 US20120123959A1 (en) | 2005-12-23 | 2012-01-20 | Methods and Systems to Help Detect Identity Fraud |
US13/686,541 US10242415B2 (en) | 2006-12-20 | 2012-11-27 | Method and system for determining content treatment |
US13/714,930 US8458482B2 (en) | 2005-12-23 | 2012-12-14 | Methods for identifying audio or video content |
US13/909,834 US8868917B2 (en) | 2005-12-23 | 2013-06-04 | Methods for identifying audio or video content |
US13/937,995 US8688999B2 (en) | 2005-12-23 | 2013-07-09 | Methods for identifying audio or video content |
US14/519,973 US9292513B2 (en) | 2005-12-23 | 2014-10-21 | Methods for identifying audio or video content |
US15/074,967 US10007723B2 (en) | 2005-12-23 | 2016-03-18 | Methods for identifying audio or video content |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US75365205P | 2005-12-23 | 2005-12-23 | |
US11/613,891 US20070162761A1 (en) | 2005-12-23 | 2006-12-20 | Methods and Systems to Help Detect Identity Fraud |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/114,612 Division US8341412B2 (en) | 2005-12-23 | 2008-05-02 | Methods for identifying audio or video content |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070162761A1 true US20070162761A1 (en) | 2007-07-12 |
Family
ID=38234123
Family Applications (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/613,891 Abandoned US20070162761A1 (en) | 2005-12-23 | 2006-12-20 | Methods and Systems to Help Detect Identity Fraud |
US12/114,612 Active 2030-03-10 US8341412B2 (en) | 2005-12-23 | 2008-05-02 | Methods for identifying audio or video content |
US13/355,240 Abandoned US20120123959A1 (en) | 2005-12-23 | 2012-01-20 | Methods and Systems to Help Detect Identity Fraud |
US13/714,930 Active US8458482B2 (en) | 2005-12-23 | 2012-12-14 | Methods for identifying audio or video content |
US13/909,834 Active US8868917B2 (en) | 2005-12-23 | 2013-06-04 | Methods for identifying audio or video content |
US13/937,995 Active US8688999B2 (en) | 2005-12-23 | 2013-07-09 | Methods for identifying audio or video content |
US14/519,973 Active US9292513B2 (en) | 2005-12-23 | 2014-10-21 | Methods for identifying audio or video content |
US15/074,967 Active US10007723B2 (en) | 2005-12-23 | 2016-03-18 | Methods for identifying audio or video content |
Family Applications After (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/114,612 Active 2030-03-10 US8341412B2 (en) | 2005-12-23 | 2008-05-02 | Methods for identifying audio or video content |
US13/355,240 Abandoned US20120123959A1 (en) | 2005-12-23 | 2012-01-20 | Methods and Systems to Help Detect Identity Fraud |
US13/714,930 Active US8458482B2 (en) | 2005-12-23 | 2012-12-14 | Methods for identifying audio or video content |
US13/909,834 Active US8868917B2 (en) | 2005-12-23 | 2013-06-04 | Methods for identifying audio or video content |
US13/937,995 Active US8688999B2 (en) | 2005-12-23 | 2013-07-09 | Methods for identifying audio or video content |
US14/519,973 Active US9292513B2 (en) | 2005-12-23 | 2014-10-21 | Methods for identifying audio or video content |
US15/074,967 Active US10007723B2 (en) | 2005-12-23 | 2016-03-18 | Methods for identifying audio or video content |
Country Status (1)
Country | Link |
---|---|
US (8) | US20070162761A1 (en) |
Cited By (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080104530A1 (en) * | 2006-10-31 | 2008-05-01 | Microsoft Corporation | Senseweb |
US20080133346A1 (en) * | 2006-11-30 | 2008-06-05 | Jyh-Herng Chow | Human responses and rewards for requests at web scale |
US20080208849A1 (en) * | 2005-12-23 | 2008-08-28 | Conwell William Y | Methods for Identifying Audio or Video Content |
US20080228733A1 (en) * | 2007-03-14 | 2008-09-18 | Davis Bruce L | Method and System for Determining Content Treatment |
US20080294647A1 (en) * | 2007-05-21 | 2008-11-27 | Arun Ramaswamy | Methods and apparatus to monitor content distributed by the internet |
US20080313011A1 (en) * | 2007-06-15 | 2008-12-18 | Robert Rose | Online marketing platform |
US20080313026A1 (en) * | 2007-06-15 | 2008-12-18 | Robert Rose | System and method for voting in online competitions |
US20090119264A1 (en) * | 2007-11-05 | 2009-05-07 | Chacha Search, Inc | Method and system of accessing information |
US20090124355A1 (en) * | 2007-11-12 | 2009-05-14 | Acres-Fiore, Inc. | System for attributing gameplay credit to a player |
US20090240624A1 (en) * | 2008-03-20 | 2009-09-24 | Modasolutions Corporation | Risk detection and assessment of cash payment for electronic purchase transactions |
US20090319274A1 (en) * | 2008-06-23 | 2009-12-24 | John Nicholas Gross | System and Method for Verifying Origin of Input Through Spoken Language Analysis |
US20090328150A1 (en) * | 2008-06-27 | 2009-12-31 | John Nicholas Gross | Progressive Pictorial & Motion Based CAPTCHAs |
US20100094849A1 (en) * | 2007-08-17 | 2010-04-15 | Robert Rose | Systems and methods for creating user generated content incorporating content from a content catalog |
US20100167256A1 (en) * | 2008-02-14 | 2010-07-01 | Douglas Michael Blash | System and method for global historical database |
US20100287201A1 (en) * | 2008-01-04 | 2010-11-11 | Koninklijke Philips Electronics N.V. | Method and a system for identifying elementary content portions from an edited content |
WO2011022051A1 (en) * | 2009-08-18 | 2011-02-24 | Alibaba Group Holding Limited | User verification using voice based password |
US20110137855A1 (en) * | 2009-12-08 | 2011-06-09 | Xerox Corporation | Music recognition method and system based on socialized music server |
US20110161225A1 (en) * | 2009-12-30 | 2011-06-30 | Infosys Technologies Limited | Method and system for processing loan applications in a financial institution |
US20110265162A1 (en) * | 2010-04-21 | 2011-10-27 | International Business Machines Corporation | Holistic risk-based identity establishment for eligibility determinations in context of an application |
US20110295591A1 (en) * | 2010-05-28 | 2011-12-01 | Palo Alto Research Center Incorporated | System and method to acquire paraphrases |
US20120054194A1 (en) * | 2009-05-08 | 2012-03-01 | Dolby Laboratories Licensing Corporation | Storing and Searching Fingerprints Derived from Media Content Based on a Classification of the Media Content |
US8458010B1 (en) | 2009-10-13 | 2013-06-04 | Amazon Technologies, Inc. | Monitoring and enforcing price parity |
US20140025741A1 (en) * | 2008-04-17 | 2014-01-23 | Gary Stephen Shuster | Evaluation of remote user attributes in a social networking environment |
US8656298B2 (en) | 2007-11-30 | 2014-02-18 | Social Mecca, Inc. | System and method for conducting online campaigns |
US20140114984A1 (en) * | 2012-04-19 | 2014-04-24 | Wonga Technology Limited | Method and system for user authentication |
US20140211044A1 (en) * | 2013-01-25 | 2014-07-31 | Electronics And Telecommunications Research Institute | Method and system for generating image knowledge contents based on crowdsourcing |
US20140236851A1 (en) * | 2013-02-19 | 2014-08-21 | Digitalglobe, Inc. | Crowdsourced search and locate platform |
US20140244495A1 (en) * | 2013-02-26 | 2014-08-28 | Digimarc Corporation | Methods and arrangements for smartphone payments |
US20140258110A1 (en) * | 2013-03-11 | 2014-09-11 | Digimarc Corporation | Methods and arrangements for smartphone payments and transactions |
US8909475B2 (en) | 2013-03-08 | 2014-12-09 | Zzzoom, LLC | Generating transport routes using public and private modes |
US8935745B2 (en) | 2006-08-29 | 2015-01-13 | Attributor Corporation | Determination of originality of content |
US20150106265A1 (en) * | 2013-10-11 | 2015-04-16 | Telesign Corporation | System and methods for processing a communication number for fraud prevention |
US9031919B2 (en) | 2006-08-29 | 2015-05-12 | Attributor Corporation | Content monitoring and compliance enforcement |
US9053182B2 (en) | 2011-01-27 | 2015-06-09 | International Business Machines Corporation | System and method for making user generated audio content on the spoken web navigable by community tagging |
US20150161611A1 (en) * | 2013-12-10 | 2015-06-11 | Sas Institute Inc. | Systems and Methods for Self-Similarity Measure |
US9082134B2 (en) | 2013-03-08 | 2015-07-14 | Zzzoom, LLC | Displaying advertising using transit time data |
WO2015157344A3 (en) * | 2014-04-07 | 2015-12-10 | Digitalglobe, Inc. | Systems and methods for large scale crowdsourcing of map data location, cleanup, and correction |
US9294456B1 (en) * | 2013-07-25 | 2016-03-22 | Amazon Technologies, Inc. | Gaining access to an account through authentication |
US9342670B2 (en) | 2006-08-29 | 2016-05-17 | Attributor Corporation | Content monitoring and host compliance evaluation |
EP3223228A1 (en) * | 2016-03-21 | 2017-09-27 | Facebook Inc. | Systems and methods for identifying matching content in a social network |
US9954942B2 (en) | 2013-12-11 | 2018-04-24 | Entit Software Llc | Result aggregation |
US10078645B2 (en) * | 2013-02-19 | 2018-09-18 | Digitalglobe, Inc. | Crowdsourced feature identification and orthorectification |
US20190042961A1 (en) * | 2017-08-07 | 2019-02-07 | Securiport Llc | Multi-mode data collection and traveler processing |
US10242415B2 (en) | 2006-12-20 | 2019-03-26 | Digimarc Corporation | Method and system for determining content treatment |
US10325603B2 (en) * | 2015-06-17 | 2019-06-18 | Baidu Online Network Technology (Beijing) Co., Ltd. | Voiceprint authentication method and apparatus |
US10346495B2 (en) * | 2013-02-19 | 2019-07-09 | Digitalglobe, Inc. | System and method for large scale crowdsourcing of map data cleanup and correction |
US10348586B2 (en) | 2009-10-23 | 2019-07-09 | Www.Trustscience.Com Inc. | Parallel computatonal framework and application server for determining path connectivity |
US10380703B2 (en) | 2015-03-20 | 2019-08-13 | Www.Trustscience.Com Inc. | Calculating a trust score |
US10395041B1 (en) * | 2018-10-31 | 2019-08-27 | Capital One Services, Llc | Methods and systems for reducing false positive findings |
US10419489B2 (en) * | 2017-05-04 | 2019-09-17 | International Business Machines Corporation | Unidirectional trust based decision making for information technology conversation agents |
US10438152B1 (en) * | 2008-01-25 | 2019-10-08 | Amazon Technologies, Inc. | Managing performance of human review of media data |
CN110674704A (en) * | 2019-09-05 | 2020-01-10 | 同济大学 | Crowd density estimation method and device based on multi-scale expansion convolutional network |
US11049094B2 (en) | 2014-02-11 | 2021-06-29 | Digimarc Corporation | Methods and arrangements for device to device communication |
US11210417B2 (en) | 2016-09-26 | 2021-12-28 | Advanced New Technologies Co., Ltd. | Identity recognition method and device |
US11323347B2 (en) | 2009-09-30 | 2022-05-03 | Www.Trustscience.Com Inc. | Systems and methods for social graph data analytics to determine connectivity within a community |
US11321774B2 (en) | 2018-01-30 | 2022-05-03 | Pointpredictive, Inc. | Risk-based machine learning classifier |
US11341145B2 (en) | 2016-02-29 | 2022-05-24 | Www.Trustscience.Com Inc. | Extrapolating trends in trust scores |
US11386129B2 (en) | 2016-02-17 | 2022-07-12 | Www.Trustscience.Com Inc. | Searching for entities based on trust score and geography |
US11423405B2 (en) | 2019-09-10 | 2022-08-23 | International Business Machines Corporation | Peer validation for unauthorized transactions |
US11482242B2 (en) * | 2017-10-18 | 2022-10-25 | Beijing Dajia Internet Information Technology Co., Ltd. | Audio recognition method, device and server |
US11640569B2 (en) | 2016-03-24 | 2023-05-02 | Www.Trustscience.Com Inc. | Learning an entity's trust model and risk tolerance to calculate its risk-taking score |
US11966372B1 (en) * | 2020-05-01 | 2024-04-23 | Bottomline Technologies, Inc. | Database record combination |
Families Citing this family (91)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6662194B1 (en) * | 1999-07-31 | 2003-12-09 | Raymond Anthony Joao | Apparatus and method for providing recruitment information |
US7566002B2 (en) * | 2005-01-06 | 2009-07-28 | Early Warning Services, Llc | Identity verification systems and methods |
US8369570B2 (en) | 2005-09-28 | 2013-02-05 | Facedouble, Inc. | Method and system for tagging an image of an individual in a plurality of photos |
US7450740B2 (en) | 2005-09-28 | 2008-11-11 | Facedouble, Inc. | Image classification and information retrieval over wireless digital networks and the internet |
US8311294B2 (en) * | 2009-09-08 | 2012-11-13 | Facedouble, Inc. | Image classification and information retrieval over wireless digital networks and the internet |
US8600174B2 (en) | 2005-09-28 | 2013-12-03 | Facedouble, Inc. | Method and system for attaching a metatag to a digital image |
US8326775B2 (en) * | 2005-10-26 | 2012-12-04 | Cortica Ltd. | Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof |
US8099311B2 (en) * | 2007-02-23 | 2012-01-17 | CrowdEngineering, Inc. | System and method for routing tasks to a user in a workforce |
EP2188711A4 (en) * | 2007-09-14 | 2012-08-22 | Auditude Inc | Restoring program information for clips of broadcast programs shared online |
US8775605B2 (en) * | 2009-09-29 | 2014-07-08 | At&T Intellectual Property I, L.P. | Method and apparatus to identify outliers in social networks |
US8121618B2 (en) | 2009-10-28 | 2012-02-21 | Digimarc Corporation | Intuitive computing methods and systems |
US8175617B2 (en) | 2009-10-28 | 2012-05-08 | Digimarc Corporation | Sensor-based mobile search, related methods and systems |
EP2363852B1 (en) * | 2010-03-04 | 2012-05-16 | Deutsche Telekom AG | Computer-based method and system of assessing intelligibility of speech represented by a speech signal |
US9185458B2 (en) * | 2010-04-02 | 2015-11-10 | Yahoo! Inc. | Signal-driven interactive television |
US20110307806A1 (en) * | 2010-06-14 | 2011-12-15 | Matthew Hills | Multiple party decision process |
US20120109623A1 (en) * | 2010-11-01 | 2012-05-03 | Microsoft Corporation | Stimulus Description Collections |
US9384408B2 (en) | 2011-01-12 | 2016-07-05 | Yahoo! Inc. | Image analysis system and method using image recognition and text search |
US20120232987A1 (en) * | 2011-03-10 | 2012-09-13 | Everingham James R | Image-based search interface |
US8533146B1 (en) | 2011-04-29 | 2013-09-10 | Google Inc. | Identification of over-clustered map features |
US20130006951A1 (en) * | 2011-05-30 | 2013-01-03 | Lei Yu | Video dna (vdna) method and system for multi-dimensional content matching |
US8706499B2 (en) * | 2011-08-16 | 2014-04-22 | Facebook, Inc. | Periodic ambient waveform analysis for enhanced social functions |
US8635519B2 (en) | 2011-08-26 | 2014-01-21 | Luminate, Inc. | System and method for sharing content based on positional tagging |
US20130086112A1 (en) | 2011-10-03 | 2013-04-04 | James R. Everingham | Image browsing system and method for a digital content platform |
US8737678B2 (en) | 2011-10-05 | 2014-05-27 | Luminate, Inc. | Platform for providing interactive applications on a digital content platform |
USD736224S1 (en) | 2011-10-10 | 2015-08-11 | Yahoo! Inc. | Portion of a display screen with a graphical user interface |
USD737290S1 (en) | 2011-10-10 | 2015-08-25 | Yahoo! Inc. | Portion of a display screen with a graphical user interface |
US8831763B1 (en) * | 2011-10-18 | 2014-09-09 | Google Inc. | Intelligent interest point pruning for audio matching |
US9257056B2 (en) | 2011-10-31 | 2016-02-09 | Google Inc. | Proactive user-based content correction and enrichment for geo data |
US8832116B1 (en) | 2012-01-11 | 2014-09-09 | Google Inc. | Using mobile application logs to measure and maintain accuracy of business information |
US8880452B2 (en) * | 2012-01-31 | 2014-11-04 | The United States Of America As Represented By The Secretary Of The Navy | Data structure and method for performing mishap risk of a system |
US9599466B2 (en) | 2012-02-03 | 2017-03-21 | Eagle View Technologies, Inc. | Systems and methods for estimation of building wall area |
US8255495B1 (en) | 2012-03-22 | 2012-08-28 | Luminate, Inc. | Digital image and content display systems and methods |
US9754585B2 (en) | 2012-04-03 | 2017-09-05 | Microsoft Technology Licensing, Llc | Crowdsourced, grounded language for intent modeling in conversational interfaces |
US8234168B1 (en) | 2012-04-19 | 2012-07-31 | Luminate, Inc. | Image content and quality assurance system and method |
US8495489B1 (en) | 2012-05-16 | 2013-07-23 | Luminate, Inc. | System and method for creating and displaying image annotations |
US20140015749A1 (en) * | 2012-07-10 | 2014-01-16 | University Of Rochester, Office Of Technology Transfer | Closed-loop crowd control of existing interface |
JP6112823B2 (en) * | 2012-10-30 | 2017-04-12 | キヤノン株式会社 | Information processing apparatus, information processing method, and computer-readable program |
US9116880B2 (en) | 2012-11-30 | 2015-08-25 | Microsoft Technology Licensing, Llc | Generating stimuli for use in soliciting grounded linguistic information |
US9159327B1 (en) * | 2012-12-20 | 2015-10-13 | Google Inc. | System and method for adding pitch shift resistance to an audio fingerprint |
US9146990B2 (en) * | 2013-01-07 | 2015-09-29 | Gracenote, Inc. | Search and identification of video content |
US9323840B2 (en) * | 2013-01-07 | 2016-04-26 | Gracenote, Inc. | Video fingerprinting |
US9495451B2 (en) | 2013-01-07 | 2016-11-15 | Gracenote, Inc. | Identifying video content via fingerprint matching |
US10223349B2 (en) | 2013-02-20 | 2019-03-05 | Microsoft Technology Licensing Llc | Inducing and applying a subject-targeted context free grammar |
US9344759B2 (en) | 2013-03-05 | 2016-05-17 | Google Inc. | Associating audio tracks of an album with video content |
US8966659B2 (en) * | 2013-03-14 | 2015-02-24 | Microsoft Technology Licensing, Llc | Automatic fraudulent digital certificate detection |
WO2014151122A1 (en) * | 2013-03-15 | 2014-09-25 | Eagle View Technologies, Inc. | Methods for risk management assessment of property |
US9449216B1 (en) * | 2013-04-10 | 2016-09-20 | Amazon Technologies, Inc. | Detection of cast members in video content |
US9659014B1 (en) * | 2013-05-01 | 2017-05-23 | Google Inc. | Audio and video matching using a hybrid of fingerprinting and content based classification |
US9542488B2 (en) | 2013-08-02 | 2017-01-10 | Google Inc. | Associating audio tracks with video content |
US9465995B2 (en) | 2013-10-23 | 2016-10-11 | Gracenote, Inc. | Identifying video content via color-based fingerprint matching |
WO2015167901A1 (en) * | 2014-04-28 | 2015-11-05 | Gracenote, Inc. | Video fingerprinting |
US9832538B2 (en) | 2014-06-16 | 2017-11-28 | Cisco Technology, Inc. | Synchronizing broadcast timeline metadata |
US9536546B2 (en) | 2014-08-07 | 2017-01-03 | Google Inc. | Finding differences in nearly-identical audio recordings |
US9986280B2 (en) * | 2015-04-11 | 2018-05-29 | Google Llc | Identifying reference content that includes third party content |
US10051121B2 (en) | 2015-04-20 | 2018-08-14 | Youmail, Inc. | System and method for identifying unwanted communications using communication fingerprinting |
CN106294331B (en) * | 2015-05-11 | 2020-01-21 | 阿里巴巴集团控股有限公司 | Audio information retrieval method and device |
EP3345349A4 (en) * | 2015-09-05 | 2019-08-14 | Nudata Security Inc. | Systems and methods for detecting and scoring anomalies |
US9922475B2 (en) | 2015-09-11 | 2018-03-20 | Comcast Cable Communications, Llc | Consensus based authentication and authorization process |
KR102424839B1 (en) | 2015-10-14 | 2022-07-25 | 삼성전자주식회사 | Display apparatus and method of controlling thereof |
US10936651B2 (en) | 2016-06-22 | 2021-03-02 | Gracenote, Inc. | Matching audio fingerprints |
US10289815B2 (en) | 2016-08-15 | 2019-05-14 | International Business Machines Corporation | Video file attribution |
US10972494B2 (en) | 2016-10-10 | 2021-04-06 | BugCrowd, Inc. | Vulnerability detection in IT assets by utilizing crowdsourcing techniques |
CN106649742B (en) * | 2016-12-26 | 2023-04-18 | 上海智臻智能网络科技股份有限公司 | Database maintenance method and device |
US11760387B2 (en) | 2017-07-05 | 2023-09-19 | AutoBrains Technologies Ltd. | Driving policies determination |
US11899707B2 (en) | 2017-07-09 | 2024-02-13 | Cortica Ltd. | Driving policies determination |
CN107392666A (en) * | 2017-07-24 | 2017-11-24 | 北京奇艺世纪科技有限公司 | Advertisement data processing method, device and advertisement placement method and device |
US11853932B2 (en) | 2017-12-19 | 2023-12-26 | Bugcrowd Inc. | Intermediated communication in a crowdsourced environment |
WO2019194794A1 (en) * | 2018-04-03 | 2019-10-10 | Vydia, Inc. | Social media content management |
US11181384B2 (en) * | 2018-07-23 | 2021-11-23 | Waymo Llc | Verifying map data using challenge questions |
US20200133308A1 (en) | 2018-10-18 | 2020-04-30 | Cartica Ai Ltd | Vehicle to vehicle (v2v) communication less truck platooning |
US11126870B2 (en) | 2018-10-18 | 2021-09-21 | Cartica Ai Ltd. | Method and system for obstacle detection |
US10839694B2 (en) | 2018-10-18 | 2020-11-17 | Cartica Ai Ltd | Blind spot alert |
US11181911B2 (en) | 2018-10-18 | 2021-11-23 | Cartica Ai Ltd | Control transfer of a vehicle |
US10748038B1 (en) | 2019-03-31 | 2020-08-18 | Cortica Ltd. | Efficient calculation of a robust signature of a media unit |
US11700356B2 (en) | 2018-10-26 | 2023-07-11 | AutoBrains Technologies Ltd. | Control transfer of a vehicle |
US10789535B2 (en) | 2018-11-26 | 2020-09-29 | Cartica Ai Ltd | Detection of road elements |
US11643005B2 (en) | 2019-02-27 | 2023-05-09 | Autobrains Technologies Ltd | Adjusting adjustable headlights of a vehicle |
US11285963B2 (en) | 2019-03-10 | 2022-03-29 | Cartica Ai Ltd. | Driver-based prediction of dangerous events |
US11694088B2 (en) | 2019-03-13 | 2023-07-04 | Cortica Ltd. | Method for object detection using knowledge distillation |
US11132548B2 (en) | 2019-03-20 | 2021-09-28 | Cortica Ltd. | Determining object information that does not explicitly appear in a media unit signature |
US11222069B2 (en) | 2019-03-31 | 2022-01-11 | Cortica Ltd. | Low-power calculation of a signature of a media unit |
US10776669B1 (en) | 2019-03-31 | 2020-09-15 | Cortica Ltd. | Signature generation and object detection that refer to rare scenes |
US10796444B1 (en) | 2019-03-31 | 2020-10-06 | Cortica Ltd | Configuring spanning elements of a signature generator |
US10789527B1 (en) | 2019-03-31 | 2020-09-29 | Cortica Ltd. | Method for object detection using shallow neural networks |
US10839060B1 (en) * | 2019-08-27 | 2020-11-17 | Capital One Services, Llc | Techniques for multi-voice speech recognition commands |
US10748022B1 (en) | 2019-12-12 | 2020-08-18 | Cartica Ai Ltd | Crowd separation |
US11593662B2 (en) | 2019-12-12 | 2023-02-28 | Autobrains Technologies Ltd | Unsupervised cluster generation |
US11590988B2 (en) | 2020-03-19 | 2023-02-28 | Autobrains Technologies Ltd | Predictive turning assistant |
US11827215B2 (en) | 2020-03-31 | 2023-11-28 | AutoBrains Technologies Ltd. | Method for training a driving related object detector |
US11756424B2 (en) | 2020-07-24 | 2023-09-12 | AutoBrains Technologies Ltd. | Parking assist |
US20230421516A1 (en) * | 2022-06-22 | 2023-12-28 | Ogenus Srl | Method and system for transmitting information and data |
Citations (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5513994A (en) * | 1993-09-30 | 1996-05-07 | Educational Testing Service | Centralized system and method for administering computer based tests |
US5565316A (en) * | 1992-10-09 | 1996-10-15 | Educational Testing Service | System and method for computer based testing |
US5679938A (en) * | 1994-12-02 | 1997-10-21 | Telecheck International, Inc. | Methods and systems for interactive check authorizations |
US6292575B1 (en) * | 1998-07-20 | 2001-09-18 | Lau Technologies | Real-time facial recognition and verification system |
US6301370B1 (en) * | 1998-04-13 | 2001-10-09 | Eyematic Interfaces, Inc. | Face recognition from video images |
US20020031253A1 (en) * | 1998-12-04 | 2002-03-14 | Orang Dialameh | System and method for feature location and tracking in multiple dimensions including depth |
US6430306B2 (en) * | 1995-03-20 | 2002-08-06 | Lau Technologies | Systems and methods for identifying images |
US6466695B1 (en) * | 1999-08-04 | 2002-10-15 | Eyematic Interfaces, Inc. | Procedure for automatic analysis of images and image sequences based on two-dimensional shape primitives |
US6513018B1 (en) * | 1994-05-05 | 2003-01-28 | Fair, Isaac And Company, Inc. | Method and apparatus for scoring the likelihood of a desired performance result |
US20030052768A1 (en) * | 2001-09-17 | 2003-03-20 | Maune James J. | Security method and system |
US6563950B1 (en) * | 1996-06-25 | 2003-05-13 | Eyematic Interfaces, Inc. | Labeled bunch graphs for image analysis |
US20030099379A1 (en) * | 2001-11-26 | 2003-05-29 | Monk Bruce C. | Validation and verification apparatus and method |
US20030115459A1 (en) * | 2001-12-17 | 2003-06-19 | Monk Bruce C. | Document and bearer verification system |
US6597775B2 (en) * | 2000-09-29 | 2003-07-22 | Fair Isaac Corporation | Self-learning real-time prioritization of telecommunication fraud control actions |
US20030216988A1 (en) * | 2002-05-17 | 2003-11-20 | Cassandra Mollett | Systems and methods for using phone number validation in a risk assessment |
US20040019807A1 (en) * | 2002-05-15 | 2004-01-29 | Zone Labs, Inc. | System And Methodology For Providing Community-Based Security Policies |
US20040059953A1 (en) * | 2002-09-24 | 2004-03-25 | Arinc | Methods and systems for identity management |
US20040064415A1 (en) * | 2002-07-12 | 2004-04-01 | Abdallah David S. | Personal authentication software and systems for travel privilege assignation and verification |
US20040153663A1 (en) * | 2002-11-01 | 2004-08-05 | Clark Robert T. | System, method and computer program product for assessing risk of identity theft |
US20040189441A1 (en) * | 2003-03-24 | 2004-09-30 | Kosmas Stergiou | Apparatus and methods for verification and authentication employing voluntary attributes, knowledge management and databases |
US20040205030A1 (en) * | 2001-10-24 | 2004-10-14 | Capital Confirmation, Inc. | Systems, methods and computer readable medium providing automated third-party confirmations |
US20040213437A1 (en) * | 2002-11-26 | 2004-10-28 | Howard James V | Systems and methods for managing and detecting fraud in image databases used with identification documents |
US20040230527A1 (en) * | 2003-04-29 | 2004-11-18 | First Data Corporation | Authentication for online money transfers |
US20050114679A1 (en) * | 2003-11-26 | 2005-05-26 | Amit Bagga | Method and apparatus for extracting authentication information from a user |
US20050132235A1 (en) * | 2003-12-15 | 2005-06-16 | Remco Teunen | System and method for providing improved claimant authentication |
US20050141707A1 (en) * | 2002-02-05 | 2005-06-30 | Haitsma Jaap A. | Efficient storage of fingerprints |
US20050154924A1 (en) * | 1998-02-13 | 2005-07-14 | Scheidt Edward M. | Multiple factor-based user identification and authentication |
US20050171851A1 (en) * | 2004-01-30 | 2005-08-04 | Applebaum Ted H. | Multiple choice challenge-response user authorization system and method |
US6931451B1 (en) * | 1996-10-03 | 2005-08-16 | Gotuit Media Corp. | Systems and methods for modifying broadcast programming |
US6944604B1 (en) * | 2001-07-03 | 2005-09-13 | Fair Isaac Corporation | Mechanism and method for specified temporal deployment of rules within a rule server |
US6965889B2 (en) * | 2000-05-09 | 2005-11-15 | Fair Isaac Corporation | Approach for generating rules |
US6968328B1 (en) * | 2000-12-29 | 2005-11-22 | Fair Isaac Corporation | Method and system for implementing rules and ruleflows |
US20050259819A1 (en) * | 2002-06-24 | 2005-11-24 | Koninklijke Philips Electronics | Method for generating hashes from a compressed multimedia content |
US20050288952A1 (en) * | 2004-05-18 | 2005-12-29 | Davis Bruce L | Official documents and methods of issuance |
US20060075237A1 (en) * | 2002-11-12 | 2006-04-06 | Koninklijke Philips Electronics N.V. | Fingerprinting multimedia contents |
US20060074986A1 (en) * | 2004-08-20 | 2006-04-06 | Viisage Technology, Inc. | Method and system to authenticate an object |
US20060106774A1 (en) * | 2004-11-16 | 2006-05-18 | Cohen Peter D | Using qualifications of users to facilitate user performance of tasks |
US20060106675A1 (en) * | 2004-11-16 | 2006-05-18 | Cohen Peter D | Providing an electronic marketplace to facilitate human performance of programmatically submitted tasks |
US7367058B2 (en) * | 2001-05-25 | 2008-04-29 | United States Postal Service | Encoding method |
Family Cites Families (321)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US609182A (en) * | 1898-08-16 | Dumping-vehicle | ||
US4677466A (en) | 1985-07-29 | 1987-06-30 | A. C. Nielsen Company | Broadcast program identification method and apparatus |
US5210820A (en) | 1990-05-02 | 1993-05-11 | Broadcast Data Systems Limited Partnership | Signal recognition system and method |
US6122403A (en) | 1995-07-27 | 2000-09-19 | Digimarc Corporation | Computer system linked by using information in data objects |
US7113615B2 (en) | 1993-11-18 | 2006-09-26 | Digimarc Corporation | Watermark embedder and reader |
US8505108B2 (en) | 1993-11-18 | 2013-08-06 | Digimarc Corporation | Authentication using a digital watermark |
US6947571B1 (en) | 1999-05-19 | 2005-09-20 | Digimarc Corporation | Cell phones with optical capabilities, and related applications |
JPH08263438A (en) | 1994-11-23 | 1996-10-11 | Xerox Corp | Distribution and use control system of digital work and access control method to digital work |
US5634012A (en) | 1994-11-23 | 1997-05-27 | Xerox Corporation | System for controlling the distribution and use of digital works having a fee reporting mechanism |
US5629980A (en) | 1994-11-23 | 1997-05-13 | Xerox Corporation | System for controlling the distribution and use of digital works |
US5715403A (en) | 1994-11-23 | 1998-02-03 | Xerox Corporation | System for controlling the distribution and use of digital works having attached usage rights where the usage rights are defined by a usage rights grammar |
US5774525A (en) * | 1995-01-23 | 1998-06-30 | International Business Machines Corporation | Method and apparatus utilizing dynamic questioning to provide secure access control |
CN100501754C (en) | 1995-02-13 | 2009-06-17 | 英特特拉斯特技术公司 | Systems and methods for secure transaction management and electronic rights protection |
US7124302B2 (en) | 1995-02-13 | 2006-10-17 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US7133846B1 (en) | 1995-02-13 | 2006-11-07 | Intertrust Technologies Corp. | Digital certificate support system, methods and techniques for secure electronic commerce transaction and rights management |
US6829368B2 (en) | 2000-01-26 | 2004-12-07 | Digimarc Corporation | Establishing and interacting with on-line media collections using identifiers in media signals |
US7562392B1 (en) | 1999-05-19 | 2009-07-14 | Digimarc Corporation | Methods of interacting with audio and ambient music |
US6505160B1 (en) | 1995-07-27 | 2003-01-07 | Digimarc Corporation | Connected audio and other media objects |
US7711564B2 (en) * | 1995-07-27 | 2010-05-04 | Digimarc Corporation | Connected audio and other media objects |
US5765152A (en) | 1995-10-13 | 1998-06-09 | Trustees Of Dartmouth College | System and method for managing copyrighted electronic media |
US7047241B1 (en) | 1995-10-13 | 2006-05-16 | Digimarc Corporation | System and methods for managing digital creative works |
US5664018A (en) | 1996-03-12 | 1997-09-02 | Leighton; Frank Thomson | Watermarking process resilient to collusion attacks |
US5913205A (en) | 1996-03-29 | 1999-06-15 | Virage, Inc. | Query optimization for visual information retrieval system |
US7346472B1 (en) | 2000-09-07 | 2008-03-18 | Blue Spike, Inc. | Method and device for monitoring and analyzing signals |
US7159116B2 (en) | 1999-12-07 | 2007-01-02 | Blue Spike, Inc. | Systems, methods and devices for trusted transactions |
US5918223A (en) | 1996-07-22 | 1999-06-29 | Muscle Fish | Method and article of manufacture for content-based analysis, storage, retrieval, and segmentation of audio information |
US6108637A (en) | 1996-09-03 | 2000-08-22 | Nielsen Media Research, Inc. | Content display monitor |
US6647548B1 (en) | 1996-09-06 | 2003-11-11 | Nielsen Media Research, Inc. | Coded/non-coded program audience measurement system |
US20030093790A1 (en) | 2000-03-28 | 2003-05-15 | Logan James D. | Audio and video program recording, editing and playback systems using metadata |
US5892536A (en) | 1996-10-03 | 1999-04-06 | Personal Audio | Systems and methods for computer enhanced broadcast monitoring |
US20020120925A1 (en) | 2000-03-28 | 2002-08-29 | Logan James D. | Audio and video program recording, editing and playback systems using metadata |
US5983351A (en) | 1996-10-16 | 1999-11-09 | Intellectual Protocols, L.L.C. | Web site copyright registration system and method |
US5991429A (en) * | 1996-12-06 | 1999-11-23 | Coffin; Jeffrey S. | Facial recognition system for security access and identification |
US6251016B1 (en) | 1997-01-07 | 2001-06-26 | Fujitsu Limited | Information offering system for providing a lottery on a network |
JP3901268B2 (en) | 1997-01-23 | 2007-04-04 | ソニー株式会社 | Information signal output control device, information signal output control method, information signal duplication prevention device, and information signal duplication prevention method |
WO1998037473A2 (en) | 1997-02-07 | 1998-08-27 | General Internet, Inc. | Collaborative internet data mining system |
CA2627374A1 (en) * | 1997-03-21 | 1998-10-01 | Educational Testing Service | Methods and systems for presentation and evaluation of constructed responses assessed by human evaluators |
US6783459B2 (en) * | 1997-08-22 | 2004-08-31 | Blake Cumbers | Passive biometric customer identification and tracking system |
US6035055A (en) | 1997-11-03 | 2000-03-07 | Hewlett-Packard Company | Digital image management system in a distributed data access network system |
US6055538A (en) | 1997-12-22 | 2000-04-25 | Hewlett Packard Company | Methods and system for using web browser to search large collections of documents |
US6601172B1 (en) | 1997-12-31 | 2003-07-29 | Philips Electronics North America Corp. | Transmitting revisions with digital signatures |
US6091822A (en) | 1998-01-08 | 2000-07-18 | Macrovision Corporation | Method and apparatus for recording scrambled video audio signals and playing back said video signal, descrambled, within a secure environment |
DE69908226T2 (en) | 1998-03-19 | 2004-03-25 | Tomonari Sonoda | Device and method for finding melodies |
US7051004B2 (en) | 1998-04-03 | 2006-05-23 | Macrovision Corporation | System and methods providing secure delivery of licenses and content |
US7756892B2 (en) | 2000-05-02 | 2010-07-13 | Digimarc Corporation | Using embedded data with file sharing |
US7689532B1 (en) | 2000-07-20 | 2010-03-30 | Digimarc Corporation | Using embedded data with file sharing |
CA2357003C (en) * | 1998-05-21 | 2002-04-09 | Equifax Inc. | System and method for authentication of network users and issuing a digital certificate |
CA2357007C (en) * | 1998-05-21 | 2002-04-02 | Equifax Inc. | System and method for authentication of network users with preprocessing |
PT1080415T (en) * | 1998-05-21 | 2017-05-02 | Equifax Inc | System and method for authentication of network users |
US6401118B1 (en) | 1998-06-30 | 2002-06-04 | Online Monitoring Services | Method and computer program product for an online monitoring search engine |
US6603921B1 (en) | 1998-07-01 | 2003-08-05 | International Business Machines Corporation | Audio/video archive system and method for automatic indexing and searching |
US6226618B1 (en) | 1998-08-13 | 2001-05-01 | International Business Machines Corporation | Electronic content delivery system |
US6983371B1 (en) | 1998-10-22 | 2006-01-03 | International Business Machines Corporation | Super-distribution of protected digital content |
US7421723B2 (en) | 1999-01-07 | 2008-09-02 | Nielsen Media Research, Inc. | Detection of media links in broadcast signals |
US6868497B1 (en) | 1999-03-10 | 2005-03-15 | Digimarc Corporation | Method and apparatus for automatic ID management |
US7308413B1 (en) * | 1999-05-05 | 2007-12-11 | Tota Michael J | Process for creating media content based upon submissions received on an electronic multi-media exchange |
US7185201B2 (en) | 1999-05-19 | 2007-02-27 | Digimarc Corporation | Content identifiers triggering corresponding responses |
US7302574B2 (en) | 1999-05-19 | 2007-11-27 | Digimarc Corporation | Content identifiers triggering corresponding responses through collaborative processing |
US7762453B2 (en) | 1999-05-25 | 2010-07-27 | Silverbrook Research Pty Ltd | Method of providing information via a printed substrate with every interaction |
US7284255B1 (en) | 1999-06-18 | 2007-10-16 | Steven G. Apel | Audience survey system, and system and methods for compressing and correlating audio signals |
US7058817B1 (en) * | 1999-07-02 | 2006-06-06 | The Chase Manhattan Bank | System and method for single sign on process for websites with multiple applications and services |
US7346605B1 (en) | 1999-07-22 | 2008-03-18 | Markmonitor, Inc. | Method and system for searching and monitoring internet trademark usage |
AU6514200A (en) | 1999-08-03 | 2001-02-19 | Videoshare, Inc. | Method and system for sharing video with advertisements over a network |
US6493744B1 (en) | 1999-08-16 | 2002-12-10 | International Business Machines Corporation | Automatic rating and filtering of data files for objectionable content |
US6546135B1 (en) | 1999-08-30 | 2003-04-08 | Mitsubishi Electric Research Laboratories, Inc. | Method for representing and comparing multimedia content |
US6976165B1 (en) | 1999-09-07 | 2005-12-13 | Emc Corporation | System and method for secure storage, transfer and retrieval of content addressable information |
US7174293B2 (en) | 1999-09-21 | 2007-02-06 | Iceberg Industries Llc | Audio identification system and method |
US7194752B1 (en) | 1999-10-19 | 2007-03-20 | Iceberg Industries, Llc | Method and apparatus for automatically recognizing input audio and/or video streams |
EP1228461A4 (en) | 1999-09-22 | 2005-07-27 | Oleg Kharisovich Zommers | Interactive personal information system and method |
US6795638B1 (en) | 1999-09-30 | 2004-09-21 | New Jersey Devils, Llc | System and method for recording and preparing statistics concerning live performances |
US6754364B1 (en) * | 1999-10-28 | 2004-06-22 | Microsoft Corporation | Methods and systems for fingerprinting digital data |
US6807634B1 (en) | 1999-11-30 | 2004-10-19 | International Business Machines Corporation | Watermarks for customer identification |
US6693236B1 (en) | 1999-12-28 | 2004-02-17 | Monkeymedia, Inc. | User interface for simultaneous management of owned and unowned inventory |
US20020002586A1 (en) | 2000-02-08 | 2002-01-03 | Howard Rafal | Methods and apparatus for creating and hosting customized virtual parties via the internet |
US6834308B1 (en) | 2000-02-17 | 2004-12-21 | Audible Magic Corporation | Method and apparatus for identifying media content presented on a media playing device |
US7412462B2 (en) | 2000-02-18 | 2008-08-12 | Burnside Acquisition, Llc | Data repository and method for promoting network storage of data |
US7298864B2 (en) | 2000-02-19 | 2007-11-20 | Digimarc Corporation | Digital watermarks as a gateway and control mechanism |
EP1137250A1 (en) * | 2000-03-22 | 2001-09-26 | Hewlett-Packard Company, A Delaware Corporation | Improvements relating to digital watermarks |
JP3990853B2 (en) | 2000-03-24 | 2007-10-17 | 株式会社トリニティーセキュリティーシステムズ | Digital copy prevention processing apparatus, reproducible recording medium recording digital data processed by the apparatus, digital copy prevention processing method, computer-readable recording medium recording a program for causing a computer to execute the method, and method Reproducible recording medium that records processed digital data |
US20060080200A1 (en) | 2000-04-07 | 2006-04-13 | Ashton David M | System and method for benefit plan administration |
US6952769B1 (en) | 2000-04-17 | 2005-10-04 | International Business Machines Corporation | Protocols for anonymous electronic communication and double-blind transactions |
US6684254B1 (en) | 2000-05-31 | 2004-01-27 | International Business Machines Corporation | Hyperlink filter for “pirated” and “disputed” copyright material on the internet in a method, system and program |
AU7593601A (en) | 2000-07-14 | 2002-01-30 | Atabok Inc | Controlling and managing digital assets |
US20050193408A1 (en) | 2000-07-24 | 2005-09-01 | Vivcom, Inc. | Generating, transporting, processing, storing and presenting segmentation information for audio-visual programs |
US6772196B1 (en) | 2000-07-27 | 2004-08-03 | Propel Software Corp. | Electronic mail filtering system and methods |
AU2001280890A1 (en) | 2000-07-28 | 2002-02-13 | Copyright.Net Inc. | Apparatus and method for transmitting and keeping track of legal notices |
US6990453B2 (en) * | 2000-07-31 | 2006-01-24 | Landmark Digital Services Llc | System and methods for recognizing sound and music signals in high noise and distortion |
WO2002019169A1 (en) | 2000-08-28 | 2002-03-07 | Digitalowl.Com, Inc. | System and methods for the production, distribution and flexible usage of electronic content in heterogeneous distributed environments |
US7363643B2 (en) | 2000-08-31 | 2008-04-22 | Eddie Drake | Real-time audience monitoring, content rating, and content enhancing |
US20020069370A1 (en) | 2000-08-31 | 2002-06-06 | Infoseer, Inc. | System and method for tracking and preventing illegal distribution of proprietary material over computer networks |
US6444695B1 (en) * | 2000-09-21 | 2002-09-03 | The Regents Of The University Of California | Inhibition of thrombin-induced platelet aggregation by creatine kinase inhibitors |
WO2002033505A2 (en) | 2000-10-16 | 2002-04-25 | Vidius Inc. | A method and apparatus for supporting electronic content distribution |
KR20020030610A (en) | 2000-10-19 | 2002-04-25 | 스톰 씨엔씨 인코포레이티드 | A method for preventing reduction of sales amount of phonograph records by way of digital music file unlawfully circulated through communication network |
US6898799B1 (en) | 2000-10-23 | 2005-05-24 | Clearplay, Inc. | Multimedia content navigation and playback |
US20060031870A1 (en) | 2000-10-23 | 2006-02-09 | Jarman Matthew T | Apparatus, system, and method for filtering objectionable portions of a multimedia presentation |
US6889383B1 (en) | 2000-10-23 | 2005-05-03 | Clearplay, Inc. | Delivery of navigation data for playback of audio and video content |
US6704733B2 (en) | 2000-10-25 | 2004-03-09 | Lightning Source, Inc. | Distributing electronic books over a computer network |
US7562012B1 (en) | 2000-11-03 | 2009-07-14 | Audible Magic Corporation | Method and apparatus for creating a unique audio signature |
US7085613B2 (en) | 2000-11-03 | 2006-08-01 | International Business Machines Corporation | System for monitoring audio content in a video broadcast |
WO2002041170A2 (en) | 2000-11-16 | 2002-05-23 | Interlegis, Inc. | System and method of managing documents |
US7660902B2 (en) | 2000-11-20 | 2010-02-09 | Rsa Security, Inc. | Dynamic file access control and management |
US7043473B1 (en) | 2000-11-22 | 2006-05-09 | Widevine Technologies, Inc. | Media tracking system and method |
US7266704B2 (en) | 2000-12-18 | 2007-09-04 | Digimarc Corporation | User-friendly rights management systems and methods |
JP3587248B2 (en) * | 2000-12-20 | 2004-11-10 | 日本電気株式会社 | Scan flip-flops |
US6407680B1 (en) | 2000-12-22 | 2002-06-18 | Generic Media, Inc. | Distributed on-demand media transcoding system and method |
US7627897B2 (en) | 2001-01-03 | 2009-12-01 | Portauthority Technologies Inc. | Method and apparatus for a reactive defense against illegal distribution of multimedia content in file sharing networks |
EP1334431A4 (en) | 2001-01-17 | 2004-09-01 | Contentguard Holdings Inc | Method and apparatus for managing digital content usage rights |
US7028009B2 (en) | 2001-01-17 | 2006-04-11 | Contentguard Holdings, Inc. | Method and apparatus for distributing enforceable property rights |
EP1362485B1 (en) | 2001-02-12 | 2008-08-13 | Gracenote, Inc. | Generating and matching hashes of multimedia content |
US20020168082A1 (en) | 2001-03-07 | 2002-11-14 | Ravi Razdan | Real-time, distributed, transactional, hybrid watermarking method to provide trace-ability and copyright protection of digital content in peer-to-peer networks |
US7681032B2 (en) | 2001-03-12 | 2010-03-16 | Portauthority Technologies Inc. | System and method for monitoring unauthorized transport of digital content |
US7197459B1 (en) * | 2001-03-19 | 2007-03-27 | Amazon Technologies, Inc. | Hybrid machine/human computing arrangement |
US7653552B2 (en) | 2001-03-21 | 2010-01-26 | Qurio Holdings, Inc. | Digital file marketplace |
US7987510B2 (en) | 2001-03-28 | 2011-07-26 | Rovi Solutions Corporation | Self-protecting digital content |
US7111169B2 (en) | 2001-03-29 | 2006-09-19 | Intel Corporation | Method and apparatus for content protection across a source-to-destination interface |
US7194490B2 (en) | 2001-05-22 | 2007-03-20 | Christopher Zee | Method for the assured and enduring archival of intellectual property |
EP1490767B1 (en) | 2001-04-05 | 2014-06-11 | Audible Magic Corporation | Copyright detection and protection system and method |
US6996273B2 (en) | 2001-04-24 | 2006-02-07 | Microsoft Corporation | Robust recognizer of perceptually similar content |
US20020165819A1 (en) | 2001-05-02 | 2002-11-07 | Gateway, Inc. | System and method for providing distributed computing services |
US20020174132A1 (en) | 2001-05-04 | 2002-11-21 | Allresearch, Inc. | Method and system for detecting unauthorized trademark use on the internet |
US6983479B1 (en) | 2001-06-08 | 2006-01-03 | Tarantella, Inc. | Dynamic content activation by locating, coordinating and presenting content publishing resources such that content publisher can create or change content |
NO314375B1 (en) | 2001-06-15 | 2003-03-10 | Beep Science As | Arrangement and procedure for content control of data objects, special data objects in MMS messages |
TWI262416B (en) * | 2001-06-27 | 2006-09-21 | Ulead Systems Inc | Pornographic picture censoring system and method thereof |
US7529659B2 (en) | 2005-09-28 | 2009-05-05 | Audible Magic Corporation | Method and apparatus for identifying an unknown work |
JP2004536348A (en) * | 2001-07-20 | 2004-12-02 | グレースノート インコーポレイテッド | Automatic recording identification |
US8972481B2 (en) | 2001-07-20 | 2015-03-03 | Audible Magic, Inc. | Playlist generation method and apparatus |
US7877438B2 (en) | 2001-07-20 | 2011-01-25 | Audible Magic Corporation | Method and apparatus for identifying new media content |
US20030061490A1 (en) | 2001-09-26 | 2003-03-27 | Abajian Aram Christian | Method for identifying copyright infringement violations by fingerprint detection |
CA2359269A1 (en) * | 2001-10-17 | 2003-04-17 | Biodentity Systems Corporation | Face imaging system for recordal and automated identity confirmation |
US20030135623A1 (en) | 2001-10-23 | 2003-07-17 | Audible Magic, Inc. | Method and apparatus for cache promotion |
US7117513B2 (en) | 2001-11-09 | 2006-10-03 | Nielsen Media Research, Inc. | Apparatus and method for detecting and correcting a corrupted broadcast time code |
US7840488B2 (en) | 2001-11-20 | 2010-11-23 | Contentguard Holdings, Inc. | System and method for granting access to an item or permission to use an item based on configurable conditions |
US7020635B2 (en) | 2001-11-21 | 2006-03-28 | Line 6, Inc | System and method of secure electronic commerce transactions including tracking and recording the distribution and usage of assets |
US20030101104A1 (en) | 2001-11-28 | 2003-05-29 | Koninklijke Philips Electronics N.V. | System and method for retrieving information related to targeted subjects |
US7117200B2 (en) | 2002-01-11 | 2006-10-03 | International Business Machines Corporation | Synthesizing information-bearing content from multiple channels |
US7231657B2 (en) * | 2002-02-14 | 2007-06-12 | American Management Systems, Inc. | User authentication system and methods thereof |
DE10216261A1 (en) | 2002-04-12 | 2003-11-06 | Fraunhofer Ges Forschung | Method and device for embedding watermark information and method and device for extracting embedded watermark information |
US6885757B2 (en) | 2002-04-18 | 2005-04-26 | Sarnoff Corporation | Method and apparatus for providing an asymmetric watermark carrier |
US7403890B2 (en) * | 2002-05-13 | 2008-07-22 | Roushar Joseph C | Multi-dimensional method and apparatus for automated language interpretation |
US7120273B2 (en) | 2002-05-31 | 2006-10-10 | Hewlett-Packard Development Company, L.P. | Apparatus and method for image group integrity protection |
US8601504B2 (en) | 2002-06-20 | 2013-12-03 | Verance Corporation | Secure tracking system and method for video program content |
US6931413B2 (en) | 2002-06-25 | 2005-08-16 | Microsoft Corporation | System and method providing automated margin tree analysis and processing of sampled data |
US7003131B2 (en) | 2002-07-09 | 2006-02-21 | Kaleidescape, Inc. | Watermarking and fingerprinting digital content using alternative blocks to embed information |
US7996503B2 (en) | 2002-07-10 | 2011-08-09 | At&T Intellectual Property I, L.P. | System and method for managing access to digital content via digital rights policies |
US6871200B2 (en) | 2002-07-11 | 2005-03-22 | Forensic Eye Ltd. | Registration and monitoring system |
US20040091111A1 (en) | 2002-07-16 | 2004-05-13 | Levy Kenneth L. | Digital watermarking and fingerprinting applications |
KR20050025997A (en) | 2002-07-26 | 2005-03-14 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Identification of digital data sequences |
SE0202451D0 (en) | 2002-08-15 | 2002-08-15 | Ericsson Telefon Ab L M | Flexible Sim-Based DRM agent and architecture |
US20040044571A1 (en) | 2002-08-27 | 2004-03-04 | Bronnimann Eric Robert | Method and system for providing advertising listing variance in distribution feeds over the internet to maximize revenue to the advertising distributor |
US6983280B2 (en) | 2002-09-13 | 2006-01-03 | Overture Services Inc. | Automated processing of appropriateness determination of content for search listings in wide area network searches |
US8041719B2 (en) | 2003-05-06 | 2011-10-18 | Symantec Corporation | Personal computing device-based mechanism to detect preselected data |
US8090717B1 (en) | 2002-09-20 | 2012-01-03 | Google Inc. | Methods and apparatus for ranking documents |
GB0222113D0 (en) * | 2002-09-24 | 2002-10-30 | Koninkl Philips Electronics Nv | Image recognition |
US7277891B2 (en) * | 2002-10-11 | 2007-10-02 | Digimarc Corporation | Systems and methods for recognition of individuals using multiple biometric searches |
NZ539596A (en) | 2002-10-23 | 2008-11-28 | Nielsen Media Res Inc | Digital data insertion apparatus and methods for use with compressed audio/video data |
JP4366916B2 (en) | 2002-10-29 | 2009-11-18 | 富士ゼロックス株式会社 | Document confirmation system, document confirmation method, and document confirmation program |
US7757075B2 (en) | 2002-11-15 | 2010-07-13 | Microsoft Corporation | State reference |
JP2004180278A (en) | 2002-11-15 | 2004-06-24 | Canon Inc | Information processing apparatus, server device, electronic data management system, information processing system, information processing method, computer program, and computer-readable storage medium |
GB0229625D0 (en) * | 2002-12-19 | 2003-01-22 | British Telecomm | Searching images |
US7370017B1 (en) | 2002-12-20 | 2008-05-06 | Microsoft Corporation | Redistribution of rights-managed content and technique for encouraging same |
JP4226889B2 (en) | 2002-12-20 | 2009-02-18 | 株式会社東芝 | Content management system, program and method |
US7725544B2 (en) | 2003-01-24 | 2010-05-25 | Aol Inc. | Group based spam classification |
GB2397904B (en) | 2003-01-29 | 2005-08-24 | Hewlett Packard Co | Control of access to data content for read and/or write operations |
US8332326B2 (en) | 2003-02-01 | 2012-12-11 | Audible Magic Corporation | Method and apparatus to identify a work received by a processing system |
US20050102515A1 (en) | 2003-02-03 | 2005-05-12 | Dave Jaworski | Controlling read and write operations for digital media |
US7945511B2 (en) * | 2004-02-26 | 2011-05-17 | Payment Pathways, Inc. | Methods and systems for identity authentication |
US7606790B2 (en) | 2003-03-03 | 2009-10-20 | Digimarc Corporation | Integrating and enhancing searching of media content and biometric databases |
EP1457889A1 (en) | 2003-03-13 | 2004-09-15 | Koninklijke Philips Electronics N.V. | Improved fingerprint matching method and system |
US7246740B2 (en) | 2003-04-03 | 2007-07-24 | First Data Corporation | Suspicious persons database |
DE10319771B4 (en) | 2003-05-02 | 2005-03-17 | Koenig & Bauer Ag | System for inspecting a printed image |
US7669225B2 (en) | 2003-05-06 | 2010-02-23 | Portauthority Technologies Inc. | Apparatus and method for assuring compliance with distribution and usage policy |
JP2005004728A (en) | 2003-05-20 | 2005-01-06 | Canon Inc | Information processing system, information processing device, information processing method, storage medium storing program for executing same so that program can be read out to information processing device, and program |
US7685642B2 (en) | 2003-06-26 | 2010-03-23 | Contentguard Holdings, Inc. | System and method for controlling rights expressions by stakeholders of an item |
US7454061B2 (en) | 2003-06-27 | 2008-11-18 | Ricoh Company, Ltd. | System, apparatus, and method for providing illegal use research service for image data, and system, apparatus, and method for providing proper use research service for image data |
EP1642206B1 (en) | 2003-07-07 | 2017-12-20 | Irdeto B.V. | Reprogrammable security for controlling piracy and enabling interactive content |
US20050012563A1 (en) | 2003-07-16 | 2005-01-20 | Michael Kramer | Method and system for the simultaneous recording and identification of audio-visual material |
US20050039057A1 (en) | 2003-07-24 | 2005-02-17 | Amit Bagga | Method and apparatus for authenticating a user using query directed passwords |
US8200775B2 (en) | 2005-02-01 | 2012-06-12 | Newsilike Media Group, Inc | Enhanced syndication |
US20050043960A1 (en) | 2003-08-19 | 2005-02-24 | David Blankley | System and automate the licensing, re-use and royalties of authored content in derivative works |
US20050043548A1 (en) | 2003-08-22 | 2005-02-24 | Joseph Cates | Automated monitoring and control system for networked communications |
US20050049868A1 (en) | 2003-08-25 | 2005-03-03 | Bellsouth Intellectual Property Corporation | Speech recognition error identification method and system |
US20050080846A1 (en) | 2003-09-27 | 2005-04-14 | Webhound, Inc. | Method and system for updating digital content over a network |
US7703140B2 (en) | 2003-09-30 | 2010-04-20 | Guardian Data Storage, Llc | Method and system for securing digital assets using process-driven security policies |
US7369677B2 (en) | 2005-04-26 | 2008-05-06 | Verance Corporation | System reactions to the detection of embedded watermarks in a digital host content |
US9055239B2 (en) | 2003-10-08 | 2015-06-09 | Verance Corporation | Signal continuity assessment using embedded watermarks |
US7314162B2 (en) | 2003-10-17 | 2008-01-01 | Digimore Corporation | Method and system for reporting identity document usage |
AU2004304818A1 (en) | 2003-10-22 | 2005-07-07 | Clearplay, Inc. | Apparatus and method for blocking audio/visual programming and for muting audio |
CN1883198A (en) | 2003-11-17 | 2006-12-20 | 皇家飞利浦电子股份有限公司 | Commercial insertion into video streams based on surrounding program content |
US7444403B1 (en) | 2003-11-25 | 2008-10-28 | Microsoft Corporation | Detecting sexually predatory content in an electronic communication |
US8700533B2 (en) | 2003-12-04 | 2014-04-15 | Black Duck Software, Inc. | Authenticating licenses for legally-protectable content based on license profiles and content identifiers |
US7707039B2 (en) | 2004-02-15 | 2010-04-27 | Exbiblio B.V. | Automatic modification of web pages |
US20050193016A1 (en) | 2004-02-17 | 2005-09-01 | Nicholas Seet | Generation of a media content database by correlating repeating media content in media streams |
US7751805B2 (en) | 2004-02-20 | 2010-07-06 | Google Inc. | Mobile image-based information retrieval system |
JP4665406B2 (en) | 2004-02-23 | 2011-04-06 | 日本電気株式会社 | Access control management method, access control management system, and terminal device with access control management function |
WO2005088507A1 (en) | 2004-03-04 | 2005-09-22 | Yates James M | Method and apparatus for digital copyright exchange |
US8255331B2 (en) | 2004-03-04 | 2012-08-28 | Media Rights Technologies, Inc. | Method for providing curriculum enhancement using a computer-based media access system |
US20060080703A1 (en) | 2004-03-22 | 2006-04-13 | Compton Charles L | Content storage method and system |
EP1735999A4 (en) | 2004-03-29 | 2012-06-20 | Nielsen Media Res Inc | Methods and apparatus to detect a blank frame in a digital video broadcast signal |
US20050222900A1 (en) | 2004-03-30 | 2005-10-06 | Prashant Fuloria | Selectively delivering advertisements based at least in part on trademark issues |
US8874487B2 (en) | 2004-04-14 | 2014-10-28 | Digital River, Inc. | Software wrapper having use limitation within a geographic boundary |
US8688248B2 (en) | 2004-04-19 | 2014-04-01 | Shazam Investments Limited | Method and system for content sampling and identification |
US7769756B2 (en) | 2004-06-07 | 2010-08-03 | Sling Media, Inc. | Selection and presentation of context-relevant supplemental content and advertising |
US7975062B2 (en) | 2004-06-07 | 2011-07-05 | Sling Media, Inc. | Capturing and sharing media content |
US20050276570A1 (en) | 2004-06-15 | 2005-12-15 | Reed Ogden C Jr | Systems, processes and apparatus for creating, processing and interacting with audiobooks and other media |
US8953908B2 (en) | 2004-06-22 | 2015-02-10 | Digimarc Corporation | Metadata management and generation using perceptual features |
US7707427B1 (en) | 2004-07-19 | 2010-04-27 | Michael Frederick Kenrich | Multi-level file digests |
JP2006039791A (en) | 2004-07-26 | 2006-02-09 | Matsushita Electric Ind Co Ltd | Transmission history dependent processor |
US8130746B2 (en) | 2004-07-28 | 2012-03-06 | Audible Magic Corporation | System for distributing decoy content in a peer to peer network |
US7631336B2 (en) | 2004-07-30 | 2009-12-08 | Broadband Itv, Inc. | Method for converting, navigating and displaying video content uploaded from the internet to a digital TV video-on-demand platform |
JP4817624B2 (en) | 2004-08-06 | 2011-11-16 | キヤノン株式会社 | Image processing system, image alteration judgment method, computer program, and computer-readable storage medium |
US7467401B2 (en) * | 2004-08-12 | 2008-12-16 | Avatier Corporation | User authentication without prior user enrollment |
US7860922B2 (en) | 2004-08-18 | 2010-12-28 | Time Warner, Inc. | Method and device for the wireless exchange of media content between mobile devices based on content preferences |
US7555487B2 (en) | 2004-08-20 | 2009-06-30 | Xweb, Inc. | Image processing and identification system, method and apparatus |
US20060058019A1 (en) | 2004-09-15 | 2006-03-16 | Chan Wesley T | Method and system for dynamically modifying the appearance of browser screens on a client device |
US20060061599A1 (en) * | 2004-09-17 | 2006-03-23 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for automatic image orientation normalization |
US20060080356A1 (en) * | 2004-10-13 | 2006-04-13 | Microsoft Corporation | System and method for inferring similarities between media objects |
US20060085816A1 (en) | 2004-10-18 | 2006-04-20 | Funk James M | Method and apparatus to control playback in a download-and-view video on demand system |
US8117282B2 (en) | 2004-10-20 | 2012-02-14 | Clearplay, Inc. | Media player configured to receive playback filters from alternative storage mediums |
US7124743B2 (en) | 2004-10-22 | 2006-10-24 | Ford Global Technologies, Llc | System and method for starting sequential fuel injection internal combustion engine |
US8117339B2 (en) | 2004-10-29 | 2012-02-14 | Go Daddy Operating Company, LLC | Tracking domain name related reputation |
US7574409B2 (en) | 2004-11-04 | 2009-08-11 | Vericept Corporation | Method, apparatus, and system for clustering and classification |
GB0424479D0 (en) | 2004-11-05 | 2004-12-08 | Ibm | Generating a fingerprint for a document |
US20060106725A1 (en) | 2004-11-12 | 2006-05-18 | International Business Machines Corporation | Method, system, and program product for visual display of a license status for a software program |
US7881957B1 (en) | 2004-11-16 | 2011-02-01 | Amazon Technologies, Inc. | Identifying tasks for task performers based on task subscriptions |
US20060112015A1 (en) | 2004-11-24 | 2006-05-25 | Contentguard Holdings, Inc. | Method, system, and device for handling creation of derivative works and for adapting rights to derivative works |
US20060110137A1 (en) | 2004-11-25 | 2006-05-25 | Matsushita Electric Industrial Co., Ltd. | Video and audio data transmitting apparatus, and video and audio data transmitting method |
EP1827018B1 (en) | 2004-12-03 | 2017-11-22 | NEC Corporation | Video content reproduction supporting method, video content reproduction supporting system, and information delivery program |
US20060128470A1 (en) | 2004-12-15 | 2006-06-15 | Daniel Willis | System and method for managing advertising content delivery in a gaming environment supporting aggregated demographics serving and reporting |
US8107010B2 (en) | 2005-01-05 | 2012-01-31 | Rovi Solutions Corporation | Windows management in a television environment |
US7814564B2 (en) | 2005-01-07 | 2010-10-12 | University Of Maryland | Method for fingerprinting multimedia content |
US20060159128A1 (en) | 2005-01-20 | 2006-07-20 | Yen-Fu Chen | Channel switching subscription service according to predefined content patterns |
EP1844418B1 (en) | 2005-01-24 | 2013-03-13 | Koninklijke Philips Electronics N.V. | Private and controlled ownership sharing |
US7562228B2 (en) | 2005-03-15 | 2009-07-14 | Microsoft Corporation | Forensic for fingerprint detection in multimedia |
US20070242880A1 (en) | 2005-05-18 | 2007-10-18 | Stebbings David W | System and method for the identification of motional media of widely varying picture content |
US8365306B2 (en) | 2005-05-25 | 2013-01-29 | Oracle International Corporation | Platform and service for management and multi-channel delivery of multi-types of contents |
US20080109306A1 (en) | 2005-06-15 | 2008-05-08 | Maigret Robert J | Media marketplaces |
WO2006138484A2 (en) | 2005-06-15 | 2006-12-28 | Revver, Inc. | Media marketplaces |
US20070130015A1 (en) | 2005-06-15 | 2007-06-07 | Steven Starr | Advertisement revenue sharing for distributed video |
US20060287996A1 (en) | 2005-06-16 | 2006-12-21 | International Business Machines Corporation | Computer-implemented method, system, and program product for tracking content |
JP2007011554A (en) | 2005-06-29 | 2007-01-18 | Konica Minolta Business Technologies Inc | Image forming apparatus |
US7788132B2 (en) | 2005-06-29 | 2010-08-31 | Google, Inc. | Reviewing the suitability of Websites for participation in an advertising network |
US20070028308A1 (en) | 2005-07-29 | 2007-02-01 | Kosuke Nishio | Decoding apparatus |
US7925973B2 (en) | 2005-08-12 | 2011-04-12 | Brightcove, Inc. | Distribution of content |
US7516074B2 (en) | 2005-09-01 | 2009-04-07 | Auditude, Inc. | Extraction and matching of characteristic fingerprints from audio signals |
GB2445688A (en) | 2005-09-01 | 2008-07-16 | Zvi Haim Lev | System and method for reliable content access using a cellular/wireless device with imaging capabilities |
US7697942B2 (en) | 2005-09-02 | 2010-04-13 | Stevens Gilman R | Location based rules architecture systems and methods |
US20070058925A1 (en) | 2005-09-14 | 2007-03-15 | Fu-Sheng Chiu | Interactive multimedia production |
US20070106551A1 (en) | 2005-09-20 | 2007-05-10 | Mcgucken Elliot | 22nets: method, system, and apparatus for building content and talent marketplaces and archives based on a social network |
WO2007035965A2 (en) | 2005-09-23 | 2007-03-29 | Jammermedia, Inc. | Media management system |
US20080004116A1 (en) | 2006-06-30 | 2008-01-03 | Andrew Stephen Van Luchene | Video Game Environment |
US20070162349A1 (en) | 2005-10-17 | 2007-07-12 | Markmonitor Inc. | Client Side Brand Protection |
US7720767B2 (en) | 2005-10-24 | 2010-05-18 | Contentguard Holdings, Inc. | Method and system to support dynamic rights and resources sharing |
JP4629555B2 (en) | 2005-11-07 | 2011-02-09 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Restoration device, program, information system, restoration method, storage device, storage system, and storage method |
US7412224B2 (en) | 2005-11-14 | 2008-08-12 | Nokia Corporation | Portable local server with context sensing |
US20070208751A1 (en) | 2005-11-22 | 2007-09-06 | David Cowan | Personalized content control |
AU2006320693B2 (en) | 2005-11-29 | 2012-03-01 | Google Inc. | Social and interactive applications for mass media |
JP4154421B2 (en) | 2005-12-07 | 2008-09-24 | キヤノン株式会社 | Image processing apparatus, program for executing the image processing method, and medium storing the program |
CA2634489C (en) | 2005-12-21 | 2016-08-30 | Digimarc Corporation | Rules driven pan id metadata routing system and network |
US20070162761A1 (en) | 2005-12-23 | 2007-07-12 | Davis Bruce L | Methods and Systems to Help Detect Identity Fraud |
US20070157228A1 (en) | 2005-12-30 | 2007-07-05 | Jason Bayer | Advertising with video ad creatives |
US20070156594A1 (en) | 2006-01-03 | 2007-07-05 | Mcgucken Elliot | System and method for allowing creators, artsists, and owners to protect and profit from content |
EP1974300A2 (en) * | 2006-01-16 | 2008-10-01 | Thomson Licensing | Method for determining and fingerprinting a key frame of a video sequence |
US20070260520A1 (en) | 2006-01-18 | 2007-11-08 | Teracent Corporation | System, method and computer program product for selecting internet-based advertising |
WO2007089943A2 (en) | 2006-02-01 | 2007-08-09 | Markmonitor Inc. | Detecting online abuse in images |
US20070203911A1 (en) | 2006-02-07 | 2007-08-30 | Fu-Sheng Chiu | Video weblog |
WO2007091243A2 (en) | 2006-02-07 | 2007-08-16 | Mobixell Networks Ltd. | Matching of modified visual and audio media |
US8122019B2 (en) | 2006-02-17 | 2012-02-21 | Google Inc. | Sharing user distributed search results |
US20080027931A1 (en) | 2006-02-27 | 2008-01-31 | Vobile, Inc. | Systems and methods for publishing, searching, retrieving and binding metadata for a digital object |
US20070203891A1 (en) | 2006-02-28 | 2007-08-30 | Microsoft Corporation | Providing and using search index enabling searching based on a targeted content of documents |
US20070208715A1 (en) | 2006-03-02 | 2007-09-06 | Thomas Muehlbauer | Assigning Unique Content Identifiers to Digital Media Content |
US8037506B2 (en) | 2006-03-03 | 2011-10-11 | Verimatrix, Inc. | Movie studio-based network distribution system and method |
US20070233556A1 (en) | 2006-03-31 | 2007-10-04 | Ross Koningstein | Controlling the serving, with a primary document, of ads from a first source, subject to a first compensation scheme, and ads from a second source, subject to a second compensation scheme |
US8009861B2 (en) | 2006-04-28 | 2011-08-30 | Vobile, Inc. | Method and system for fingerprinting digital video object based on multiresolution, multirate spatial and temporal signatures |
US8745226B2 (en) | 2006-05-02 | 2014-06-03 | Google Inc. | Customization of content and advertisements in publications |
US20080034396A1 (en) | 2006-05-30 | 2008-02-07 | Lev Zvi H | System and method for video distribution and billing |
US20070282472A1 (en) | 2006-06-01 | 2007-12-06 | International Business Machines Corporation | System and method for customizing soundtracks |
US7831531B1 (en) | 2006-06-22 | 2010-11-09 | Google Inc. | Approximate hashing functions for finding similar content |
MY166373A (en) | 2006-06-23 | 2018-06-25 | Tencent Tech Shenzhen Co Ltd | Method, system and apparatus for playing advertisements |
US20080005241A1 (en) | 2006-06-30 | 2008-01-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Usage parameters for communication content |
US7899694B1 (en) * | 2006-06-30 | 2011-03-01 | Amazon Technologies, Inc. | Generating solutions to problems via interactions with human responders |
US20080051029A1 (en) | 2006-08-25 | 2008-02-28 | Bradley James Witteman | Phone-based broadcast audio identification |
US8738749B2 (en) | 2006-08-29 | 2014-05-27 | Digimarc Corporation | Content monitoring and host compliance evaluation |
US20080059461A1 (en) | 2006-08-29 | 2008-03-06 | Attributor Corporation | Content search using a provided interface |
US8707459B2 (en) | 2007-01-19 | 2014-04-22 | Digimarc Corporation | Determination of originality of content |
US8010511B2 (en) | 2006-08-29 | 2011-08-30 | Attributor Corporation | Content monitoring and compliance enforcement |
US9654447B2 (en) | 2006-08-29 | 2017-05-16 | Digimarc Corporation | Customized handling of copied content based on owner-specified similarity thresholds |
US20080059211A1 (en) | 2006-08-29 | 2008-03-06 | Attributor Corporation | Content monitoring and compliance |
US20080059285A1 (en) | 2006-09-01 | 2008-03-06 | Admob, Inc. | Assessing a fee for an ad |
US7730316B1 (en) | 2006-09-22 | 2010-06-01 | Fatlens, Inc. | Method for document fingerprinting |
US8781892B2 (en) | 2006-09-29 | 2014-07-15 | Yahoo! Inc. | Digital media benefit attachment mechanism |
US7945470B1 (en) | 2006-09-29 | 2011-05-17 | Amazon Technologies, Inc. | Facilitating performance of submitted tasks by mobile task performers |
WO2008058093A2 (en) | 2006-11-03 | 2008-05-15 | Google Inc. | Content management system |
US8301658B2 (en) | 2006-11-03 | 2012-10-30 | Google Inc. | Site directed management of audio components of uploaded video files |
US20080162228A1 (en) | 2006-12-19 | 2008-07-03 | Friedrich Mechbach | Method and system for the integrating advertising in user generated contributions |
US10242415B2 (en) | 2006-12-20 | 2019-03-26 | Digimarc Corporation | Method and system for determining content treatment |
US9179200B2 (en) | 2007-03-14 | 2015-11-03 | Digimarc Corporation | Method and system for determining content treatment |
US20080155701A1 (en) | 2006-12-22 | 2008-06-26 | Yahoo! Inc. | Method and system for unauthorized content detection and reporting |
US8055552B2 (en) | 2006-12-22 | 2011-11-08 | Yahoo! Inc. | Social network commerce model |
US7840537B2 (en) | 2006-12-22 | 2010-11-23 | Commvault Systems, Inc. | System and method for storing redundant information |
US20080162449A1 (en) | 2006-12-28 | 2008-07-03 | Chen Chao-Yu | Dynamic page similarity measurement |
KR100856027B1 (en) | 2007-01-09 | 2008-09-03 | 주식회사 태그스토리 | System for providing copyright-verified video data and method thereof |
US7979464B2 (en) | 2007-02-27 | 2011-07-12 | Motion Picture Laboratories, Inc. | Associating rights to multimedia content |
US20080235200A1 (en) | 2007-03-21 | 2008-09-25 | Ripcode, Inc. | System and Method for Identifying Content |
US8249992B2 (en) | 2007-03-22 | 2012-08-21 | The Nielsen Company (Us), Llc | Digital rights management and audience measurement systems and methods |
EP2130156A1 (en) | 2007-03-23 | 2009-12-09 | Baytsp, Inc | System and method for confirming digital content |
US20080240490A1 (en) | 2007-03-30 | 2008-10-02 | Microsoft Corporation | Source authentication and usage tracking of video |
AU2008247347A1 (en) | 2007-05-03 | 2008-11-13 | Google Inc. | Monetization of digital content contributions |
US8117094B2 (en) | 2007-06-29 | 2012-02-14 | Microsoft Corporation | Distribution channels and monetizing |
US8170392B2 (en) | 2007-11-21 | 2012-05-01 | Shlomo Selim Rakib | Method and apparatus for generation, distribution and display of interactive video content |
JP4938580B2 (en) | 2007-07-27 | 2012-05-23 | アイシン精機株式会社 | Door handle device |
US8006314B2 (en) | 2007-07-27 | 2011-08-23 | Audible Magic Corporation | System for identifying content of digital data |
US8238669B2 (en) | 2007-08-22 | 2012-08-07 | Google Inc. | Detection and classification of matches between time-based media |
US9087331B2 (en) | 2007-08-29 | 2015-07-21 | Tveyes Inc. | Contextual advertising for video and audio media |
US8490206B1 (en) | 2007-09-28 | 2013-07-16 | Time Warner, Inc. | Apparatuses, methods and systems for reputation/content tracking and management |
US20090119169A1 (en) | 2007-10-02 | 2009-05-07 | Blinkx Uk Ltd | Various methods and apparatuses for an engine that pairs advertisements with video files |
US8250097B2 (en) * | 2007-11-02 | 2012-08-21 | Hue Rhodes | Online identity management and identity verification |
US8209223B2 (en) | 2007-11-30 | 2012-06-26 | Google Inc. | Video object tag creation and processing |
US9984369B2 (en) | 2007-12-19 | 2018-05-29 | At&T Intellectual Property I, L.P. | Systems and methods to identify target video content |
WO2009100093A1 (en) | 2008-02-05 | 2009-08-13 | Dolby Laboratories Licensing Corporation | Associating information with media content |
GB2460857A (en) | 2008-06-12 | 2009-12-16 | Geoffrey Mark Timothy Cross | Detecting objects of interest in the frames of a video sequence by a distributed human workforce employing a hybrid human/computing arrangement |
US9788043B2 (en) * | 2008-11-07 | 2017-10-10 | Digimarc Corporation | Content interaction methods and systems employing portable devices |
- 2006
  - 2006-12-20 US US11/613,891 patent/US20070162761A1/en not_active Abandoned
- 2008
  - 2008-05-02 US US12/114,612 patent/US8341412B2/en active Active
- 2012
  - 2012-01-20 US US13/355,240 patent/US20120123959A1/en not_active Abandoned
  - 2012-12-14 US US13/714,930 patent/US8458482B2/en active Active
- 2013
  - 2013-06-04 US US13/909,834 patent/US8868917B2/en active Active
  - 2013-07-09 US US13/937,995 patent/US8688999B2/en active Active
- 2014
  - 2014-10-21 US US14/519,973 patent/US9292513B2/en active Active
- 2016
  - 2016-03-18 US US15/074,967 patent/US10007723B2/en active Active
Patent Citations (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5565316A (en) * | 1992-10-09 | 1996-10-15 | Educational Testing Service | System and method for computer based testing |
US5513994A (en) * | 1993-09-30 | 1996-05-07 | Educational Testing Service | Centralized system and method for administering computer based tests |
US6513018B1 (en) * | 1994-05-05 | 2003-01-28 | Fair, Isaac And Company, Inc. | Method and apparatus for scoring the likelihood of a desired performance result |
US5679938A (en) * | 1994-12-02 | 1997-10-21 | Telecheck International, Inc. | Methods and systems for interactive check authorizations |
US6430306B2 (en) * | 1995-03-20 | 2002-08-06 | Lau Technologies | Systems and methods for identifying images |
US6563950B1 (en) * | 1996-06-25 | 2003-05-13 | Eyematic Interfaces, Inc. | Labeled bunch graphs for image analysis |
US6931451B1 (en) * | 1996-10-03 | 2005-08-16 | Gotuit Media Corp. | Systems and methods for modifying broadcast programming |
US20050154924A1 (en) * | 1998-02-13 | 2005-07-14 | Scheidt Edward M. | Multiple factor-based user identification and authentication |
US6301370B1 (en) * | 1998-04-13 | 2001-10-09 | Eyematic Interfaces, Inc. | Face recognition from video images |
US6292575B1 (en) * | 1998-07-20 | 2001-09-18 | Lau Technologies | Real-time facial recognition and verification system |
US20020031253A1 (en) * | 1998-12-04 | 2002-03-14 | Orang Dialameh | System and method for feature location and tracking in multiple dimensions including depth |
US6466695B1 (en) * | 1999-08-04 | 2002-10-15 | Eyematic Interfaces, Inc. | Procedure for automatic analysis of images and image sequences based on two-dimensional shape primitives |
US6965889B2 (en) * | 2000-05-09 | 2005-11-15 | Fair Isaac Corporation | Approach for generating rules |
US6597775B2 (en) * | 2000-09-29 | 2003-07-22 | Fair Isaac Corporation | Self-learning real-time prioritization of telecommunication fraud control actions |
US6968328B1 (en) * | 2000-12-29 | 2005-11-22 | Fair Isaac Corporation | Method and system for implementing rules and ruleflows |
US7367058B2 (en) * | 2001-05-25 | 2008-04-29 | United States Postal Service | Encoding method |
US6944604B1 (en) * | 2001-07-03 | 2005-09-13 | Fair Isaac Corporation | Mechanism and method for specified temporal deployment of rules within a rule server |
US20030052768A1 (en) * | 2001-09-17 | 2003-03-20 | Maune James J. | Security method and system |
US20040205030A1 (en) * | 2001-10-24 | 2004-10-14 | Capital Confirmation, Inc. | Systems, methods and computer readable medium providing automated third-party confirmations |
US20030099379A1 (en) * | 2001-11-26 | 2003-05-29 | Monk Bruce C. | Validation and verification apparatus and method |
US7003669B2 (en) * | 2001-12-17 | 2006-02-21 | Monk Bruce C | Document and bearer verification system |
US20030115459A1 (en) * | 2001-12-17 | 2003-06-19 | Monk Bruce C. | Document and bearer verification system |
US20050141707A1 (en) * | 2002-02-05 | 2005-06-30 | Haitsma Jaap A. | Efficient storage of fingerprints |
US20040019807A1 (en) * | 2002-05-15 | 2004-01-29 | Zone Labs, Inc. | System And Methodology For Providing Community-Based Security Policies |
US20030216988A1 (en) * | 2002-05-17 | 2003-11-20 | Cassandra Mollett | Systems and methods for using phone number validation in a risk assessment |
US20050259819A1 (en) * | 2002-06-24 | 2005-11-24 | Koninklijke Philips Electronics | Method for generating hashes from a compressed multimedia content |
US20040064415A1 (en) * | 2002-07-12 | 2004-04-01 | Abdallah David S. | Personal authentication software and systems for travel privilege assignation and verification |
US20040059953A1 (en) * | 2002-09-24 | 2004-03-25 | Arinc | Methods and systems for identity management |
US20040153663A1 (en) * | 2002-11-01 | 2004-08-05 | Clark Robert T. | System, method and computer program product for assessing risk of identity theft |
US20060075237A1 (en) * | 2002-11-12 | 2006-04-06 | Koninklijke Philips Electronics N.V. | Fingerprinting multimedia contents |
US20040213437A1 (en) * | 2002-11-26 | 2004-10-28 | Howard James V | Systems and methods for managing and detecting fraud in image databases used with identification documents |
US20040189441A1 (en) * | 2003-03-24 | 2004-09-30 | Kosmas Stergiou | Apparatus and methods for verification and authentication employing voluntary attributes, knowledge management and databases |
US20040230527A1 (en) * | 2003-04-29 | 2004-11-18 | First Data Corporation | Authentication for online money transfers |
US20050114679A1 (en) * | 2003-11-26 | 2005-05-26 | Amit Bagga | Method and apparatus for extracting authentication information from a user |
US20050132235A1 (en) * | 2003-12-15 | 2005-06-16 | Remco Teunen | System and method for providing improved claimant authentication |
US20050171851A1 (en) * | 2004-01-30 | 2005-08-04 | Applebaum Ted H. | Multiple choice challenge-response user authorization system and method |
US20050288952A1 (en) * | 2004-05-18 | 2005-12-29 | Davis Bruce L | Official documents and methods of issuance |
US20060074986A1 (en) * | 2004-08-20 | 2006-04-06 | Viisage Technology, Inc. | Method and system to authenticate an object |
US20060106774A1 (en) * | 2004-11-16 | 2006-05-18 | Cohen Peter D | Using qualifications of users to facilitate user performance of tasks |
US20060106675A1 (en) * | 2004-11-16 | 2006-05-18 | Cohen Peter D | Providing an electronic marketplace to facilitate human performance of programmatically submitted tasks |
Cited By (118)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9292513B2 (en) | 2005-12-23 | 2016-03-22 | Digimarc Corporation | Methods for identifying audio or video content |
US8458482B2 (en) | 2005-12-23 | 2013-06-04 | Digimarc Corporation | Methods for identifying audio or video content |
US20080208849A1 (en) * | 2005-12-23 | 2008-08-28 | Conwell William Y | Methods for Identifying Audio or Video Content |
US8688999B2 (en) | 2005-12-23 | 2014-04-01 | Digimarc Corporation | Methods for identifying audio or video content |
US8868917B2 (en) | 2005-12-23 | 2014-10-21 | Digimarc Corporation | Methods for identifying audio or video content |
US8341412B2 (en) | 2005-12-23 | 2012-12-25 | Digimarc Corporation | Methods for identifying audio or video content |
US10007723B2 (en) | 2005-12-23 | 2018-06-26 | Digimarc Corporation | Methods for identifying audio or video content |
US9031919B2 (en) | 2006-08-29 | 2015-05-12 | Attributor Corporation | Content monitoring and compliance enforcement |
US9342670B2 (en) | 2006-08-29 | 2016-05-17 | Attributor Corporation | Content monitoring and host compliance evaluation |
US9436810B2 (en) | 2006-08-29 | 2016-09-06 | Attributor Corporation | Determination of copied content, including attribution |
US8935745B2 (en) | 2006-08-29 | 2015-01-13 | Attributor Corporation | Determination of originality of content |
US9842200B1 (en) | 2006-08-29 | 2017-12-12 | Attributor Corporation | Content monitoring and host compliance evaluation |
US7971143B2 (en) * | 2006-10-31 | 2011-06-28 | Microsoft Corporation | Senseweb |
US20080104530A1 (en) * | 2006-10-31 | 2008-05-01 | Microsoft Corporation | Senseweb |
US20080133346A1 (en) * | 2006-11-30 | 2008-06-05 | Jyh-Herng Chow | Human responses and rewards for requests at web scale |
US10242415B2 (en) | 2006-12-20 | 2019-03-26 | Digimarc Corporation | Method and system for determining content treatment |
US9179200B2 (en) | 2007-03-14 | 2015-11-03 | Digimarc Corporation | Method and system for determining content treatment |
US9785841B2 (en) | 2007-03-14 | 2017-10-10 | Digimarc Corporation | Method and system for audio-video signal processing |
US20080228733A1 (en) * | 2007-03-14 | 2008-09-18 | Davis Bruce L | Method and System for Determining Content Treatment |
US20080294647A1 (en) * | 2007-05-21 | 2008-11-27 | Arun Ramaswamy | Methods and apparatus to monitor content distributed by the internet |
US20080313026A1 (en) * | 2007-06-15 | 2008-12-18 | Robert Rose | System and method for voting in online competitions |
US8788334B2 (en) * | 2007-06-15 | 2014-07-22 | Social Mecca, Inc. | Online marketing platform |
US8788335B2 (en) | 2007-06-15 | 2014-07-22 | Social Mecca, Inc. | Content distribution system including cost-per-engagement based advertising |
US20080313040A1 (en) * | 2007-06-15 | 2008-12-18 | Robert Rose | Content distribution system including cost-per-engagement based advertising |
US20080313011A1 (en) * | 2007-06-15 | 2008-12-18 | Robert Rose | Online marketing platform |
US20100094849A1 (en) * | 2007-08-17 | 2010-04-15 | Robert Rose | Systems and methods for creating user generated content incorporating content from a content catalog |
US20090119264A1 (en) * | 2007-11-05 | 2009-05-07 | Chacha Search, Inc | Method and system of accessing information |
US20090124354A1 (en) * | 2007-11-12 | 2009-05-14 | Acres-Fiore, Inc. | Method for attributing gameplay credit to a player |
US20090124355A1 (en) * | 2007-11-12 | 2009-05-14 | Acres-Fiore, Inc. | System for attributing gameplay credit to a player |
US8656298B2 (en) | 2007-11-30 | 2014-02-18 | Social Mecca, Inc. | System and method for conducting online campaigns |
US20100287201A1 (en) * | 2008-01-04 | 2010-11-11 | Koninklijke Philips Electronics N.V. | Method and a system for identifying elementary content portions from an edited content |
US10438152B1 (en) * | 2008-01-25 | 2019-10-08 | Amazon Technologies, Inc. | Managing performance of human review of media data |
US20100167256A1 (en) * | 2008-02-14 | 2010-07-01 | Douglas Michael Blash | System and method for global historical database |
US20090240624A1 (en) * | 2008-03-20 | 2009-09-24 | Modasolutions Corporation | Risk detection and assessment of cash payment for electronic purchase transactions |
US9503545B2 (en) * | 2008-04-17 | 2016-11-22 | Gary Stephen Shuster | Evaluation of remote user attributes in a social networking environment |
US20140025741A1 (en) * | 2008-04-17 | 2014-01-23 | Gary Stephen Shuster | Evaluation of remote user attributes in a social networking environment |
US8380503B2 (en) * | 2008-06-23 | 2013-02-19 | John Nicholas and Kristin Gross Trust | System and method for generating challenge items for CAPTCHAs |
US9558337B2 (en) | 2008-06-23 | 2017-01-31 | John Nicholas and Kristin Gross Trust | Methods of creating a corpus of spoken CAPTCHA challenges |
US8489399B2 (en) | 2008-06-23 | 2013-07-16 | John Nicholas and Kristin Gross Trust | System and method for verifying origin of input through spoken language analysis |
US9653068B2 (en) | 2008-06-23 | 2017-05-16 | John Nicholas and Kristin Gross Trust | Speech recognizer adapted to reject machine articulations |
US20090319274A1 (en) * | 2008-06-23 | 2009-12-24 | John Nicholas Gross | System and Method for Verifying Origin of Input Through Spoken Language Analysis |
US9075977B2 (en) | 2008-06-23 | 2015-07-07 | John Nicholas and Kristin Gross Trust U/A/D Apr. 13, 2010 | System for using spoken utterances to provide access to authorized humans and automated agents |
US20090319271A1 (en) * | 2008-06-23 | 2009-12-24 | John Nicholas Gross | System and Method for Generating Challenge Items for CAPTCHAs |
US8868423B2 (en) | 2008-06-23 | 2014-10-21 | John Nicholas and Kristin Gross Trust | System and method for controlling access to resources with a spoken CAPTCHA test |
US10013972B2 (en) | 2008-06-23 | 2018-07-03 | J. Nicholas and Kristin Gross Trust U/A/D Apr. 13, 2010 | System and method for identifying speakers |
US8949126B2 (en) * | 2008-06-23 | 2015-02-03 | The John Nicholas and Kristin Gross Trust | Creating statistical language models for spoken CAPTCHAs |
US10276152B2 (en) | 2008-06-23 | 2019-04-30 | J. Nicholas and Kristin Gross | System and method for discriminating between speakers for authentication |
US8494854B2 (en) | 2008-06-23 | 2013-07-23 | John Nicholas and Kristin Gross | CAPTCHA using challenges optimized for distinguishing between humans and machines |
US20090319270A1 (en) * | 2008-06-23 | 2009-12-24 | John Nicholas Gross | CAPTCHA Using Challenges Optimized for Distinguishing Between Humans and Machines |
US20140316786A1 (en) * | 2008-06-23 | 2014-10-23 | John Nicholas And Kristin Gross Trust U/A/D April 13, 2010 | Creating statistical language models for audio CAPTCHAs |
US20090325696A1 (en) * | 2008-06-27 | 2009-12-31 | John Nicholas Gross | Pictorial Game System & Method |
US9295917B2 (en) | 2008-06-27 | 2016-03-29 | The John Nicholas and Kristin Gross Trust | Progressive pictorial and motion based CAPTCHAs |
US20090328150A1 (en) * | 2008-06-27 | 2009-12-31 | John Nicholas Gross | Progressive Pictorial & Motion Based CAPTCHAs |
US20090325661A1 (en) * | 2008-06-27 | 2009-12-31 | John Nicholas Gross | Internet Based Pictorial Game System & Method |
US9474978B2 (en) | 2008-06-27 | 2016-10-25 | John Nicholas and Kristin Gross | Internet based pictorial game system and method with advertising |
US9266023B2 (en) | 2008-06-27 | 2016-02-23 | John Nicholas and Kristin Gross | Pictorial game system and method |
US9789394B2 (en) | 2008-06-27 | 2017-10-17 | John Nicholas and Kristin Gross Trust | Methods for using simultaneous speech inputs to determine an electronic competitive challenge winner |
US9192861B2 (en) | 2008-06-27 | 2015-11-24 | John Nicholas and Kristin Gross Trust | Motion, orientation, and touch-based CAPTCHAs |
US9186579B2 (en) | 2008-06-27 | 2015-11-17 | John Nicholas and Kristin Gross Trust | Internet based pictorial game system and method |
US8752141B2 (en) | 2008-06-27 | 2014-06-10 | John Nicholas | Methods for presenting and determining the efficacy of progressive pictorial and motion-based CAPTCHAs |
US20120054194A1 (en) * | 2009-05-08 | 2012-03-01 | Dolby Laboratories Licensing Corporation | Storing and Searching Fingerprints Derived from Media Content Based on a Classification of the Media Content |
US9075897B2 (en) * | 2009-05-08 | 2015-07-07 | Dolby Laboratories Licensing Corporation | Storing and searching fingerprints derived from media content based on a classification of the media content |
WO2011022051A1 (en) * | 2009-08-18 | 2011-02-24 | Alibaba Group Holding Limited | User verification using voice based password |
US8869254B2 (en) | 2009-08-18 | 2014-10-21 | Alibaba Group Holding Limited | User verification using voice based password |
US20110047607A1 (en) * | 2009-08-18 | 2011-02-24 | Alibaba Group Holding Limited | User verification using voice based password |
US11323347B2 (en) | 2009-09-30 | 2022-05-03 | Www.Trustscience.Com Inc. | Systems and methods for social graph data analytics to determine connectivity within a community |
US8458010B1 (en) | 2009-10-13 | 2013-06-04 | Amazon Technologies, Inc. | Monitoring and enforcing price parity |
US11665072B2 (en) | 2009-10-23 | 2023-05-30 | Www.Trustscience.Com Inc. | Parallel computational framework and application server for determining path connectivity |
US10812354B2 (en) | 2009-10-23 | 2020-10-20 | Www.Trustscience.Com Inc. | Parallel computational framework and application server for determining path connectivity |
US10348586B2 (en) | 2009-10-23 | 2019-07-09 | Www.Trustscience.Com Inc. | Parallel computatonal framework and application server for determining path connectivity |
US9069771B2 (en) * | 2009-12-08 | 2015-06-30 | Xerox Corporation | Music recognition method and system based on socialized music server |
US20110137855A1 (en) * | 2009-12-08 | 2011-06-09 | Xerox Corporation | Music recognition method and system based on socialized music server |
US20110161225A1 (en) * | 2009-12-30 | 2011-06-30 | Infosys Technologies Limited | Method and system for processing loan applications in a financial institution |
US20110265162A1 (en) * | 2010-04-21 | 2011-10-27 | International Business Machines Corporation | Holistic risk-based identity establishment for eligibility determinations in context of an application |
US8375427B2 (en) * | 2010-04-21 | 2013-02-12 | International Business Machines Corporation | Holistic risk-based identity establishment for eligibility determinations in context of an application |
US20110295591A1 (en) * | 2010-05-28 | 2011-12-01 | Palo Alto Research Center Incorporated | System and method to acquire paraphrases |
US9672204B2 (en) * | 2010-05-28 | 2017-06-06 | Palo Alto Research Center Incorporated | System and method to acquire paraphrases |
US9053182B2 (en) | 2011-01-27 | 2015-06-09 | International Business Machines Corporation | System and method for making user generated audio content on the spoken web navigable by community tagging |
US20140114984A1 (en) * | 2012-04-19 | 2014-04-24 | Wonga Technology Limited | Method and system for user authentication |
US20140211044A1 (en) * | 2013-01-25 | 2014-07-31 | Electronics And Telecommunications Research Institute | Method and system for generating image knowledge contents based on crowdsourcing |
US10083186B2 (en) * | 2013-02-19 | 2018-09-25 | Digitalglobe, Inc. | System and method for large scale crowdsourcing of map data cleanup and correction |
US10346495B2 (en) * | 2013-02-19 | 2019-07-09 | Digitalglobe, Inc. | System and method for large scale crowdsourcing of map data cleanup and correction |
US20140236851A1 (en) * | 2013-02-19 | 2014-08-21 | Digitalglobe, Inc. | Crowdsourced search and locate platform |
US20140233863A1 (en) * | 2013-02-19 | 2014-08-21 | Digitalglobe, Inc. | Crowdsourced search and locate platform |
US11294981B2 (en) * | 2013-02-19 | 2022-04-05 | Digitalglobe, Inc. | System and method for large scale crowdsourcing of map data cleanup and correction |
US10078645B2 (en) * | 2013-02-19 | 2018-09-18 | Digitalglobe, Inc. | Crowdsourced feature identification and orthorectification |
US9128959B2 (en) * | 2013-02-19 | 2015-09-08 | Digitalglobe, Inc. | Crowdsourced search and locate platform |
US9122708B2 (en) * | 2013-02-19 | 2015-09-01 | Digitalglobe Inc. | Crowdsourced search and locate platform |
US9830588B2 (en) * | 2013-02-26 | 2017-11-28 | Digimarc Corporation | Methods and arrangements for smartphone payments |
US20140244495A1 (en) * | 2013-02-26 | 2014-08-28 | Digimarc Corporation | Methods and arrangements for smartphone payments |
US8909475B2 (en) | 2013-03-08 | 2014-12-09 | Zzzoom, LLC | Generating transport routes using public and private modes |
US9082134B2 (en) | 2013-03-08 | 2015-07-14 | Zzzoom, LLC | Displaying advertising using transit time data |
US20140258110A1 (en) * | 2013-03-11 | 2014-09-11 | Digimarc Corporation | Methods and arrangements for smartphone payments and transactions |
US9294456B1 (en) * | 2013-07-25 | 2016-03-22 | Amazon Technologies, Inc. | Gaining access to an account through authentication |
US20150106265A1 (en) * | 2013-10-11 | 2015-04-16 | Telesign Corporation | System and methods for processing a communication number for fraud prevention |
US20150161611A1 (en) * | 2013-12-10 | 2015-06-11 | Sas Institute Inc. | Systems and Methods for Self-Similarity Measure |
US9954942B2 (en) | 2013-12-11 | 2018-04-24 | Entit Software Llc | Result aggregation |
US11049094B2 (en) | 2014-02-11 | 2021-06-29 | Digimarc Corporation | Methods and arrangements for device to device communication |
WO2015157344A3 (en) * | 2014-04-07 | 2015-12-10 | Digitalglobe, Inc. | Systems and methods for large scale crowdsourcing of map data location, cleanup, and correction |
US10380703B2 (en) | 2015-03-20 | 2019-08-13 | Www.Trustscience.Com Inc. | Calculating a trust score |
US11900479B2 (en) | 2015-03-20 | 2024-02-13 | Www.Trustscience.Com Inc. | Calculating a trust score |
US10325603B2 (en) * | 2015-06-17 | 2019-06-18 | Baidu Online Network Technology (Beijing) Co., Ltd. | Voiceprint authentication method and apparatus |
US11386129B2 (en) | 2016-02-17 | 2022-07-12 | Www.Trustscience.Com Inc. | Searching for entities based on trust score and geography |
US11341145B2 (en) | 2016-02-29 | 2022-05-24 | Www.Trustscience.Com Inc. | Extrapolating trends in trust scores |
EP3223228A1 (en) * | 2016-03-21 | 2017-09-27 | Facebook Inc. | Systems and methods for identifying matching content in a social network |
US11640569B2 (en) | 2016-03-24 | 2023-05-02 | Www.Trustscience.Com Inc. | Learning an entity's trust model and risk tolerance to calculate its risk-taking score |
US11210417B2 (en) | 2016-09-26 | 2021-12-28 | Advanced New Technologies Co., Ltd. | Identity recognition method and device |
US10419489B2 (en) * | 2017-05-04 | 2019-09-17 | International Business Machines Corporation | Unidirectional trust based decision making for information technology conversation agents |
US20190042961A1 (en) * | 2017-08-07 | 2019-02-07 | Securiport Llc | Multi-mode data collection and traveler processing |
US11482242B2 (en) * | 2017-10-18 | 2022-10-25 | Beijing Dajia Internet Information Technology Co., Ltd. | Audio recognition method, device and server |
US11321774B2 (en) | 2018-01-30 | 2022-05-03 | Pointpredictive, Inc. | Risk-based machine learning classifier |
US11651083B2 (en) | 2018-10-31 | 2023-05-16 | Capital One Services, Llc | Methods and systems for reducing false positive findings |
US10395041B1 (en) * | 2018-10-31 | 2019-08-27 | Capital One Services, Llc | Methods and systems for reducing false positive findings |
US10929543B2 (en) | 2018-10-31 | 2021-02-23 | Capital One Services, Llc | Methods and systems for reducing false positive findings |
CN110674704A (en) * | 2019-09-05 | 2020-01-10 | 同济大学 | Crowd density estimation method and device based on multi-scale expansion convolutional network |
US11423405B2 (en) | 2019-09-10 | 2022-08-23 | International Business Machines Corporation | Peer validation for unauthorized transactions |
US11966372B1 (en) * | 2020-05-01 | 2024-04-23 | Bottomline Technologies, Inc. | Database record combination |
US11968105B2 (en) | 2022-04-14 | 2024-04-23 | Www.Trustscience.Com Inc. | Systems and methods for social graph data analytics to determine connectivity within a community |
Also Published As
Publication number | Publication date |
---|---|
US9292513B2 (en) | 2016-03-22 |
US8868917B2 (en) | 2014-10-21 |
US20130297942A1 (en) | 2013-11-07 |
US8688999B2 (en) | 2014-04-01 |
US20150106389A1 (en) | 2015-04-16 |
US20080208849A1 (en) | 2008-08-28 |
US20130110870A1 (en) | 2013-05-02 |
US8458482B2 (en) | 2013-06-04 |
US10007723B2 (en) | 2018-06-26 |
US8341412B2 (en) | 2012-12-25 |
US20140156691A1 (en) | 2014-06-05 |
US20160246878A1 (en) | 2016-08-25 |
US20120123959A1 (en) | 2012-05-17 |
Similar Documents
Publication | Title |
---|---|
US20120123959A1 (en) | Methods and Systems to Help Detect Identity Fraud | |
US11748469B1 (en) | Multifactor identity authentication via cumulative dynamic contextual identity | |
US7693767B2 (en) | Method for generating predictive models for a business problem via supervised learning | |
AU2018201140B2 (en) | System and method for candidate profile screening | |
US20090150166A1 (en) | Hiring process by using social networking techniques to verify job seeker information | |
US20060010487A1 (en) | System and method of verifying personal identities | |
CN108596638A (en) | Anti- fraud recognition methods and system based on big data, terminal and storage medium | |
US20230086644A1 (en) | Cryptographically Enabling Characteristic Assignment to Identities with Tokens, Token Validity Assessments and State Capture Processes | |
TW202034262A (en) | Loan matching system and method | |
CN109065180A (en) | Shared knowledge platform system applied to medical information | |
Rutskiy et al. | Prospects for the Use of Artificial Intelligence to Combat Fraud in Bank Payments | |
CN115080858A (en) | Data recommendation method and device under multi-party collaboration scene | |
US20070055673A1 (en) | Verified personal credit search system and method thereof | |
Khan et al. | Utilizing the collective wisdom of fintech in the gcc region: A systematic mapping approach | |
KR20010044544A (en) | A service method of internet credit |
Slomovic | Privacy issues in identity verification | |
Turnbull et al. | Private government, property rights and uncertain neighbourhood externalities: Evidence from gated communities | |
CN108520334A (en) | A kind of occupation reference method and apparatus | |
Chaturvedi et al. | India: Unique identification authority | |
Sutradhar et al. | Distribution and Usage of Digital Payment Cards in India: Findings from NSS 77th Round Survey | |
CN115689811A (en) | Block chain-based electronic will advice generation method, asset inheritance method and system | |
Xie et al. | FBN: Federated Bert Network with client-server architecture for cross-lingual signature verification | |
van der Straaten | African Countries Struggle to Build Robust Identity Systems. But That May Soon Change |
KR20230049330A (en) | Method of issuing portfolio system based on block chain network, computer readable medium and system for performing the method | |
CN116662281A (en) | Paid resource sharing service method and system based on block chain |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: DIGIMARC CORPORATION, OREGON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DAVIS, BRUCE L.;CONWELL, WILLIAM Y.;REEL/FRAME:019031/0641;SIGNING DATES FROM 20070131 TO 20070201 |
|
AS | Assignment |
Owner name: L-1 SECURE CREDENTIALING, INC., MASSACHUSETTS Free format text: MERGER/CHANGE OF NAME;ASSIGNOR:DIGIMARC CORPORATION;REEL/FRAME:022169/0973 Effective date: 20080813 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., ILLINOIS Free format text: NOTICE OF GRANT OF SECURITY INTEREST IN PATENTS;ASSIGNOR:L-1 SECURE CREDENTIALING, INC.;REEL/FRAME:022584/0307 Effective date: 20080805 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |