US9972055B2 - Fact checking method and system utilizing social networking information - Google Patents

Fact checking method and system utilizing social networking information

Info

Publication number
US9972055B2
US9972055B2 (application US14/260,492, US201414260492A)
Authority
US
United States
Prior art keywords
information
user
social networking
contacts
sources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/260,492
Other versions
US20150248736A1 (en)
Inventor
Lucas J. Myslinski
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US14/260,492: US9972055B2
Priority to US14/729,223: US9892109B2
Publication of US20150248736A1
Priority to US15/422,642: US9643722B1
Priority to US15/472,858: US10035594B2
Priority to US15/472,894: US10035595B2
Priority to US15/628,907: US10183748B2
Priority to US15/868,193: US10061318B2
Application granted
Publication of US9972055B2
Priority to US16/017,133: US10196144B2
Priority to US16/017,168: US10183749B2
Priority to US16/017,510: US10160542B2
Priority to US16/017,536: US10301023B2
Priority to US16/126,672: US10220945B1
Priority to US16/169,328: US10538329B2
Priority to US16/372,933: US10562625B2
Priority to US16/695,947: US10974829B2
Priority to US16/696,033: US11180250B2
Priority to US17/194,569: US20210188437A1
Priority to US17/504,782: US20220033077A1
Legal status: Active (expiration adjusted)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking

Definitions

  • the present invention relates to the field of information analysis. More specifically, the present invention relates to the field of automatically verifying the factual accuracy of information.
  • a social networking fact checking system analyzes and determines the factual accuracy of information and/or characterizes the information by comparing the information with source information.
  • the social networking fact checking system automatically monitors information, processes the information, fact checks the information and/or provides a status of the information.
  • the social networking fact checking system provides users with factually accurate information, limits the spread of misleading or incorrect information, provides additional revenue streams, and supports many other advantages.
  • FIG. 1 illustrates a flowchart of a method of implementing fact checking according to some embodiments.
  • FIG. 2 illustrates a block diagram of an exemplary computing device configured to implement the fact checking method according to some embodiments.
  • FIG. 3 illustrates a network of devices configured to implement fact checking according to some embodiments.
  • FIG. 4 illustrates a flowchart of a method of implementing social fact checking according to some embodiments.
  • FIG. 5 illustrates a flowchart of a method of utilizing social network contacts for fact checking according to some embodiments.
  • FIG. 6 illustrates a flowchart of a method of fact checking a user for registration according to some embodiments.
  • FIG. 7 illustrates a flowchart of a method of determining a validity rating based on contacts' information according to some embodiments.
  • FIG. 8 illustrates an exemplary web of lies according to some embodiments.
  • FIG. 9 illustrates an exemplary web of lies in timeline format according to some embodiments.
  • FIG. 10 illustrates a flowchart of a method of affecting a user based on a validity rating according to some embodiments.
  • FIG. 11 illustrates a flowchart of a method of connecting users based on similar content or validity rating according to some embodiments.
  • FIG. 12 illustrates a flowchart of a method of fact checking mapping information.
  • FIG. 13 illustrates a flowchart of a method of using an icon to indicate a validity rating or the validity of information provided by an entity according to some embodiments.
  • FIG. 14 illustrates a flowchart of a method of awarding honors for fact checking according to some embodiments.
  • FIG. 15 illustrates a flowchart of a method of touchscreen fact checking according to some embodiments.
  • a fact checking system utilizing social networking information determines the factual accuracy of information by comparing the information with source information. Additional analysis, such as characterizing the information, is able to be implemented as well.
  • FIG. 1 illustrates a flowchart of a method of implementing fact checking according to some embodiments.
  • In the step 100, information is monitored. In some embodiments, all information or only some information (e.g., a subset less than all of the information) is monitored. In some embodiments, only explicitly selected information is monitored. In some embodiments, although all information is monitored, only some information (e.g., information deemed to be fact-based) is fact checked.
  • the information includes, but is not limited to, broadcast information (e.g., television broadcast information, radio broadcast information), email, documents, database information, social networking/media content (tweets/Twitter®, Facebook® postings), webpages, message boards, web logs, any computing device communication, telephone calls/communications, audio, text, live speeches/audio, radio, television video/text/audio, VoIP calls, video chatting, video conferencing, images, videos, and/or any other information.
  • the information is able to be in the form of phrases, segments, sentences, numbers, words, comments, values, graphics, and/or any other form.
  • monitoring includes recording, scanning, capturing, transmitting, tracking, collecting, surveying, and/or any other type of monitoring. In some embodiments, monitoring includes determining if a portion of the information is able to be fact checked. For example, if information has a specified structure, then it is able to be fact checked.
  • the social networking fact checking system is implemented without monitoring information. This is able to be implemented in any manner. For example, while information is transmitted from a source, the information is also processed and fact checked so that the fact check result is able to be presented. In some embodiments, the fact check result is embedded in the same stream as the information. In some embodiments, the fact check result is in the header of a packet.
  • In the step 102, the information is processed.
  • Processing is able to include many aspects including, but not limited to, converting (e.g., audio into text), formatting, parsing, determining context, transmitting, converting an image into text, analyzing and reconfiguring, and/or any other aspect that enables the information to be fact checked. Parsing, for example, includes separating a long speech into separate phrases that are each separately fact checked. For example, a speech may include 100 different facts that should be separately fact checked.
  • the step 102 is able to be skipped if processing is not necessary (e.g., text may not need to be processed).
  • processing includes converting the information into a searchable format.
  • processing occurs concurrently with monitoring.
  • processing includes capturing/receiving and/or transmitting the information (e.g., to/from the cloud).
  • information is converted into searchable information (e.g., audio is converted into searchable text), and then the searchable information is parsed into fact checkable portions (e.g., segments of the searchable text; several word phrases).
  • Parsing is able to be implemented in any manner including, but not limited to: based on sentence structure (e.g., subject/verb determination); based on punctuation including, but not limited to, end punctuation of each sentence (e.g., period, question mark, exclamation point) and intermediate punctuation such as commas and semi-colons; based on other grammatical features such as conjunctions; based on capital letters; based on a duration of a pause between words (e.g., 2 seconds); based on a duration of a pause between words by comparison (e.g., typical pauses between words for a user are 0.25 seconds and pauses between thoughts are 1 second, so the user's speech is able to be analyzed to determine speech patterns such as pauses between words lasting a fourth of the length of pauses between thoughts or sentences); based on a change of a speaker (e.g., speaker A is talking, then speaker B starts talking); or based on a word count (e.g., 10-word segments).
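The punctuation-based variant of the parsing described above can be sketched in a few lines (illustrative Python; the function name and choice of delimiters are assumptions, not part of the patent, and the pause-, speaker- and word-count-based segmentation strategies are omitted):

```python
import re

def parse_segments(text):
    """Split monitored text into separately fact-checkable phrases.

    Minimal sketch: segmentation here uses only end punctuation
    (period, question mark, exclamation point) and the semi-colon; the
    patent also contemplates pauses, speaker changes and word counts.
    """
    parts = re.split(r"[.;!?]+", text)
    # Discard empty fragments and surrounding whitespace.
    return [p.strip() for p in parts if p.strip()]
```

Each returned phrase would then be fact checked separately, in the spirit of the 100-fact speech example above.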
  • In the step 104, the information is fact checked.
  • Fact checking includes comparing the information to source information to determine the factual validity, accuracy, quality, character and/or type of the information.
  • the source information includes web pages on the Internet, one or more databases, dictionaries, encyclopedias, social network information, video, audio, any other communication, any other data, one or more data stores and/or any other source.
  • the comparison is a text comparison such as a straight word for word text comparison.
  • the comparison is a context/contextual comparison.
  • a natural language comparison is used.
  • pattern matching is utilized.
  • an intelligent comparison is implemented to perform the fact check.
  • exact match, pattern matching, natural language, intelligence, context, and/or any combination thereof is used for the comparison. Any method of analyzing the source information and/or comparing the information to the source information to analyze and/or characterizing the information is able to be implemented.
  • An exemplary implementation of fact checking includes searching (e.g., a search engine's search), parsing the results or searching through the results of the search, comparing the results with the information to be checked using one or more of the comparisons (e.g., straight text, context or intelligent) and retrieving results based on the comparison (e.g., if a match is found return “True”).
  • the results are able to be any type including, but not limited to, binary, Boolean (True/False), text, numerical, and/or any other format.
  • determining context and/or other aspects of converting could be implemented in the step 104.
  • the sources are rated and/or weighted.
  • sources are able to be given more weight based on accuracy of the source, type of the source, user preference, user selections, classification of the source, and/or any other weighting factor.
  • the weighting is then able to be used in determining the fact check result. For example, if a highly weighted or rated source agrees with a comment, and a low weighted source disagrees with the comment, the higher weighted source is used, and “valid” or a similar result is returned.
  • Determining that a source agrees with information is able to be implemented in any manner, for example, by comparing the information with the source and finding a matching result; a source is determined to disagree with information when the comparison of the information and the source does not find a match.
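The source weighting described above can be sketched as follows (illustrative Python; the (text, weight) tuple representation and the straight-text notion of "agreement" are assumptions standing in for the contextual/intelligent comparisons also contemplated):

```python
def fact_check(statement, sources):
    """Return "valid" when weighted agreeing sources outweigh weighted
    disagreeing ones.

    Each source is a (text, weight) pair; a source "agrees" when a
    straight text comparison finds the statement in the source text,
    and "disagrees" otherwise.
    """
    agree = sum(w for text, w in sources if statement in text)
    disagree = sum(w for text, w in sources if statement not in text)
    return "valid" if agree > disagree else "invalid"
```

A single highly weighted source (e.g., weight 9) thereby overrides a low weighted disagreeing source (e.g., weight 2), matching the example in the text.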
  • In the step 106, a status of the information is provided based on the fact check result.
  • the status is provided in any manner including, but not limited to, transmitting and/or displaying text, highlighting, underlining, color effects, a visual or audible alert or alarm, a graphical representation, and/or any other indication.
  • the meaning of the status is able to be any meaning including, but not limited to, correct, incorrect, valid, true, false, invalid, opinion, hyperbole, sarcasm, hypocritical, comedy, unknown, questionable, suspicious, need more information, misleading, deceptive, possibly, close to the truth, and/or any other status.
  • the status is able to be presented in any manner, including, but not limited to, lights, audio/sounds, highlighting, text, a text bubble, a scrolling text, color gradient, headnotes/footnotes, an iconic or graphical representation, a video or video clip, music, other visual or audio indicators, a projection, a hologram, a tactile indicator including, but not limited to, vibrations, an olfactory indicator, a Tweet, a text message (SMS, MMS), an email, a page, a phone call, a social networking page/transmission/post/content, or any combination thereof.
  • text is able to be highlighted or the text color is able to change based on the validity of the text.
  • providing the status includes transmitting and/or broadcasting the status to one or more devices (e.g., televisions).
  • the status is also able to include other information including, but not limited to, statistics, citations and/or quotes.
  • Providing the status of the information is also able to include providing additional information related to the fact checked information, such as an advertisement.
  • providing includes pointing out, showing, displaying, recommending, playing, presenting, announcing, arguing, convincing, signaling, asserting, persuading, demonstrating, denoting, expressing, hinting, illustrating, implying, tagging, labeling, characterizing, and/or revealing.
  • the fact checking system is implemented such that responses, validity determinations and/or status presentations are available in real-time or near real-time.
  • By real-time, it is meant instantaneously (e.g., within 1 second); whereas near real-time is within a few seconds (e.g., within 5 seconds).
  • real-time also means faster than having a human perform the search and presenting results.
  • the indication is presented in at most 1 second, at most several seconds (e.g., at most 5 seconds), at most a minute (not real-time), at most several minutes or by the end of a show.
  • the time amount begins once a user pauses in typing, once a phrase has been communicated, once a phrase has been determined, at the end of a sentence, once an item is flagged, or another point in a sequence.
  • the fact checking system checks the fact, returns a result and displays an indication based on the result in less than 1 second—clearly much faster than a human performing a search, analyzing the search results and then typing a result to be displayed on a screen.
  • an indication is displayed to compare the fact check result with other fact check results for other users.
  • fact check implementations are able to be different for different users based on selections such as approvals of sources and processing selections which are able to result in different fact check results. Therefore, if User A is informed that X information is determined to be “false,” an indication indicates that X information was determined to be “true” for 50 other people.
  • usernames are indicated (e.g., X information was determined to be “true” for Bob). In some embodiments, usernames and/or results are only provided if their result is different from the user's result.
  • the number of users whose result matches the user's result is indicated.
  • the indication only indicates what the results were for contacts (e.g., social networking contacts) of the user. In some embodiments, the indication is only indicated if the results were different (e.g., true for user, but false for others). In some embodiments, the indication includes numbers or percentages of other fact check implementations (e.g., true for 50 users and false for 500 users or 25% true and 75% false). In some embodiments, indications are only indicated for specific users or classes of users. For example, only results of users classified as “members of the media” are indicated. In another example, a user is able to select whose results are indicated. In some embodiments, only results of users with a validity rating above a threshold are indicated.
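The per-user comparison indications described above amount to counting and summarizing other users' results; a sketch (illustrative Python; the function name and the returned shape are assumptions):

```python
from collections import Counter

def compare_results(user_result, other_results):
    """Summarize other users' fact check results relative to the user's.

    Returns the percentage breakdown of the other results and the
    number of users whose result matches the user's (e.g., "25% true
    and 75% false", with 25 matching users).
    """
    counts = Counter(other_results)
    total = len(other_results)
    percents = {result: round(100 * n / total) for result, n in counts.items()}
    return {"percent": percents, "matching_user": counts[user_result]}
```

Filtering to contacts only, or to users above a validity-rating threshold, would simply restrict the `other_results` input before calling the function.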
  • fewer or more steps are implemented. Furthermore, in some embodiments, the order of the steps is modified. In some embodiments, the steps are performed on the same device, and in some embodiments, one or more of the steps, or parts of the steps, are separately performed and/or performed on separate devices. In some embodiments, each of the steps 100, 102, 104 and 106 occurs or is able to occur in real-time or non-real-time. Any combination of real-time and non-real-time steps is possible, such as all real-time, none real-time and everything in between.
  • FIG. 2 illustrates a block diagram of an exemplary computing device 200 configured to implement the fact checking method according to some embodiments.
  • the computing device 200 is able to be used to acquire, store, compute, process, communicate and/or display information including, but not limited to, text, images, videos and audio.
  • the computing device 200 is able to be used to monitor information, process the information, fact check the information and/or provide a status of the information.
  • a hardware structure suitable for implementing the computing device 200 includes a network interface 202, a memory 204, a processor 206, I/O device(s) 208, a bus 210 and a storage device 212.
  • the choice of processor is not critical as long as a suitable processor with sufficient speed is chosen.
  • the memory 204 is able to be any conventional computer memory known in the art.
  • the storage device 212 is able to include a hard drive, CDROM, CDRW, DVD, DVDRW, flash memory card, solid state drive or any other storage device.
  • the computing device 200 is able to include one or more network interfaces 202 .
  • An example of a network interface includes a network card connected to an Ethernet or other type of LAN.
  • the I/O device(s) 208 are able to include one or more of the following: keyboard, mouse, monitor, display, printer, modem, touchscreen, touchpad, speaker/microphone, voice input device, button interface, hand-waving, body-motion capture, touchless 3D input, joystick, remote control, brain-computer interface/direct neural interface/brain-machine interface, camera, and other devices.
  • the hardware structure includes multiple processors and other hardware to perform parallel processing.
  • Fact checking application(s) 230 used to perform the monitoring, processing, fact checking and providing are likely to be stored in the storage device 212 and memory 204 and processed as applications are typically processed. More or fewer components than shown in FIG. 2 are able to be included in the computing device 200.
  • fact checking hardware 220 is included.
  • although the computing device 200 in FIG. 2 includes applications 230 and hardware 220 for implementing the fact checking, the fact checking method is able to be implemented on a computing device in hardware, firmware, software or any combination thereof.
  • the fact checking applications 230 are programmed in a memory and executed using a processor.
  • the fact checking hardware 220 is programmed hardware logic including gates specifically designed to implement the method.
  • the fact checking application(s) 230 include several applications and/or modules. Modules include a monitoring module for monitoring information, a processing module for processing (e.g., converting) information, a fact checking module for fact checking information and a providing module for providing a status of the information. In some embodiments, modules include one or more sub-modules as well. In some embodiments, fewer or additional modules are able to be included. In some embodiments, the applications and/or the modules are located on different devices. For example, a device performs monitoring, processing, and fact checking, but the providing is performed on a different device, or in another example, the monitoring and processing occurs on a first device, the fact checking occurs on a second device and the providing occurs on a third device. Any configuration of where the applications/modules are located is able to be implemented such that the fact checking system is executed.
  • Suitable computing devices include, but are not limited to, a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a pager, a telephone, a fax machine, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a smart phone/device (e.g., a Droid® or an iPhone®), a portable music player (e.g., an iPod®), a tablet (e.g., an iPad®), a video player, an e-reader (e.g., Kindle™), a DVD writer/player, an HD (e.g., Blu-ray®) or ultra high density writer/player, a television, a copy machine, a scanner, a car stereo, a stereo, a satellite, a DVR (e.g., TiVo®), a smart watch/jewelry, smart devices, a home entertainment system, and/or any other suitable computing device.
  • FIG. 3 illustrates a network of devices configured to implement fact checking according to some embodiments.
  • the network of devices 300 is able to include any number and variety of devices including, but not limited to, a computing device (e.g., a tablet) 302, a television 304, a smart device 306 (e.g., a smart phone) and a source 308 (e.g., a database) coupled through a network 310 (e.g., the Internet).
  • the source device 308 is able to be any device containing source information including, but not limited to, a searchable database, web pages, transcripts, statistics, historical information, or any other information or device that provides information.
  • the network 310 is able to be any network or networks including, but not limited to, the Internet, an intranet, a LAN/WAN/MAN, wireless, wired, Ethernet, satellite, a combination of networks, or any other implementation of communicating.
  • the devices are able to communicate with each other through the network 310 or directly to each other.
  • One or more of the devices is able to be an end user device, a media organization, a company and/or another entity.
  • peer-to-peer sourcing is implemented. For example, the source data to be compared with is not stored on a centralized source but is found on peer sources.
  • social fact checking is implemented.
  • only a user's content and/or sources and/or a user's contacts' content and/or sources are used for fact checking.
  • the source information is able to be limited in any manner such as by generating a database and filling the database only with information found in the user's contacts' content/sources.
  • the social fact checking only utilizes content that the user has access to, such that content and/or sources of users of a social networking system who are not contacts of the user are not accessible by the user and are not used by the fact checking system.
  • source information is limited to social networking information such that the social networking information is defined as content generated by or for, stored by or for, or controlled by or for a specified social networking entity (e.g., Facebook®, Twitter®, LinkedIn®).
  • the social network entity is able to be recognized by a reference to the entity being stored in a data structure.
  • a database stores the names of social networking entities, and the database is able to be referenced to determine if a source is a social networking source or not.
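Such a reference database can be as simple as a membership lookup (illustrative Python; the entity list and names are assumptions, not from the patent):

```python
# A data structure (here, a set) storing the names of social networking
# entities; membership determines whether a source is a social
# networking source. The entries are illustrative placeholders.
SOCIAL_NETWORKING_ENTITIES = {"facebook.com", "twitter.com", "linkedin.com"}

def is_social_networking_source(domain):
    """Return True if the source domain belongs to a recognized social
    networking entity."""
    return domain.lower() in SOCIAL_NETWORKING_ENTITIES
```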
  • source information is limited to the social networking information that has been shared by a large number of users (e.g., over 1,000) or a very large number of users (e.g., over 1,000,000).
  • the social networking information is able to be shared in any manner such as shared peer-to-peer, shared directly or indirectly between users, shared by sending a communication directly or indirectly via a social networking system.
  • source information is limited to only tweets, only tweets received by at least 100 users, only Facebook® postings that are viewed by at least 100 users, only Facebook® postings of users with 100 or more contacts, only users who are “followed” by 100 or more users, and/or any other limitation or combination of limitations.
  • source information is acquired by monitoring a system such as Twitter®. For example, microblogs (e.g., tweets) are monitored, and in real-time or non-real-time, the tweets are analyzed and incorporated as source information.
  • the tweets are processed (e.g., parsed), fact checked and/or compared with other information, and results and/or other information regarding the tweets are stored as source information.
  • the source information is limited to social networking information and additional source information.
  • source information is limited to social networking information and other sources with a reliability rating above 9 (on a scale of 1 to 10).
  • source information is limited to social networking information and specific sources such as encyclopedias and dictionaries.
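Limiting source information in the ways just listed is a filtering step; a sketch (illustrative Python; the dict representation of a source and its field names are assumptions):

```python
def limit_sources(sources, specific_types=("encyclopedia", "dictionary"), min_rating=9):
    """Keep social networking sources, specific source types (e.g.,
    encyclopedias and dictionaries), and any other source whose
    reliability rating is above the threshold (scale of 1 to 10).
    """
    return [
        s for s in sources
        if s["type"] == "social"
        or s["type"] in specific_types
        or s["rating"] > min_rating
    ]
```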
  • the fact check occurs while the user is logged into the social networking system and uses the content accessible at that time. In some embodiments, if a contact is invited but has not accepted, his content/sources are still used.
  • contacts are able to be separated into different groups, such as employers, employees or by position/level (e.g., partners and associates), and the different groups are able to be used for fact checking.
  • only a user's friends' content and/or sources are used for fact checking.
  • multiple fact checks are implemented based on the groups (e.g., one fact checker including friends' information and a second fact checker including co-workers' information).
  • fact check results are sent to contacts (e.g., social network contacts) of a user.
  • fact check results are shared using social networking.
  • users are able to select if they want to receive fact check results from contacts.
  • users are able to be limited to contacts where they only receive fact check results but do not have other access (e.g., no access to personal information).
  • a user watches a show which is fact checked.
  • the fact check result is sent to the user and his contacts (e.g., via a tweet).
  • only certain types of fact check results are sent to users (e.g., only lies and misinformation). The misinformation and lies are able to be determined in any manner.
  • misinformation is determined automatically by determining the factual accuracy of the information, and if the information is determined to be factually inaccurate, then it is misinformation.
  • Lies are able to be determined by determining information is misinformation and analyzing intent.
  • Intent is able to be analyzed in any manner, for example, context (e.g., additional information, a result) of a statement is analyzed to determine intent.
  • Misinformation, lies and other characterizations are able to be determined using a look-up table which classifies and stores information as factually accurate, misinformation, lies, and/or other characterizations.
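The look-up table described above can be modeled as a dictionary keyed by statement (illustrative Python; the stored entries are assumed placeholders):

```python
# Look-up table classifying previously analyzed statements as factually
# accurate, misinformation, lies, and/or other characterizations.
# The entries here are illustrative only.
CHARACTERIZATIONS = {
    "the earth is flat": "misinformation",
    "i never said that": "lie",
    "water boils at 100 c at sea level": "factually accurate",
}

def characterize(statement):
    """Return the stored characterization, or "unknown" if the
    statement has not been classified."""
    return CHARACTERIZATIONS.get(statement.lower().strip(), "unknown")
```

Manual review and/or crowdsourcing, as noted above, would be the mechanisms that populate such a table.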
  • information is distinguished as misinformation or lies by manual review and/or crowdsourcing.
  • social information is stored/utilized in an efficient manner. For example, personal data is stored in the fastest-access memory location, and non-personal information provided by a user on a social network is stored in a slower location. In some embodiments, information is further prioritized based on popularity, relevance, time (recent/old), and/or any other implementation.
  • FIG. 4 illustrates a flowchart of a method of implementing social fact checking according to some embodiments.
  • information is analyzed. Analyzing is able to include monitoring, processing, and/or other forms of analysis.
  • automatic fact checking is performed utilizing only social network information as source information.
  • the social network information is only social network information from contacts of the user (or contacts of contacts). In some embodiments, social network information is not limited to contacts of the user.
  • in the step 404, manual crowdsourcing fact checking is implemented to generate a result.
  • the manual crowdsourcing is implemented by providing/sending the information to be fact checked where many users are able to find the information, fact check the information and send a response which is used to generate a fact checking result. For example, 1000 users perform manual crowdsourcing fact checking, and 995 of the users send a response indicating the information is false, and the fact checking system generates a fact check result that the information is false.
  • the fact check result is able to be determined in any way, for example, majority rules, percent above/below a threshold or any other way.
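The crowdsourced-result logic just described could be sketched as follows (the function name, the "unverified" label, and the default threshold are illustrative assumptions, not from the specification):

```python
def crowdsource_result(responses, threshold=0.5):
    """Determine a fact check result from crowdsourced true/false votes.

    responses: list of booleans (True = responder says the information is accurate).
    threshold: fraction of votes required to decide; 0.5 implements majority rules.
    """
    if not responses:
        return "unverified"
    true_fraction = sum(responses) / len(responses)
    if true_fraction > threshold:
        return "true"
    if (1 - true_fraction) > threshold:
        return "false"
    return "unverified"

# The example above: 995 of 1000 users respond that the information is false.
votes = [False] * 995 + [True] * 5
print(crowdsource_result(votes))  # -> false
```

Raising the threshold above 0.5 implements the "percent above a threshold" variant instead of simple majority rules.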
  • the result is presented on the user's device. In some embodiments, fewer or additional steps are implemented. In some embodiments, automatic fact checking and crowdsourcing are performed in parallel.
  • an automatic result and crowdsource result are compared, and the result with a higher confidence score is used.
  • both results including the confidence score of each are provided. Confidence of a result is able to be determined in any manner; for example, based on how close source information is to the information, based on the number of agreeing/disagreeing sources, and/or any other manner. For example, if 99 sources agree with a statement (e.g., have the same text as the statement) and only 1 source disagrees with the statement (e.g., has text that indicates or means the opposite of the statement), then the confidence score is 99%.
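The agreeing/disagreeing-source variant of the confidence score described above reduces to a simple proportion (a minimal sketch; the function name and the percentage scale are assumptions):

```python
def confidence_score(agreeing, disagreeing):
    """Confidence of a fact check result as the share of sources that
    agree with the statement, expressed as a percentage."""
    total = agreeing + disagreeing
    if total == 0:
        return 0  # no sources: no basis for confidence
    return round(100 * agreeing / total)

# The example above: 99 agreeing sources, 1 disagreeing source.
print(confidence_score(99, 1))  # -> 99
```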
  • only sources that a user and/or a user's contacts have approved are used for fact checking. Users are able to approve/accept sources in any manner, such as clicking approve after visiting a website, filling out a survey, not clicking disapprove after visiting a website where the site is automatically approved by visiting, approving via social networking (e.g., receiving a link or site or content from a contact), "liking" content, sending a tweet with a hashtag or other communication with the source to approve, selecting content (e.g., from a list of selectable sources), using another social media forum (e.g., items/photos pinned on Pinterest are approved by that user; videos liked on YouTube are approved by those users), or any other implementation.
  • a source is approved if the source is fact checked by the user. In some embodiments, a source is approved if the source has been fact checked by another entity (e.g., automatically by fact checking system), and the user has verified or accepted the fact check results. In some embodiments, a user is able to designate an entity which approves/disapproves of sources for the user. For example, the user selects an automated approval/disapproval system which searches/crawls sources (e.g., databases, the Web), analyzes (e.g., parses) the sources, fact checks the sources, and based on the analysis and/or fact check results, approves/disapproves of the sources for the user.
  • a source is approved if the source is associated with an organization/entity that the user has “liked” (or a similar implementation), where associated means approved by, written by, affiliated with, or another similar meaning.
  • a site or other source information becomes an approved source if a user uses or visits the source.
  • a source is approved if a user uses or visits the source while signed/logged in (e.g., signed in to Facebook® or Google+®).
  • the user must be logged into a specific social networking system, and in some embodiments, the user is able to be logged into any social networking system or a specific set of social networking systems.
  • the sources are limited to a specific method of approval such as only sources visited while logged in.
  • a source is approved if the source is recommended to the user (e.g., by a contact) (even if the user does not visit/review the source), unless or until the user rejects/disapproves of the source.
  • sources are suggested to a user for a user to accept or reject based on contacts of the user and/or characteristics of the user (e.g., location, political affiliation, job, salary, organizations, recently watched programs, sites visited).
  • the contacts are limited to n-level contacts (e.g., friends of friends but not friends of friends of friends).
  • user A approved source X, and one of his contacts approved source Y, and another contact approved source Z. So only sources X, Y and Z are used for fact checking content for user A. Furthering the example, since user A's sources may be different than user J's sources, it is possible to have different fact checking results for different users.
  • users are able to disapprove sources. In some embodiments, if there is a conflict (e.g., one user approves of a source and a contact disapproves of the same source), then the choice of the user with a higher validity rating is used. In some embodiments, if there is a conflict, the selection of the contact with the closer relationship to the user (the user being interpreted as the closest contact) is used.
  • the higher of the number of approvals versus disapprovals determines the result (e.g., 2 users approve Site X and 5 users disapprove Site X, then Site X is not used).
  • for example, if 50 contacts approve Web Page Y and 10 contacts disapprove Web Page Y, then Web Page Y is approved.
  • the fact check results could be different for different users.
  • User A has 50 contacts that approve Web Page Y, and 10 that disapprove.
  • User B has 5 contacts that approve Web Page Y, and 20 contacts that disapprove Web Page Y. Therefore, Web Page Y is approved for User A and disapproved for User B.
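The per-user tally in the User A / User B example can be sketched as below (the function name is an assumption; the rule is the approvals-versus-disapprovals comparison described above):

```python
from collections import Counter

def approved_for(votes):
    """votes: list of 'approve'/'disapprove' strings cast by one user's
    contacts for a single source. The source is approved for that user
    when approvals outnumber disapprovals, so the same source can be
    approved for one user and disapproved for another."""
    tally = Counter(votes)
    return tally["approve"] > tally["disapprove"]

# User A: 50 contacts approve Web Page Y, 10 disapprove -> approved.
print(approved_for(["approve"] * 50 + ["disapprove"] * 10))  # -> True
# User B: 5 contacts approve, 20 disapprove -> disapproved.
print(approved_for(["approve"] * 5 + ["disapprove"] * 20))   # -> False
```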
  • users are able to approve/disapprove sources in clusters, and users are able to cluster sources.
  • users are able to share/recommend sources to contacts (e.g., via a social networking site). For example, user A says, “I've grouped these sources; I think they are all legit,” and the contacts are able to accept or reject some/all of the sources.
  • when a user approves or disapproves of a source or a group of sources, the source (or a reference to the source, for example, a link to the source) and the approval or disapproval are automatically sent to contacts of the user (or up to nth-level contacts of the user, for example, contacts of contacts of the user). Similarly, when contacts of a user approve/disapprove a source, the source or reference and the approval/disapproval are automatically sent to the user. In some embodiments, when a user approves/disapproves of a source, the source is automatically approved/disapproved for contacts.
  • the contacts are able to modify the approval/disapproval; for example, although the user approved a source, Contact X selects to disapprove the source, so it is not an approved source for Contact X. Similarly, when contacts approve/disapprove a source, the source is automatically approved/disapproved for the user unless the user modifies the approval/disapproval. In some embodiments, users are able to limit the automatic approval to nth level contacts (e.g., only 1st and 2nd level contacts but no higher level contacts).
  • all sources or a subset of sources are approved until a user disapproves of a source (or group of sources), and then that source (or group of sources) is disapproved.
  • sources are approved based on a tweet and a hashtag. For example, a user tweets a message with the name of a source preceded by a hashtag symbol. In another example, a user tweets a message with a link to a source or a name of a source and “#fcapproval” or “#fcdisapproval” or similar terms to approve/disapprove a source.
  • sources are approved based on content watched (e.g., on television, YouTube), items purchased, stores/sites shopped at, and/or other activities by the user.
  • the user watches Program X which uses/approves sources A, B and C for analyzing/determining content, so those sources automatically become approved for the user.
  • the sources are approved only if the user “likes” the show or if it is determined the user watches the show long enough and/or enough times.
  • a counter tracks how long the user watches a show, and if/when the counter goes above a threshold, the sources affiliated with/related to the show are automatically approved.
  • sources are linked so that if a user approves a source, any source that is linked to the source is also approved automatically.
  • the linked sources are required to be related (e.g., same/similar genre, same/similar reliability rating). For example, a user approves Dictionary A which is linked to Dictionary B and Dictionary C, so Dictionaries A, B and C, all become approved when the user approves Dictionary A.
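The linked-source propagation in the dictionary example above can be sketched as follows (the function name and the dictionary-based link table are illustrative assumptions):

```python
def approve_with_links(source, links, approved):
    """Approving a source automatically approves any sources linked to it.

    links: mapping from a source to the list of sources linked to it.
    approved: the user's current set of approved sources (mutated in place).
    """
    approved.add(source)
    approved.update(links.get(source, []))
    return approved

# The example above: Dictionary A is linked to Dictionaries B and C,
# so approving A approves all three.
links = {"Dictionary A": ["Dictionary B", "Dictionary C"]}
print(sorted(approve_with_links("Dictionary A", links, set())))
# -> ['Dictionary A', 'Dictionary B', 'Dictionary C']
```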
  • the linked sources are displayed for the user to select/de-select (e.g., in a pop-up window).
  • approval/disapproval of sources is transmitted via color-coded or non-color-coded messages such as tweets, text messages and/or emails.
  • approvals/disapprovals are transmitted automatically to contacts of the user upon approval/disapproval.
  • when a user is about to approve/disapprove a source, an indication of what others (e.g., contacts or non-contacts of the user) have selected is presented. For example, the user visits Webpage Z, and in the bottom corner of the browser (or elsewhere), it is displayed that Contact J disapproved this source.
  • all sources are accepted except ones the user manually rejects.
  • sources are able to be selected by sensing a user circling/selecting icons or other representations of the sources.
  • sources are approved by bending a flexible screen when the source is on the screen.
  • a bend is detected by detecting pressure in the screen or in another manner, and the device determines the current source being displayed.
  • sources are selected based on an entity. For example, a user specifies to use Fox News's content and sources as approved sources. In some embodiments, any content/sources that Fox News disapproved is also recognized as disapproved for the user. In some embodiments, users are able to combine entities and their sources; for example, a user selects to use Fox News content/sources and CNN content/sources. If there are any conflicts, such as Fox News approving Source X and CNN disapproving Source X, the conflicts are able to be handled in any manner such as those described herein.
  • the user handles the conflicts by selecting approve/disapprove of each conflicting item or selects a preferred entity (e.g., if conflict, prefer CNN, so CNN's selections are chosen).
  • sources are received from/by others, and the sources are filtered based on personal preferences, characteristics, and/or selections such that only sources satisfying preferences are accepted automatically and others are rejected or placed on hold for approval.
  • User A is a very liberal person as determined based on viewing and reading habits, so when User G sends three sources that he thinks User A should approve, two of the three sources are classified as liberal, so they are automatically approved for User A, and the third source is classified as conservative, so it is placed in a queue for User A to review and approve/disapprove.
  • sources are approved by detecting a user in a specified location. For example, the device determines that it is at or is detected at a political rally. The content/sources of the politician holding the rally are automatically approved for the user or are presented for the user for manual approval. In some embodiments, content/sources of the opponent of the politician are automatically disapproved (unless they had previously been approved; for example, by detecting them as already approved by the user). In some embodiments, when a device determines that it is within range of another user (e.g., by facial recognition) or another user's device (e.g., by detecting device ID or user ID), the approved/disapproved sources and their approval/disapproval status is provided on the device.
  • users are able to limit their approval/disapproval information (e.g., only contacts are able to view).
  • sources are approved by waving a device at a source. For example, RFID or another technology is used to determine what sources are in close proximity (e.g., user waves smart phone in a library, and the books become approved sources for the user).
  • users are able to set/select any other option regarding fact checking implementations such as which content to monitor, keywords for monitoring, how content is processed, weighting schemes for sources, priorities, and/or any other fact checking implementation option.
  • a source is approved based on the reliability rating of the source and the approvals/disapprovals of the source. For example, a source is approved if the reliability rating and the approvals total above a threshold. In another example, a reliability rating is increased by 1 (or another number) if the number of approvals is greater than the number of disapprovals, and the reliability rating is decreased by 1 (or another number) if the number of approvals is not greater than the number of disapprovals, and then the source is approved if the modified reliability rating is above a threshold.
  • the reliability rating is added to the number of approvals divided by the number of disapprovals divided by ten or the number of approvals plus the number of disapprovals, and then the modified reliability rating is compared with a threshold, and if the modified reliability rating is above the threshold, then the source is approved.
  • the reliability rating is multiplied by the number of approvals divided by the number of disapprovals with a cap/maximum total (e.g., 10), and then the modified reliability rating is compared with a threshold, and if the modified reliability rating is above the threshold, then the source is approved. Any calculation is able to be implemented to utilize the reliability rating, approvals and disapprovals to determine if a source is approved for fact checking.
  • weights are added to the calculations; for example, a user's approval/disapproval is given extra weight. For example, reliability rating + user's approval/disapproval (+2/−2) + contacts' approvals/disapprovals (+1/−1).
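The weighted scheme just described could be implemented roughly as follows (the weight values, the threshold, and all names are illustrative assumptions, one of many calculations the text allows):

```python
def modified_reliability(base_rating, user_vote, contact_votes, threshold=5):
    """Compute a modified reliability rating and decide source approval.

    base_rating: the source's reliability rating.
    user_vote: 'approve', 'disapprove', or None; weighted +2/-2.
    contact_votes: list of 'approve'/'disapprove' from contacts; weighted +1/-1.
    Returns (modified rating, approved?) where approval means the
    modified rating clears the threshold.
    """
    score = base_rating
    if user_vote is not None:
        score += 2 if user_vote == "approve" else -2
    for vote in contact_votes:
        score += 1 if vote == "approve" else -1
    return score, score > threshold

# Base rating 6, user approves (+2), two contacts approve (+1 each),
# one contact disapproves (-1): 6 + 2 + 1 + 1 - 1 = 9 -> approved.
print(modified_reliability(6, "approve", ["approve", "approve", "disapprove"]))
# -> (9, True)
```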
  • social networking teams/groups are able to be set up for fact checking such that each member of a team approves, disapproves, and/or rates sources for fact checking.
  • each member of a team rates/selects other options regarding fact checking as well such as monitoring criteria, processing criteria and/or other criteria, and the selections are used to determine how to fact check information. For example, three members of a team select to parse after every pause in the monitored information of two seconds, and two members select to parse after every 10 seconds, so the selection of after every pause of two seconds is used.
  • social network groups' fact checking results are compared to determine the most accurate group.
  • Groups A, B and C are compared, and the group with the most correct results is considered to be the most accurate group.
  • a set of data is fact checked using Group A's sources, Group B's sources, and Group C's sources, and then the fact checking results are analyzed automatically, manually or both to determine the most accurate fact checking results.
  • Group A's results were 80% accurate, Group B's results were 95% accurate, and Group C's results were 50% accurate, so Group B was the most accurate.
  • the groups' results are able to be compared automatically, manually or both. For example, if a group's results closely match the automatic fact checking system's results, the group's results are determined to be accurate.
  • a group's results are analyzed manually (e.g., by a group of impartial individuals) and manually compared with an automated fact checking system's results or other groups' results. Furthering the example, the sources selected/approved by a group are used to automatically fact check content, and the results of those fact checks are manually or automatically compared with automatic fact check implementations using different sources or other groups' implementations.
  • the groups are ranked by accuracy. In some embodiments, the most accurate groups' sources (e.g., top 10 groups) are made public and/or selectable by other users, and/or the most accurate groups' sources are sent via social media (e.g., tweeted or posted) to other users with an option to accept/reject.
  • Rating sources includes providing a reliability rating of a source, a validity rating of a source, fact checking a source, and/or any other rating. For example, a user of a team rates an opinion blog as a 1 out of 10 (1 meaning very factually inaccurate), and then the opinion blog is fact checked utilizing an automatic fact checking system (or manually) which determines the content of the opinion blog is mostly factually inaccurate, so the automatic fact checking system gives a rating of 1 as well.
  • users of teams do not specify a rating number for a source; rather, the users of the teams approve/disapprove/select sources, and the team with the most accurate sources (e.g., in number and/or in accuracy) is considered to be the most accurate team.
  • “accurate” (e.g., an accurate source) is defined as having a reliability or accuracy rating above a threshold (e.g., above 8 on a scale of 1 to 10, with 10 being the most accurate). The reliability/accuracy rating is able to be based on how accurate the information is; for example, the information is fact checked (automatically and/or manually), and based on the fact check, the reliability/accuracy rating is determined. Furthering the example, if the fact check returns “factually accurate” for all segments of the information, then the information receives a 10 for accuracy, and if the fact check returns “factually inaccurate” for all segments, then the information receives a 0 for accuracy.
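The segment-based rating in the example above amounts to a proportion scaled to 0-10 (function names and the threshold default are assumptions):

```python
def accuracy_rating(segment_results):
    """0-10 rating from per-segment fact check results: all segments
    'factually accurate' -> 10, all inaccurate -> 0."""
    if not segment_results:
        return 0
    accurate = sum(1 for r in segment_results if r == "factually accurate")
    return round(10 * accurate / len(segment_results))

def is_accurate_source(segment_results, threshold=8):
    """A source is 'accurate' when its rating is above the threshold."""
    return accuracy_rating(segment_results) > threshold

print(accuracy_rating(["factually accurate"] * 4))    # -> 10
print(accuracy_rating(["factually inaccurate"] * 4))  # -> 0
```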
  • sources are manually analyzed to determine a reliability/accuracy rating.
  • a team with 1 source that is fact checked by a fact checking system and determined to have a reliability rating of 10 is considered to be less accurate than a team with 10 sources that all have a reliability rating of 10.
  • accuracy and breadth of the sources are taken into account to determine the team with the best sources.
  • the sources are classified, and breadth is determined not just by the quantity of sources but also by the number of classes the sources fall into. For example, 100 sources in a single classification (e.g., sports history) are not as accurate as 100 sources in 10 classifications. In some embodiments, the opposite is true.
  • a large number of sources in a single classification would help ensure that a fact check using those sources is accurate, whereas a very broad source collection would not necessarily help. Furthering the example, if the fact checking system is fact checking the first team to win back-to-back Super Bowls, a set of sources that includes a medical encyclopedia and a French dictionary would not be better than a set of sources that focuses on sports. In some embodiments, accuracy is given more weight, and in some embodiments, breadth is given more weight.
  • a set of 100 sources with an average reliability rating of 9 is better than a set of 1000 sources with an average reliability rating of 8.
  • the set of 1000 sources is considered better even though the reliability rating is slightly lower, since more information may be able to be fact checked with the larger breadth.
  • both sets are available and used for fact checking, and whichever one returns with a higher confidence score is used to provide a status of the information being fact checked.
  • FIG. 5 illustrates a flowchart of a method of utilizing social network contacts for fact checking according to some embodiments.
  • users approve or disapprove sources or other features/options/elements for fact checking.
  • fact checking is implemented (e.g., monitoring, processing, fact checking and/or providing a result). In some embodiments, additional or fewer steps are implemented.
  • users in a social network are grouped/have different levels (e.g., media, business level, regular user, politician) which affects the weight of the sources. For example, a media level source is given a higher weight than a regular user source.
  • the weight of the source is utilized in fact checking such that higher weighted sources have more influence on a fact check result.
  • a calculation in determining a fact check result includes: determining the number of agreeing highest weighted sources which is multiplied by the highest weight value, determining the number of agreeing second highest weighted sources which is multiplied by the second highest weight value, and so on until determining the number of agreeing lowest weighted sources which is multiplied by the lowest weight value.
  • the results are combined to determine a total value, and if the total value is above a threshold, then the information being fact checked is determined “confirmed,” and if the total value is not above the threshold, then the information is “unconfirmed” or “disproved.”
  • the weights are applied to disagreeing sources, and if the total value is above a threshold, then the information is “disproved,” or if the total value is not above the threshold, then the information is “confirmed.”
  • the weighted agreeing and disagreeing values are combined or subtracted, and if the result is above a threshold, then “confirmed” and if not, then “disproved.”
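The combined-total variant of the weighted calculation described above could be sketched as follows (the tier representation, weight values, and threshold are illustrative assumptions):

```python
def weighted_fact_check(agreeing, disagreeing, threshold):
    """Weighted fact check result from tiers of sources.

    agreeing/disagreeing: lists of (count, weight) pairs, one per source
    tier (e.g., media-level sources weighted above regular users).
    The combined agree-minus-disagree total is compared to a threshold.
    """
    agree_total = sum(count * weight for count, weight in agreeing)
    disagree_total = sum(count * weight for count, weight in disagreeing)
    total = agree_total - disagree_total
    return "confirmed" if total > threshold else "disproved"

# 3 media-level sources (weight 3) agree, 2 regular users (weight 1)
# disagree: 3*3 - 2*1 = 7, above a threshold of 5 -> confirmed.
print(weighted_fact_check([(3, 3)], [(2, 1)], threshold=5))  # -> confirmed
```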
  • users' sources are weighted based on “tokens” or user validity ratings (e.g., the higher the validity rating or higher number of tokens earned, then the higher the source weight).
  • emails or other messages are sent to contacts with fact check result updates. For example, an email is automatically sent when a contact's validity rating drops below a threshold.
  • Other forms of communication are possible such as a tweet, text message, or instant message.
  • people are fact checked to confirm they are who they say they are.
  • the person is verified by fact checking.
  • a user is able to be verified in any manner, such as: comparing the user information with another social networking site, comparing the user information with an online resume, using IP address location information, using past history information, comparing a past photograph of the user with a current photograph or a video scan (e.g., using a webcam), analyzing school information, analyzing work information, analyzing professional organization information, and/or analyzing housing information.
  • a question is asked. For example, the user asks the friend a question, and the user determines if the answer is correct or not, which determines if the friend is accepted into the network or not. In another example, the friend asks the user a question, and the friend determines if the answer is correct or not, which determines if the user is accepted into the network or not. In some embodiments, for efficiency, the user asks a generic/broad question that is able to be applied to many users, so the user does not have to generate specific questions for each user.
  • when a user makes an invitation to a second user, the user inputs a question for the second user to answer.
  • instead of or in addition to a user asking a question, the second user (or invitee) simply sends a personal message that informs the user that the second user is, in fact, who he says he is.
  • the invitee accepts the invitation, and also makes a comment, “I remember that weird painting of the dog in your dorm room at Stanford.” Then, the user either accepts or rejects the second user.
  • a user is allowed to “connect” to another user but with limited access until he is verified.
  • FIG. 6 illustrates a flowchart of a method of fact checking a user for registration according to some embodiments.
  • a user attempts to register (e.g., with a social networking site/system or a second social networking site/system).
  • the user is verified using fact checking.
  • the user attempts to connect with a friend.
  • user/friend verification occurs. In some embodiments, fewer or additional steps are implemented.
  • the fact check result and any identifying information is stored and used as source information, or stored in a manner that is easily retrievable for future displays of results.
  • the results are stored in a cache or other quickly accessible location.
  • the results are stored in a script (e.g., JavaScript) with the web page or coded in the web page, or another implementation.
  • an entity including, but not limited to, a speaker, author, user, or another entity (e.g., corporation) has a validity rating that is included with the distribution of information from him/it.
  • the validity rating is able to be based on fact checking results of comments made by an entity or any other information. For example, if a person has a web page, and 100% of the web page is factually accurate, then the user is given a 10 (on a scale of 1 to 10) for a validity rating. In another example, if a user tweets often, and half of the tweets are factually accurate and half are inaccurate, the user is given a 5.
  • the validity rating is able to be calculated in any manner.
  • items such as controversies, bias, and/or any other relevant information are able to be used in calculating a validity rating.
  • the severity of the information or misinformation is also able to be factored in when rating a person or entity. Additionally, the subject of the information or misinformation is also able to be taken into account in terms of severity.
  • an independent agency calculates a validity rating and/or determines what is major and what is minor. In some embodiments, individual users are able to indicate what is important to them and what is not. In some embodiments, another implementation of determining what is major, minor and in between is implemented. The context of the situation/statement is also able to be taken into account.
  • entities are able to improve their validity rating if they apologize for or correct a mistake, although measures are able to be taken to prevent abuses of apologies.
  • an entity in addition to or instead of a validity rating, an entity is able to include another rating, including, but not limited to, a comedic rating or a political rating.
  • an entity includes a classification including, but not limited to, political, comedy or opinion. Examples of information or statistics presented when an entity appears include, but are not limited to, the number of lies, misstatements, truthful statements, hypocritical statements or actions, questionable statements, spin, and/or any other characterizations.
  • FIG. 7 illustrates a flowchart of a method of determining a validity rating based on contacts' information according to some embodiments.
  • a user's validity rating is determined or acquired.
  • the user's contacts' validity ratings are determined or acquired.
  • a complete validity rating for the user is determined based on the user's own validity rating and the contacts' validity ratings.
  • additional or fewer steps are implemented and/or the order of the steps is modified. For example, the steps are continuously ongoing such that if anything changes in either the user's validity rating or the contacts' validity ratings, then new ratings, including a new complete validity rating, are computed.
  • relationship information is utilized in fact checking. For example, if a user's contacts have low entity/validity ratings, then that information negatively affects the user's entity rating. For example, a user's base validity rating is a 7 out of 10 based on fact checking results of the user's comments. Based on social networking relationships, the user has 4 friends/contacts with 1 degree of separation from the user, and each of those friends has a 2 out of 10 validity rating.
  • contacts with additional degrees of separation are utilized in determining the user's validity rating. In some embodiments, the additional degrees of separation are weighted less, and the weighting decreases as the degree of separation increases.
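One way to read the weighting just described is as a weighted average in which each additional degree of separation decays the contact's influence (the decay factor, the averaging choice, and all names are assumptions; the text leaves the exact calculation open):

```python
def complete_validity_rating(user_rating, contacts_by_degree, decay=0.5):
    """Combine a user's validity rating with contacts' ratings,
    weighting contacts less as their degree of separation grows.

    contacts_by_degree: mapping degree (1 = direct contact) -> list of
    that degree's contact validity ratings. Each extra degree halves
    the weight with the default decay of 0.5.
    """
    total = user_rating          # the user counts with weight 1
    weight_sum = 1.0
    for degree, ratings in contacts_by_degree.items():
        weight = decay ** degree
        for rating in ratings:
            total += weight * rating
            weight_sum += weight
    return total / weight_sum

# The example above: base rating 7, four 1st-degree contacts rated 2
# each pull the complete rating down.
print(round(complete_validity_rating(7, {1: [2, 2, 2, 2]}), 2))  # -> 3.67
```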
  • a web of lies/misinformation/other characterization is generated.
  • a web is able to be generated by fact checking information and determining the relationship of who said what and when. Once information is determined to be misleading, analysis is performed to determine who provided the information, then further analysis determines whether anyone provided the information before, and relationships are determined based on the time/date of the information and/or whether there is any connection between those providing the information.
  • the web of misinformation includes a graphic of who spreads misinformation. Each point in the web is able to be an entity or information. For example, a set of Republicans who made the same lie are included in the ring with the misinformation shown in the middle.
  • the web is a timeline version where the web shows who first said the lie, and then who repeated it. In some embodiments, times/dates of when the misinformation was said or passed on are indicated. In some embodiments, the first person to say the lie is given more negative weight (e.g., for validity rating) as they are the origin of the lie. In another example, a tree structure is used to display the connections of lies. Although specific examples have been provided, there are many different ways of storing the information and showing who provided the information. The web is able to be displayed for viewers to see who says the same information or agrees with a person. The web is shown when the misinformation is detected or when one of the people in the web is detected. For example, commentator X provided misinformation, and 5 people also provided the same misinformation. When commentator X is detected (e.g., voice or facial recognition), a graphic is presented showing the 5 additional people who provided the same misinformation as commentator X.
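A minimal sketch of the timeline form of the web (the record layout and helper names are assumptions): each statement of the misinformation is stored with who said it and when, so the origin can be identified and weighted more negatively than the repeaters.

```python
from datetime import date

# Each entry: (who stated the misinformation, when they stated it).
web = [
    ("commentator X", date(2014, 1, 5)),
    ("person B", date(2014, 1, 7)),
    ("person C", date(2014, 2, 1)),
]

def origin(web):
    """The first person to state the misinformation (earliest date)."""
    return min(web, key=lambda entry: entry[1])[0]

def repeaters(web):
    """Everyone who repeated the misinformation after the origin."""
    first = origin(web)
    return [who for who, _ in web if who != first]

print(origin(web))  # -> commentator X
print(repeaters(web))
```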
  • FIG. 8 illustrates an exemplary web of lies according to some embodiments.
  • a first level provider 800 of misinformation is shown in the middle of the web.
  • Second level 802 and third level 804 misinformation providers are also shown further out in the web.
  • FIG. 9 illustrates an exemplary web of lies in timeline format according to some embodiments.
  • a first provider 900 of misinformation is shown, followed by a second provider 902, third provider 904, fourth provider 906, and fifth provider 908.
  • the web is also able to be used to generate relationships between entities. For example, user A says, “global warming is a hoax.” Then, users who have made a similar or the same comment (e.g., on their Facebook® page, personal website, on a message board, in a tweet) are recommended to connect/join in a social network. Same or similar phrases are detected in any manner such as word/keyword comparison, and then a message or any communication is sent to users that have provided the same/similar phrase. Furthering the example, a popup is displayed on the user's social network page that provides a list of users who have made the same or a similar comment, and the user is asked if he wants to invite the other users to join his network or to join their networks.
  • a message/tweet is sent to both asking if they want to “connect.”
  • a message is sent to users in network saying this person said that and the fact check result shows it to be wrong.
  • entity/validity ratings are based on relationships with other entities (including the web described above).
  • the relationships are able to be based on same cable network or same company. Using the web above, for example, if entities say the same misinformation, they become linked together or connected and their ratings become related or merged.
  • a user whose validity rating is below a lower threshold is automatically de-friended/disconnected/de-linked. In some embodiments, others are prompted with a question if they would like to disconnect from the user whose validity rating is below a lower threshold.
  • the user with a low validity rating is put in "time out," or his status remains a friend but with a non-full friend status. For example, although the user with the low validity rating is connected, he is not able to comment on a connected user's page. In another example, the capabilities of the user are limited on a social networking site if his validity rating drops below a threshold.
  • FIG. 10 illustrates a flowchart of a method of affecting a user based on a validity rating according to some embodiments.
  • a validity rating of a user is determined to be below a threshold.
  • the user is affected; for example, the user's access to web pages (e.g., social network) is restricted. In some embodiments, additional or fewer steps are implemented.
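The flow of FIG. 10 can be sketched in a few lines. The specific threshold value and the list of restrictions are illustrative assumptions; the text only says that access is restricted when the rating falls below a lower threshold:

```python
LOWER_THRESHOLD = 3.0  # illustrative; the disclosure leaves the value unspecified

def affect_user(validity_rating, threshold=LOWER_THRESHOLD):
    """First, determine whether the validity rating is below a threshold;
    then affect the user by restricting capabilities."""
    if validity_rating >= threshold:
        return []  # no restrictions applied
    # Non-full friend status: still connected, but with limited capabilities.
    return ["cannot comment on connected users' pages",
            "restricted access to web pages"]

restrictions = affect_user(2.0)
```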
  • people are grouped (e.g., become contacts) if they send/say the same misinformation (they may not even know each other, but if they say "global warming is a hoax," they join the same contacts as others who said the same thing).
  • people who use the same phrase or quote become friends or are asked if they would like to become friends with someone who said the same thing.
  • users with the same or similar validity rating are connected or asked if they would like to connect.
  • FIG. 11 illustrates a flowchart of a method of connecting users based on similar content and/or validity rating according to some embodiments.
  • information is compared to determine a match. For example, user comments are compared to determine if they have said the same thing such as “49ers rule!”. In some embodiments, only misinformation or other negative characteristic comments are compared. For example, a database stores comments that have been fact checked and deemed inaccurate as well as the user that made the comment. Then, those comments are compared to determine if there are any matches between users. In some embodiments, user validity ratings are compared as well. In some embodiments, users are grouped by validity rating (e.g., validity rating is stored in a database and sortable by validity rating).
  • the validity ratings are exactly matched (e.g., all users with a validity rating of 7.0 are matched), and in some embodiments, ranges of validity ratings are matched (e.g., all users with a 7.0 to 7.5 are matched).
  • opposite comments are searched for. For example, a comment that says "raising taxes hurts the economy" and an opposite comment of "raising taxes helps the economy" are able to be considered an opposite match, which can then be used to join people with opposing views.
  • users with matching comments and/or validity ratings are "connected" or asked if they would like to "connect" (e.g., join each other's social networks).
  • the steps occur in real-time; for example, immediately after the user tweets, “49ers rule!,” connection suggestions are presented based on the implementation described herein. Additional information is able to be provided to the users such as the matching comment, the validity rating of the other user, and/or any other information. In some embodiments, additional or fewer steps are able to be implemented.
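The matching steps above can be sketched as follows. The normalization, the small antonym table, and the 0.5 rating tolerance are illustrative assumptions; the text only calls for word/keyword comparison, exact or ranged rating matches, and opposite-comment detection:

```python
def normalize(comment):
    # Simple word/keyword comparison: lowercase and strip trailing punctuation.
    return comment.lower().strip().rstrip("!.")

def same_comment(a, b):
    return normalize(a) == normalize(b)

# Illustrative antonym table for opposite-comment detection.
NEGATIONS = {"hurts": "helps", "helps": "hurts"}

def opposite_comment(a, b):
    # Opposite match: identical comments except one antonym pair swapped.
    wa, wb = normalize(a).split(), normalize(b).split()
    if len(wa) != len(wb):
        return False
    diffs = [(x, y) for x, y in zip(wa, wb) if x != y]
    return len(diffs) == 1 and NEGATIONS.get(diffs[0][0]) == diffs[0][1]

def rating_match(r1, r2, tolerance=0.5):
    # Ranges of validity ratings are matched (e.g., 7.0 to 7.5).
    return abs(r1 - r2) <= tolerance
```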
  • mapping information is fact checked.
  • information from a camera device (e.g., augmented reality camera or vehicle camera) is used as source information for fact checking the mapping information. For example, a map indicates that traffic is going "fast" (e.g., over 50 mph), but a vehicle camera indicates that the traffic is stopped, so an alert indicating the fact check result of "bad traffic information" is able to be presented.
  • if a map indicates the traffic a certain way, but a user's GPS (e.g., a stand-alone device or smart phone GPS) indicates the traffic differently, then an alert is provided to other users.
  • accident information is fact checked by comparing news information and/or police reports.
  • a corrected route is provided. For example, after fact checking a route, it is determined the traffic is not bad for a particular road that was supposedly bad, so the route now includes that road. Fact checking of the mapping information is able to occur periodically, when new information becomes available, or at any other time.
  • mapping information from different sources is compared. For example, G Maps indicates that traffic is flowing at 65 mph; however, A Maps shows that traffic is only going 35 mph. The information from each source is compared (e.g., determine any differences), and analysis is performed to determine which is more accurate.
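The source-comparison step above can be sketched as follows. The 15 mph discrepancy threshold and the reliability ratings are illustrative assumptions; the text only says the sources are compared and the more accurate one determined:

```python
def compare_traffic_sources(reports, reliability):
    """Compare speed reports from mapping sources and pick the most reliable.

    reports: {source: reported speed in mph}
    reliability: {source: reliability rating} (assumed 0-10 scale)
    Returns (most_reliable_source, discrepancy_detected).
    """
    speeds = list(reports.values())
    # Flag a discrepancy when the reported speeds disagree significantly.
    discrepancy = max(speeds) - min(speeds) > 15
    # Prefer the source with the higher reliability rating.
    best = max(reports, key=lambda s: reliability.get(s, 0))
    return best, discrepancy

best, flag = compare_traffic_sources({"G Maps": 65, "A Maps": 35},
                                     {"G Maps": 6, "A Maps": 8})
```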
  • mapping information and fact checking results are shared among contacts in a social network.
  • the mapping information is fact checked using social networking source information (e.g., information from contacts).
  • flying devices (e.g., drones) take images and/or videos of traffic conditions and provide the images and/or videos as source information for comparison.
  • a drone is able to be automatically directed to verify the issue by flying over to the area and acquiring information. For example, a user texts that an accident has occurred on Interstate X; the drone automatically receives/retrieves this information and flies into position to take pictures of the location, including traffic analysis.
  • in another example, a device (e.g., a user's mobile device or a vehicle device) determines that the user's vehicle is moving much slower than the speed limit, so the device automatically communicates with a drone (either directly or through a server), and the drone utilizes GPS information of the vehicle to move into position to analyze the traffic issues. The information acquired by the drone is then dispersed to be used as source information.
  • a server automatically determines the nearest drone to the position of the user device, and directs only that drone to move to acquire information.
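The nearest-drone selection can be sketched as below. Flat-plane Euclidean distance is a simplifying assumption (a real system would use great-circle distance on GPS coordinates), and the drone IDs and positions are illustrative:

```python
import math

def nearest_drone(drones, vehicle_pos):
    """Pick the drone closest to the reporting device's GPS position.

    drones: {drone_id: (lat, lon)}; vehicle_pos: (lat, lon).
    Only the nearest drone is directed to move and acquire information.
    """
    def dist(p):
        return math.hypot(p[0] - vehicle_pos[0], p[1] - vehicle_pos[1])
    return min(drones, key=lambda d: dist(drones[d]))

chosen = nearest_drone({"drone 1": (37.0, -122.0), "drone 2": (37.4, -121.9)},
                       (37.39, -121.92))
```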
  • FIG. 12 illustrates a flowchart of a method of fact checking mapping information.
  • mapping information is analyzed (e.g., monitored and processed).
  • the mapping information is fact checked.
  • a fact check result is presented. In some embodiments, fewer or additional steps are implemented.
  • an icon changes from a happy face to a sad face as misinformation is given by an entity.
  • an image of a person is changed from smiling to sad/angry.
  • the fact checking system collects 2 to 5 different images of the person by detecting the person (e.g., facial recognition). Then, the system searches/crawls the web for pictures of the person using templates of a smile, frown, angry face, tears, tense, stoic, and neutral to do the searching. The appropriate pictures are retrieved and stored, and the appropriate image is displayed when the misinformation calculation result is in range. For example, when zero misinformation is detected, a smiling face is displayed; when 3-6 misinformation comments are detected, a frowning face is displayed; and above 6, a crying face is displayed.
  • tears or other items are added to an image if the image cannot be found. For example, a sad image cannot be found, so tears are added to a neutral image that was found.
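The range-to-image mapping above is a simple lookup. The 0 / 3-6 / above-6 ranges come from the text; treating the unspecified 1-2 range as "neutral" is an assumption:

```python
def expression_for(misinformation_count):
    """Map a count of detected misinformation comments to a stored image."""
    if misinformation_count == 0:
        return "smiling"
    if misinformation_count <= 2:
        return "neutral"   # assumption: the 1-2 range is not specified
    if misinformation_count <= 6:
        return "frowning"
    return "crying"
```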
  • FIG. 13 illustrates a flowchart of a method of using an icon to indicate a validity rating or the validity of information provided by an entity according to some embodiments.
  • In the step 1300, one or more images of an entity are acquired.
  • In the step 1302, the entity's validity rating is determined or the validity of the entity's comments is analyzed.
  • In the step 1304, as the entity's validity rating changes or as the validity of the entity's comments is analyzed, the image presented changes. In some embodiments, additional or fewer steps are implemented.
  • medallions/medals, tokens, ranks, points, and/or other awards/honors are provided based on user fact checking actions. For example, a user is awarded a different token for providing an accurate fact check result for different items. Furthering the example, a user receives a “donkey” token for fact checking an item from a member of the Democratic party, and an “elephant” token for fact checking an item from a member of the Republican party. In some embodiments, the item has to be an item not previously accurately fact checked (for example, a comment by the President previously not fact checked). In some embodiments, the fact check result is verified automatically, manually or a combination of both.
  • the user provides the fact checked comment or identification information of the comment as well as support for the fact check result (e.g., a website confirming or disproving the comment).
  • the user must perform a specified number of fact checks before receiving a token (e.g., 5 fact checks of Democrats to receive a “donkey” token). Additional tokens are able to include, but are not limited to: a “donk-phant” for fact checking both Democrats and Republicans, a “prez” token for fact checking the President, a “sen” token for fact checking a member of the Senate, a “house” token for fact checking a member of the House of Representatives, and a “news” token for fact checking a newscaster.
  • one level of tokens is for actually fact checking, and a second level is for merely flagging content as false, questionable, or another characterization, and when the content is fact checked, a user is rewarded for being accurate. For example, if a user flags a comment as questionable, and then the comment is proven to be false, the user is awarded one point towards five points to obtain a second-level token. In some embodiments, a user is penalized (e.g., points lost or demoted) for incorrectly flagging an item and/or providing an incorrect fact check result.
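The token-awarding scheme above can be sketched as follows. The token names come from the text; the category labels, the dictionary structure, and applying the 5-checks rule uniformly to every token are illustrative assumptions:

```python
from collections import Counter

TOKEN_RULES = {"Democrat": "donkey", "Republican": "elephant",
               "President": "prez", "Senator": "sen",
               "Representative": "house", "newscaster": "news"}
CHECKS_PER_TOKEN = 5  # e.g., 5 fact checks of Democrats for a "donkey" token

def award_tokens(verified_checks):
    """verified_checks: categories of items the user accurately fact checked."""
    counts = Counter(verified_checks)
    tokens = [tok for cat, tok in TOKEN_RULES.items()
              if counts[cat] >= CHECKS_PER_TOKEN]
    # A "donk-phant" for fact checking both Democrats and Republicans.
    if "donkey" in tokens and "elephant" in tokens:
        tokens.append("donk-phant")
    return tokens

tokens = award_tokens(["Democrat"] * 5 + ["Republican"] * 5 + ["Senator"])
```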
  • FIG. 14 illustrates a flowchart of a method of awarding honors for fact checking according to some embodiments.
  • a user fact checks or causes (e.g., flags) information to be fact checked.
  • the user fact check or flag is analyzed/verified.
  • the user is rewarded for a valid fact check. In some embodiments, fewer or additional steps are implemented.
  • as a user acquires tokens, his label/title changes. For example, the user begins as a level 1 fact checker and is able to increase to reach a level 10 fact checker if he acquires all of the possible tokens.
  • users are able to specify the type of label/title they receive. For example, users are able to specify “middle ages” which begins the user as a “peon” and goes up to “king.” Other examples include, but are not limited to: Star Wars (ewok to jedi knight or storm trooper to sith lord (good/evil)), police (recruit to chief), military (cadet to captain), political (mayor to president).
  • a set of labels or titles is generated for a group (e.g., social network group). For example, user X generates a football-labeled fact checking group which starts users as “punters” with the goal of becoming a “quarterback.”
  • the label/title is based on the tokens, validity rating and/or other fact checking.
  • a user's label/title is able to move up or down based on the acquired tokens, validity rating and/or other fact checking. For example, if a user acquires several tokens, but then provides misinformation several times, a token is able to be taken away.
  • users are provided additional features or benefits for a higher label/title. For example, a user with a level 8 fact checker label is provided contact information of several members of the news media, whereas a level 1 fact checker is not provided this information. Other benefits, awards and/or rewards are able to be provided, such as monetary or item prizes.
  • the label/title is able to be used as a filtering tool for searches (e.g., employee searches by employers). For example, an employer is able to search for candidates with “computer engineering skills” and “at least level 5 fact checker.”
  • users are rewarded for providing factually accurate information. For example, if a user tweets 100 times (and each of the tweets is fact checked by a fact checking system), the user receives a reward such as a token or any other reward.
  • the information fact checked has to meet a specified criteria to qualify for counting toward the reward. For example, the user is not able to tweet a well known fact 100 times and receive a reward.
  • steps to prevent cheating are implemented (e.g., monitoring for redundancy).
  • the information provided by the user has to be directed to a specific topic (e.g., politics).
  • the information provided by the user needs to include a keyword to be fact checked to receive a reward.
  • in some embodiments, only information with a specific label (e.g., a hashtag) is fact checked to count toward a reward.
  • fact check swarms are able to be implemented.
  • using social media (e.g., Twitter®), one or more users are able to encourage and/or trigger a fact check swarm such that many users attempt to fact check information (e.g., a speech).
  • Those that participate in the fact check swarm are able to be recognized, awarded a prize, or provided another benefit.
  • a user sends a tweet with a specific hashtag and/or other information regarding information to fact check swarm.
  • the users who receive the tweet are then able to participate in the fact check swarm by researching elements of the information and providing fact check results related to the information (e.g., by tweeting a snippet, a fact check result, and a cite to source(s) for the result).
  • the users in the swarm are then able to agree or disagree with the result. If enough (e.g., above a threshold) users agree with the result, the result is accepted and presented (e.g., tweeted or displayed on a television) to users outside of the social network.
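The swarm agreement step above amounts to threshold voting. The text only says "above a threshold"; the two-thirds fraction below is an illustrative assumption:

```python
def accept_swarm_result(agree, disagree, threshold=0.66):
    """Accept a swarm fact check result when enough participating users agree.

    Returns True when the fraction of agreeing users exceeds the threshold,
    in which case the result would be presented (e.g., tweeted or displayed
    on a television) to users outside of the social network.
    """
    total = agree + disagree
    return total > 0 and agree / total > threshold
```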
  • FIG. 15 illustrates a flowchart of a method of touchscreen fact checking according to some embodiments.
  • information is monitored.
  • the information is processed.
  • the information is fact checked after a touch of the touchscreen (or a button press or other input) is detected.
  • a fact check result is provided. In some embodiments, additional or fewer steps are implemented.
  • a touchscreen input is utilized for fact checking. While content (e.g., a commentator talking) is displayed, the user taps the touchscreen, and the last n seconds of content are used for fact checking.
  • the content is continuously monitored and processed, and the fact checking system is able to retrieve previously processed information to perform the fact check.
  • for example, while a commentator is talking in a video, a user taps the screen, and the previous 10 seconds of content are fact checked.
  • in some embodiments, an additional time (e.g., 5 seconds) is added to account for a delay before the user taps. In some embodiments, the fact checking system determines the current segment.
  • for example, the commentator says, "this project is a mess, it is $5B over budget," and the user taps the screen at "$5B" in the video. The fact checking system had determined or determines that the current segment is "it is $5B over budget," so that segment is fact checked.
  • in some embodiments, the current segment or a previous segment (e.g., to allow for a delay while the user thinks and then taps) is fact checked.
  • the user is able to highlight closed caption content for fact checking.
  • a list of recent/current segments is displayed (e.g., pops up), and the user is able to select one or more of the segments by tapping again.
  • the list is displayed on a second or third screen.
  • the list is based on time (e.g., most recent) and/or priority (e.g., most relevant).
  • content is monitored and processed, but the content is only fact checked when a user touches the touchscreen (or utilizes any other input mechanism).
  • the user is able to use the touchscreen to select or highlight text, information or a communication to have that text/information/communication fact checked. For example, a user taps a tweet on a screen to have the tweet fact checked. In another example, a user highlights text on a social networking page to have the text fact checked.
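The tap-to-fact-check behavior above can be sketched with a buffer of continuously processed segments. The 10-second window and 5-second delay allowance come from the examples in the text; the class and method names are illustrative:

```python
class SegmentBuffer:
    """Keeps recently processed, timestamped segments so that a screen tap
    can trigger a fact check of the last n seconds of content."""

    def __init__(self):
        self.segments = []  # (timestamp_in_seconds, text)

    def add(self, t, text):
        self.segments.append((t, text))

    def on_tap(self, tap_time, window=10, delay=5):
        # Return segments from the previous `window` seconds, widened by
        # `delay` seconds to allow for the user's reaction time.
        start = tap_time - window - delay
        return [text for t, text in self.segments if start <= t <= tap_time]

buf = SegmentBuffer()
buf.add(100, "this project is a mess")
buf.add(104, "it is $5B over budget")
buf.add(130, "moving on to sports")
hits = buf.on_tap(112)  # tap shortly after the "$5B" remark
```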
  • content feeds are modified based on fact checking.
  • Content feeds are fact checked, and a content feed with the highest factual accuracy rating is presented on top/first. Factual accuracy and time/date information are able to be combined for ranking/ordering content feeds.
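Combining factual accuracy with time/date information for ordering can be sketched as a weighted score. The 0.7/0.3 weights and the 0-1 scales are illustrative assumptions; the text only says the two are able to be combined:

```python
def rank_feeds(feeds, accuracy_weight=0.7):
    """Order content feeds so the highest factual accuracy is presented
    first, blended with recency.

    feeds: list of dicts with 'name', 'accuracy' (0-1), and 'recency' (0-1).
    """
    def score(f):
        return accuracy_weight * f["accuracy"] + (1 - accuracy_weight) * f["recency"]
    return [f["name"] for f in sorted(feeds, key=score, reverse=True)]

order = rank_feeds([
    {"name": "feed A", "accuracy": 0.9, "recency": 0.2},
    {"name": "feed B", "accuracy": 0.5, "recency": 1.0},
])
```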
  • fact checking results are presented one after the other or in chronological order as a news/activity feed (and presented via social media/networking).
  • fact checking information is displayed on a page mostly (e.g., 95% or more) hidden behind the main content. The user can then click on the page to view the fact check information.
  • the time at which the misinformation was said is included, in timeline format or another format.
  • misinformation is turned into jokes automatically to send to friends.
  • misinformation is turned into a postcard or greeting card.
  • the misinformation is turned into a joke and/or card by including the misinformation with a matching image and/or template.
  • the match is able to be made using a keyword or any other manner. For example, if the misinformation is from Politician Z, a caricature of Politician Z is included as well as the misinformation and the fact check result or a correction of the misinformation.
  • additional text, audio, images and/or video is provided such as an “oops!” sound or text, or silly music or any other effect to add humor.
  • the sources are rated using a rating system so that sources that provide false or inaccurate information are rated as poor or unreliable and/or are not used, and sources that rarely provide misinformation are rated as reliable and are used and/or given more weight than others. For example, if a source's rating falls or is below a threshold, that source is not used in fact checking. In some embodiments, users are able to designate the threshold.
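The source-rating rule above can be sketched as a filter plus weighting. The 0-10 rating scale, the threshold of 5.0, and the proportional weighting are illustrative assumptions; the text says only that sources below a (possibly user-designated) threshold are not used and reliable sources are given more weight:

```python
def usable_sources(sources, threshold=5.0):
    """Drop sources whose reliability rating falls below the threshold and
    weight the remaining sources by their rating.

    sources: {source_name: reliability_rating}
    Returns {source_name: weight}, with weights summing to 1.
    """
    kept = {name: rating for name, rating in sources.items()
            if rating >= threshold}
    total = sum(kept.values())
    return {name: rating / total for name, rating in kept.items()}

weights = usable_sources({"site A": 9.0, "site B": 3.0, "site C": 6.0})
```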
  • comments are classified (e.g., high/false, mid/misleading, low/unsupported), and users are able to select which classification of information to exclude or receive.
  • “high” excludes only false information
  • “mid” excludes false and misleading information
  • “low” excludes false, misleading and unsupported information.
  • for example, user A accepts all information, and user B excludes only false information. When information is excluded, it is muted, crossed out, blacked out, not provided, deleted, not transmitted and/or excluded in any other manner.
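The exclusion levels above can be sketched as a simple filter. The classification labels and the high/mid/low mapping come from the text; the sample comments are illustrative:

```python
# "high" excludes only false; "mid" adds misleading; "low" adds unsupported.
LEVELS = {"high": {"false"},
          "mid": {"false", "misleading"},
          "low": {"false", "misleading", "unsupported"}}

def filter_comments(comments, setting=None):
    """comments: list of (text, classification) pairs.
    setting: None accepts all information; otherwise 'high', 'mid', or 'low'."""
    excluded = LEVELS.get(setting, set())
    return [text for text, cls in comments if cls not in excluded]

comments = [("the sky is green", "false"),
            ("stats show X", "misleading"),
            ("I think Y", "unsupported"),
            ("water is wet", "accurate")]
```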
  • fact check results are displayed when a user visits a page (or views other content such as a video or a television show) based on previous fact checks done by/for other users. For example, User A visits Webpage X, and a selectable/clickable link appears for the user to see the fact check result that was done by the fact check system for Contact B of that page.
  • fact checks performed by/for contacts of the user are displayed.
  • fact checks performed by/for anyone are displayed.
  • only manual fact checks are displayed, only automatic fact checks are displayed (e.g., automatically performed by the fact checking system) or only automatic fact checks that have been manually reviewed are displayed.
  • the user is able to select to have a fact check performed by the fact checking system using the user's sources and compare the results with the previously performed fact check(s). In some embodiments, only differences between the fact check results are displayed. In some embodiments, the sources/criteria for the user's fact check implementation are automatically compared with a previous fact check's sources/criteria, and the user's fact check is only performed if the user's fact check sources/criteria are different (e.g., substantially different) from the previous fact check's sources/criteria.
  • Substantially different is able to be determined based on the number of different sources (e.g., number of different sources below a threshold), the quality of the differing sources (e.g., all sources have a 10 reliability rating), and/or any other analysis. For example, if the user's sources are the same except for one additional approved website, then the user's fact check and the previous fact check are considered not to be substantially different.
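One reading of the "substantially different" test above is sketched below. The thresholds, the set-difference comparison, and the interpretation that top-rated differing sources can still justify a re-run are all assumptions; the text leaves the exact rule open:

```python
def substantially_different(user_sources, prior_sources,
                            ratings, diff_threshold=2, quality_floor=10):
    """Decide whether the user's fact check differs enough from a previous
    fact check to be worth performing.

    Considered NOT substantially different when the number of differing
    sources is below diff_threshold, unless every differing source carries
    the top reliability rating (quality_floor).
    """
    differing = set(user_sources) ^ set(prior_sources)  # symmetric difference
    if len(differing) >= diff_threshold:
        return True
    return bool(differing) and all(ratings.get(s, 0) >= quality_floor
                                   for s in differing)

# One extra approved website with a middling rating: not substantially different.
different = substantially_different({"a.com", "b.com", "c.com"},
                                    {"a.com", "b.com"},
                                    {"c.com": 7})
```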
  • users receive benefits by fact checking content.
  • users register to fact check and/or use their social networking identification for fact checking and receiving benefits. For example, a user agrees to fact check a television program for free access to the television program. In another example, a user fact checks a television program and is able to watch the next television program commercial-free. In another example, a user agrees to fact check a program, and the program is streamed to the user for free.
  • Any benefit is able to be provided, including, but not limited to, commercial-free, shortened/fewer commercials, extended content, a period (e.g., month) of free cable/Internet access, program-specific access for free (e.g., access to News Show X), discounted access (e.g., 50% off), free access to related or unrelated content and/or any other benefit. For example, if the user fact checks News Show X, then they are given free access to News Show Y. In another example, if the user fact checks News Show X, they are given commercial free viewing of the next football game of their favorite team. In some embodiments, users are presented selectable benefits from which to choose.
  • a user is offered a free movie, free sporting event programming or a 50% off download of a new release game, if they fact check News Show X.
  • the user is required to fact check a certain amount of content and/or receive an accuracy rating above a threshold to receive the benefits. For example, a user agrees to fact check News Network X's content for free access to the content. If the user abuses the agreement, and does not fact check the content or provides inaccurate fact check results, then the user's access is terminated. If the user provides accurate fact check results, then the user is able to continue to receive free access. The user is able to fact check the content in any manner.
  • the user is able to manually fact check the content and provide the results to a central or distributed fact checking system.
  • the user is able to utilize an automatic fact checking implementation that the user has modified (e.g., by selecting sources, monitoring rules, processing rules).
  • users are grouped or form groups to fact check content (e.g., crowdsourcing), so that the groups work together to generate fact check results.
  • the benefits are able to be applied to any type of content/services. For example, users of a social networking service are able to receive expanded access for fact checking, or no advertisement browsing as a benefit for fact checking, and/or any other benefits.
  • users who agree to fact check YouTube content or provide a specified number (e.g., 10) of accurate fact check results are allowed to watch YouTube videos without commercials for a day, or users who fact check other users' Facebook® pages do not have any advertisements displayed when they browse Facebook® or listen to a music playing service such as Pandora.
  • the social networking fact checking system is a smartphone application including, but not limited to, an iPhone®, Droid® or Blackberry® application.
  • a broadcaster performs the fact checking.
  • a user's television performs the fact checking.
  • a user's mobile device performs the fact checking and causes (e.g., sends) the results to be displayed on the user's television and/or another device.
  • the television sends the fact checking result to a smart phone.
  • Utilizing the social networking fact checking system, method and device depends on the implementation to some extent.
  • a television broadcast uses fact checking to fact check what is said or shown to the viewers
  • a mobile application uses fact checking to ensure a user provides factually correct information.
  • Other examples include where web pages or social networking content (e.g., tweet or Facebook® page) are processed, fact checked, and a result is provided.
  • the fact checking is able to be implemented without user intervention. For example, if a user is watching a news program, the fact checking is able to automatically occur and present the appropriate information.
  • users are able to disable the fact checking if desired. Similarly, if a user implements fact checking on his mobile application, the fact checking occurs automatically.
  • the fact checking is also able to be implemented automatically, so that once installed and/or configured, the news company does not need to take any additional steps to utilize the fact checking.
  • the news company is able to take additional steps such as adding sources.
  • news companies are able to disable the fact checking, and in some embodiments, news companies are not able to disable the fact checking to avoid tampering and manipulation of data.
  • one or more aspects of the fact checking are performed manually.
  • the social networking fact checking system, method and device enable information to be fact checked in real-time and automatically (e.g., without user intervention).
  • the monitoring, processing, fact checking and providing of status are each able to occur automatically, without user intervention.
  • Results of the fact checking are able to be presented nearly instantaneously, so that viewers of the information are able to be sure they are receiving accurate and truthful information.
  • the fact checking is able to clarify meaning, tone, context and/or other elements of a comment to assist a user or viewer.
  • although monitoring, processing, fact checking and indicating are able to occur on any device and in any configuration, the following are some specific examples of implementation configurations.
  • Monitoring, processing, fact checking and providing all occur on a broadcaster's devices (or other emitters of information including, but not limited to, news stations, radio stations and newspapers).
  • Monitoring, processing and fact checking occur on a broadcaster's devices, and providing occurs on an end-user's device.
  • Monitoring and processing occur on a broadcaster's devices, fact checking occurs on a broadcaster's devices in conjunction with third-party devices, and providing occurs on an end-user's device.
  • Monitoring occurs on a broadcaster's devices, processing and providing occur on an end-user's device, and fact checking occurs on third-party devices.
  • Fact checking includes checking the factual accuracy and/or correctness of information.
  • the type of fact checking is able to be any form of fact checking such as checking historical correctness/accuracy, geographical correctness/accuracy, mathematical correctness/accuracy, scientific correctness/accuracy, literary correctness/accuracy, objective correctness/accuracy, subjective correctness/accuracy, and/or any other correctness/accuracy.
  • Another way of viewing fact checking includes determining the correctness of a statement of objective reality or an assertion of objective reality.
  • Yet another way of viewing fact checking includes determining whether a statement, segment or phrase is true or false.

Abstract

A fact checking system utilizes social networking information and analyzes and determines the factual accuracy of information and/or characterizes the information by comparing the information with source information. The social networking fact checking system automatically monitors information, processes the information, fact checks the information and/or provides a status of the information.

Description

CROSS-REFERENCE TO RELATED APPLICATION(S)
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/946,043, filed Feb. 28, 2014, and titled “FACT CHECKING METHOD AND SYSTEM UTILIZING SOCIAL NETWORKING INFORMATION,” which is hereby incorporated by reference in its entirety for all purposes.
FIELD OF THE INVENTION
The present invention relates to the field of information analysis. More specifically, the present invention relates to the field of automatically verifying the factual accuracy of information.
BACKGROUND OF THE INVENTION
Information is easily dispersed through the Internet, television, social media and many other outlets. The accuracy of the information is often questionable or even incorrect. Although there are many fact checkers, they typically suffer from efficiency issues.
SUMMARY OF THE INVENTION
A social networking fact checking system analyzes and determines the factual accuracy of information and/or characterizes the information by comparing the information with source information. The social networking fact checking system automatically monitors information, processes the information, fact checks the information and/or provides a status of the information.
The social networking fact checking system provides users with factually accurate information, limits the spread of misleading or incorrect information, provides additional revenue streams, and supports many other advantages.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a flowchart of a method of implementing fact checking according to some embodiments.
FIG. 2 illustrates a block diagram of an exemplary computing device configured to implement the fact checking method according to some embodiments.
FIG. 3 illustrates a network of devices configured to implement fact checking according to some embodiments.
FIG. 4 illustrates a flowchart of a method of implementing social fact checking according to some embodiments.
FIG. 5 illustrates a flowchart of a method of utilizing social network contacts for fact checking according to some embodiments.
FIG. 6 illustrates a flowchart of a method of fact checking a user for registration according to some embodiments.
FIG. 7 illustrates a flowchart of a method of determining a validity rating based on contacts' information according to some embodiments.
FIG. 8 illustrates an exemplary web of lies according to some embodiments.
FIG. 9 illustrates an exemplary web of lies in timeline format according to some embodiments.
FIG. 10 illustrates a flowchart of a method of affecting a user based on a validity rating according to some embodiments.
FIG. 11 illustrates a flowchart of a method of connecting users based on similar content or validity rating according to some embodiments.
FIG. 12 illustrates a flowchart of a method of fact checking mapping information.
FIG. 13 illustrates a flowchart of a method of using an icon to indicate a validity rating or the validity of information provided by an entity according to some embodiments.
FIG. 14 illustrates a flowchart of a method of awarding honors for fact checking according to some embodiments.
FIG. 15 illustrates a flowchart of a method of touchscreen fact checking according to some embodiments.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
A fact checking system utilizing social networking information determines the factual accuracy of information by comparing the information with source information. Additional analysis is able to be implemented as well such as characterizing the information.
FIG. 1 illustrates a flowchart of a method of implementing fact checking according to some embodiments.
In the step 100, information is monitored. In some embodiments, all information or only some information (e.g., a subset less than all of the information) is monitored. In some embodiments, only explicitly selected information is monitored. In some embodiments, although all information is monitored, only some information (e.g., information deemed to be fact-based) is fact checked.
The information includes, but is not limited to, broadcast information (e.g., television broadcast information, radio broadcast information), email, documents, database information, social networking/media content (tweets/Twitter®, Facebook® postings), webpages, message boards, web logs, any computing device communication, telephone calls/communications, audio, text, live speeches/audio, radio, television video/text/audio, VoIP calls, video chatting, video conferencing, images, videos, and/or any other information. The information is able to be in the form of phrases, segments, sentences, numbers, words, comments, values, graphics, and/or any other form.
In some embodiments, monitoring includes recording, scanning, capturing, transmitting, tracking, collecting, surveying, and/or any other type of monitoring. In some embodiments, monitoring includes determining if a portion of the information is able to be fact checked. For example, if information has a specified structure, then it is able to be fact checked.
In some embodiments, the social networking fact checking system is implemented without monitoring information. This is able to be implemented in any manner. For example, while information is transmitted from a source, the information is also processed and fact checked so that the fact check result is able to be presented. In some embodiments, the fact check result is embedded in the same stream as the information. In some embodiments, the fact check result is in the header of a packet.
In the step 102, the information is processed. Processing is able to include many aspects including, but not limited to, converting (e.g., audio into text), formatting, parsing, determining context, transmitting, converting an image into text, analyzing and reconfiguring, and/or any other aspect that enables the information to be fact checked. Parsing, for example, includes separating a long speech into separate phrases that are each separately fact checked. For example, a speech may include 100 different facts that should be separately fact checked. In some embodiments, the step 102 is able to be skipped if processing is not necessary (e.g., text may not need to be processed). In some embodiments, processing includes converting the information into a searchable format. In some embodiments, processing occurs concurrently with monitoring. In some embodiments, processing includes capturing/receiving and/or transmitting the information (e.g., to/from the cloud).
In a specific example of processing, information is converted into searchable information (e.g., audio is converted into searchable text), and then the searchable information is parsed into fact checkable portions (e.g., segments of the searchable text; several word phrases).
Parsing is able to be implemented in any manner including, but not limited to, based on sentence structure (e.g., subject/verb determination), based on punctuation including, but not limited to, end punctuation of each sentence (e.g., period, question mark, exclamation point), intermediate punctuation such as commas and semi-colons, based on other grammatical features such as conjunctions, based on capital letters, based on a duration of a pause between words (e.g., 2 seconds), based on duration of a pause between words by comparison (e.g., typical pauses between words for user are 0.25 seconds and pauses between thoughts are 1 second)—the user's speech is able to be analyzed to determine speech patterns such as length of pauses between words lasting a fourth of the length for pauses between thoughts or sentences, based on a change of a speaker (e.g., speaker A is talking, then speaker B starts talking), based on a word count (e.g., 10 word segments), based on speech analysis, based on a slowed down version (recording the content, slowing down the recorded content to determine timing breaks), based on keywords/key phrases, based on search results, and/or any other manner. In some embodiments, processing includes, but is not limited to, calculating, computing, storing, recognition, speaker recognition, language (word, phrase, sentence, other) recognition, labeling, and/or characterizing.
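The punctuation-based and word-count-based parsing approaches described above can be sketched as follows. This is a minimal illustration only; the function names and the 10-word default segment size are assumptions made for the example, not part of any claimed embodiment.

```python
import re

def parse_by_punctuation(text):
    """Split searchable text into fact-checkable segments at end punctuation
    (period, question mark, exclamation point)."""
    segments = re.split(r"[.!?]+", text)
    return [s.strip() for s in segments if s.strip()]

def parse_by_word_count(text, words_per_segment=10):
    """Split text into fixed-size word-count segments (e.g., 10-word segments)."""
    words = text.split()
    return [" ".join(words[i:i + words_per_segment])
            for i in range(0, len(words), words_per_segment)]
```

In practice, the segments produced by either method would each be fact checked separately, as described for the long-speech example above.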
In the step 104, the information is fact checked. Fact checking includes comparing the information to source information to determine the factual validity, accuracy, quality, character and/or type of the information. In some embodiments, the source information includes web pages on the Internet, one or more databases, dictionaries, encyclopedias, social network information, video, audio, any other communication, any other data, one or more data stores and/or any other source.
In some embodiments, the comparison is a text comparison such as a straight word for word text comparison. In some embodiments, the comparison is a context/contextual comparison. In some embodiments, a natural language comparison is used. In some embodiments, pattern matching is utilized. In some embodiments, an intelligent comparison is implemented to perform the fact check. In some embodiments, exact match, pattern matching, natural language, intelligence, context, and/or any combination thereof is used for the comparison. Any method of analyzing the source information and/or comparing the information to the source information to analyze and/or characterize the information is able to be implemented. An exemplary implementation of fact checking includes searching (e.g., a search engine's search), parsing the results or searching through the results of the search, comparing the results with the information to be checked using one or more of the comparisons (e.g., straight text, context or intelligent) and retrieving results based on the comparison (e.g., if a match is found, return “True”). The results are able to be any type including, but not limited to, binary, Boolean (True/False), text, numerical, and/or any other format. In some embodiments, determining context and/or other aspects of converting could be implemented in the step 104. In some embodiments, the sources are rated and/or weighted. For example, sources are able to be given more weight based on accuracy of the source, type of the source, user preference, user selections, classification of the source, and/or any other weighting factor. The weighting is then able to be used in determining the fact check result. For example, if a highly weighted or rated source agrees with a comment, and a low weighted source disagrees with the comment, the higher weighted source is used, and “valid” or a similar result is returned.
Determining a source agrees with information is able to be implemented in any manner, for example, by comparing the information with the source and finding a matching result, and determining a source disagrees with information is when the comparison of the information and the source does not find a match.
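The weighted-source comparison described above can be sketched as follows. The plain substring match used here is only a stand-in for the exact, contextual or intelligent comparisons discussed earlier, and the function signature is an assumption for the example.

```python
def fact_check(statement, weighted_sources):
    """Return 'valid' or 'invalid' by comparing a statement against rated sources.

    weighted_sources is a list of (source_text, weight) pairs; a source
    "agrees" when a simple substring match finds the statement in its text
    (a stand-in for the richer comparisons described in the specification).
    """
    agree = sum(w for text, w in weighted_sources
                if statement.lower() in text.lower())
    disagree = sum(w for text, w in weighted_sources
                   if statement.lower() not in text.lower())
    # A higher-weighted agreeing source outvotes a lower-weighted disagreeing one.
    return "valid" if agree > disagree else "invalid"
```

For example, a source rated 9 that agrees with the statement outweighs a source rated 1 that disagrees, so “valid” is returned.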
In the step 106, a status of the information is provided based on the fact check result. The status is provided in any manner including, but not limited to, transmitting and/or displaying text, highlighting, underlining, color effects, a visual or audible alert or alarm, a graphical representation, and/or any other indication. The meaning of the status is able to be any meaning including, but not limited to, correct, incorrect, valid, true, false, invalid, opinion, hyperbole, sarcasm, hypocritical, comedy, unknown, questionable, suspicious, need more information, misleading, deceptive, possibly, close to the truth, and/or any other status.
The status is able to be presented in any manner, including, but not limited to, lights, audio/sounds, highlighting, text, a text bubble, a scrolling text, color gradient, headnotes/footnotes, an iconic or graphical representation, a video or video clip, music, other visual or audio indicators, a projection, a hologram, a tactile indicator including, but not limited to, vibrations, an olfactory indicator, a Tweet, a text message (SMS, MMS), an email, a page, a phone call, a social networking page/transmission/post/content, or any combination thereof. For example, text is able to be highlighted or the text color is able to change based on the validity of the text. For example, as a user types a social network message, the true statements are displayed in green, the questionable statements are displayed in yellow, and the false statements are displayed in red. In some embodiments, providing the status includes transmitting and/or broadcasting the status to one or more devices (e.g., televisions).
The status is also able to include other information including, but not limited to, statistics, citations and/or quotes. Providing the status of the information is also able to include providing additional information related to the fact checked information, such as an advertisement. In some embodiments, providing includes pointing out, showing, displaying, recommending, playing, presenting, announcing, arguing, convincing, signaling, asserting, persuading, demonstrating, denoting, expressing, hinting, illustrating, implying, tagging, labeling, characterizing, and/or revealing.
In some embodiments, the fact checking system is implemented such that responses, validity determinations and/or status presentations are available in real-time or near real-time. By real-time, it is meant instantaneously (e.g., within 1 second); whereas near real-time is within a few seconds (e.g., within 5 seconds). Furthermore, since the monitoring, processing, fact checking and providing status are all able to be performed automatically without user intervention, real-time also means faster than having a human perform the search and presenting results. Depending on the implementation, in some embodiments, the indication is presented in at most 1 second, at most several seconds (e.g., at most 5 seconds), at most a minute (not real-time), at most several minutes or by the end of a show. In some embodiments, the time amount (e.g., at most 1 second) begins once a user pauses in typing, once a phrase has been communicated, once a phrase has been determined, at the end of a sentence, once an item is flagged, or another point in a sequence. For example, as soon as a phrase is detected, the fact checking system checks the fact, returns a result and displays an indication based on the result in less than 1 second—clearly much faster than a human performing a search, analyzing the search results and then typing a result to be displayed on a screen.
In some embodiments, an indication is displayed to compare the fact check result with other fact check results for other users. For example, as described herein, in some embodiments, fact check implementations are able to be different for different users based on selections such as approvals of sources and processing selections which are able to result in different fact check results. Therefore, if User A is informed that X information is determined to be “false,” an indication indicates that X information was determined to be “true” for 50 other people. In some embodiments, usernames are indicated (e.g., X information was determined to be “true” for Bob). In some embodiments, usernames and/or results are only provided if their result is different from the user's result. In some embodiments, the number of users whose result matches the user's result is indicated. In some embodiments, the indication only indicates what the results were for contacts (e.g., social networking contacts) of the user. In some embodiments, the indication is only indicated if the results were different (e.g., true for user, but false for others). In some embodiments, the indication includes numbers or percentages of other fact check implementations (e.g., true for 50 users and false for 500 users or 25% true and 75% false). In some embodiments, indications are only indicated for specific users or classes of users. For example, only results of users classified as “members of the media” are indicated. In another example, a user is able to select whose results are indicated. In some embodiments, only results of users with a validity rating above a threshold are indicated.
In some embodiments, fewer or more steps are implemented. Furthermore, in some embodiments, the order of the steps is modified. In some embodiments, the steps are performed on the same device, and in some embodiments, one or more of the steps, or parts of the steps, are separately performed and/or performed on separate devices. In some embodiments, each of the steps 100, 102, 104 and 106 occur or are able to occur in real-time or non-real-time. Any combination of real-time and non-real-time steps is possible such as all real-time, none real-time and everything in between.
FIG. 2 illustrates a block diagram of an exemplary computing device 200 configured to implement the fact checking method according to some embodiments. The computing device 200 is able to be used to acquire, store, compute, process, communicate and/or display information including, but not limited to, text, images, videos and audio. In some examples, the computing device 200 is able to be used to monitor information, process the information, fact check the information and/or provide a status of the information. In general, a hardware structure suitable for implementing the computing device 200 includes a network interface 202, a memory 204, a processor 206, I/O device(s) 208, a bus 210 and a storage device 212. The choice of processor is not critical as long as a suitable processor with sufficient speed is chosen. The memory 204 is able to be any conventional computer memory known in the art. The storage device 212 is able to include a hard drive, CDROM, CDRW, DVD, DVDRW, flash memory card, solid state drive or any other storage device. The computing device 200 is able to include one or more network interfaces 202. An example of a network interface includes a network card connected to an Ethernet or other type of LAN. The I/O device(s) 208 are able to include one or more of the following: keyboard, mouse, monitor, display, printer, modem, touchscreen, touchpad, speaker/microphone, voice input device, button interface, hand-waving, body-motion capture, touchless 3D input, joystick, remote control, brain-computer interface/direct neural interface/brain-machine interface, camera, and other devices. In some embodiments, the hardware structure includes multiple processors and other hardware to perform parallel processing. Fact checking application(s) 230 used to perform the monitoring, processing, fact checking and providing are likely to be stored in the storage device 212 and memory 204 and processed as applications are typically processed. 
More or fewer components shown in FIG. 2 are able to be included in the computing device 200. In some embodiments, fact checking hardware 220 is included. Although the computing device 200 in FIG. 2 includes applications 230 and hardware 220 for implementing the fact checking, the fact checking method is able to be implemented on a computing device in hardware, firmware, software or any combination thereof. For example, in some embodiments, the fact checking applications 230 are programmed in a memory and executed using a processor. In another example, in some embodiments, the fact checking hardware 220 is programmed hardware logic including gates specifically designed to implement the method.
In some embodiments, the fact checking application(s) 230 include several applications and/or modules. Modules include a monitoring module for monitoring information, a processing module for processing (e.g., converting) information, a fact checking module for fact checking information and a providing module for providing a status of the information. In some embodiments, modules include one or more sub-modules as well. In some embodiments, fewer or additional modules are able to be included. In some embodiments, the applications and/or the modules are located on different devices. For example, a device performs monitoring, processing, and fact checking, but the providing is performed on a different device, or in another example, the monitoring and processing occurs on a first device, the fact checking occurs on a second device and the providing occurs on a third device. Any configuration of where the applications/modules are located is able to be implemented such that the fact checking system is executed.
Examples of suitable computing devices include, but are not limited to a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a pager, a telephone, a fax machine, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a smart phone/device (e.g., a Droid® or an iPhone®), a portable music player (e.g., an iPod®), a tablet (e.g., an iPad®), a video player, an e-reader (e.g., Kindle™), a DVD writer/player, an HD (e.g., Blu-ray®) or ultra high density writer/player, a television, a copy machine, a scanner, a car stereo, a stereo, a satellite, a DVR (e.g., TiVo®), a smart watch/jewelry, smart devices, a home entertainment system or any other suitable computing device.
FIG. 3 illustrates a network of devices configured to implement fact checking according to some embodiments. The network of devices 300 is able to include any number of devices and any various devices including, but not limited to, a computing device (e.g., a tablet) 302, a television 304, a smart device 306 (e.g., a smart phone) and a source 308 (e.g., a database) coupled through a network 310 (e.g., the Internet). The source device 308 is able to be any device containing source information including, but not limited to, a searchable database, web pages, transcripts, statistics, historical information, or any other information or device that provides information. The network 310 is able to be any network or networks including, but not limited to, the Internet, an intranet, a LAN/WAN/MAN, wireless, wired, Ethernet, satellite, a combination of networks, or any other implementation of communicating. The devices are able to communicate with each other through the network 310 or directly to each other. One or more of the devices is able to be an end user device, a media organization, a company and/or another entity. In some embodiments, peer-to-peer sourcing is implemented. For example, the source of the data to be compared with is not on a centralized source but is found on peer sources.
Social
In some embodiments, social fact checking is implemented. In some embodiments, only a user's content and/or sources and/or a user's contacts' content and/or sources are used for fact checking. The source information is able to be limited in any manner such as by generating a database and filling the database only with information found in the user's contacts' content/sources. In some embodiments, the social fact checking only utilizes content that the user has access to, such that content and/or sources of users of a social networking system who are not contacts of the user are not accessible by the user and are not used by the fact checking system. In some embodiments, source information is limited to social networking information such that the social networking information is defined as content generated by or for, stored by or for, or controlled by or for a specified social networking entity (e.g., Facebook®, Twitter®, LinkedIn®). The social network entity is able to be recognized by a reference to the entity being stored in a data structure. For example, a database stores the names of social networking entities, and the database is able to be referenced to determine if a source is a social networking source or not. In some embodiments, source information is limited to the social networking information that has been shared by a large number of users (e.g., over 1,000) or a very large number of users (e.g., over 1,000,000). The social networking information is able to be shared in any manner such as shared peer-to-peer, shared directly or indirectly between users, shared by sending a communication directly or indirectly via a social networking system. 
For example, source information is limited to only tweets, only tweets received by at least 100 users, only Facebook® postings that are viewed by at least 100 users, only Facebook® postings of users with 100 or more contacts, only users who are “followed” by 100 or more users, and/or any other limitation or combination of limitations. In some embodiments, source information is acquired by monitoring a system such as Twitter®. For example, microblogs (e.g., tweets) are monitored, and in real-time or non-real-time, the tweets are analyzed and incorporated as source information. Furthering the example, the tweets are processed (e.g., parsed), fact checked and/or compared with other information, and results and/or other information regarding the tweets are stored as source information. In some embodiments, the source information is limited to social networking information and additional source information. For example, source information is limited to social networking information and other sources with a reliability rating above 9 (on a scale of 1 to 10). In another example, source information is limited to social networking information and specific sources such as encyclopedias and dictionaries. In some embodiments, the fact check occurs while the user is logged into the social networking system and uses the content accessible at that time. In some embodiments, if a contact is invited but has not accepted, his content/sources are still used. In some embodiments, contacts are able to be separated into different groups such as employers, employees or by position/level (e.g., partners and associates), and the different groups are able to be used for fact checking. In some embodiments, only a user's friends' content and/or sources are used for fact checking. In some embodiments, multiple fact checks are implemented based on the groups (e.g., one fact checker including friends' information and a second fact checker including co-workers' information).
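The source-limiting approach above, in which a data structure of recognized social networking entities is consulted and a minimum share count is enforced, can be sketched as follows. The entity registry, the dictionary field names and the 100-share default are assumptions made for the example.

```python
# Illustrative registry of recognized social networking entities; in the
# specification this is described as a database storing entity names.
SOCIAL_ENTITIES = {"Facebook", "Twitter", "LinkedIn"}

def filter_sources(candidates, min_shares=100):
    """Keep only candidate sources that come from a recognized social
    networking entity and have been shared by at least min_shares users
    (e.g., only tweets received by at least 100 users)."""
    return [c for c in candidates
            if c["entity"] in SOCIAL_ENTITIES and c["shares"] >= min_shares]
```

A conventional web page would be excluded even if widely shared, because its entity is not in the registry, matching the limitation to social networking information described above.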
In some embodiments, fact check results are sent to contacts (e.g., social network contacts) of a user. In some embodiments, fact check results are shared using social networking. For example, a user fact checks an advertisement or the advertisement is fact checked for a user, and the result is sent to and/or displayed for contacts in the social network. In some embodiments, users are able to select if they want to receive fact check results from contacts. In some embodiments, users are able to be limited to contacts where they only receive fact check results but do not have other access (e.g., no access to personal information). For example, a user watches a show which is fact checked. When misinformation is detected, the fact check result is sent to the user and his contacts (e.g., via a tweet). In some embodiments, only certain types of fact check results are sent to users (e.g., only lies and misinformation). The misinformation and lies are able to be determined in any manner. For example, misinformation is determined automatically by determining the factual accuracy of the information, and if the information is determined to be factually inaccurate, then it is misinformation. Lies are able to be determined by determining information is misinformation and analyzing intent. Intent is able to be analyzed in any manner, for example, context (e.g., additional information, a result) of a statement is analyzed to determine intent. Misinformation, lies and other characterizations are able to be determined using a look-up table which classifies and stores information as factually accurate, misinformation, lies, and/or other characterizations. In some embodiments, information is distinguished as misinformation or lies by manual review and/or crowdsourcing. 
For example, users are presented a comment including context (e.g., a video clip), and the users indicate if they think it is misinformation or an intentional lie, then based on the user selections, a result is indicated. In some embodiments, additional information is sent with the result to provide context such as a clip of the original content which had the misinformation or a link to the original content. In some embodiments, social information is stored/utilized in an efficient manner. For example, personal data is stored in a fastest access memory location, and non-personal information provided by a user on a social network is stored in a slower location. In some embodiments, information is further prioritized based on popularity, relevance, time (recent/old), and/or any other implementation.
FIG. 4 illustrates a flowchart of a method of implementing social fact checking according to some embodiments. In the step 400, information is analyzed. Analyzing is able to include monitoring, processing, and/or other forms of analysis. In the step 402, automatic fact checking is performed utilizing only social network information as source information. In some embodiments, the social network information is only social network information from contacts of the user (or contacts of contacts). In some embodiments, social network information is not limited to contacts of the user. In some embodiments, if the automatic fact checking fails to produce a result above a quality/confidence threshold, then manual crowdsourcing fact checking is implemented to generate a result, in the step 404. The manual crowdsourcing is implemented by providing/sending the information to be fact checked where many users are able to find the information, fact check the information and send a response which is used to generate a fact checking result. For example, 1000 users perform manual crowdsourcing fact checking, and 995 of the users send a response indicating the information is false, and the fact checking system generates a fact check result that the information is false. The fact check result is able to be determined in any way, for example, majority rules, percent above/below a threshold or any other way. In the step 406, the result is presented on the user's device. In some embodiments, fewer or additional steps are implemented. In some embodiments, automatic fact checking and crowdsourcing are performed in parallel. In some embodiments, an automatic result and crowdsource result are compared, and the result with a higher confidence score is used. In some embodiments, both results including the confidence score of each are provided. 
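The fallback flow of steps 402 through 404, automatic fact checking first, then manual crowdsourcing when the automatic result falls below a quality/confidence threshold, can be sketched as follows. The callable parameters and the 0.9 threshold are assumptions for the example.

```python
def fact_check_with_fallback(info, automatic_check, crowdsource_votes,
                             confidence_threshold=0.9):
    """Run the automatic fact check first; when its confidence falls below
    the threshold, fall back to manual crowdsourced votes (majority rules).

    automatic_check(info) returns a (result, confidence) pair;
    crowdsource_votes(info) returns vote counts, e.g., {"false": 995, "true": 5}.
    """
    result, confidence = automatic_check(info)
    if confidence >= confidence_threshold:
        return result
    votes = crowdsource_votes(info)
    # Majority rules: the answer with the most votes becomes the result.
    return max(votes, key=votes.get)
```

This mirrors the 1000-user example above, where 995 “false” responses produce a fact check result of false.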
Confidence of a result is able to be determined in any manner; for example, based on how close source information is to the information, based on the number of agreeing/disagreeing sources, and/or any other manner. For example, if 99 sources agree with a statement (e.g., have the same text as the statement) and only 1 source disagrees with the statement (e.g., has text that indicates or means the opposite of the statement), then the confidence score is 99%.
In some embodiments, only sources that a user and/or a user's contacts (e.g., social network contacts) have approved/accepted/selected are used for fact checking. Users are able to approve/accept sources in any manner, such as clicking approve after visiting a website, or filling out a survey, or not clicking disapprove after visiting a website where the site is automatically approved by visiting, approving via social networking (e.g., receiving a link or site or content from a contact), by “liking” content, by sending a tweet with a hashtag or other communication with the source to approve, by selecting content (e.g., from a list of selectable sources), using another social media forum (e.g., items/photos pinned on Pinterest are approved by that user, videos liked on YouTube are approved by those users) or any other implementation. In some embodiments, a source is approved if the source is fact checked by the user. In some embodiments, a source is approved if the source has been fact checked by another entity (e.g., automatically by the fact checking system), and the user has verified or accepted the fact check results. In some embodiments, a user is able to designate an entity which approves/disapproves of sources for the user. For example, the user selects an automated approval/disapproval system which searches/crawls sources (e.g., databases, the Web), analyzes (e.g., parses) the sources, fact checks the sources, and based on the analysis and/or fact check results, approves/disapproves of the sources for the user. In some embodiments, a source is approved if the source is associated with an organization/entity that the user has “liked” (or a similar implementation), where associated means approved by, written by, affiliated with, or another similar meaning. In some embodiments, a site or other source information becomes an approved source if a user uses or visits the source.
In some embodiments, a source is approved if a user uses or visits the source while signed/logged in (e.g., signed in to Facebook® or Google+®). In some embodiments, the user must be logged into a specific social networking system, and in some embodiments, the user is able to be logged into any social networking system or a specific set of social networking systems. In some embodiments, the sources are limited to a specific method of approval such as only sources visited while logged in. In some embodiments, a source is approved if the source is recommended to the user (e.g., by a contact) (even if the user does not visit/review the source), unless or until the user rejects/disapproves of the source. In some embodiments, sources are suggested to a user for a user to accept or reject based on contacts of the user and/or characteristics of the user (e.g., location, political affiliation, job, salary, organizations, recently watched programs, sites visited). In some embodiments, the contacts are limited to n-level contacts (e.g., friends of friends but not friends of friends of friends). In an example, user A approved source X, and one of his contacts approved source Y, and another contact approved source Z. So only sources X, Y and Z are used for fact checking content for user A. Furthering the example, since user A's sources may be different than user J's sources, it is possible to have different fact checking results for different users. In some embodiments, users are able to disapprove sources. In some embodiments, if there is a conflict (e.g., one user approves of a source and a contact disapproves of the same source), then the choice of the user with a higher validity rating is used. In some embodiments, if there is a conflict, the selection of the contact with the closer relationship to the user (the user being interpreted as the closest contact) is used. 
In some embodiments, if there is a conflict and multiple users approve/disapprove, then the higher of the number of approvals versus disapprovals determines the result (e.g., 2 users approve Site X and 5 users disapprove Site X, then Site X is not used). In another example, if 50 contacts approve Web Page Y, and 10 contacts disapprove Web Page Y, then Web Page Y is approved. Again, depending on the contacts, the fact check results could be different for different users. Furthering the example, User A has 50 contacts that approve Web Page Y, and 10 that disapprove. However, User B has 5 contacts that approve Web Page Y, and 20 contacts that disapprove Web Page Y. Therefore, Web Page Y is approved for User A and disapproved for User B. In some embodiments, users are able to approve/disapprove sources in clusters, and users are able to cluster sources. In some embodiments, users are able to share/recommend sources to contacts (e.g., via a social networking site). For example, user A says, “I've grouped these sources; I think they are all legit,” and the contacts are able to accept or reject some/all of the sources. In some embodiments, to generate viral approvals/disapprovals, when a user approves or disapproves of a source or a group of sources, the source (or references to the source, for example, a link to the source) and the approval or disapproval are automatically sent to contacts of the user (or up to nth level contacts of the user, for example, contacts of contacts of the user). Similarly, when contacts of a user approve/disapprove a source, the source or reference and approval/disapproval are automatically sent to the user. In some embodiments, when a user approves/disapproves of a source, the source is automatically approved/disapproved for contacts. 
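The count-based conflict resolution above, approving a source when approvals among a user's contacts outnumber disapprovals, can be sketched as follows; the boolean-vote representation is an assumption for the example.

```python
def is_source_approved(contact_votes):
    """Resolve a conflict over one source for one user: the source is
    approved when approvals among the user's contacts outnumber
    disapprovals (each vote is True for approve, False for disapprove)."""
    approvals = sum(1 for v in contact_votes if v)
    disapprovals = len(contact_votes) - approvals
    return approvals > disapprovals
```

Because each user has a different set of contacts, running this per user reproduces the Web Page Y example above: approved for User A (50 approve vs. 10 disapprove) but disapproved for User B (5 vs. 20).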
In some embodiments, the contacts are able to modify the approval/disapproval; for example, although the user approved a source, Contact X selects to disapprove the source, so it is not an approved source for Contact X. Similarly, when contacts approve/disapprove a source, the source is automatically approved/disapproved for the user unless the user modifies the approval/disapproval. In some embodiments, users are able to limit the automatic approval to nth level contacts (e.g., only 1st and 2nd level contacts but no higher level contacts). In some embodiments, all sources or a subset of sources (e.g., all sources including social networking content generated by users while logged into a social networking site) are approved until a user disapproves of a source (or group of sources), and then that source (or group of sources) is disapproved. In some embodiments, sources are approved based on a tweet and a hashtag. For example, a user tweets a message with the name of a source preceded by a hashtag symbol. In another example, a user tweets a message with a link to a source or a name of a source and “#fcapproval” or “#fcdisapproval” or similar terms to approve/disapprove a source. In some embodiments, sources are approved based on content watched (e.g., on television, YouTube), items purchased, stores/sites shopped at, and/or other activities by the user. For example, the user watches Program X which uses/approves sources A, B and C for analyzing/determining content, so those sources automatically become approved for the user. In some embodiments, the sources are approved only if the user “likes” the show or if it is determined the user watches the show long enough and/or enough times. For example, a counter tracks how long the user watches a show, and if/when the counter goes above a threshold, the sources affiliated with/related to the show are automatically approved. 
In some embodiments, sources are linked so that if a user approves a source, any source that is linked to the source is also approved automatically. In some embodiments, the linked sources are required to be related (e.g., same/similar genre, same/similar reliability rating). For example, a user approves Dictionary A which is linked to Dictionary B and Dictionary C, so Dictionaries A, B and C all become approved when the user approves Dictionary A. In some embodiments, the linked sources are displayed for the user to select/de-select (e.g., in a pop-up window). In some embodiments, approval/disapproval of sources is transmitted via color-coded or non-color-coded messages such as tweets, text messages and/or emails. In some embodiments, approvals/disapprovals are transmitted automatically to contacts of the user upon approval/disapproval. In some embodiments, when a user is about to approve/disapprove a source, an indication of what others (e.g., contacts or non-contacts of the user) have selected is presented. For example, the user visits Webpage Z, and in the bottom corner of the browser (or elsewhere), it is displayed that Contact J disapproved this source. In some embodiments, all sources are accepted except ones the user manually rejects. In some embodiments, sources are able to be selected by sensing a user circling/selecting icons or other representations of the sources. In some embodiments, sources are approved by bending a flexible screen when the source is on the screen. For example, a bend is detected by detecting pressure in the screen or in another manner, and the device determines the current source being displayed. In some embodiments, sources are selected based on an entity. For example, a user specifies to use Fox News's content and sources as approved sources. In some embodiments, any content/sources that Fox News disapproved are also recognized as disapproved for the user. 
In some embodiments, users are able to combine entities and their sources; for example, a user selects to use Fox News content/sources and CNN content/sources. If there are any conflicts, such as Fox News approving Source X and CNN disapproving Source X, the conflicts are able to be handled in any manner such as those described herein. In some embodiments, the user handles the conflicts by selecting approve/disapprove of each conflicting item or selects a preferred entity (e.g., if conflict, prefer CNN, so CNN's selections are chosen). In some embodiments, sources are received from/by others, and the sources are filtered based on personal preferences, characteristics, and/or selections such that only sources satisfying preferences are accepted automatically and others are rejected or placed on hold for approval. For example, User A is a very liberal person as determined based on viewing and reading habits, so when User G sends three sources that he thinks User A should approve, two of the three sources are classified as liberal, so they are automatically approved for User A, and the third source is classified as conservative, so it is placed in a queue for User A to review and approve/disapprove. In some embodiments, sources are approved by detecting a user in a specified location. For example, the device determines that it is at or is detected at a political rally. The content/sources of the politician holding the rally are automatically approved for the user or are presented for the user for manual approval. In some embodiments, content/sources of the opponent of the politician are automatically disapproved (unless they had previously been approved; for example, by detecting them as already approved by the user). 
In some embodiments, when a device determines that it is within range of another user (e.g., by facial recognition) or another user's device (e.g., by detecting device ID or user ID), the approved/disapproved sources and their approval/disapproval status are provided on the device. In some embodiments, users are able to limit access to their approval/disapproval information (e.g., only contacts are able to view it). In some embodiments, sources are approved by waving a device at a source. For example, RFID or another technology is used to determine what sources are in close proximity (e.g., a user waves a smart phone in a library, and the books become approved sources for the user). In some embodiments, in addition to or instead of accepting/rejecting sources, users are able to set/select any other option regarding fact checking implementations such as which content to monitor, keywords for monitoring, how content is processed, weighting schemes for sources, priorities, and/or any other fact checking implementation option.
In some embodiments, a source is approved based on the reliability rating of the source and the approvals/disapprovals of the source. For example, a source is approved if the reliability rating and the approvals total above a threshold. In another example, a reliability rating is increased by 1 (or another number) if the number of approvals is greater than the number of disapprovals, and the reliability rating is decreased by 1 (or another number) if the number of approvals is not greater than the number of disapprovals, and then the source is approved if the modified reliability rating is above a threshold. In another example, the reliability rating is added to the number of approvals divided by the number of disapprovals divided by ten or the number of approvals plus the number of disapprovals, and then the modified reliability rating is compared with a threshold, and if the modified reliability rating is above the threshold, then the source is approved. In another example, the reliability rating is multiplied by the number of approvals divided by the number of disapprovals with a cap/maximum total (e.g., 10), and then the modified reliability rating is compared with a threshold, and if the modified reliability rating is above the threshold, then the source is approved. Any calculation is able to be implemented to utilize the reliability rating, approvals and disapprovals to determine if a source is approved for fact checking. In some embodiments, weights are added to the calculations; for example, a user's approval/disapproval is given extra weight. For example, reliability rating+user's approval/disapproval (+2/−2)+contacts' approvals/disapprovals (+1/−1).
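The last example above (the reliability rating plus a double-weighted user vote and single-weighted contact votes, compared with a threshold) might be sketched like this. The threshold value and the treatment of each contact vote as an independent +1/-1 are assumptions made for illustration:

```python
def weighted_source_score(reliability, user_vote, contact_votes):
    """Combine a source's reliability rating with approval votes.

    user_vote: True (approve), False (disapprove), or None (no vote);
    the user's own vote counts +2/-2.  Each contact vote counts +1/-1
    (assumed to apply per contact).
    """
    score = reliability
    if user_vote is not None:
        score += 2 if user_vote else -2
    for vote in contact_votes:
        score += 1 if vote else -1
    return score

def source_approved(reliability, user_vote, contact_votes, threshold=7):
    # The threshold is illustrative; the passage leaves it unspecified.
    return weighted_source_score(reliability, user_vote, contact_votes) > threshold
```

For a source with reliability 6 that the user approves and whose contact votes split evenly, the score is 8, clearing the illustrative threshold of 7.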
In some embodiments, social networking teams/groups are able to be set up for fact checking such that each member of a team approves, disapproves, and/or rates sources for fact checking. In some embodiments, each member of a team rates/selects other options regarding fact checking as well, such as monitoring criteria, processing criteria and/or other criteria, and the selections are used to determine how to fact check information. For example, three members of a team select to parse after every pause of two seconds in the monitored information, and two members select to parse after every 10 seconds, so the selection of parsing after every pause of two seconds is used. In some embodiments, social network groups' fact checking results are compared to determine the most accurate group. For example, Groups A, B and C are compared, and the group with the most correct results is considered to be the most accurate group. Furthering the example, a set of data is fact checked using Group A's sources, Group B's sources, and Group C's sources, and then the fact checking results are analyzed automatically, manually or both to determine the most accurate fact checking results. Furthering the example, Group A's results were 80% accurate, Group B's results were 95% accurate, and Group C's results were 50% accurate, so Group B was the most accurate. The groups' results are able to be compared automatically, manually or both. For example, if a group's results are similar to the automatic fact checking system's results, the group's results are determined to be accurate. In another example, a group's results are analyzed manually (e.g., by a group of impartial individuals) and manually compared with an automated fact checking system's results or other groups' results. 
Furthering the example, the sources selected/approved by a group are used to automatically fact check content, and the results of those fact checks are manually or automatically compared with automatic fact check implementations using different sources or other groups' implementations. In some embodiments, the groups are ranked by accuracy. In some embodiments, the most accurate groups' sources (e.g., top 10 groups) are made public and/or selectable by other users, and/or the most accurate groups' sources are sent via social media (e.g., tweeted or posted) to other users with an option to accept/reject.
Rating sources includes providing a reliability rating of a source, a validity rating of a source, fact checking a source, and/or any other rating. For example, a user of a team rates an opinion blog as a 1 out of 10 (1 meaning very factually inaccurate), and then the opinion blog is fact checked utilizing an automatic fact checking system (or manually) which determines the content of the opinion blog is mostly factually inaccurate, so the automatic fact checking system gives a rating of 1 as well. In some embodiments, users of teams do not specify a rating number for a source; rather, the users of the teams approve/disapprove/select sources, and the team with the most accurate sources (e.g., in number and/or in accuracy) is considered to be the most accurate team. In some embodiments, “accurate” such as an accurate source is defined as having a reliability or accuracy rating above a threshold (e.g., above 8 on a scale of 1 to 10 with 10 being the most accurate), and the reliability/accuracy rating is able to be based on how accurate the information is; for example, the information is fact checked (automatically and/or manually) and based on the fact check, the reliability/accuracy rating is determined. Furthering the example, if the fact check returns “factually accurate” for all segments of information, then the information receives a 10 for accuracy, and if the fact check returns “factually inaccurate” for all segments of the information, then the information receives a 0 for accuracy. In some embodiments, sources are manually analyzed to determine a reliability/accuracy rating. 
In an example of teams with the most accurate sources, a team with 1 source that is fact checked by a fact checking system and determined to have a reliability rating of 10 is considered to be less accurate than a team with 10 sources that all have a reliability rating of 10. In other words, accuracy and breadth of the sources are taken into account to determine the team with the best sources. In some embodiments, the sources are classified, and breadth is determined not just by the quantity of sources but also by the number of classes the sources fall into. For example, 100 sources in a single classification (e.g., sports history) are not as accurate as 100 sources in 10 classifications. In some embodiments, the opposite is true. For example, a large number of sources in a single classification would ensure a fact check using those sources would be accurate, whereas a source collection that is very broad would not necessarily help. Furthering the example, if the fact checking system is fact checking the first team to win back-to-back Super Bowls, a set of sources which includes a medical encyclopedia and a French dictionary would not be better than a set of sources that focuses on sports. In some embodiments, accuracy is given more weight, and in some embodiments, breadth is given more weight. For example, in some embodiments, a set of 100 sources with an average reliability rating of 9 is better than a set of 1000 sources with an average reliability rating of 8. In some embodiments, the set of 1000 sources is considered better even though the reliability rating is slightly lower, since more information may be able to be fact checked with the larger breadth. In some embodiments, both sets are available and used for fact checking, and whichever one returns with a higher confidence score is used to provide a status of the information being fact checked.
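One hypothetical way to score a team's source set on both accuracy and breadth, as discussed above. The normalization, the 0.7/0.3 weights, and the ten-class cap are all assumptions made for illustration, not values from the disclosure:

```python
def source_set_score(ratings_by_class, accuracy_weight=0.7, breadth_weight=0.3):
    """Score a source set given {classification: [reliability ratings]}.

    Accuracy is the average rating normalized to 0..1 (ratings assumed
    on a 0-10 scale); breadth is the number of distinct classes, capped
    at 10 and normalized to 0..1.
    """
    ratings = [r for rs in ratings_by_class.values() for r in rs]
    if not ratings:
        return 0.0
    accuracy = sum(ratings) / len(ratings) / 10.0
    breadth = min(len(ratings_by_class), 10) / 10.0
    return accuracy_weight * accuracy + breadth_weight * breadth
```

Shifting the two weights toward accuracy or toward breadth reproduces the two opposing preferences the passage describes.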
FIG. 5 illustrates a flowchart of a method of utilizing social network contacts for fact checking according to some embodiments. In the step 500, users approve or disapprove sources or other features/options/elements for fact checking. In the step 502, fact checking is implemented (e.g., monitoring, processing, fact checking and/or providing a result). In some embodiments, additional or fewer steps are implemented.
In some embodiments, sources are weighted based on the number of users that have accepted/rejected them. For example, a dictionary that has 1000 accepts and 0 rejects is rated higher than a biased site which has 5 accepts and 900 rejects. In some embodiments, this is a factor used in conjunction with other weighting systems. For example, a fact check is performed on sources to generate a reliability rating, and the accept/reject weighting is an additional factor for determining a final reliability rating (e.g., reliability rating+/−accept/reject weighting=final reliability rating).
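The accept/reject adjustment described above ("reliability rating +/- accept/reject weighting = final reliability rating") might be sketched as follows. Mapping the net vote share into a fixed +/-2 band is an assumption; the passage does not specify the weighting function:

```python
def final_reliability(base_rating, accepts, rejects, max_adjustment=2.0):
    """Adjust a base reliability rating by user accept/reject votes.

    The net vote share (accepts minus rejects over total votes) is
    scaled into [-max_adjustment, +max_adjustment] and added to the
    base rating obtained from fact checking the source itself.
    """
    total = accepts + rejects
    if total == 0:
        return base_rating  # no votes yet: rating unchanged (assumption)
    adjustment = max_adjustment * (accepts - rejects) / total
    return base_rating + adjustment
```

A dictionary with 1000 accepts and 0 rejects gets the full positive adjustment, while a biased site with 5 accepts and 900 rejects is pushed nearly the full amount downward.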
In some embodiments, users in a social network are grouped/have different levels (e.g., media, business level, regular user, politician) which affects the weight of the sources. For example, a media level source is given a higher weight than a regular user source.
In some embodiments, the weight of the source is utilized in fact checking such that higher weighted sources have more influence on a fact check result. For example, a calculation in determining a fact check result includes: determining the number of agreeing highest weighted sources which is multiplied by the highest weight value, determining the number of agreeing second highest weighted sources which is multiplied by the second highest weight value, and so on until determining the number of agreeing lowest weighted sources which is multiplied by the lowest weight value. Then, the results are combined to determine a total value, and if the total value is above a threshold, then the information being fact checked is determined "confirmed," and if the total value is not above the threshold, then the information is "unconfirmed" or "disproved." In another example, the weights are applied to disagreeing sources, and if the total value is above a threshold, then the information is "disproved," or if the total value is not above the threshold, then the information is "confirmed." In yet another example, the weighted agreeing and disagreeing values are combined or subtracted, and if the result is above a threshold, then "confirmed" and if not, then "disproved."
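The tiered weighting calculation above can be sketched as a sum of (source count x tier weight) terms compared against a threshold. The threshold value is illustrative, and this sketch implements the passage's "subtracted" variant:

```python
def fact_check_status(agreeing, disagreeing, threshold=10.0):
    """Determine a fact check result from weighted source counts.

    agreeing/disagreeing: lists of (count, weight) pairs, one pair per
    weight tier.  The weighted disagreeing total is subtracted from the
    weighted agreeing total and compared with the threshold.
    """
    agree = sum(count * weight for count, weight in agreeing)
    disagree = sum(count * weight for count, weight in disagreeing)
    return "confirmed" if agree - disagree > threshold else "disproved"
```

For example, 3 agreeing sources of weight 3 and 2 of weight 2 against a single disagreeing source of weight 1 yields 13 - 1 = 12, above the illustrative threshold of 10.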
In some embodiments, users' sources are weighted based on “tokens” or user validity ratings (e.g., the higher the validity rating or higher number of tokens earned, then the higher the source weight).
In some embodiments, emails or other messages are sent to contacts with fact check result updates. For example, an email is automatically sent when a contact's validity rating drops below a threshold. Other forms of communication are possible such as a tweet, text message, or instant message.
In some embodiments, people are fact checked to confirm they are who they say they are. For example, when a person registers for a social networking site, the person is verified by fact checking. A user is able to be verified in any manner, such as: comparing the user information with another social networking site, comparing the user information with an online resume, using IP address location information, using past history information, comparing a past photograph of the user with a current photograph or a video scan (e.g., using a webcam), analyzing school information, analyzing work information, analyzing professional organization information, and/or analyzing housing information.
In some embodiments, when users attempt to connect (e.g., when a user asks to join a user/friend's network or when a user is asked to join another user's (e.g., friend) network), a question is asked. For example, the user asks the friend a question, and the user determines if the answer is correct or not, which determines if the friend is accepted into the network or not. In another example, the friend asks the user a question, and the friend determines if the answer is correct or not, which determines if the user is accepted into the network or not. In some embodiments, for efficiency, the user asks a generic/broad question that is able to be applied to many users, so the user does not have to generate specific questions for each user. For example, “what high school did we go to?”. In some embodiments, when a user makes an invitation to a second user, the user inputs a question for the second user to answer. In some embodiments, instead of or in addition to a user asking a question, the second user (or invitee) simply sends a personal message that informs the user that the second user is, in fact, who he says he is. For example, the invitee accepts the invitation, and also makes a comment, “I remember that weird painting of the dog in your dorm room at Stanford.” Then, the user either accepts or rejects the second user. In some embodiments, a user is allowed to “connect” to another user but with limited access until he is verified.
FIG. 6 illustrates a flowchart of a method of fact checking a user for registration according to some embodiments. In the step 600, a user attempts to register (e.g., with a social networking site/system or a second social networking site/system). In the step 602, the user is verified using fact checking. In the step 604, the user attempts to connect with a friend. In the step 606, user/friend verification occurs. In some embodiments, fewer or additional steps are implemented.
In some embodiments, after a web page, tweet, and/or any other content is fact checked, the fact check result and any identifying information (e.g., the parsed segment) is stored and used as source information, or stored in a manner that is easily retrievable for future displays of results. In some embodiments, the results are stored in a cache or other quickly accessible location. In some embodiments, the results are stored in a script (e.g., javascript) with the web page or coded in the web page, or another implementation.
Validity Rating and Web
In some embodiments, an entity including, but not limited to, a speaker, author, user, or another entity (e.g., corporation) has a validity rating that is included with the distribution of information from him/it. The validity rating is able to be based on fact checking results of comments made by an entity or any other information. For example, if a person has a web page, and 100% of the web page is factually accurate, then the user is given a 10 (on a scale of 1 to 10) for a validity rating. In another example, if a user tweets often, and half of the tweets are factually accurate and half are inaccurate, then the user is given a 5. The validity rating is able to be calculated in any manner. In addition to fact checking information by an entity, items such as controversies, bias, and/or any other relevant information are able to be used in calculating a validity rating. The severity of the information or misinformation is also able to be factored in when rating a person or entity. Additionally, the subject of the information or misinformation is also able to be taken into account in terms of severity. In some embodiments, an independent agency calculates a validity rating and/or determines what is major and what is minor. In some embodiments, individual users are able to indicate what is important to them and what is not. In some embodiments, another implementation of determining what is major, minor and in between is implemented. The context of the situation/statement is also able to be taken into account. In some embodiments, entities are able to improve their validity rating if they apologize for or correct a mistake, although measures are able to be taken to prevent abuses of apologies. In some embodiments, in addition to or instead of a validity rating, an entity is able to include another rating, including, but not limited to, a comedic rating or a political rating. 
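The two worked examples above (100% accurate yields a 10, half accurate yields a 5) suggest a base validity rating proportional to the share of factually accurate statements. The severity, bias, and context adjustments the passage mentions are omitted from this sketch:

```python
def base_validity_rating(accurate, inaccurate, scale=10):
    """Validity rating from counts of fact-checked statements.

    Returns None when nothing has been fact checked yet (an assumption;
    the passage does not cover that case).
    """
    total = accurate + inaccurate
    if total == 0:
        return None
    return scale * accurate / total
```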
In some embodiments, an entity includes a classification including, but not limited to, political, comedy or opinion. Examples of information or statistics presented when an entity appears include, but are not limited to the number of lies, misstatements, truthful statements, hypocritical statements or actions, questionable statements, spin, and/or any other characterizations.
FIG. 7 illustrates a flowchart of a method of determining a validity rating based on contacts' information according to some embodiments. In the step 700, a user's validity rating is determined or acquired. In the step 702, the user's contacts' validity ratings are determined or acquired. In the step 704, a complete user's validity rating is determined based on the user's validity rating and the contacts' validity ratings. In some embodiments, additional or fewer steps are implemented and/or the order of the steps is modified. For example, the steps are continuously ongoing such that if anything changes in either the user's validity rating or the contacts' validity ratings, then new ratings, including a new complete validity rating, are computed.
In some embodiments, relationship information is utilized in fact checking. For example, if a user's contacts have low entity/validity ratings, then that information negatively affects the user's entity rating. For example, a user's base validity rating is a 7 out of 10 based on fact checking results of the user's comments. Based on social networking relationships, the user has 4 friends/contacts with 1 degree of separation from the user, and each of those friends has a 2 out of 10 validity rating. If the user's validity rating is calculated as Final Validity Rating=(Base Validity Rating*10+Average Friend Validity Rating*5)/15, then the Final Validity Rating=(7*10+2*5)/15=5.3. In another example, the user's validity rating is calculated as Final Validity Rating=(Base Validity Rating*10+Average Friend Validity Rating*# of friends)/(# of friends+10), so the Final Validity Rating=(7*10+2*4)/(4+10)=5.6. In some embodiments, contacts with additional degrees of separation are utilized in determining the user's validity rating. In some embodiments, the additional degrees of separation are weighted less, and the weighting decreases as the degree of separation increases. For example, a user's validity rating is 7, 4 friends have validity ratings of 2, and 2 friends of friends have validity ratings of 6. If the user's validity rating is calculated as Final Validity Rating=(Base Validity Rating*10+Average Friend Validity Rating*5+Average Second Degree Friend Validity Rating*2)/17, then the Final Validity Rating=(7*10+2*5+6*2)/17=5.4.
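The formulas above, taken directly from the passage, can be checked in a few lines; the function names are illustrative:

```python
def final_validity_rating(base, friends, second_degree=None):
    """The passage's weighted averages: the base rating has weight 10,
    the first-degree friends' average has weight 5, and, when present,
    the second-degree friends' average has weight 2."""
    avg1 = sum(friends) / len(friends)
    if second_degree:
        avg2 = sum(second_degree) / len(second_degree)
        return (base * 10 + avg1 * 5 + avg2 * 2) / 17
    return (base * 10 + avg1 * 5) / 15

def final_validity_rating_by_count(base, friends):
    """The passage's alternative: the friends' average is weighted by
    the number of friends rather than a fixed 5."""
    n = len(friends)
    return (base * 10 + (sum(friends) / n) * n) / (n + 10)
```

A base rating of 7 with four friends rated 2 reproduces the passage's 5.3 and 5.6, and adding two second-degree friends rated 6 reproduces the 5.4.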
In some embodiments, a web of lies/misinformation/other characterization is generated. A web is able to be generated by fact checking information and determining the relationship of who said what and when. Once information is determined to be misleading, analysis is performed to determine who provided the information, further analysis is performed to determine if anyone provided the information before, and relationships are determined based on the time/date of the information and/or if there is any connection between those providing the information. For example, the web of misinformation includes a graphic of who spreads misinformation. Each point in the web is able to be an entity or information. For example, a set of Republicans who told the same lie are included in the ring with the misinformation shown in the middle. In another example, the web is a timeline version where the web shows who first said the lie, and then who repeated it. In some embodiments, times/dates of when the misinformation was said or passed on are indicated. In some embodiments, the first person to say the lie is given more negative weight (e.g., for the validity rating) as they are the origin of the lie. In another example, a tree structure is used to display the connections of lies. Although specific examples have been provided, there are many different ways of storing the information and showing who provided the information. The web is able to be displayed for viewers to see who says the same information or agrees with a person. The web is shown when the misinformation is detected or when one of the people in the web is detected. For example, commentator X provided misinformation, and 5 people also provided the same misinformation. When commentator X is detected (e.g., by voice or facial recognition), a graphic is presented showing the 5 additional people who provided the same misinformation as commentator X.
FIG. 8 illustrates an exemplary web of lies according to some embodiments. In the web shown, a first level provider 800 of misinformation is shown in the middle of the web. Second level 802 and third level 804 misinformation providers are also shown further out in the web.
FIG. 9 illustrates an exemplary web of lies in timeline format according to some embodiments. In the timeline format of the web, a first provider 900 of misinformation is shown, followed by a second provider 902, third provider 904, fourth provider 906, and fifth provider 908.
The web is also able to be used to generate relationships between entities. For example, user A says, "global warming is a hoax." Then, users who have made a similar or the same comment (e.g., on their Facebook® page, personal website, on a message board, in a tweet) are recommended to connect/join in a social network. Same or similar phrases are detected in any manner such as word/keyword comparison, and then a message or any communication is sent to users that have provided the same/similar phrase. Furthering the example, a popup is displayed on the user's social network page that provides a list of users who have made the same or a similar comment, and the user is asked if he wants to invite the other users to join his network or to join their networks. In some embodiments, a message/tweet is sent to both asking if they want to "connect." In some embodiments, when misinformation is detected in a person's comment, a message is sent to users in the person's network indicating what the person said and that the fact check result shows it to be wrong.
In some embodiments, entity/validity ratings are based on relationships with other entities (including the web described above). The relationships are able to be based on, for example, being on the same cable network or at the same company. Using the web above, for example, if entities say the same misinformation, they become linked together or connected, and their ratings become related or merged.
In some embodiments, a user whose validity rating is below a lower threshold is automatically de-friended/disconnected/de-linked. In some embodiments, others are prompted with a question if they would like to disconnect from the user whose validity rating is below a lower threshold. In some embodiments, the user with a low validity rating is put in “time out” or his status remains a friend but a non-full friend status. For example, although the user with the low validity rating is connected, he is not able to comment on a connected user's page. In another example, the capabilities of the user are limited on a social networking site if his validity rating drops below threshold.
FIG. 10 illustrates a flowchart of a method of affecting a user based on a validity rating according to some embodiments. In the step 1000, a validity rating of a user is determined to be below a threshold. In the step 1002, the user is affected; for example, the user's access to web pages (e.g., social network) is restricted. In some embodiments, additional or fewer steps are implemented.
In another example, related to the web of lies, people are grouped (e.g., become contacts) if they send/say the same misinformation (they may not even know each other, but if they say "global warming is a hoax," they join the same contacts as others who said the same thing). In some embodiments, people who use the same phrase or quote (not necessarily misinformation) become friends or are asked if they would like to become friends with someone who said the same thing. In some embodiments, users with the same or similar validity rating are connected or asked if they would like to connect.
FIG. 11 illustrates a flowchart of a method of connecting users based on similar content and/or validity rating according to some embodiments. In the step 1100, information is compared to determine a match. For example, user comments are compared to determine if they have said the same thing such as "49ers rule!". In some embodiments, only misinformation or other negative characteristic comments are compared. For example, a database stores comments that have been fact checked and deemed inaccurate as well as the user that made each comment. Then, those comments are compared to determine if there are any matches between users. In some embodiments, user validity ratings are compared as well. In some embodiments, users are grouped by validity rating (e.g., the validity rating is stored in a database and sortable by validity rating). In some embodiments, the validity ratings are exactly matched (e.g., all users with a validity rating of 7.0 are matched), and in some embodiments, ranges of validity ratings are matched (e.g., all users with a 7.0 to 7.5 are matched). In some embodiments, opposite comments are searched for. For example, a comment that says "raising taxes hurts the economy" and an opposite comment of "raising taxes helps the economy." These comments are able to be considered an opposite match, which can then be used to join people with opposing views. In the step 1102, users with matching comments and/or validity ratings are "connected" or asked if they would like to "connect" (e.g., join each other's social networks). In some embodiments, the steps occur in real-time; for example, immediately after the user tweets, "49ers rule!," connection suggestions are presented based on the implementation described herein. Additional information is able to be provided to the users such as the matching comment, the validity rating of the other user, and/or any other information. In some embodiments, additional or fewer steps are able to be implemented.
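The matching in steps 1100-1102 might be sketched as below. Exact string matching stands in for the similar/opposite-phrase detection the passage mentions, and the rating tolerance is an assumption:

```python
from itertools import combinations

def connection_suggestions(comments, ratings, rating_tolerance=0.5):
    """Suggest user pairs to connect.

    comments: {user: set of comment strings}; ratings: {user: validity
    rating}.  A pair is suggested when the users share an identical
    comment or their validity ratings fall within rating_tolerance of
    each other.
    """
    suggestions = set()
    for a, b in combinations(sorted(comments), 2):
        same_comment = bool(comments[a] & comments[b])
        close_rating = abs(ratings[a] - ratings[b]) <= rating_tolerance
        if same_comment or close_rating:
            suggestions.add((a, b))
    return suggestions
```

For example, two users who both tweeted "49ers rule!" are paired by comment, while two users rated 7.0 and 7.2 are paired by rating range.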
Additional Implementations
In some embodiments, mapping information is fact checked. For example, a camera device (e.g., augmented reality camera or vehicle camera) is used to confirm traffic information on a map. Furthering the example, if a map indicates that traffic is going “fast” (e.g., over 50 mph), yet a vehicle camera indicates the traffic is stopped, then an alert indicating the fact check result of “bad traffic information” is able to be presented. In another example, if a map indicates the traffic a certain way, but a user's GPS (e.g., stand alone device or smart phone GPS) indicates traffic differently, then an alert is provided to other users. In another example, accident information is fact checked by comparing news information and/or police reports. In some embodiments, based on the fact check result, a corrected route is provided. For example, after fact checking a route, it is determined the traffic is not bad for a particular road that was supposedly bad, so the route now includes that road. Fact checking of the mapping information is able to occur periodically, when new information becomes available, or at any other time. In some embodiments, mapping information from different sources is compared. For example, G Maps indicates that traffic is flowing at 65 mph; however, A Maps shows that traffic is only going 35 mph. The information from each source is compared (e.g., determine any differences), and analysis is performed to determine which is more accurate. For example, verification of either is searched for using direct knowledge (e.g., using vehicle camera or a camera positioned on the side of the road or elsewhere to view traffic). Or a news organization is contacted for additional information. In some embodiments, the mapping information and fact checking results are shared among contacts in a social network. In some embodiments, the mapping information is fact checked using social networking source information (e.g., information from contacts). 
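The comparison of mapping information against direct knowledge is able to be sketched as below. The function name and the 10 mph tolerance are assumptions chosen for illustration; any tolerance or source-weighting scheme described herein is able to be substituted.

```python
# Illustrative sketch: cross-check traffic speeds reported by map
# providers against a directly observed speed (e.g., from a vehicle
# camera or roadside camera), flagging each source as consistent or not.
def fact_check_traffic(map_speeds, observed_speed, tolerance=10):
    """map_speeds: {source name: reported speed in mph};
    observed_speed: speed from direct observation, in mph.
    Returns {source name: True if consistent with observation}."""
    results = {}
    for source, speed in map_speeds.items():
        results[source] = abs(speed - observed_speed) <= tolerance
    return results
```

In the example above, if a vehicle camera observes traffic moving at roughly 33 mph, A Maps (35 mph) would be confirmed and G Maps (65 mph) would be flagged as inaccurate.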
In some embodiments, flying devices (e.g., drones) are utilized to provide information for fact checking. For example, the drones take images and/or videos of traffic conditions and provide the images and/or videos as source information for comparison. In some embodiments, when an accident or other traffic issue occurs, a drone is able to be automatically directed to verify the issue by flying over to the area and acquiring information. For example, a user texts that an accident has occurred on Interstate X. The drone automatically receives/retrieves this information, and flies into position to take pictures of the location including traffic analysis. In another example, a device (e.g., a user's mobile device or a vehicle device) determines that user's vehicle is moving much slower than the speed limit, so the device automatically communicates with a drone (either directly or through a server), and the drone utilizes GPS information of the vehicle to move into position to analyze the traffic issues. The information acquired by the drone is then dispersed to be used as source information. In some embodiments, a server automatically determines the nearest drone to the position of the user device, and directs only that drone to move to acquire information.
FIG. 12 illustrates a flowchart of a method of fact checking mapping information. In the step 1200, mapping information is analyzed (e.g., monitored and processed). In the step 1202, the mapping information is fact checked. In the step 1204, a fact check result is presented. In some embodiments, fewer or additional steps are implemented.
In some embodiments, an icon changes from a happy face to a sad face as misinformation is given by an entity. In some embodiments, an image of a person is changed from smiling to sad/angry. The fact checking system collects 2 to 5 different images of the person by detecting the person (e.g., using facial recognition). Then, the system searches/crawls the web for pictures of the person using templates of smile, frown, angry face, tears, tense, stoic, and neutral expressions. The appropriate pictures are retrieved and stored. The appropriate image is displayed when the misinformation calculation result is in range. For example, when zero misinformation is detected, a smiling face is displayed; when 3-6 misinformation comments are detected, a frowning face is displayed; and above 6, a crying face is displayed. In some embodiments, tears or other items are added to an image if the image cannot be found. For example, if a sad image cannot be found, tears are added to a neutral image that was found.
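The range-based image selection is able to be sketched as follows. The thresholds for smiling, frowning, and crying follow the example above; the neutral fallback for counts of 1-2 (which the example leaves unspecified) is an assumption.

```python
# Illustrative sketch: select the stored image to display based on the
# number of misinformation comments detected for an entity.
def face_for_misinformation(count):
    if count == 0:
        return "smiling"   # zero misinformation detected
    if count <= 2:
        return "neutral"   # assumed fallback for the 1-2 range
    if count <= 6:
        return "frowning"  # 3-6 misinformation comments
    return "crying"        # above 6
```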
FIG. 13 illustrates a flowchart of a method of using an icon to indicate a validity rating or the validity of information provided by an entity according to some embodiments. In the step 1300, one or more images of an entity are acquired. In the step 1302, the entity's validity rating is determined or the validity of the entity's comments is analyzed. In the step 1304, as the entity's validity rating changes or the validity of the entity's comments is analyzed, the image presented changes. In some embodiments, additional or fewer steps are implemented.
In some embodiments, medallions/medals, tokens, ranks, points, and/or other awards/honors are provided based on user fact checking actions. For example, a user is awarded a different token for providing an accurate fact check result for different items. Furthering the example, a user receives a “donkey” token for fact checking an item from a member of the Democratic party, and an “elephant” token for fact checking an item from a member of the Republican party. In some embodiments, the item has to be an item not previously accurately fact checked (for example, a comment by the President previously not fact checked). In some embodiments, the fact check result is verified automatically, manually or a combination of both. In some embodiments, the user provides the fact checked comment or identification information of the comment as well as support for the fact check result (e.g., a website confirming or disproving the comment). In some embodiments, the user must perform a specified number of fact checks before receiving a token (e.g., 5 fact checks of Democrats to receive a “donkey” token). Additional tokens are able to include, but are not limited to: a “donk-phant” for fact checking both Democrats and Republicans, a “prez” token for fact checking the President, a “sen” token for fact checking a member of the Senate, a “house” token for fact checking a member of the House of Representatives, and a “news” token for fact checking a newscaster. In some embodiments, there are different levels of tokens. For example, one level of tokens is for actually fact checking, and a second level is for merely flagging content as false, questionable, or another characterization, and when the content is fact checked, a user is rewarded for being accurate. For example, if a user flags a comment as questionable, and then the comment is proven to be false, the user is awarded one point towards five points to obtain a second-level token. 
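The token-awarding scheme is able to be sketched as follows, using the “donkey”/“elephant”/“donk-phant” example above. The category names, the token table, and the five-fact-check requirement are taken from the examples; the data structure is an assumption for illustration.

```python
from collections import Counter

# Illustrative token table drawn from the examples above.
TOKEN_RULES = {
    "Democrat": "donkey",
    "Republican": "elephant",
    "President": "prez",
    "Senate": "sen",
}
REQUIRED = 5  # e.g., 5 verified fact checks per category per the text

def award_tokens(verified_fact_checks):
    """verified_fact_checks: a list with one category name per accurate,
    verified fact check by the user. Returns the earned token names."""
    counts = Counter(verified_fact_checks)
    tokens = [TOKEN_RULES[cat] for cat, n in counts.items()
              if cat in TOKEN_RULES and n >= REQUIRED]
    # "donk-phant" for fact checking both Democrats and Republicans
    if counts["Democrat"] >= REQUIRED and counts["Republican"] >= REQUIRED:
        tokens.append("donk-phant")
    return sorted(tokens)
```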
In some embodiments, a user is penalized (e.g., points lost or demoted) for incorrectly flagging an item and/or providing an incorrect fact check result.
FIG. 14 illustrates a flowchart of a method of awarding honors for fact checking according to some embodiments. In the step 1400, a user fact checks or causes (e.g., flags) information to be fact checked. In the step 1402, the user fact check or flag is analyzed/verified. In the step 1404, the user is rewarded for a valid fact check. In some embodiments, fewer or additional steps are implemented.
In some embodiments, as a user acquires tokens, his label/title changes. For example, the user begins as a level 1 fact checker and is able to increase to reach a level 10 fact checker if he acquires all of the possible tokens. In some embodiments, users are able to specify the type of label/title they receive. For example, users are able to specify “middle ages” which begins the user as a “peon” and goes up to “king.” Other examples include, but are not limited to: Star Wars (ewok to jedi knight or storm trooper to sith lord (good/evil)), police (recruit to chief), military (cadet to captain), political (mayor to president). By enabling the user to specify the set of labels or titles, additional enjoyment occurs for the user. In some embodiments, a set of labels or titles is generated for a group (e.g., social network group). For example, user X generates a football-labeled fact checking group which starts users as “punters” with the goal of becoming a “quarterback.”
In some embodiments, the label/title is based on the tokens, validity rating and/or other fact checking. A user's label/title is able to move up or down based on the acquired tokens, validity rating and/or other fact checking. For example, if a user acquires several tokens, but then provides misinformation several times, a token is able to be taken away. In some embodiments, users are provided additional features or benefits for a higher label/title. For example, a user with a level 8 fact checker label is provided contact information of several members of the news media, whereas a level 1 fact checker is not provided this information. Other benefits, awards and/or rewards are able to be provided, such as monetary or item prizes. In some embodiments, the label/title is able to be used as a filtering tool for searches (e.g., employee searches by employers). For example, an employer is able to search for candidates with “computer engineering skills” and “at least level 5 fact checker.”
In some embodiments, users are rewarded for providing factually accurate information. For example, if a user tweets 100 times (and each of the tweets is fact checked by a fact checking system), the user receives a reward such as a token or any other reward. In some embodiments, the information fact checked has to meet specified criteria to qualify for counting toward the reward. For example, the user is not able to tweet a well known fact 100 times and receive a reward. In some embodiments, steps to prevent cheating are implemented (e.g., monitoring for redundancy). In some embodiments, the information provided by the user has to be directed to a specific topic (e.g., politics). In some embodiments, the information provided by the user needs to include a keyword to be fact checked to receive a reward. In some embodiments, only information with a specific label (e.g., hashtag) is fact checked and counts toward a reward.
In some embodiments, fact check swarms are able to be implemented. Using social media (e.g., Twitter®), one or more users are able to encourage and/or trigger a fact check swarm such that many users attempt to fact check information (e.g., a speech). Those that participate in the fact check swarm are able to be recognized, awarded a prize, or provided another benefit. For example, a user sends a tweet with a specific hashtag and/or other information regarding information to fact check swarm. The users who receive the tweet are then able to participate in the fact check swarm by researching elements of the information and providing fact check results related to the information (e.g., by tweeting a snippet, a fact check result, and a cite to source(s) for the result). The users in the swarm are then able to agree or disagree with the result. If enough (e.g., above a threshold) users agree with the result, the result is accepted and presented (e.g., tweeted or displayed on a television) to users outside of the social network.
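The swarm acceptance test (accepting a result only when enough participants agree) is able to be sketched as follows; the two-thirds threshold is an assumed example value, not a specified one.

```python
# Illustrative sketch: accept a swarm fact check result only when the
# share of agreeing participants exceeds a threshold.
def swarm_accept(votes, threshold=0.66):
    """votes: list of booleans, True meaning the participant agrees
    with the proposed fact check result. Returns True if the result
    is accepted for presentation outside the social network."""
    if not votes:
        return False
    return sum(votes) / len(votes) > threshold
```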
FIG. 15 illustrates a flowchart of a method of touchscreen fact checking according to some embodiments. In the step 1500, information is monitored. In the step 1502, the information is processed. In the step 1504, the information is fact checked, after detecting a touch of the touchscreen (or a button or other implementation). In the step 1506, a fact check result is provided. In some embodiments, additional or fewer steps are implemented.
In some embodiments, a touchscreen input is utilized for fact checking. When a user wants to flag content (e.g., a commentator talking) to indicate the information is questionable and/or to receive a fact check result, the user taps the touchscreen, and the last n seconds of content are used for fact checking. For example, the content is continuously monitored and processed, and the fact checking system is able to retrieve previously processed information to perform the fact check. Furthering the example, a commentator is talking in a video, a user taps the screen, and the previous 10 seconds of content are fact checked. In some embodiments, an additional time (e.g., 5 seconds) is fact checked. In some embodiments, the fact checking system determines the current segment. For example, the commentator says, “this project is a mess, it is $5B over budget.” The user taps the screen at “$5B” in the video. The fact checking system had determined or determines that the current segment is “it is $5B over budget,” so that segment is fact checked. In some embodiments, the current segment or a previous segment (e.g., to allow a delay of the user thinking) is fact checked. In some embodiments, the user is able to highlight closed caption content for fact checking. In some embodiments, when a user taps the touchscreen, a list of recent/current segments is displayed (e.g., pops up), and the user is able to select one or more of the segments by tapping again. In some embodiments, the list is displayed on a second or third screen. In some embodiments, the list is based on time (e.g., most recent) and/or priority (e.g., most relevant). In some embodiments, content is monitored and processed, but the content is only fact checked when a user touches the touchscreen (or utilizes any other input mechanism). In some embodiments, the user is able to use the touchscreen to select or highlight text, information or a communication to have that text/information/communication fact checked.
For example, a user taps a tweet on a screen to have the tweet fact checked. In another example, a user highlights text on a social networking page to have the text fact checked.
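The tap-to-fact-check buffering above is able to be sketched as follows. The class name and the 10-second default window are illustrative assumptions; the retrieved segments would be handed to the fact checking described herein.

```python
from collections import deque

# Illustrative sketch: keep recently processed segments so a tap on
# the touchscreen can fact check the last n seconds of content.
class SegmentBuffer:
    def __init__(self):
        self.segments = deque()  # (timestamp in seconds, segment text)

    def add(self, timestamp, text):
        # called as content is continuously monitored and processed
        self.segments.append((timestamp, text))

    def on_tap(self, tap_time, window=10):
        # return the segments from the `window` seconds before the tap,
        # to be fact checked (or listed for the user to select from)
        return [text for ts, text in self.segments
                if tap_time - window <= ts <= tap_time]
```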
In some embodiments, content feeds are modified based on fact checking. Content feeds are fact checked, and a content feed with the highest factual accuracy rating is presented on top/first. Factual accuracy and time/date information are able to be combined for ranking/ordering content feeds.
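The ordering of content feeds by factual accuracy combined with time/date information is able to be sketched as follows; the field names are assumptions for the example.

```python
# Illustrative sketch: rank content feeds so the feed with the highest
# factual accuracy rating is first, breaking ties by recency.
def order_feeds(feeds):
    """feeds: list of dicts with assumed keys "accuracy" (higher is
    more factually accurate) and "timestamp" (higher is more recent)."""
    return sorted(feeds, key=lambda f: (-f["accuracy"], -f["timestamp"]))
```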
In some embodiments, fact checking results are presented one after the other or in chronological order as a news/activity feed (and presented via social media/networking).
In some embodiments, fact checking information is displayed on a page mostly (e.g., 95% or more) hidden behind the main content. The user can then click on the page to view the fact check information.
In some embodiments, what time the misinformation was said is included in timeline format or another format.
In some embodiments, misinformation is turned into jokes automatically to send to friends. In some embodiments, misinformation is turned into a postcard or greeting card. The misinformation is turned into a joke and/or card by including the misinformation with a matching image and/or template. The match is able to be made using a keyword or any other manner. For example, if the misinformation is from Politician Z, a caricature of Politician Z is included as well as the misinformation and the fact check result or a correction of the misinformation. In some embodiments, additional text, audio, images and/or video is provided such as an “oops!” sound or text, or silly music or any other effect to add humor.
In some embodiments, the sources are rated using a rating system so that sources that provide false or inaccurate information are rated as poor or unreliable and/or are not used, and sources that rarely provide misinformation are rated as reliable and are used and/or given more weight than others. For example, if a source's rating falls or is below a threshold, that source is not used in fact checking. In some embodiments, users are able to designate the threshold.
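The threshold filtering of sources is able to be sketched as follows; the default threshold of 5.0 is an assumed, user-designatable value.

```python
# Illustrative sketch: exclude sources whose reliability rating falls
# below the (possibly user-designated) threshold from fact checking.
def usable_sources(sources, threshold=5.0):
    """sources: {source name: reliability rating}; returns only the
    sources reliable enough to be used in fact checking."""
    return {name: rating for name, rating in sources.items()
            if rating >= threshold}
```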
In some embodiments, comments are classified (e.g., high/false, mid/misleading, low/unsupported), and users are able to select which classification of information to exclude or receive. In some embodiments, “high” excludes only false information, “mid” excludes false and misleading information, and “low” excludes false, misleading and unsupported information. In an example, user A accepts all information, but user B excludes only false information. When information is excluded, it is muted, crossed out, blacked out, not provided, deleted, not transmitted and/or any other exclusion.
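The three-level exclusion scheme above maps directly to a lookup of which classifications each level excludes; the sketch below is illustrative only, with classification labels taken from the example.

```python
# Illustrative sketch of the high/mid/low exclusion levels: each level
# excludes progressively more classifications, per the text above.
EXCLUSIONS = {
    "high": {"false"},
    "mid": {"false", "misleading"},
    "low": {"false", "misleading", "unsupported"},
}

def filter_comments(comments, level):
    """comments: list of (text, classification) pairs; returns the
    texts not excluded at the user's chosen level. An unknown level
    excludes nothing (i.e., all information is accepted)."""
    excluded = EXCLUSIONS.get(level, set())
    return [text for text, cls in comments if cls not in excluded]
```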
In some embodiments, fact check results are displayed when a user visits a page (or views other content such as a video or a television show) based on previous fact checks done by/for other users. For example, User A visits Webpage X, and a selectable/clickable link appears for the user to see the fact check result that was done by the fact check system for Contact B of that page. In some embodiments, only fact checks performed by/for contacts of the user are displayed. In some embodiments, fact checks performed by/for anyone are displayed. In some embodiments, only manual fact checks are displayed, only automatic fact checks are displayed (e.g., automatically performed by the fact checking system) or only automatic fact checks that have been manually reviewed are displayed. In some embodiments, the user is able to select to have a fact check performed by the fact checking system using the user's sources and compare the results with the previously performed fact check(s). In some embodiments, only differences between the fact check results are displayed. In some embodiments, sources/criteria for the user's fact check implementation are automatically compared with a previous fact check's sources/criteria, and the user's fact check is only performed if the user's fact check sources/criteria are different (e.g., substantially different) from the previous fact check's sources/criteria. Substantially different is able to be determined based on the number of different sources (e.g., number of different sources below a threshold), the quality of the differing sources (e.g., all sources have a 10 reliability rating), and/or any other analysis. For example, if the user's sources are the same except for one additional approved website, then the user's fact check and the previous fact check are considered not to be substantially different.
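The count-based variant of the “substantially different” test is able to be sketched as follows; the threshold of two differing sources is an assumption for the example, and a quality-weighted variant would compare the differing sources' reliability ratings as well.

```python
# Illustrative sketch: decide whether the user's fact check sources
# differ substantially from a previous fact check's sources, so the
# user's own fact check is only re-run when it could differ.
def substantially_different(user_sources, prior_sources, threshold=2):
    """user_sources, prior_sources: iterables of source identifiers.
    Returns True when at least `threshold` sources appear in one
    configuration but not the other."""
    differing = set(user_sources) ^ set(prior_sources)  # symmetric diff
    return len(differing) >= threshold
```

This matches the example above: one additional approved website yields a single differing source, which is below the assumed threshold, so the two fact checks are considered not substantially different.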
In some embodiments, users receive benefits by fact checking content. In some embodiments, users register to fact check and/or use their social networking identification for fact checking and receiving benefits. For example, a user agrees to fact check a television program for free access to the television program. In another example, a user fact checks a television program and is able to watch the next television program commercial-free. In another example, a user agrees to fact check a program, and the program is streamed to the user for free. Any benefit is able to be provided, including, but not limited to, commercial-free viewing, shortened/fewer commercials, extended content, a period (e.g., month) of free cable/Internet access, program-specific access for free (e.g., access to News Show X), discounted access (e.g., 50% off), free access to related or unrelated content and/or any other benefit. For example, if the user fact checks News Show X, then they are given free access to News Show Y. In another example, if the user fact checks News Show X, they are given commercial-free viewing of the next football game of their favorite team. In some embodiments, users are presented selectable benefits from which to choose. For example, a user is offered a free movie, free sporting event programming or a 50% off download of a new release game, if they fact check News Show X. In some embodiments, the user is required to fact check a certain amount of content and/or receive an accuracy rating above a threshold to receive the benefits. For example, a user agrees to fact check News Network X's content for free access to the content. If the user abuses the agreement and does not fact check the content or provides inaccurate fact check results, then the user's access is terminated. If the user provides accurate fact check results, then the user is able to continue to receive free access. The user is able to fact check the content in any manner.
For example, the user is able to manually fact check the content and provide the results to a central or distributed fact checking system. In another example, the user is able to utilize an automatic fact checking implementation that the user has modified (e.g., by selecting sources, monitoring rules, processing rules). In another example, users are grouped or form groups to fact check content (e.g., crowdsourcing), so that the groups work together to generate fact check results. The benefits are able to be applied to any type of content/services. For example, users of a social networking service are able to receive expanded access for fact checking, or no advertisement browsing as a benefit for fact checking, and/or any other benefits. In additional examples, users who agree to fact check YouTube content or provide a specified number (e.g., 10) of accurate fact check results are allowed to watch YouTube videos without commercials for a day, or users who fact check other users' Facebook® pages do not have any advertisements displayed when they browse Facebook® or listen to a music playing service such as Pandora.
In some embodiments, the social networking fact checking system is a smartphone application including, but not limited to, an iPhone®, Droid® or Blackberry® application. In some embodiments, a broadcaster performs the fact checking. In some embodiments, a user's television performs the fact checking. In some embodiments, a user's mobile device performs the fact checking and causes (e.g., sends) the results to be displayed on the user's television and/or another device. In some embodiments, the television sends the fact checking result to a smart phone.
Utilizing the social networking fact checking system, method and device depends on the implementation to some extent. In some implementations, a television broadcast uses fact checking to fact check what is said or shown to the viewers, and a mobile application, in some embodiments, uses fact checking to ensure a user provides factually correct information. Other examples include where web pages or social networking content (e.g., tweet or Facebook® page) are processed, fact checked, and a result is provided. The fact checking is able to be implemented without user intervention. For example, if a user is watching a news program, the fact checking is able to automatically occur and present the appropriate information. In some embodiments, users are able to disable the fact checking if desired. Similarly, if a user implements fact checking on his mobile application, the fact checking occurs automatically. For a news company, the fact checking is also able to be implemented automatically, so that once installed and/or configured, the news company does not need to take any additional steps to utilize the fact checking. In some embodiments, the news company is able to take additional steps such as adding sources. In some embodiments, news companies are able to disable the fact checking, and in some embodiments, news companies are not able to disable the fact checking to avoid tampering and manipulation of data. In some embodiments, one or more aspects of the fact checking are performed manually.
In operation, the social networking fact checking system, method and device enable information to be fact checked in real-time and automatically (e.g., without user intervention). The monitoring, processing, fact checking and providing of status are each able to occur automatically, without user intervention. Results of the fact checking are able to be presented nearly instantaneously, so that viewers of the information are able to be sure they are receiving accurate and truthful information. Additionally, the fact checking is able to clarify meaning, tone, context and/or other elements of a comment to assist a user or viewer. By utilizing the speed and breadth of knowledge that comes with automatic, computational fact checking, the shortcomings of human fact checking are greatly overcome. With instantaneous or nearly instantaneous fact checking, viewers will not be confused as to what information is being fact checked since the results are posted instantaneously or nearly instantaneously versus when a fact check is performed by humans and the results are posted minutes later. The rapid fact checking provides a significant advantage over past data analysis implementations. Any of the steps described herein are able to be implemented automatically. Any of the steps described herein are able to be implemented in real-time or non-real-time.
Examples of Implementation Configurations:
Although the monitoring, processing, fact checking and indicating are able to occur on any device and in any configuration, these are some specific examples of implementation configurations. Monitoring, processing, fact checking and providing all occur on a broadcaster's devices (or other emitters of information including, but not limited to, news stations, radio stations and newspapers). Monitoring, processing and fact checking occur on a broadcaster's devices, and providing occurs on an end-user's device. Monitoring and processing occur on a broadcaster's devices, fact checking occurs on a broadcaster's devices in conjunction with third-party devices, and providing occurs on an end-user's device. Monitoring occurs on a broadcaster's devices, processing and providing occur on an end-user's device, and fact checking occurs on third-party devices. Monitoring, processing, fact checking, and providing all occur on third-party devices. Monitoring, processing, fact checking, and providing all occur on an end-user's device. Monitoring, processing and fact checking occur on a social networking site's device, and providing occurs on an end-user's device. These are only some examples; other implementations are possible. Additionally, supplemental information is able to be monitored for, searched for, processed and/or provided using any of the implementations described herein.
Fact checking includes checking the factual accuracy and/or correctness of information. The type of fact checking is able to be any form of fact checking such as checking historical correctness/accuracy, geographical correctness/accuracy, mathematical correctness/accuracy, scientific correctness/accuracy, literary correctness/accuracy, objective correctness/accuracy, subjective correctness/accuracy, and/or any other correctness/accuracy. Another way of viewing fact checking includes determining the correctness of a statement of objective reality or an assertion of objective reality. Yet another way of viewing fact checking includes determining whether a statement, segment or phrase is true or false.
Although some implementations and/or embodiments have been described related to specific implementations and/or embodiments, and some aspects/elements/steps of some implementations and/or embodiments have been described related to specific implementations and/or embodiments, any of the aspects/elements/steps, implementations and/or embodiments are applicable to other aspects/elements/steps, implementations and/or embodiments described herein.
The present invention has been described in terms of specific embodiments incorporating details to facilitate the understanding of principles of construction and operation of the invention. Such reference herein to specific embodiments and details thereof is not intended to limit the scope of the claims appended hereto. It will be readily apparent to one skilled in the art that other various modifications may be made in the embodiment chosen for illustration without departing from the spirit and scope of the invention as defined by the claims.

Claims (20)

What is claimed is:
1. A method programmed in a non-transitory memory of a device comprising:
a. automatically analyzing social networking information of a user including:
i. capturing the social networking information from a social networking system; and
ii. parsing the social networking information into parsed segments based on punctuation within and at an end of sentences within the social networking information;
b. detecting bending of a flexible screen of the device by detecting pressure, wherein detecting the bending of the flexible screen of the device approves a source as source information;
c. automatically fact checking, using the device, the social networking information to determine a factual accuracy of the social networking information by comparing the parsed segments of the social networking information with the source information, wherein the source information comprises only approved social networking information, wherein the approved social networking information includes user-approved social networking information approved by the user and contact-approved social networking information approved by contacts of the user, wherein the approved social networking information approved by the user or the contacts of the user comprises visited social networking information visited by the user or the contacts of the user but not disapproved by the user or the contacts of the user, wherein the contacts of the user are the contacts of the user in the social networking system, wherein fact checking includes determining a text string of the social networking information is in the source information, wherein the source information containing the text string of the social networking information is an agreeing source, further wherein fact checking includes determining a number of agreeing highest weighted sources and multiplying the number of agreeing highest weighted sources by a highest weight value, determining the number of agreeing second highest weighted sources and multiplying the number of agreeing second highest weighted sources by a second highest weight value, and continuing through determining the number of agreeing lowest weighted sources and multiplying the number of agreeing lowest weighted sources by a lowest weight value and combining the multiplying results to determine a total value, and upon determining the total value is above a fact check threshold, the automatic fact checking result is true, and upon determining the total value is not above the fact check threshold, the automatic fact checking result is false; and
d. automatically presenting a status of the social networking information in real-time based on the automatic fact checking result from the comparison of the social networking information with the source information.
2. The method of claim 1 further comprising:
determining a first confidence score of the automatic fact checking result based on the number of agreeing sources and a second number of disagreeing sources;
comparing the first confidence score of the automatic fact checking result with a confidence threshold;
fact checking using crowdsourced data to generate a crowdsourced result upon determining the automatic fact checking does not return the automatic fact checking result with the first confidence score above the confidence threshold;
comparing the first confidence score of the automatic fact checking result and a second confidence score of the crowdsourced result; and
utilizing the result with a higher confidence score as the status of the social networking information.
3. The method of claim 1 further comprising
automatically sending the status of the social networking information to contacts of the user, wherein only certain types of fact check statuses are automatically sent to the contacts, wherein the certain types are limited to lies and misinformation; and
automatically sending additional information with the status to provide context for the social networking information, wherein the additional information includes a snippet of original content.
4. The method of claim 1 further comprising:
approving information as the source information by microblogging a link to the information including a hashtag within a microblog, the hashtag indicating approval or disapproval of the information.
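Claim 4's hashtag mechanism — a microblogged link whose hashtag marks the linked page as an approved or disapproved source — might look like the following. The hashtag names `#approve` and `#disapprove` are assumptions; the patent only requires some specific identifier.

```python
def hashtag_vote(microblog_text):
    """Claim 4: inspect a microblog for an approval/disapproval hashtag.

    Returns True (approve), False (disapprove), or None (no vote).
    Disapproval is checked first so an explicit #disapprove is never
    misread as an approval.
    """
    text = microblog_text.lower()
    if "#disapprove" in text:
        return False
    if "#approve" in text:
        return True
    return None
```

A microblog such as `"Solid sourcing http://example.com #approve"` would add the linked page to the source information, while `"#disapprove"` would exclude it.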
5. The method of claim 1 further comprising:
approving information as the approved social networking information by the user or the contacts of the user visiting the information using a browser but not disapproving the information;
determining the approved social networking information is linked to additional sources with a same reliability rating as the approved social networking information; and
automatically approving the additional sources as the approved social networking information.
6. The method of claim 1 wherein the approved social networking information approved by the user or the contacts of the user comprises the visited social networking information visited by the user or the contacts of the user while logged in to the social networking system.
7. The method of claim 1 further comprising:
receiving suggested social networking information suggested to the user based on the contacts of the user and characteristics of the user; and
approving or disapproving the suggested social networking information as the source information.
8. The method of claim 1 further comprising:
receiving approvals and disapprovals of information as the source information, wherein the user and the contacts of the user are able to approve and disapprove the information as the source information; and
resolving a conflict as to whether the information is approved or disapproved by using an approval or disapproval choice of a person, who approved or disapproved the information, with a highest validity rating to determine whether the information is approved or disapproved, wherein the highest validity rating is based on fact checking results of comments by the user and the contacts of the user.
9. The method of claim 8 further comprising:
calculating a base validity rating for each user;
calculating contact validity ratings for contacts of each user; and
combining the base validity rating and the contact validity ratings to generate a final validity rating for each user, wherein the contact validity ratings for the contacts are weighted depending on the degree of separation from the user, wherein the highest validity rating is determined from the final validity rating for each user.
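The rating combination in claim 9 — a base validity rating merged with contact validity ratings weighted by degree of separation — can be sketched as a weighted average. The 1/2^degree decay and the averaging are assumptions; the patent only requires that closer contacts carry more weight.

```python
def final_validity(base_rating, contact_ratings):
    """Claim 9: combine a user's base validity rating with contact
    validity ratings weighted by degree of separation.

    `contact_ratings` is a list of (degree_of_separation, rating)
    pairs; the base rating carries weight 1.0 and each contact's
    weight decays as 1 / 2**degree (an assumed decay schedule).
    """
    weights = [1.0] + [1.0 / 2 ** degree for degree, _ in contact_ratings]
    scores = [base_rating] + [rating for _, rating in contact_ratings]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Base rating 0.9; a 1st-degree contact rated 0.5 and a 2nd-degree
# contact rated 0.1 both pull the final rating down, the closer one more.
rating = final_validity(0.9, [(1, 0.5), (2, 0.1)])
```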
10. The method of claim 1 further comprising:
receiving approvals and disapprovals of information as the source information, wherein the user and the contacts of the user are able to approve and disapprove the information as the source information; and
resolving a conflict as to whether the information is approved or disapproved, wherein multiple users approve and disapprove the information, by comparing a first tally of approvals of the information with a second tally of disapprovals of the information, and approving the information upon determining the first tally is greater than the second tally, and disapproving the information upon determining the first tally is not greater than the second tally.
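The tally comparison of claim 10 is a strict majority test in which a tie counts as disapproval, since approval requires the first tally to be *greater* than the second:

```python
def resolve_votes(approvals, disapprovals):
    """Claim 10: the information is approved only when approvals
    strictly outnumber disapprovals; a tie resolves to disapproval."""
    return approvals > disapprovals
```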
11. The method of claim 1 further comprising:
automatically sharing sources with contacts of the user using social networking immediately after the user approves the sources; and
enabling the contacts to approve or disapprove some or all of the sources shared by the user by selecting approve or disapprove.
12. The method of claim 1 further comprising: automatically determining the user is registering for a second social networking system; and automatically verifying an identity of the user using fact checking, wherein the social networking information is the identity of the user.
13. The method of claim 1 further comprising: automatically determining the user is attempting to connect with a second user on the social networking system, wherein the social networking information is a user communication to the second user; and automatically verifying the user and the second user using fact checking.
14. The method of claim 1 further comprising: automatically detecting a same phrase provided by the user and other users participating on the social networking system, wherein the social networking information includes the same phrase; and automatically connecting the user and the other users based on the same phrase detected, wherein the same phrase detected is not factually accurate as determined by the fact checking.
15. The method of claim 12 wherein comparing the social networking information with source information further includes comparing an image from a current video scan of the user with a past photograph of the user to determine any factual inaccuracies in the current video scan.
16. The method of claim 14 further comprising:
storing fact checked social networking information determined to be not factually accurate and a username of the user corresponding to the fact checked social networking information in a database; and
matching the fact checked social networking information among users.
17. A method programmed in a non-transitory memory of a device comprising:
a. capturing social networking information of a user from a social networking system;
b. parsing the social networking information into parsed segments based on punctuation within and at an end of sentences within the social networking information;
c. approving sources for source information by sending a communication with a specific identifier indicating approval or disapproval of the sources, wherein the user and contacts of the user are able to approve and disapprove information as the source information;
d. adding, to a source data structure containing the source information used for fact checking, an approval or disapproval selection of a contact, who approved or disapproved the information, with a closest relationship to the user upon determining there is a conflict as to whether the information is approved or disapproved;
e. detecting bending of a flexible screen of the device by detecting pressure, wherein detecting the bending of the flexible screen of the device approves a source as the source information;
f. automatically fact checking, using the device, the social networking information to determine a factual accuracy of the social networking information by comparing the parsed segments of the social networking information with the source information in the source data structure to generate an automatic fact checking result, wherein the source information excludes non-social networking information, wherein fact checking includes determining a text string of the social networking information is in the source information, wherein the source information containing the text string of the social networking information is an agreeing source, further wherein fact checking includes determining a number of agreeing highest weighted sources and multiplying the number of agreeing highest weighted sources by a highest weight value, determining the number of agreeing second highest weighted sources and multiplying the number of agreeing second highest weighted sources by a second highest weight value, and continuing through determining the number of agreeing lowest weighted sources and multiplying the number of agreeing lowest weighted sources by a lowest weight value and combining the multiplying results to determine a total value, and upon determining the total value is above a fact check threshold, the automatic fact checking result is true and upon determining the total value is not above the fact check threshold, the automatic fact checking result is false, wherein users in a social network are grouped in different levels, and each level affects a weight of sources approved by the users such that the sources of the users in a higher level have more weight than the sources of the users in a lower level;
g. fact checking the social networking information by comparing the social networking information with crowdsourced data to generate a crowdsourced result; and
h. automatically presenting a status of the social networking information in real-time based on comparing the social networking information with the source information and the crowdsourced data, wherein the status is based on comparing a first confidence score of the automatic fact checking result and a second confidence score of the crowdsourced result, and selecting the result with the higher confidence score, further wherein at least one of the first confidence score of the automatic fact checking result and the second confidence score of the crowdsourced result is presented with the status of the social networking information.
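The conflict rule in claim 17, step d — keep the approval or disapproval choice of the contact with the closest relationship to the user — can be sketched as below. Degree of separation is the assumed closeness measure (1 = direct contact); the patent does not fix how closeness is computed.

```python
def resolve_by_closeness(votes):
    """Claim 17(d): on an approve/disapprove conflict, use the choice
    of the contact closest to the user.

    `votes` is a list of (degree_of_separation, approved) pairs; the
    pair with the smallest degree of separation wins.
    """
    return min(votes, key=lambda vote: vote[0])[1]

# A direct contact's disapproval overrides a 3rd-degree contact's approval.
print(resolve_by_closeness([(3, True), (1, False)]))
```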
18. A device comprising:
a. a flexible screen configured for detecting bending of the flexible screen based on pressure, wherein detecting the pressure on the flexible screen approves a source as the source information;
b. a non-transitory memory for storing an application for automatically performing the following steps:
i. processing content based on detecting the touch of the touchscreen display, wherein the content comprises social networking information of a user;
ii. fact checking the social networking information to determine a factual accuracy of the social networking information by comparing the social networking information with the source information to generate an automatic fact checking result, wherein fact checking includes determining a text string of the social networking information is in the source information, wherein the source information containing the text string of the social networking information is an agreeing source, further wherein fact checking includes determining a number of agreeing highest weighted sources and multiplying the number of agreeing highest weighted sources by a highest weight value, determining the number of agreeing second highest weighted sources and multiplying the number of agreeing second highest weighted sources by a second highest weight value, and continuing through determining the number of agreeing lowest weighted sources and multiplying the number of agreeing lowest weighted sources by a lowest weight value and combining the multiplying results to determine a total value, and upon determining the total value is above a fact check threshold, the automatic fact checking result is true and upon determining the total value is not above the fact check threshold, the automatic fact checking result is false, wherein the source information excludes non-social networking information, wherein the source information comprises only approved social networking information approved by the user or contacts of the user, wherein the contacts of the user are the contacts of the user in a social networking system, wherein the approved social networking information approved by the user or the contacts of the user comprises visited social networking information visited by the user or the contacts of the user but not disapproved by the user or the contacts of the user, wherein the approved social networking information approved by the user or the contacts of the user comprises the visited social networking information visited by the user or the contacts of the user while logged in to the social networking system, wherein the approved social networking information approved by the user comprises suggested social networking information suggested to the user based on the contacts of the user and characteristics of the user, wherein the user and the contacts of the user are able to approve and disapprove information as source information, and if there is a conflict as to whether the information is approved or disapproved, and multiple users approve and disapprove the information, then the higher of the number of approvals versus disapprovals determines if the information is approved or disapproved, wherein sources are automatically shared with contacts using social networking after the user accepts the sources to enable the contacts to accept or reject some or all of the sources;
iii. fact checking the social network information using crowdsourced data to generate a crowdsourced result;
iv. comparing confidence scores of the automatic fact checking result and the crowdsourced result, and the result with a higher confidence score is used to generate the status of the social networking information;
v. presenting a status of the social networking information in real-time; and
vi. sending the status of the social networking information to the contacts of the user, wherein only certain types of fact check statuses are automatically sent to the contacts, wherein the certain types are limited to lies and misinformation stored, classified and retrieved from a look-up table, further wherein additional information is sent with the status to provide context for the social networking information, wherein the additional information includes a snippet of original content; and
c. a processor for processing the application.
19. The device of claim 18 wherein the application is further for automatically performing:
comparing the status of the social networking information based on the fact checking for the user with a second status of the social networking information based on fact checking for the contacts of the user;
presenting the second status of the social networking information based on the fact checking for the contacts of the user upon determining the status of the social networking information and the second status of the social networking information do not match; and
presenting a quantity of contacts who received the second status of the social networking information.
20. The device of claim 18 wherein the touchscreen display is configured for displaying a list of segments based on priority to be fact checked and receiving a selection by the user of one or more of the segments, and the application for fact checking the one or more selected segments.

Priority Applications (18)

Application Number Priority Date Filing Date Title
US14/260,492 US9972055B2 (en) 2014-02-28 2014-04-24 Fact checking method and system utilizing social networking information
US14/729,223 US9892109B2 (en) 2014-02-28 2015-06-03 Automatically coding fact check results in a web page
US15/422,642 US9643722B1 (en) 2014-02-28 2017-02-02 Drone device security system
US15/472,858 US10035594B2 (en) 2014-02-28 2017-03-29 Drone device security system
US15/472,894 US10035595B2 (en) 2014-02-28 2017-03-29 Drone device security system
US15/628,907 US10183748B2 (en) 2014-02-28 2017-06-21 Drone device security system for protecting a package
US15/868,193 US10061318B2 (en) 2014-02-28 2018-01-11 Drone device for monitoring animals and vegetation
US16/017,536 US10301023B2 (en) 2014-02-28 2018-06-25 Drone device for news reporting
US16/017,510 US10160542B2 (en) 2014-02-28 2018-06-25 Autonomous mobile device security system
US16/017,168 US10183749B2 (en) 2014-02-28 2018-06-25 Drone device security system
US16/017,133 US10196144B2 (en) 2014-02-28 2018-06-25 Drone device for real estate
US16/126,672 US10220945B1 (en) 2014-02-28 2018-09-10 Drone device
US16/169,328 US10538329B2 (en) 2014-02-28 2018-10-24 Drone device security system for protecting a package
US16/372,933 US10562625B2 (en) 2014-02-28 2019-04-02 Drone device
US16/695,947 US10974829B2 (en) 2014-02-28 2019-11-26 Drone device security system for protecting a package
US16/696,033 US11180250B2 (en) 2014-02-28 2019-11-26 Drone device
US17/194,569 US20210188437A1 (en) 2014-02-28 2021-03-08 Drone device security system for protecting a package
US17/504,782 US20220033077A1 (en) 2014-02-28 2021-10-19 Drone device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461946043P 2014-02-28 2014-02-28
US14/260,492 US9972055B2 (en) 2014-02-28 2014-04-24 Fact checking method and system utilizing social networking information

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/729,223 Continuation-In-Part US9892109B2 (en) 2014-02-28 2015-06-03 Automatically coding fact check results in a web page

Publications (2)

Publication Number Publication Date
US20150248736A1 US20150248736A1 (en) 2015-09-03
US9972055B2 true US9972055B2 (en) 2018-05-15

Family

ID=54007009

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/260,492 Active 2034-04-29 US9972055B2 (en) 2014-02-28 2014-04-24 Fact checking method and system utilizing social networking information
US14/729,223 Active 2035-09-12 US9892109B2 (en) 2014-02-28 2015-06-03 Automatically coding fact check results in a web page

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/729,223 Active 2035-09-12 US9892109B2 (en) 2014-02-28 2015-06-03 Automatically coding fact check results in a web page

Country Status (1)

Country Link
US (2) US9972055B2 (en)

US20110313757A1 (en) 2010-05-13 2011-12-22 Applied Linguistics Llc Systems and methods for advanced grammar checking
US20120005221A1 (en) * 2010-06-30 2012-01-05 Microsoft Corporation Extracting facts from social network messages
US20120102405A1 (en) * 2010-10-25 2012-04-26 Evidence-Based Solutions, Inc. System and method for matching person-specific data with evidence resulting in recommended actions
US8185448B1 (en) * 2011-06-10 2012-05-22 Myslinski Lucas J Fact checking method and system
US20120131015A1 (en) 2010-11-24 2012-05-24 King Abdulaziz City For Science And Technology System and method for rating a written document
US20120191757A1 (en) * 2011-01-20 2012-07-26 John Nicholas Gross System & Method For Compiling Intellectual Property Asset Data
US20120198319A1 (en) 2011-01-28 2012-08-02 Giovanni Agnoli Media-Editing Application with Video Segmentation and Caching Capabilities
US8290960B2 (en) 2008-10-24 2012-10-16 International Business Machines Corporation Configurable trust context assignable to facts and associated trust metadata
US8290924B2 (en) 2008-08-29 2012-10-16 Empire Technology Development Llc Providing answer to keyword based query from natural owner of information
US20120272143A1 (en) 2011-04-22 2012-10-25 John Gillick System and Method for Audience-Vote-Based Copyediting
US20120317046A1 (en) 2011-06-10 2012-12-13 Myslinski Lucas J Candidate fact checking method and system
US20130099925A1 (en) 2002-08-23 2013-04-25 John C. Pederson Intelligent Observation And Identification Database System
US20130110748A1 (en) 2011-08-30 2013-05-02 Google Inc. Policy Violation Checker
US20130151240A1 (en) * 2011-06-10 2013-06-13 Lucas J. Myslinski Interactive fact checking system
US20130158984A1 (en) * 2011-06-10 2013-06-20 Lucas J. Myslinski Method of and system for validating a fact checking system
US20130159127A1 (en) 2011-06-10 2013-06-20 Lucas J. Myslinski Method of and system for rating sources for fact checking
US20130198196A1 (en) 2011-06-10 2013-08-01 Lucas J. Myslinski Selective fact checking method and system
US20130218761A1 (en) * 2011-10-10 2013-08-22 David J. Kwasny System, method,computer product and website for automobile collision repair
US20130218788A1 (en) * 2012-02-19 2013-08-22 Factlink Inc. System and method for monitoring credibility of online content and authority of users
US20130346160A1 (en) 2012-06-26 2013-12-26 Myworld, Inc. Commerce System and Method of Using Consumer Feedback to Invoke Corrective Action
US20140074751A1 (en) * 2012-09-11 2014-03-13 Sage Decision Systems, Llc System and method for calculating future value
WO2015044179A1 (en) 2013-09-27 2015-04-02 Trooclick France Apparatus, systems and methods for scoring and distributing the reliability of online information
US9300755B2 (en) 2009-04-20 2016-03-29 Matthew Gerke System and method for determining information reliability
US9900415B2 (en) * 2012-04-02 2018-02-20 Samsung Electronics Co., Ltd. Content sharing method and mobile terminal using the method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7630986B1 (en) * 1999-10-27 2009-12-08 Pinpoint, Incorporated Secure data interchange
WO2003096669A2 (en) * 2002-05-10 2003-11-20 Reisman Richard R Method and apparatus for browsing using multiple coordinated device
US7831928B1 (en) 2006-06-22 2010-11-09 Digg, Inc. Content visualization
US20080243531A1 (en) * 2007-03-29 2008-10-02 Yahoo! Inc. System and method for predictive targeting in online advertising using life stage profiling
US8600968B2 (en) * 2011-04-19 2013-12-03 Microsoft Corporation Predictively suggesting websites

Patent Citations (119)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040210824A1 (en) 1996-03-29 2004-10-21 Microsoft Corporation Interactive entertainment system for presenting supplemental interactive content together with continuous video programs
US6161090A (en) 1997-06-11 2000-12-12 International Business Machines Corporation Apparatus and methods for speaker verification/identification/classification employing non-acoustic and/or acoustic models and databases
US5960411A (en) 1997-09-12 1999-09-28 Amazon.Com, Inc. Method and system for placing a purchase order via a communications network
US6266664B1 (en) 1997-10-01 2001-07-24 Rulespace, Inc. Method for scanning, analyzing and rating digital information content
US7765574B1 (en) 1997-10-27 2010-07-27 The Mitre Corporation Automated segmentation and information extraction of broadcast news via finite state presentation model
US6782510B1 (en) 1998-01-27 2004-08-24 John N. Gross Word checking tool for controlling the language content in documents using dictionaries with modifyable status fields
US6256734B1 (en) * 1998-02-17 2001-07-03 At&T Method and apparatus for compliance checking in a trust management system
US20100332583A1 (en) 1999-07-21 2010-12-30 Andrew Szabo Database access system
US7657424B2 (en) 1999-11-12 2010-02-02 Phoenix Solutions, Inc. System and method for processing sentence based queries
US20070136781A1 (en) 2000-03-08 2007-06-14 Sony Corporation. Electronic information content distribution processing system, information distribution apparatus, information processing apparatus, and electronic information content distribution processing method
WO2001077906A2 (en) 2000-04-10 2001-10-18 Sonicisland Software, Inc. System and method for providing an interactive display interface for information objects
WO2001077907A2 (en) 2000-04-10 2001-10-18 Sonicisland Software, Inc. Interactive display interface for information objects
US20020099730A1 (en) 2000-05-12 2002-07-25 Applied Psychology Research Limited Automatic text classification system
US20060015904A1 (en) 2000-09-08 2006-01-19 Dwight Marcus Method and apparatus for creation, distribution, assembly and verification of media
US20060212904A1 (en) 2000-09-25 2006-09-21 Klarfeld Kenneth A System and method for personalized TV
US20060206912A1 (en) 2000-09-25 2006-09-14 Klarfeld Kenneth A System and method for personalized TV
US20040103032A1 (en) 2000-10-12 2004-05-27 Maggio Frank S. Remote control system and method for interacting with broadcast content
US20020083468A1 (en) 2000-11-16 2002-06-27 Dudkiewicz Gil Gavriel System and method for generating metadata for segments of a video program
US7337462B2 (en) 2000-11-16 2008-02-26 Meevee, Inc. System and method for providing timing data for programming events
WO2003014949A1 (en) 2001-08-06 2003-02-20 Parker Vision, Inc. Method, system, and computer program product for producing and distributing enhanced media
US20030088689A1 (en) 2001-11-08 2003-05-08 Alexander Cedell A. Methods and systems for efficiently delivering data to a plurality of destinations in a computer network
US7249058B2 (en) 2001-11-13 2007-07-24 International Business Machines Corporation Method of promoting strategic documents by bias ranking of search results
US20030158872A1 (en) * 2002-02-19 2003-08-21 Media Vu, Llc Method and system for checking content before dissemination
US20050235199A1 (en) 2002-02-19 2005-10-20 Wesley Adams Method and system for checking content before dissemination
US20060064633A1 (en) 2002-02-19 2006-03-23 Wesley Adams Method and system for checking content before dissemination
US20060148446A1 (en) 2002-02-28 2006-07-06 Stefan Karlsson Method and distributed rating system for determining rating data in a charging system
US20030210249A1 (en) 2002-05-08 2003-11-13 Simske Steven J. System and method of automatic data checking and correction
US20050022252A1 (en) 2002-06-04 2005-01-27 Tong Shen System for multimedia recognition, analysis, and indexing, using text, audio, and digital video
US20130099925A1 (en) 2002-08-23 2013-04-25 John C. Pederson Intelligent Observation And Identification Database System
US7249380B2 (en) 2002-09-05 2007-07-24 Yinan Yang Method and apparatus for evaluating trust and transitivity of trust of online services
WO2004034755A2 (en) 2002-10-11 2004-04-22 Maggio Frank S Remote control system and method for interacting with broadcast content
US20040122846A1 (en) * 2002-12-19 2004-06-24 Ibm Corporation Fact verification system
US20040139077A1 (en) 2002-12-20 2004-07-15 Banker Shailen V. Linked information system
US20120158711A1 (en) 2003-09-16 2012-06-21 Google Inc. Systems and methods for improving the ranking of news articles
US20050060312A1 (en) 2003-09-16 2005-03-17 Michael Curtiss Systems and methods for improving the ranking of news articles
US7644088B2 (en) 2003-11-13 2010-01-05 Tamale Software Systems and methods for retrieving data
US20050120391A1 (en) 2003-12-02 2005-06-02 Quadrock Communications, Inc. System and method for generation of interactive TV content
US20050132420A1 (en) 2003-12-11 2005-06-16 Quadrock Communications, Inc System and method for interaction with television content
US20070136782A1 (en) 2004-05-14 2007-06-14 Arun Ramaswamy Methods and apparatus for identifying media content
US7478078B2 (en) 2004-06-14 2009-01-13 Friendster, Inc. Method for sharing relationship information stored in a social network database with third party databases
WO2006036853A2 (en) 2004-09-27 2006-04-06 Exbiblio B.V. Handheld device for capturing
US20080077570A1 (en) 2004-10-25 2008-03-27 Infovell, Inc. Full Text Query and Search Systems and Method of Use
US7266116B2 (en) 2004-12-13 2007-09-04 Skylead Assets Limited HTTP extension header for metering information
US7487334B2 (en) 2005-02-03 2009-02-03 International Business Machines Corporation Branch encoding before instruction cache write
US20060248076A1 (en) * 2005-04-21 2006-11-02 Case Western Reserve University Automatic expert identification, ranking and literature search based on authorship in large document collections
US20060253580A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Website reputation product architecture
US20060293879A1 (en) 2005-05-31 2006-12-28 Shubin Zhao Learning facts from semi-structured text
GB2428529A (en) 2005-06-24 2007-01-31 Era Digital Media Co Ltd Interactive news gathering and media production control system
US20070011710A1 (en) 2005-07-05 2007-01-11 Fu-Sheng Chiu Interactive news gathering and media production control system
US20070043766A1 (en) 2005-08-18 2007-02-22 Nicholas Frank C Method and System for the Creating, Managing, and Delivery of Feed Formatted Content
US20070100730A1 (en) 2005-11-01 2007-05-03 Dmitry Batashvili Normalization algorithm for improving performance of modules depending on price feed changes in real-time transactional trading systems
US20100023525A1 (en) 2006-01-05 2010-01-28 Magnus Westerlund Media container file management
US8225164B2 (en) 2006-01-05 2012-07-17 Telefonaktiebolaget Lm Ericsson (Publ) Media container file management
US20110166860A1 (en) 2006-03-06 2011-07-07 Tran Bao Q Spoken mobile engine
WO2007115224A2 (en) 2006-03-30 2007-10-11 Sri International Method and apparatus for annotating media streams
US20070288978A1 (en) 2006-06-08 2007-12-13 Ajp Enterprises, Llp Systems and methods of customized television programming over the internet
US20080109780A1 (en) 2006-10-20 2008-05-08 International Business Machines Corporation Method of and apparatus for optimal placement and validation of i/o blocks within an asic
US20080109285A1 (en) 2006-10-26 2008-05-08 Mobile Content Networks, Inc. Techniques for determining relevant advertisements in response to queries
US20080183726A1 (en) 2007-01-31 2008-07-31 Microsoft Corporation Request-driven on-demand processing
US20080319744A1 (en) 2007-05-25 2008-12-25 Adam Michael Goldberg Method and system for rapid transcription
WO2009006542A2 (en) 2007-07-03 2009-01-08 3M Innovative Properties Company System and method for assessing effectiveness of communication content
US20090063294A1 (en) 2007-09-05 2009-03-05 Dennis Hoekstra Scoring Feed Data Quality
US20090125382A1 (en) 2007-11-07 2009-05-14 Wise Window Inc. Quantifying a Data Source's Reputation
US7809721B2 (en) 2007-11-16 2010-10-05 Iac Search & Media, Inc. Ranking of objects using semantic and nonsemantic features in a system and method for conducting a search
WO2009089116A2 (en) 2008-01-02 2009-07-16 Three Purple Dots, Inc. Systems and methods for determining the relative bias and accuracy of a piece of news
US20090210395A1 (en) 2008-02-12 2009-08-20 Sedam Marc C Methods, systems, and computer readable media for dynamically searching and presenting factually tagged media clips
US20100049590A1 (en) 2008-04-03 2010-02-25 Infosys Technologies Limited Method and system for semantic analysis of unstructured data
US20090265304A1 (en) 2008-04-22 2009-10-22 Xerox Corporation Method and system for retrieving statements of information sources and associating a factuality assessment to the statements
US20090311659A1 (en) 2008-06-11 2009-12-17 Pacific Metrics Corporation System and Method For Scoring Constructed Responses
US8290924B2 (en) 2008-08-29 2012-10-16 Empire Technology Development Llc Providing answer to keyword based query from natural owner of information
US8290960B2 (en) 2008-10-24 2012-10-16 International Business Machines Corporation Configurable trust context assignable to facts and associated trust metadata
US20100121638A1 (en) 2008-11-12 2010-05-13 Mark Pinson System and method for automatic speech to text conversion
US20100121973A1 (en) 2008-11-12 2010-05-13 Yuliya Lobacheva Augmentation of streaming media
WO2010093510A1 (en) 2009-02-12 2010-08-19 Digimarc Corporation Media processing methods and arrangements
US20110043652A1 (en) 2009-03-12 2011-02-24 King Martin T Automatically providing content associated with captured information, such as information captured in real-time
WO2010105245A2 (en) 2009-03-12 2010-09-16 Exbiblio B.V. Automatically providing content associated with captured information, such as information captured in real-time
US20100235313A1 (en) 2009-03-16 2010-09-16 Tim Rea Media information analysis and recommendation platform
US9300755B2 (en) 2009-04-20 2016-03-29 Matthew Gerke System and method for determining information reliability
US20100306166A1 (en) 2009-06-01 2010-12-02 Yahoo! Inc. Automatic fact validation
US20110067065A1 (en) 2009-09-14 2011-03-17 Jeyhan Karaoguz System and method in a television system for providing information associated with a user-selected information element in a television program
US20110066587A1 (en) 2009-09-17 2011-03-17 International Business Machines Corporation Evidence evaluation system and method based on question answering
US20110087639A1 (en) 2009-10-12 2011-04-14 Motorola, Inc. Method and apparatus for automatically ensuring consistency among multiple spectrum databases
US20110093258A1 (en) 2009-10-15 2011-04-21 2167959 Ontario Inc. System and method for text cleaning
US20110106615A1 (en) 2009-11-03 2011-05-05 Yahoo! Inc. Multimode online advertisements and online advertisement exchanges
US20110136542A1 (en) 2009-12-09 2011-06-09 Nokia Corporation Method and apparatus for suggesting information resources based on context and preferences
WO2011088264A1 (en) 2010-01-13 2011-07-21 Qualcomm Incorporated Optimized delivery of interactivity event assets in a mobile broadcast communications system
US20110313757A1 (en) 2010-05-13 2011-12-22 Applied Linguistics Llc Systems and methods for advanced grammar checking
US20120005221A1 (en) * 2010-06-30 2012-01-05 Microsoft Corporation Extracting facts from social network messages
US20120102405A1 (en) * 2010-10-25 2012-04-26 Evidence-Based Solutions, Inc. System and method for matching person-specific data with evidence resulting in recommended actions
US20120131015A1 (en) 2010-11-24 2012-05-24 King Abdulaziz City For Science And Technology System and method for rating a written document
US20120191757A1 (en) * 2011-01-20 2012-07-26 John Nicholas Gross System & Method For Compiling Intellectual Property Asset Data
US20120198319A1 (en) 2011-01-28 2012-08-02 Giovanni Agnoli Media-Editing Application with Video Segmentation and Caching Capabilities
US20120272143A1 (en) 2011-04-22 2012-10-25 John Gillick System and Method for Audience-Vote-Based Copyediting
US20130158984A1 (en) * 2011-06-10 2013-06-20 Lucas J. Myslinski Method of and system for validating a fact checking system
US20130191298A1 (en) 2011-06-10 2013-07-25 Lucas J. Myslinski Method of and system for indicating a validity rating of an entity
US20130060860A1 (en) * 2011-06-10 2013-03-07 Lucas J. Myslinski Social media fact checking method and system
US20130060757A1 (en) 2011-06-10 2013-03-07 Lucas J. Myslinski Method of and system for utilizing fact checking results to generate search engine results
US8401919B2 (en) 2011-06-10 2013-03-19 Lucas J. Myslinski Method of and system for fact checking rebroadcast information
US20130074110A1 (en) 2011-06-10 2013-03-21 Lucas J. Myslinski Method of and system for parallel fact checking
US8423424B2 (en) * 2011-06-10 2013-04-16 Lucas J. Myslinski Web page fact checking system and method
US8229795B1 (en) 2011-06-10 2012-07-24 Myslinski Lucas J Fact checking methods
US8185448B1 (en) * 2011-06-10 2012-05-22 Myslinski Lucas J Fact checking method and system
US8458046B2 (en) 2011-06-10 2013-06-04 Lucas J. Myslinski Social media fact checking method and system
US20130151240A1 (en) * 2011-06-10 2013-06-13 Lucas J. Myslinski Interactive fact checking system
US8321295B1 (en) 2011-06-10 2012-11-27 Myslinski Lucas J Fact checking method and system
US20130159127A1 (en) 2011-06-10 2013-06-20 Lucas J. Myslinski Method of and system for rating sources for fact checking
US20120317046A1 (en) 2011-06-10 2012-12-13 Myslinski Lucas J Candidate fact checking method and system
US20130198196A1 (en) 2011-06-10 2013-08-01 Lucas J. Myslinski Selective fact checking method and system
US8510173B2 (en) 2011-06-10 2013-08-13 Lucas J. Myslinski Method of and system for fact checking email
US20130308920A1 (en) 2011-06-10 2013-11-21 Lucas J. Myslinski Method of and system for fact checking recorded information
US20130311388A1 (en) 2011-06-10 2013-11-21 Lucas J. Myslinski Method of and system for fact checking flagged comments
US8583509B1 (en) 2011-06-10 2013-11-12 Lucas J. Myslinski Method of and system for fact checking with a camera device
US20130110748A1 (en) 2011-08-30 2013-05-02 Google Inc. Policy Violation Checker
US20130218761A1 (en) * 2011-10-10 2013-08-22 David J. Kwasny System, method, computer product and website for automobile collision repair
US20130218788A1 (en) * 2012-02-19 2013-08-22 Factlink Inc. System and method for monitoring credibility of online content and authority of users
US9900415B2 (en) * 2012-04-02 2018-02-20 Samsung Electronics Co., Ltd. Content sharing method and mobile terminal using the method
US20130346160A1 (en) 2012-06-26 2013-12-26 Myworld, Inc. Commerce System and Method of Using Consumer Feedback to Invoke Corrective Action
US20140074751A1 (en) * 2012-09-11 2014-03-13 Sage Decision Systems, Llc System and method for calculating future value
WO2015044179A1 (en) 2013-09-27 2015-04-02 Trooclick France Apparatus, systems and methods for scoring and distributing the reliability of online information

Non-Patent Citations (23)

* Cited by examiner, † Cited by third party
Title
<http://en.wikipedia.org/wiki/SpinSpotter> (Jul. 1, 2010).
<http://jayrosen.posterous.com/my-simple-fix-for-the-messed-up-sunday-shows> (Dec. 27, 2009).
Accelerated Examination Support Document from U.S. Appl. No. 13/287,804.
Andreas Juffinger et al.: "Blog credibility ranking by exploiting verified content," Proceedings of the 3rd Workshop on Information Credibility on the Web, WICOW '09, Apr. 20, 2009 (Apr. 20, 2009), p. 51.
Announcing Truth Teller beta, a better way to watch political speech, <http://www.washingtonpost.com/blogs/ask-the-post/wp/2013/09/25/announcing-truth-teller-beta-a-better-way-to-watch-political-speech/>, Sep. 25, 2013.
Kim, K.-S., Sin, S.-C. J., & Yoo-Lee, E. Y. (2014). Undergraduates' Use of Social Media as Information Sources. College & Research Libraries, 75(4), 442-457. (Year: 2014). *
LazyTruth Chrome extension fact checks chain emails, <http://www.theverge.com/2012/11/14/3646294/lazytruth-fact-check-chain-email>, Nov. 14, 2012.
Notice of Allowance including reasons for allowance from U.S. Appl. No. 13/287,804.
Notice of Allowance including reasons for allowance from U.S. Appl. No. 13/448,991.
Notice of Allowance including reasons for allowance from U.S. Appl. No. 13/528,563.
Notice of Allowance including reasons for allowance from U.S. Appl. No. 13/632,490.
Notice of Allowance including reasons for allowance from U.S. Appl. No. 13/669,711.
Notice of Allowance including reasons for allowance from U.S. Appl. No. 13/669,819.
Notice of Allowance including reasons for allowance from U.S. Appl. No. 13/760,408.
Notice of Allowance including reasons for allowance from U.S. Appl. No. 13/946,333.
Office Action from U.S. Appl. No. 13/669,711.
Office Action from U.S. Appl. No. 13/669,819.
Office Action from U.S. Appl. No. 13/946,333.
Preexam Search Document from U.S. Appl. No. 13/287,804.
Ryosuke Nagura et al.: "A method of rating the credibility of news documents on the web," Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06, Aug. 6, 2006, p. 683.
Sorry for the Spam, Truth Goggles, <http://slifty.com/projects/truth-goggles/>, Oct. 29, 2012.
Ulken, A Question of Balance: Are Google News search results politically biased? May 5, 2005, <http://ulken.com/thesis/googlenews-bias-study.pdf>.
Wendell Cochran; Journalists aren't frauds; the business has fine lines; Ethics classes would help them stay on right side; The Sun. Baltimore, Md.: Jul. 19, 1998. p. 6.C; http://proguest.umi.com/pgdweb?did=32341381&sid=3&Fmt=3&clientId=19649&RQT=309&VName=PQD.

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11386902B2 (en) 2020-04-28 2022-07-12 Bank Of America Corporation System for generation and maintenance of verified data records
US11687539B2 (en) 2021-03-17 2023-06-27 International Business Machines Corporation Automatic neutral point of view content generation
WO2022204435A3 (en) * 2021-03-24 2022-11-24 Trust & Safety Laboratory Inc. Multi-platform detection and mitigation of contentious online content

Also Published As

Publication number Publication date
US9892109B2 (en) 2018-02-13
US20150293897A1 (en) 2015-10-15
US20150248736A1 (en) 2015-09-03

Similar Documents

Publication Publication Date Title
US9972055B2 (en) Fact checking method and system utilizing social networking information
US8862505B2 (en) Method of and system for fact checking recorded information
US9886471B2 (en) Electronic message board fact checking
US8768782B1 (en) Optimized cloud computing fact checking
US9015037B2 (en) Interactive fact checking system
US9483159B2 (en) Fact checking graphical user interface including fact checking icons
US9176957B2 (en) Selective fact checking method and system
US20120317046A1 (en) Candidate fact checking method and system
US20130159127A1 (en) Method of and system for rating sources for fact checking

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4