WO2021262180A1 - System and method for detecting misinformation and fake news via network analysis - Google Patents

System and method for detecting misinformation and fake news via network analysis

Info

Publication number
WO2021262180A1
WO2021262180A1 (PCT/US2020/039658)
Authority
WO
WIPO (PCT)
Prior art keywords
weights
users
articles
user
misinformation
Prior art date
Application number
PCT/US2020/039658
Other languages
French (fr)
Inventor
Elan Pavlov
Original Assignee
Hints Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hints Inc. filed Critical Hints Inc.
Priority to PCT/US2020/039658 priority Critical patent/WO2021262180A1/en
Publication of WO2021262180A1 publication Critical patent/WO2021262180A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/353Clustering; Classification into predefined classes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method (HINTS) for detection of misinformation, without the need to analyze the content of any article, that includes forming a mixed graph containing at least two different node types, such as users and articles, with edges between users and articles, user weights for user nodes, and article weights for article nodes. At least one user node and at least one article node are planted as seed nodes. User weights and article weights are manually assigned to the seed nodes, and neighborhoods are then defined for the seed nodes. A HITS-like algorithm is then run for a predetermined number of rounds, updating both people and articles while keeping the weights of the seed nodes constant, so that the weights of articles and users converge. Finally, a set of highest weights for users and/or articles is output and possible remedial action can be taken.

Description

System and Method for Detecting Misinformation and Fake
News via Network Analysis
BACKGROUND
Field of the invention
The present invention relates generally to automated detection through network analysis, and more particularly to methods and systems for detecting fake news and other misinformation through network analysis.
Description of the Problem Solved
Fake news is considered a relatively hard problem with important social impact. With the rise of automated disinformation, there is a need for automated ways to identify fake news. Network analysis of the social and other accounts that share fake news can help classify or identify it, and limit its reach, as it is being shared. This is in contrast to content analysis and source analysis, which attempt to limit fake news before it is shared.
There have been many attempts to detect, discover and define fake news. For example, Facebook has hired thousands of reviewers to manually detect, review, rank, and mark fake news. For a documentary on this manual process, see The Cleaners (http://www.pbs.org/independentlens/videos/the-cleaners/). Facebook has signed contracts with external organizations such as Politifact to detect and rank fake news. Other attempts use NLP to attempt to discover fake news (see e.g., https://towardsdatascience.com/i-trained-fake-news-detection-ai-with-95-accuracy-and-almost-went-crazy-dl0589aa57c accessed on 10/12/2018, or the attempts on https://www.ramp.studio/problems/fake news, accessed 10/12/2018). Several startups use NLP for fake news detection (e.g., https://www.logically.co.uk/ accessed 11/17/2018). Most of these use a combination of humans and machine learning to analyze the content of the text/article/video, or the quality of the source, and some teach away from using network analysis. [Indeed, network analysis is only useful where you have access to data about how the story will be shared. For example, "AP Verify", a joint project of Google and the AP, uses only textual understanding and humans, since at publication, AP does not have access to the data about how the story will be shared.]
This problem is not unique to Facebook. For example, Reddit, Twitter, Facebook, Instagram, Whatsapp, YouTube (comments and recommendations) and email providers all face a version of this challenge.
Solutions and history
Present attempts
Automated attempts to identify problematic texts from their content include Google's 'hate speech AI' (https://thenextweb.com/artificial-intelligence/2018/09/11/googles-hate-speech-ai-easily-fooled/) and China's keyword-based censorship of social media. Twitter attempts to detect bots with the help of human reporting (https://www.theverge.com/2018/10/31/18048838/twitter-report-fake-accounts-spam-bot-crackdown accessed 11/1/2018).
Other attempts exist. For example, "Our previous work on the Credibility Coalition, an effort to develop web-wide standards around online-content credibility, and PATH, a project aimed at translating and surfacing scientific claims in new ways, is part of two efforts of many to think about data standards and information access across different platforms. The Trust Project, meanwhile, has developed a set of machine-readable trust indicators for news platforms; Hypothesis is a tool used by scientists and others to annotate content online; and Hoaxy visualizes the spread of claims online." (https://www.theatlantic.com/technology/archive/2018/08/how-misinfodemics-spread- disease/568921/ accessed 11/1/2018)
However, these attempts can be fooled by manipulating the exact words used in an article (or tweet), and have issues with detecting sarcasm, irony, criticism of the problematic texts, and other subtle elements of discourse. For some mediums such as videos (e.g., beheadings by ISIS) or photos, text search does not work and other methods are employed (see e.g., http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.706.7108&rep=rep1&type=pdf accessed 11/8/2018), which are not sufficient.
For examples of other attempts at ranking and increasing trust in news, see http://www.niemanlab.org/2018/04/so-what-is-that-er-trusted-news-integrity-trust-project-all-about-a-guide-to-the-many-similarly-named-new-efforts-fighting-for-journalism/ accessed 9/10/2018. One notable attempt is TrustRank (https://en.wikipedia.org/wiki/TrustRank accessed 10/13/2018), which attempts to combat web spam by defining reliability. TrustRank uses a seed of reliable websites (selected manually) and propagates reliability by using Pagerank. Notably, TrustRank does not utilize passive data collected from user behaviors, or measures of user reliability.
Domain identification and blacklisting of fake news has been suggested but is easily circumvented. Moreover, it does not detect small and seldom used domains. Other attempts at domain identification include Newsguard (http://www.niemanlab.org/2019/01/newsguard-changed-its-mind-about-the-daily-mails-quality-its-green-now-not-red/ accessed 1/31/2019), which is not as accurate as could be desired.
After the priority date of the present invention, an MIT professor published an article in the prestigious journal PNAS (https://en.wikipedia.org/wiki/Proceedings_of_the_National_Academy_of_Sciences_of_the_United_States_of_America accessed 28/1/2019), which was granted prominence as a journal preprint and received noteworthy press reports as well as a press release and placement on the MIT website (http://news.mit.edu/2019/reader-crowdsource-fake-news-0128 accessed 29/01/2019). It is important to note that this solution has problems in that it weighs users linearly and does not take into account the variability in user quality. It is thus a special case and is not optimal. Despite this, the publication shows the novelty and importance of our approach. This result was also covered in The Poynter Institute for Media Studies newsletter of 31/1/2019 (http://go.pardot.com/webmail/273262/328340559/26aa3f8a5dlbb368bb82e770a7f08el48d55496e5bebb6f985401284871a7fe4).
Background in the field of Internet search
Search is an important component of the way we use the Internet. Early search engines attempted to understand pages in terms of how humans classified them. For instance, the Yahoo directory attempted to manually annotate and rank the content of the web. Manual ranking suffered from immense scaling issues, and fell out of favor.
The next generation of search engines tried to understand page content automatically. Methods such as tf-idf (term-frequency inverse document-frequency, https://en.wikipedia.org/wiki/Tf%E2%80%93idf accessed 10/13/2018) or natural language processing were widely used. Difficulties arose due to the complexity of natural language processing, language subtleties, context and differing languages; however, this is still a component of many search tools. The current generation of search engines utilizes very different mechanisms. Algorithms such as HITS (https://en.wikipedia.org/wiki/HITS_algorithm accessed 9/29/2018) and Pagerank (https://en.wikipedia.org/wiki/PageRank accessed 10/13/2018) have become mainstays of modern search. The unifying factor is that they look at networks of webpages, bootstrapping reliability and relevance scores, more than they look at the page content itself.
Figure 1 shows the history of search and misinformation detection. The final block titled "Network-based identification" relates to the present invention. Thus, even though the entire figure is marked as "prior art", this last block is not taught or suggested in the prior art.
SUMMARY OF THE INVENTION
The present invention uses a method somewhat similar to the prior art HITS method to detect misinformation and fake news. In HITS, each node is assigned two numerical scores. The Authoritative score indicates how likely a given webpage is to have good information, while the Hub score indicates how likely it is to link to pages with a good Authoritative score. A page with a good Authoritative score is pointed to by many pages with good Hubness, and one with a good Hub score points to many Authoritative pages. These definitions are recursive, as each page's score references the scores of neighbors in its link graph. This recursion is solved by assigning initial weights to each page and updating the scores until the values converge. The present invention modifies the HITS method to pair people with articles, and will be called HINTS, as opposed to HITS. The HINTS method is also recursive and more accurately identifies misinformation than HITS.
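For illustration, a minimal sketch of the classical HITS iteration on a toy link graph follows (plain Python; the page names, link structure, and iteration count are made up for the example and are not part of the invention):

```python
# Minimal illustrative sketch of the classical HITS update (prior art, not the invention).
# "links" maps each page to the pages it points to; all names are hypothetical.
links = {
    "pageA": ["pageB", "pageC"],
    "pageB": ["pageC"],
    "pageC": ["pageA"],
}

hub = {p: 1.0 for p in links}    # initial Hub scores
auth = {p: 1.0 for p in links}   # initial Authoritative scores

for _ in range(50):              # iterate until the scores (approximately) converge
    # A page is Authoritative if many good Hubs point to it.
    auth = {p: sum(hub[q] for q in links if p in links[q]) for p in links}
    # A page is a good Hub if it points to many Authoritative pages.
    hub = {p: sum(auth[q] for q in links[p]) for p in links}
    # Normalize so the scores do not grow without bound.
    a_norm = sum(v * v for v in auth.values()) ** 0.5
    h_norm = sum(v * v for v in hub.values()) ** 0.5
    auth = {p: v / a_norm for p, v in auth.items()}
    hub = {p: v / h_norm for p, v in hub.items()}

print(auth, hub)
```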
Thus, the present invention (HINTS) represents a method for detection of misinformation, without the need to analyze the content of any article, that includes forming a mixed graph containing at least two different node types, such as users and articles, with edges between users and articles, user weights for user nodes, and article weights for article nodes. At least one user node and at least one article node are planted as seed nodes. User weights and article weights are manually assigned to the seed nodes, and neighborhoods are then defined for the seed nodes. A HITS-like algorithm is then run for a predetermined number of rounds, updating both people and articles while keeping the weights of the seed nodes constant, so that the weights of articles and users converge. Finally, a set of highest weights for users and/or articles is output and possible remedial action can be taken.
An exemplary embodiment of the disclosed subject matter is a computer program product comprising a non-transitory computer readable medium; a first computer instruction forming a mixed graph containing at least two different node types, users and articles, with edges between users and articles, with user weights for user nodes and article weights for article nodes; a second computer instruction planting at least one seed user node and at least one seed article node into said mixed graph; a third computer instruction manually assigning user weights and article weights to the seed nodes; a fourth computer instruction defining neighborhoods of the seed nodes; a fifth computer instruction running a HITS-like algorithm for a predetermined number of rounds, updating both people and articles while keeping the weights of the seed nodes constant, to converge the mixed graph for the weights of articles and users; a sixth computer instruction outputting a set of highest weights for users and/or articles; wherein said first, second, third, fourth, fifth and sixth program instructions are stored on said non-transitory computer readable medium and executed on a computing device.
DESCRIPTION OF THE FIGURES
Several figures are presented that illustrate features of the present invention.
Figure 1 shows the history and evolution of search techniques and of misinformation detection.
Figure 2 shows an outline of an embodiment of the present invention, HINTS.
Figures have been presented to aid in understanding the present invention. The scope of the present invention is not limited to what is shown in the figures.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Turning to Figure 2, embodiments of the present invention can be seen. Figure 2 shows both the HITS method and the HINTS method.
The present invention describes an automated, robust fake news detector, which we call the Human Interaction News Trust System [HINTS], to detect fake news and misinformation, even in the presence of adversaries who know how the detector works. Our key tools are network dynamics and classification of members of the network in terms of their historical interaction with news. The present invention looks at how known and suspected fake news propagates in a dynamic network of people, and uses this data to identify new posts/publications/news items that are likely to be fake as well. This also gives information about accounts controlled by an adversary. Platforms can use this data to limit the damage a fake news article can do by limiting the reach of such an article. And while limiting its reach, they can still increase confidence in the fakeness of the article, e.g., by making it visible to small groups of users whose use patterns are the strongest indicators. The present invention works for a wide variety of classification problems.
Applying the present invention to news sharing
A key insight behind our fake news detector is that we focus on limiting the exposure of people to fake news, rather than trying to block all such news from being shared. This dramatically increases the cost of spreading fake news, as it is most cost effective when people spread it on their own. For instance, there is very little fake news on broadcast television.
We identify people who are disproportionately likely to spread fake news, and use this to weight the credibility of what they share. The proverb "consider the source" shows that we already implicitly weigh the source of a statement in deciding how much to trust it.
This leads us to the following working definition: A credulous person is someone who disproportionately interacts positively with fake news, and a piece of fake news is one that is interacted with disproportionately by credulous people. Of course, some of these credulous accounts are intentionally sharing fake news, and may not be real people. As with HITS, this definition is recursive and converges: we assign an initial fake value to each article and a credulous value to each user, and iterate.
Depending on the application, modes of interaction can include liking, sharing, spending time reading a source (estimated, for instance, by mouse movement over an article), commenting, reposting, following, favoriting, and the like. Other metrics such as bounce time (the amount of time before a user returns to the previous page) and changes in search patterns can also be used. For any individual, this signal might be weak (or wrong); for example, some individuals might comment to disprove an article. However, different modes of interaction can be assigned different weights, to make the aggregate signal useful. (And despite disclaimers, retweets are endorsements.)
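By way of illustration, a small sketch of how interaction modes might be mapped to edge weights is shown below; the specific interaction types and weight values are hypothetical and would need to be tuned per platform.

```python
# Hypothetical per-interaction weights; positive values indicate endorsement-like
# behavior, and the negative value covers "seen but ignored" (see negative links below).
INTERACTION_WEIGHTS = {
    "share": 1.0,
    "like": 0.7,
    "comment": 0.4,      # weak signal: some comments are rebuttals
    "long_read": 0.5,    # e.g., estimated from mouse movement / dwell time
    "seen_no_interaction": -0.3,
}

def edge_weight(interactions):
    """Aggregate one user's interactions with one article into a single edge weight."""
    return sum(INTERACTION_WEIGHTS.get(kind, 0.0) for kind in interactions)

# Example: a user who shared and liked an article versus one who scrolled past it.
print(edge_weight(["share", "like"]))          # 1.7
print(edge_weight(["seen_no_interaction"]))    # -0.3
```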
The method of user identification can vary. Some websites have enough user activity on their website to rank the user by themselves. Others can utilize plugins on other websites, such as Facebook or Twitter plugins, or can use the browser, such as Google sync (https://www.theverge.com/2018/9/24/17895536/google-chrome-69-log-in-sync-password-user-data-privacy accessed 10/13/2018), which tracks data through backup of users' behaviors.
Another way is to utilize ad network data (https://en.wikipedia.org/wiki/Advertising_network accessed 10/13/2018), such as cookies on a user's computer, device identification (https://www.devhub.com/blog/2672675-reach-a-target-audience-with-device-id-targeting/ accessed 10/13/2018), or device fingerprinting to identify users and to calibrate a user's information level or other traits. Yet another way is to use browser history (https://www.spinda.net/papers/smith-2018-revisited.pdf accessed 11/4/2018).
Further methods are possible.
Thus, similar to HITS, we can define a graph. In the case of fake news the graph will be bipartite (HITS itself is not bipartite, but a person and a webpage are different entities), in which one side is people and the other side is articles or posts (or clusters of articles and posts), and there is a weighted link wherever there is an interaction between a person and an article. The weight can depend on the type of interaction, and can be negative if the person saw but declined to interact with the article, e.g., if a person habitually interacts with links they see on their Twitter feed, and we know (or can assign a probability) that they saw an article and did not interact with it. Weights can be modified by the individual's propensity to interact with content (this would be equivalent to the 'out-degree' measure in the original HITS algorithm). Weights can also be modified based on personal information about the user, such as gender, age, political affiliation, or other attributes (either known or attributed).
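A minimal sketch of building such a bipartite graph, including negative links and out-degree-style normalization, might look as follows; the interaction records, weight values, and function names are hypothetical.

```python
from collections import defaultdict

# Sketch of building the bipartite user-article graph described above. The weights and
# events are hypothetical; a real system would derive them from platform logs, plugins,
# ad-network data, and so on.
POSITIVE = {"like": 0.7, "share": 1.0, "comment": 0.4}
SEEN_NO_INTERACTION = -0.3   # negative link: saw the article but declined to interact

def build_graph(events):
    """events: iterable of (user, article, kind) tuples, where kind is an
    interaction type or the string 'seen_only'."""
    graph = defaultdict(dict)   # graph[user][article] -> edge weight
    for user, article, kind in events:
        w = SEEN_NO_INTERACTION if kind == "seen_only" else POSITIVE.get(kind, 0.0)
        graph[user][article] = graph[user].get(article, 0.0) + w
    # Normalize by the user's overall propensity to interact
    # (analogous to the out-degree measure in the original HITS algorithm).
    for user, edges in graph.items():
        total = sum(abs(w) for w in edges.values()) or 1.0
        graph[user] = {a: w / total for a, w in edges.items()}
    return graph

events = [("u1", "a1", "share"), ("u1", "a2", "seen_only"), ("u2", "a1", "like")]
print(dict(build_graph(events)))
```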
Details and novel elements
Negative links are novel to this use case; among web pages we don't have negative links: while we see which links exist on a webpage, we do not see which pages an author considered and rejected.
In order to seed the algorithm and teach it, we can use existing labeling by humans (note that a given article/user can have multiple labels with multiple levels of confidence). Sources that label data include Politifact, ABC News, the Associated Press, FactCheck.org, Snopes (see e.g., https://www.cjr.org/tow_center/facebook-fact-checking-partnerships.php accessed 10/1/2018), and AP Verify (https://newsinitiative.withgoogle.com/dnifund/dni-projects/ap-verify/ accessed 10/12/2018). When an article is manually fact checked, we can set the 'fakeness' value of that article to zero or one (or some appropriate value). While the algorithm can modify the fake news value for most articles, articles which are manually checked can optionally be pegged to that value, and the algorithm will not update them. This does not interfere with convergence.
A user can similarly be assigned a fixed credulous value of one if it is known to be a bot controlled by an adversary.
Clustering: when an article is marked as being untrustworthy, we do not merely mark an individual link. We can aggregate links to similar stories, or similar links to the same story. This is similar to how Google News (http://news.google.com) aggregates stories based on text similarity. Obviously if multiple links point to the same text (e.g., short links such as bit.ly) it is even easier to aggregate stories. Users can similarly be clustered when the same user has accounts on multiple platforms. Users can be identified/linked e.g., by cookies on their computers, browser fingerprinting or other methods. If users cannot be identified the algorithm will still work but convergence will be slightly slower.
The spread of news in a social network is different from the spread of new webpages. In particular, the speed of distribution is much faster. So, it is useful to calculate marginal values for the ranking of articles and people based on the already-calculated values from the graph at a prior time point. This makes it unnecessary to recalculate the values from scratch (though that can be done as a sanity check from time to time). For example, we can frequently update the fakeness of an article based on user interactions, and only update the user values infrequently, or when a new user appears.
Updating one side of the graph (e.g., articles) much faster than the other side of the graph (e.g., users) is a novel need for this type of graph. We can also update the values of users with a limited number of steps. All of these methods introduce additional error, but it is small compared to the signal.
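A sketch of this asymmetric schedule is shown below; the ratio of fast to slow passes is a hypothetical tuning parameter, and the update functions are assumed to be HITS-like updates of the kind described elsewhere in this document.

```python
# Sketch of the asymmetric update schedule: article scores are refreshed on every pass,
# user scores only occasionally (or when new users appear). The update_articles and
# update_users callables are assumed, hypothetical HITS-like update steps.
FAST_STEPS_PER_SLOW_STEP = 100   # hypothetical ratio of article passes to user passes

def run_schedule(graph, fakeness, credulity, update_articles, update_users, steps):
    for step in range(steps):
        # Cheap, frequent pass: re-score articles from the current user scores.
        fakeness = update_articles(graph, credulity)
        # Expensive, infrequent pass: re-score users (introduces only a small error).
        if step % FAST_STEPS_PER_SLOW_STEP == 0:
            credulity = update_users(graph, fakeness)
    return fakeness, credulity
```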
Applications
Given these rankings, various actions can be taken. For example, pages or sources can be down-ranked and appear less frequently in newsfeeds or social media feeds. Warnings can be displayed or sources can be banned. It is also possible to show information from other sources to counterbalance. Of course, this can require some integration with other providers. However, in some cases a plugin can be used similar to how Adblock (https://en.wikipedia.org/wiki/AdBlock accessed 10/28/2018) hides ads or how Facebook purity filters post (https://en.wikipedia.org/wiki/Fluff Busting Purity accessed 10/28/2018).
Extended use cases
While we have focused on fake news, similar analysis can be performed on other issues or objectionable content. For example, we can think of hate speech or deep fakes (e.g., https://www.deepfakes.club/faq accessed on 10/12/2018).
Note that the same person will have different scores for different propensities. It is possible that some sources (e.g., bots) might have high scores in multiple areas. For instance, some people are particularly good at detecting deepfakes (https://www.sciencedaily.com/releases/2018/10/181011173106.htm accessed 10/12/2018). Propaganda, conspiracy theories and misinformation are subject to similar analysis. This scoring can also be used to divide people into a variety of bins. For example, given a seed of political affiliation (e.g., Fox News links vs MSNBC links), one can detect political affiliation as well as the bias of various news outlets. It is particularly useful where there is a correlation between the properties of the different types of entities.
Another use case is identifying patterns of small social media channels. For example, some chat servers running the Discord chat tool have issues with Nazi communities forming against the wishes of the server maintainers. Some of these have names such as "Nazism 'n' Chill," "Reich Lords," "Rotten Reich," "KKK of America," "Oven Baked Jews," and "Whitetopia." By manually labeling these groups we can then use the algorithm to find other groups which are disproportionately inhabited by Nazis. These can be shut down or marked for manual inspection. Similar efforts can be done for chatrooms frequented by ISIS or other militant groups.
The place of a "user" can be replaced with other aspects of identity, such as IP address, username, typing habits (e.g., by using https://www.typingdna.com/ accessed on 10/9/2018) or any other method of statistically identifying a user across time or location. This identification can be unique or merely probabilistic.
We can also seed such a network with reliable classifications of users as well as, or instead of, with content classification. For example, if a user makes a statement that "I'd be the first to sign up and help you slaughter Muslims." (example from https://slate.com/technology/2018/10/discord-safe-space-white-supremacists.html accessed on 9/10/2018) we can mark that user as a racist/threat and then see where similar users congregate or what articles similar users read.
One interesting effect of using users to detect servers/articles/webpages/newspapers/groups/etc. is that while it is easy to change the name of a server, it is much harder to simultaneously change all of the users. Thus even if an adversary tries to rename/reinstall/move their chatrooms/webpage/twitter account/and the like, they must simultaneously change the IDs of their user base (which can be tracked, e.g., using adtech which tracks users across the web). This poses some technical difficulties for an adversary.
Further notes
Disproportionality
A key concept is the notion of disproportionate actions or interactions. This notion is governed by the comparison of a user(s) with a control group. Ideally this control group would be matched as to aspects such as age/country/language/education/gender/etc. If the matching is not done properly, the algorithm will still work though it will have reduced power and hence more people will be exposed to the content of interest.
Control matching can be discovered in a variety of ways. For example, FB or Linkedin explicitly know demographic characteristics while Ad networks know them implicitly.
Adversarial models
A great deal of time and money is invested into propaganda and fake news networks. We expect adversaries to try to outwit detection methods, for instance by creating fake profiles which appear to be benign (e.g., no interaction with any fake article) until they are called upon to manipulate the algorithm. However compared to the sybil attacks possible on current platforms, this is expensive and time consuming for an adversary.
In particular, in contrast with traditional sybil attacks where a successful sybil account becomes more effective over time as its follower count increases, our network analysis reduces its effectiveness after its first few broadcasts of fake items.
Compounding and chaining
We can chain this method with other known methods of identifying fake news. Popular methods such as tracking cookies can be used to identify a user across multiple websites. This lets us identify users who visit problematic websites (e.g., Stormfront) and mark other websites (e.g., the benign sounding Odinia International or Vanguard News Network) as being problematic, since they disproportionately share a common user base.
Similarly, with Nazi chat rooms on Discord, we can seed the algorithm with Nazi websites to identify users (using the graph with websites and users) and then use those users to identify problematic chat rooms. Alternately we could start with any of the levels and reach any other (e.g., start with seeded chatrooms and end up with websites).
Note that it is possible to treat each level separately. For example, we can complete the computations on a bipartite graph of websites and users before starting the computation on users and Discord chatrooms. We can also do this in parallel with a single graph containing all of the entities (e.g., users, websites, and chatrooms) and weights measuring the connections. This can be done, e.g., by utilizing belief propagation (https://en.wikipedia.org/wiki/Belief_propagation accessed on 9/10/2018).
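A minimal sketch of the sequential (level-by-level) variant follows; the seed sites, visit records, and chat memberships are invented for illustration, and a parallel belief-propagation treatment would replace the two explicit stages.

```python
# Sketch of chaining levels: seeded websites -> scored users -> scored chatrooms.
# All identifiers and scores are hypothetical illustrations of the idea.
seed_site_scores = {"problematic-site-1": 1.0, "problematic-site-2": 1.0}

site_visits = {          # user -> websites visited
    "u1": ["problematic-site-1", "news-site"],
    "u2": ["news-site"],
}
chat_membership = {      # user -> chatrooms joined
    "u1": ["room-x"],
    "u2": ["room-y"],
}

# Stage 1: score users by the (seeded) websites they visit.
user_scores = {
    u: sum(seed_site_scores.get(s, 0.0) for s in sites) / len(sites)
    for u, sites in site_visits.items()
}

# Stage 2: score chatrooms by the users who inhabit them.
room_totals, room_members = {}, {}
for u, rooms in chat_membership.items():
    for r in rooms:
        room_totals[r] = room_totals.get(r, 0.0) + user_scores[u]
        room_members[r] = room_members.get(r, 0) + 1
room_scores = {r: room_totals[r] / room_members[r] for r in room_totals}

print(user_scores, room_scores)   # rooms with high scores are candidates for inspection
```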
Note that this method can be used either independently or in conjunction with other methods such as manual human input or NLP.
A variant example
There have been many attempts to rank webpage quality. For example, https://moz.com/blog/low-quality-pages (accessed 10/15/2018) has a list of criteria to rank the quality of webpages. Similarly, https://support.google.com/google-ads/answer/2454010 (accessed 10/15/2018). We can use our invention to increase the accuracy of such attempts. For example, we can use cookie tracking (e.g., https://www.symantec.com/security-center/writeup/2006-080217-3524-99) to identify users. We can then manually identify a set of high quality websites (e.g., websites ending in .edu or .gov; other curated sets of high quality websites can be used if desired). We can then define a graph which consists of websites and users. A user is linked to a website if they visited it (e.g., within the last month). Websites are linked to each other if they have a link (similar to how it works today). Users can be linked if they share properties (e.g., known demographics). We can then run Pagerank on the graph to determine the quality of websites. It is also possible to achieve other properties by judiciously choosing the initial seed of websites. For example, choosing a seed of websites for high quality cars (e.g., BMW.com, ferrari.com, https://www.lamborghini.com/en-en/) will allow us to find a set of users who are interested in high end cars. Such users are disproportionately likely to be interested in other sites such as https://global.astonmartin.com/en-us.
This increases the accuracy of web targeting and ranking since we can incorporate the actual behavior of users.
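A small sketch of this variant example is shown below, using the networkx library's personalized PageRank as a stand-in for the ranking step; the edges and the seed of high quality car sites are hypothetical.

```python
import networkx as nx

# Sketch: a mixed graph of websites and users, scored with personalized PageRank
# biased toward a manually chosen seed of high quality (here: car-related) websites.
G = nx.Graph()
G.add_edge("user1", "bmw.com")              # user visited site within the last month
G.add_edge("user1", "ferrari.com")
G.add_edge("user2", "global.astonmartin.com")
G.add_edge("user1", "user2")                # users linked by shared demographics
G.add_edge("bmw.com", "ferrari.com")        # site-to-site hyperlink

seed = {"bmw.com": 1.0, "ferrari.com": 1.0, "lamborghini.com": 1.0}
G.add_nodes_from(seed)                      # ensure all seed sites are in the graph
personalization = {n: seed.get(n, 0.0) for n in G}

scores = nx.pagerank(G, alpha=0.85, personalization=personalization)
# Users and sites with high scores are disproportionately associated with the seed.
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```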
Clickfraud
Click fraud is when an adversary attempts to fool an ad network into thinking it is receiving valid clicks when it is not (despite the name, this also refers to impressions and actions). This then causes the advertiser to pay for the ads allegedly served to clients. For example, https://www.buzzfeednews.com/article/craigsilverman/how-a-massive-ad-fraud-scheme-exploited-android-phones-to (accessed 10/23/2018) details a network in which an adversary created a network of fake users to consume ads. The users were created based on profiles of real users of apps. These fake users had the same click behavior and the same activity as other users, except that they were multiplied by creating fake personas.
In this case we have a network of humans (and personas) and of apps (through which the ads were served and where the money was collected). Note that the personas disproportionately interact through the compromised apps (in fact they exclusively interact through the apps). By tracking users through multiple apps (e.g., by fingerprinting or other methods such as described in https://arstechnica.com/information-technology/2018/09/dozens-of-ios-apps-surreptitiously-share-user-location-data-with-tracking-firms/ accessed 10/23/2018) we can detect that these users interact disproportionately through a small set of apps, and take action. To circumvent this detection, the fake personas would have to reduce their proportion of activity through the compromised apps to approximately the background activity of a normal actor. This greatly reduces the value of this scheme to the adversary.
In this case the apps (or parts of the apps) are acting in a way similar to the URLs, and the users are acting as users (identified e.g., by fingerprinting). Thus, we have a bipartite graph, with two parts (users and apps) and can use the algorithm as before.
This also works for detection of purchased likes on e.g. Instagram.
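One simple way to operationalize the disproportionality test described above is sketched below; the background distribution, example event data, and the ratio-based measure are hypothetical choices.

```python
from collections import Counter

# Sketch: flag users whose ad activity is disproportionately concentrated in a few
# apps compared with background (control-group) behavior.
def app_share(events):
    """events: list of app identifiers for one user's ad interactions."""
    counts = Counter(events)
    total = sum(counts.values())
    return {app: n / total for app, n in counts.items()}

def disproportionality(user_events, background_share):
    """Largest ratio between a user's per-app share and the background share."""
    shares = app_share(user_events)
    return max(shares[app] / max(background_share.get(app, 0.0), 1e-6) for app in shares)

background = {"app_a": 0.02, "app_b": 0.03, "app_c": 0.95}   # control-group behavior
persona = ["app_a"] * 95 + ["app_b"] * 5                      # fake persona: only 2 apps
honest = ["app_c"] * 95 + ["app_a"] * 3 + ["app_b"] * 2

print(disproportionality(persona, background))   # large ratio -> suspicious
print(disproportionality(honest, background))    # small ratio -> matches background
```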
Pseudo code:
The following pseudo code provides a concrete example of an embodiment of the present invention. It should be noted that numerous other embodiments and code are possible, and are within the scope of the present invention.
Create a graph which consists of users and news articles. There is a positive edge between a user and an article if the user interacted with (e.g., liked) the article, and a negative edge if the user saw the article and did not interact. Users are not necessarily just individuals, but any entity that can consume or distribute information, including hashtags. Articles are not necessarily just news articles and the like, but any type of information that is distributed to any group of users over any network with any technology.
Start with a labeled set of data (e.g., Snopes, Politifact; https://www.politifact.com/truth-o-meter/article/2017/dec/15/we-started-fact-checking-partnership-facebook-year/ accessed 18/11/2018). Mark it as fake.
Use the HITS (or Pagerank) algorithm to converge the graph for the weights of the articles and users. A HITS-like algorithm is any algorithm that converges the graph for the weights of multiple sides of a mixed graph.
Mark articles with a fake score above a threshold.
(Optionally) extract fake articles to feed into a text algorithm for earlier detection.
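A minimal runnable sketch of the pseudo code above follows (plain Python); the interaction data, seed labels, number of rounds, and threshold are hypothetical, and a production system would use the richer weighted and negative edges discussed earlier.

```python
# Minimal sketch of the pseudo code above. graph[user][article] is the edge weight
# (positive for interaction, negative for "seen but ignored"); seed articles carry
# manually assigned fakeness labels that stay pegged during the iteration.
graph = {
    "u1": {"a1": 1.0, "a2": 1.0},
    "u2": {"a1": 1.0, "a3": -0.3},
    "u3": {"a3": 1.0},
}
seed_fakeness = {"a1": 1.0}      # e.g., marked fake by Snopes/Politifact
rounds, threshold = 20, 0.5

articles = {a for edges in graph.values() for a in edges}
fakeness = {a: seed_fakeness.get(a, 0.0) for a in articles}
credulity = {u: 0.0 for u in graph}

def normalized(scores):
    m = max(abs(v) for v in scores.values()) or 1.0
    return {k: v / m for k, v in scores.items()}

for _ in range(rounds):
    # Users are credulous if they interact with articles currently scored as fake.
    credulity = normalized({
        u: sum(w * fakeness[a] for a, w in edges.items()) for u, edges in graph.items()
    })
    # Articles are fake if credulous users interact with them; seed labels stay pegged.
    fakeness = {a: 0.0 for a in articles}
    for u, edges in graph.items():
        for a, w in edges.items():
            fakeness[a] += w * credulity[u]
    fakeness = normalized(fakeness)
    fakeness.update(seed_fakeness)

flagged = [a for a, v in fakeness.items() if v >= threshold and a not in seed_fakeness]
print(flagged)   # articles above the fakeness threshold, e.g. candidates for NLP training
```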
Technical workflow and example
(The technical workflow and worked example are presented as figures imgf000014_0001 and imgf000015_0001 in the original publication.)
Note
The technique of the present invention can be used as a standalone method, or can be incorporated as a signal or input to other methods. For example, our system can be integrated into content ranking on Google, or used as a prefilter for human filtering by Facebook.
It is also interesting to note that after the seeding, our method does not need to analyze the data. This can be useful in such cases as encrypted communications.
Extension to increase labeled data for semantic analysis
A possible problem with our method is that it requires data from users, and hence we must allow an adversary to expose some users to the content we wish to avoid. However, conventional methods suffer from the lack of sufficient labeled data to do NLP/ML/semantic analysis properly. One way of improving our results is that we can use the output of our method as input for semantic analysis. For example, by giving an NLP algorithm the labels of articles (and potentially the margins, https://en.wikipedia.org/wiki/Margin_(machine_learning), or other aggregate data) we can increase the size of the labeled data which is available for NLP. Note that this can be done in a privacy preserving fashion.
User interface
Since the output of the algorithm depends on the labeled seed, it is useful to have an easy method for labeling data. One way to do so is to have a search (e.g., a Google search) find related articles and show them to a user who can then manually click a True/False button next to each article. It is also possible to have multiple buttons for different aspects (e.g., True/False and Conspiracy/Not). This UI can make it easier to feed a new seed/query into the system.
Feedback loops
HINTS can also be paired with machine learning classification methods to improve fake news detection before network interaction. HINTS scores collected for a set of articles could provide a set of labeled training data for training a classifier to predict the trustworthiness of future articles. Current content-based detection methods rely on human labeling. The speed at which HINTS labels could be collected would reduce the labeling lag that inhibits scaling and increases exploitability in current content-based methods.
The labeling via network analysis also provides a margin which can be fed into the NLP as a confidence level. This is useful for some applications. For example, when feeding into a Bayes net, knowing the weights on the labeled sample provides additional value and can improve the accuracy of the classifier.
Multiple weak learners
NLP approaches oftentimes have a margin (https://en.wikipedia.org/wiki/Margin_(machine_learning) accessed 11/18/2018). The HINTS approach produces an assigned probability which can also be thought of as a margin. Thus, we can combine both of these methods (as well as potentially other methods) by using ML techniques such as boosting (https://en.wikipedia.org/wiki/Boosting_(machine_learning) accessed 11/18/2018). One advantage of this is that it reduces the number of users who have to interact with a given piece of content before we can do classification.
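As an illustration of combining the two signals, the sketch below stacks a hypothetical NLP margin and a hypothetical HINTS score with a simple logistic combiner (scikit-learn); boosting over the same two weak signals would be analogous, and all feature values and labels are invented training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch: combine the NLP margin and the HINTS score for each article into one classifier.
# Each row is [nlp_margin, hints_score]; labels are manual fact-check results (1 = fake).
X = np.array([
    [0.9, 0.8],
    [0.2, 0.7],
    [0.8, 0.1],
    [0.1, 0.2],
    [0.3, 0.3],
    [0.7, 0.9],
])
y = np.array([1, 1, 1, 0, 0, 1])

combiner = LogisticRegression().fit(X, y)

# Fewer users need to interact with a new article before classification, because the
# NLP margin compensates while the HINTS signal is still weak (and vice versa).
new_articles = np.array([[0.85, 0.05], [0.1, 0.95]])
print(combiner.predict_proba(new_articles)[:, 1])   # probability each article is fake
```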
Time based linkages and harassment
Another useful application of the present invention is to detect harassment. There are cases of harassment of online figures which are coordinated on websites. The ability to coordinate harassment is important for the psychological effect it has on the victim.
The present invention can disrupt the loop. For example, we can create a bipartite graph with celebrities (or other potential victims) on one side of the graph and people who contact them on the other side.
We can restrict the graph to contacts within a given time period. We can then run HITS on the bipartite graph with celebrities and contacts to discover the correlations between contacts, and remediate (e.g., by rate limiting, prohibiting suspect contacts, or manual inspection).
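A sketch of this time-restricted bipartite analysis follows, using the networkx implementation of HITS; the contact records, account names, and time window are hypothetical.

```python
import networkx as nx
from datetime import datetime, timedelta

# Sketch: restrict the celebrity/contact bipartite graph to a time window and run HITS.
# Contact records are hypothetical (sender, victim, timestamp) tuples.
contacts = [
    ("troll1", "celebrityA", datetime(2020, 6, 1, 12, 0)),
    ("troll1", "celebrityB", datetime(2020, 6, 1, 12, 5)),
    ("troll2", "celebrityA", datetime(2020, 6, 1, 12, 7)),
    ("fan1",   "celebrityA", datetime(2020, 3, 10, 9, 0)),   # outside the window
]

window_start = datetime(2020, 6, 1)
window = timedelta(days=1)

G = nx.DiGraph()
for sender, victim, ts in contacts:
    if window_start <= ts < window_start + window:
        G.add_edge(sender, victim)   # edge = contact within the time window

hubs, authorities = nx.hits(G, max_iter=1000)
# High-hub senders contact many of the same targets in the same window; they are
# candidates for rate limiting, blocking, or manual inspection.
print(sorted(hubs.items(), key=lambda kv: -kv[1]))
```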
The use of time limitations or constraints on edges is useful in other applications.
Phrases
The unit of analysis does not have to be pages. It can be phrases or hashtags.
Partial remediation for worldview
One interesting aspect of the present invention is remediation. One novel form of remediation is to limit the appearance of the content at issue only for people who have not expressed an interest in or affinity for the content. This reduces the perceived impact on those who share the same world view (e.g., those whose score on the graph created by the seed is within a constant factor), since they are not subject to remediation effects and are not impacted by the algorithm.
Sources of seeds and labeled data
There are many types of labeled data. Data can be self-labeled (e.g., using a given hashtag), labeled by manual fact checkers (e.g., datacommons.org), labeled by trustworthiness of source (e.g., papers of record), labeled by political affiliation, or labeled by other methods.
Sociological analysis
The people who interact with misinformation are not random. Certain traits contribute to the propensity to interact with misinformation. For example, many studies have found that older people are more susceptible to misinformation (PEW, 2018), (Andrew Guess, 2019), (Antonis Kalogeropoulos, 2018). Other studies have shown that a psychological trait known as "Need for Cognition" (John T. Cacioppo, 1982) mediates susceptibility to misinformation even when controlling for how desirable the information is as a reflection of the user's world view (Pennycook, 2018), (Juliana Leding, 2019). Other traits such as social networks, information ecologies, and social context also influence susceptibility to misinformation (Krause, 2019). There is very active research into additional traits and properties that affect vulnerability to interaction with misinformation.
These traits vary widely. However, they all share the property that they change slowly (if at all). This means that if we knew these traits for all users, we could assign every user a probability of interacting with misinformation. We could then simply look at which users have the traits and invoke Bayes' rule to determine the probability that a given piece of news is fake, given the users who interact with it. Unfortunately, we do not actually have the values of these traits for all users. Fortunately, we do not actually need them. By looking at previous interactions with misinformation, we can treat these traits as a hidden variable and still determine the likelihood that a piece of content is misinformation given the users who interact with it. This simply requires two invocations of Bayes' rule: one to estimate the hidden variable based on previous interactions (with labeled data) and one to estimate the current piece of content based on the hidden variable.
Summary of Features
The present invention has at least the following features:
Ranking pages based on user input and vice versa
Mixed graph with more than one type of entity (e.g., people are different from websites)
Stopping propagation based on interaction with specific users (and not just on content), trading off the number of exposures against certainty
Network analysis on a changing graph with increased efficiency
Negative links, where we look at the lack of an expected link (or links) as opposed to the mere existence of a link
Use of normalization and fixed values in the graph taken from manual input
A graph with more than two entity types (HITS has only 1-2 types of entities, depending on whether the two groups are of the same type [webpages] or not [URLs and people])
Use of implicit human interactions to propagate labels, where the labels are authenticity ratings
Aggregation of inputs from multiple users, where the labels are aggregated non-linearly and where the labels are trustworthiness metrics
Collaborative filtering with manual seeds
Filtering with different user weights
The subject matter is related to the field of collaborative filtering (e.g., https://en.wikipedia.org/wiki/Collaborative_filtering, accessed on Nov 1st 2019). However, in contrast to collaborative filtering, some of the data is labeled. There are many techniques used in collaborative filtering, such as deep learning (ibid).
The subject matter is related to boosting in that each individual can be thought of as a weak learner and we aggregate across multiple learners (see, e.g., https://en.wikipedia.org/wiki/Boosting_(machine_learning), accessed on Nov 1st 2019). The solution can be used for other moderation tasks such as pornography or harassment detection. This is because different users have different propensities to indulge in, e.g., pornography. Thus, for example, we can differentiate the napalm girl photograph from a pornographic photo, since the user base that shares/interacts with/views/likes the napalm girl photo is substantially different from the user base that interacts with pornography. Thus, previous interactions with pornography can help us determine whether a "new" photo is being interacted with in a lewd way or in a non-lewd way (see, e.g., https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girl-photo, accessed on Nov 1st 2019).
The invention can also be used in combination with Gibbs sampling (e.g., https://en.wikipedia.org/wiki/Gibbs_sampling, accessed Nov 1st 2019). If there are different segments of the population, we can weigh the known segments differently so as to achieve better results.
We can also use other methods to leverage the signal. For example, we can use Bayes' theorem (https://en.wikipedia.org/wiki/Bayes%27_theorem, accessed Nov 1st 2019) to estimate the hidden variables that inform the propensity, and then use our estimate of the hidden propensities to estimate the probability that a new article is fake news.
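A simplified sketch of this two-step use of Bayes' rule, with a single hidden propensity per user, Laplace smoothing, and a naive independence assumption across users; a production system would model more traits and dependencies between users:

```python
# Sketch: estimate a hidden per-user propensity from previously labeled
# interactions, then combine the propensities of users who interact with a
# new article to score that article.
import math
from collections import defaultdict

def estimate_propensities(interactions, article_labels):
    """interactions: iterable of (user, article) pairs; article_labels maps
    article -> 0/1 with 1 = misinformation. Returns user -> Laplace-smoothed
    estimate of the probability of interacting with misinformation."""
    fake, total = defaultdict(int), defaultdict(int)
    for user, article in interactions:
        if article in article_labels:
            total[user] += 1
            fake[user] += article_labels[article]
    return {u: (fake[u] + 1) / (total[u] + 2) for u in total}

def article_fake_probability(interacting_users, propensities, prior=0.5):
    """Naive-Bayes-style combination: each interacting user's propensity is
    treated as independent evidence that the new article is misinformation."""
    log_odds = math.log(prior / (1 - prior))
    for user in interacting_users:
        p = propensities.get(user, 0.5)   # unknown users carry no evidence
        log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))
```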
Several illustrations and descriptions have been presented to aid in understanding the present invention. One with skill in the art will understand that numerous changes and variations may be made without departing from the spirit of the invention. Each of these changes and variations is within the scope of the present invention.
Industrial Applicability
The present invention is a major improvement in networking technology and has wide applicability in the industrial field of detecting and preventing the spread of misinformation including fake news.

Claims

What is claimed is:
1. A method for detection of misinformation without need to analyze articles comprising:
forming a mixed graph containing at least two different node types, users and articles with edges between users and articles, with user weights for user nodes and article weights for article nodes;
planting at least one seed user node and at least one seed article node into said mixed graph;
manually assigning user weights and article weights to the seed nodes;
defining neighborhoods of the seed nodes;
running a HITS-like algorithm for a predetermined number of rounds updating both people and articles while keeping the weights of the seed nodes constant to converge the mixed graph for the weights of articles and users;
outputting a set of highest weights for users and/or articles.
2. The method of claim 1 further comprising updating a first side of the mixed graph faster than a second side of the mixed graph.
3. The method of claim 2 wherein the first side of the mixed graph is articles, and the second side of the mixed graph is users.
4. The method of claim 1 wherein user weights are only updated when a new user appears.
5. The method of claim 1 wherein no article is analyzed.
6. The method of claim 1 wherein user weights are determined by comparison with a control group.
7. The method of claim 1 wherein article weights are determined by user input.
8. The method of claim 1 wherein propagation is stopped based on interaction with specific predetermined users.
9. The method of claim 1 further comprising assigning negative links between users and articles that represent a lack of an expected association between a particular user and a particular article.
10. The method of claim 1 further comprising normalization and fixed values in the mixed graph taken from manual input.
11. The method of claim 1 wherein the mixed graph has more than two node types.
12. The method of claim 1 further comprising using implicit human interactions to propagate labels.
13. The method of claim 1 further comprising aggregation of inputs from multiple users to generate labels.
14. The method of claim 13 wherein the labels are aggregated non-linearly.
15. The method of claim 13 wherein the labels are of trustworthiness metrics.
16. A method for detection of misinformation comprising:
forming a mixed graph containing at least two different node types, users and articles with edges between users and articles, with user weights for user nodes and article weights for article nodes;
planting at least one seed user node and at least one seed article node into said mixed graph;
manually assigning user weights and article weights to the seed nodes;
defining neighborhoods of the seed nodes;
running a HITS-like algorithm for a predetermined number of rounds updating both people and articles while keeping the weights of the seed nodes constant to converge the mixed graph for the weights of articles and users, wherein articles may be updated more often than users;
outputting a set of highest weights for users and/or articles.
17. The method of claim 16 wherein user weights are only updated when a new user appears.
18. The method of claim 16 wherein no article is analyzed.
19. The method of claim 16 wherein user weights are determined by comparison with a control group.
20. The method of claim 16 wherein propagation is stopped based on interaction with specific predetermined users.
21. The method of claim 16 further comprising assigning negative links between users and articles that represent a lack of an expected association between a particular user and a particular article.
22. A method for detection of misinformation comprising using non-intentional human interactions to rate content for a property of interest.
23. The method of claim 22 wherein the content is misinformation.
24. The method of claim 22 wherein the interactions are in a graph.
25. The method of claim 22 wherein multiple people are utilized to get a ranking.
26. A method for detection of misinformation comprising using a network with more than one type of entity to do content ranking.
27. The method of claim 26 wherein the content is misinformation.
28. A method for detection of misinformation including ranking pages based on user input and vice versa.
29. A method for detection of misinformation including a mixed graph with more than one type of entity (e.g., people are different from websites).
30. A method for detection of misinformation including stopping propagation based on interaction with specific users (and not just on content), including tradeoffs between the number of exposures and certainty.
31. A method for detection of misinformation including network analysis on changing graph with increased efficiency.
32. A method for detection of misinformation including negative links where we look at the lack of an expected link(s) as opposed to the mere existence of a link.
33. A method for detection of misinformation including use of normalization and fixed values in a graph taken from manual input.
34. A method for detection of misinformation including a graph with more than 2 entity types.
35. A method for detection of misinformation including use of implicit human interactions to propagate labels.
36. A method for detection of misinformation wherein the labels are authenticity ratings.
37. A method for detection of misinformation including aggregation of inputs from multiple users.
38. A method for detection of misinformation wherein the labels are aggregated non-linearly.
39. A method for detection of misinformation wherein the labels are of trustworthiness metrics.
40. A method for detection of misinformation including collaborative filtering with manual seeds.
41. A method for detection of misinformation including filtering with different user weights.
42. A computer program product comprising a non-transitory computer readable medium:
a first computer instruction forming a mixed graph containing at least two different node types, users and articles with edges between users and articles, with user weights for user nodes and article weights for article nodes;
a second computer instruction planting at least one seed user node and at least one seed article node into said mixed graph;
a third computer instruction manually assigning user weights and article weights to the seed nodes;
a fourth computer instruction defining neighborhoods of the seed nodes;
a fifth computer instruction running a HITS-like algorithm for a predetermined number of rounds updating both people and articles while keeping the weights of the seed nodes constant to converge the mixed graph for the weights of articles and users;
a sixth computer instruction outputting a set of highest weights for users and/or articles;
wherein said first, second, third, fourth, fifth and sixth program instructions are stored on said non-transitory computer readable medium and executed on a computing device.
PCT/US2020/039658 2020-06-25 2020-06-25 System and method for detecting misinformation and fake news via network analysis WO2021262180A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2020/039658 WO2021262180A1 (en) 2020-06-25 2020-06-25 System and method for detecting misinformation and fake news via network analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2020/039658 WO2021262180A1 (en) 2020-06-25 2020-06-25 System and method for detecting misinformation and fake news via network analysis

Publications (1)

Publication Number Publication Date
WO2021262180A1 true WO2021262180A1 (en) 2021-12-30

Family

ID=79281663

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/039658 WO2021262180A1 (en) 2020-06-25 2020-06-25 System and method for detecting misinformation and fake news via network analysis

Country Status (1)

Country Link
WO (1) WO2021262180A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070112714A1 (en) * 2002-02-01 2007-05-17 John Fairweather System and method for managing knowledge
US20130204664A1 (en) * 2012-02-07 2013-08-08 Yeast, LLC System and method for evaluating and optimizing media content
US20170286431A1 (en) * 2013-07-11 2017-10-05 Outside Intelligence Inc. Method and system for scoring credibility of information sources
US20160097788A1 (en) * 2014-10-07 2016-04-07 Snappafras Corp. Pedestrian direction of motion determination system and method
US20200067861A1 (en) * 2014-12-09 2020-02-27 ZapFraud, Inc. Scam evaluation system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220318823A1 (en) * 2021-03-31 2022-10-06 International Business Machines Corporation Personalized alert generation based on information dissemination
US12086817B2 (en) * 2021-03-31 2024-09-10 International Business Machines Corporation Personalized alert generation based on information dissemination
US20220391474A1 (en) * 2021-06-03 2022-12-08 Beatdapp Software Inc. Streaming fraud detection using blockchain


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20942087

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20942087

Country of ref document: EP

Kind code of ref document: A1