WO2024089860A1 - Classification device, classification method, and classification program - Google Patents
- Publication number: WO2024089860A1 (application PCT/JP2022/040260)
- Authority: WO (WIPO PCT)
Classifications
- G06F16/35 — Information retrieval of unstructured textual data; Clustering; Classification
- G06F16/383 — Retrieval characterised by using metadata automatically derived from the content
Definitions
- the present invention relates to a classification device, a classification method, and a classification program for classifying posts related to security threat information.
- Security blogs, security reports, social platforms, etc. are sources from which information on security threats such as phishing attacks can be extracted.
- According to Non-Patent Documents 3 and 4, natural language processing technology can be applied to blogs and reports that summarize threat information analyzed by security experts, extracting the data in a structured format so that it can be used mechanically.
- Non-Patent Document 5 compares and evaluates Twitter (registered trademark), Facebook (registered trademark), news sites, security blogs, security forums, etc. as sources of threat information, and reports that Twitter is superior in terms of both the quantity and quality of information that can be collected.
- Non-Patent Documents 6, 7, and 8 propose technology that focuses on specific users and keywords on Twitter and extracts threat-related URLs, domain names, hash values, IP addresses, vulnerability information, and other information from each user's tweets. It has been reported that this technology can obtain a large amount of useful threat information.
- Phishing attacks continue at a rapid pace: about 270 unique URLs per day on average, Security NEXT, [online], [retrieved October 13, 2022], Internet <URL: https://www.security-next.com/134607>
- Phishing Report Status (February 2022), Council of Anti-Phishing Japan, [online], [retrieved October 13, 2022], Internet <URL: https://www.antiphishing.jp/report/monthly/202202.html>
- Zhu, Ziyun and Dumitras, Tudor, "ChainSmith: Automatically Learning the Semantics of Malicious Campaigns by Mining Threat Intelligence Reports", 2018 IEEE European Symposium on Security and Privacy
- Satvat, Kiavash, Gjomemo, Rigel and Venkatakrishnan, V.N., "EXTRACTOR: Extracting Attack Behavior from Threat Reports", IEEE EuroS&P 2021
- the objective of the present invention is to solve the above-mentioned problem and extract useful security threat information.
- the present invention is characterized by comprising: a feature extraction unit that extracts, from posts related to security threats on an SNS (Social Networking Service), features of the text and images contained in each post; a learning unit that uses these features to learn, from training data in which each post is labeled with a correct answer as to whether or not it is a security threat, a machine learning model for classifying whether an input post is a security threat; a classification unit that uses the trained machine learning model to classify whether an input post is a security threat; and an output processing unit that outputs the results of the classification.
- the present invention makes it possible to extract useful security threat information.
- FIG. 1 is a diagram illustrating an example of a system configuration.
- FIG. 2A is a diagram illustrating an example of the configuration of a collection device.
- FIG. 2B is a flowchart illustrating an example of a processing procedure executed by the collection device.
- FIG. 3 is a diagram for explaining a specific example of a processing procedure executed by the collection device.
- FIG. 4 is a diagram showing an example of security keywords.
- FIG. 5 is a diagram illustrating an example of generating co-occurrence keywords.
- FIG. 6 is a diagram showing an example of a Tweet that is the subject of data collection.
- FIG. 7 is a diagram for explaining the process of extracting a URL and a domain name from the text and image of a Tweet.
- FIG. 8A is a diagram illustrating an example of the configuration of a classification device.
- FIG. 8B is a flowchart illustrating an example of a processing procedure executed by the classification device.
- FIG. 9 is a diagram for explaining a specific example of a processing procedure executed by the classification device.
- FIG. 10 is a diagram showing an example of features generated from a Tweet.
- FIG. 11 is a diagram showing an example of an Account Feature of a Tweet.
- FIG. 12 is a diagram showing an example of a Content Feature of a Tweet.
- FIG. 13 is a diagram showing an example of a URL Feature of a Tweet.
- FIG. 14 is a diagram showing an example of an OCR Feature of a Tweet.
- FIG. 15 is a diagram showing an example of a Visual Feature of a Tweet.
- FIG. 16 is a diagram showing an example of a Context Feature of a Tweet.
- FIG. 17 is a diagram showing an example of features selected by the selection unit in FIG. 8A.
- FIG. 18 shows the evaluation results of the classification accuracy of the system.
- FIG. 19 is a diagram showing the number of phishing attack reports and URLs related to phishing attacks extracted by the system during a given period.
- FIG. 20 is a diagram showing the results of comparing the system with OpenPhish.
- FIG. 21 is a diagram showing the comparison results between the system and PhishTank.
- FIG. 22 is a diagram showing the survey results of the number of reports by users and the number of phishing URLs.
- FIG. 23 is a diagram showing the effect of dynamically selecting keywords.
- FIG. 24 is a diagram illustrating a computer that executes a program.
- SNS posts may be in either Japanese or English.
- the system, for example, quickly and accurately extracts Tweets reporting phishing attacks from each user's Tweets.
- the system includes a collection device 10 and a classification device 20.
- the collection device 10 and the classification device 20 may be connected to each other so as to be able to communicate with each other via a network such as the Internet, or may be installed in the same device.
- the collection device 10 collects a wide range of Tweets that may be reports of phishing attacks. For example, the collection device 10 extracts keywords that co-occur in reports of phishing attacks (Co-occurrence Keywords). The collection device 10 then uses keywords related to security threats (Security Keywords) together with the Co-occurrence Keywords to collect a wide range of Tweets that may be reports of phishing attacks (Screened Tweets in FIG. 1).
- the classification device 20 classifies Tweets reporting phishing attacks from among the Tweets collected by the collection device 10. For example, the classification device 20 extracts text and image features of Tweets reporting phishing attacks through machine learning, and uses the extracted features to classify each Tweet as either a Tweet reporting a phishing attack or another Tweet.
- the collection device 10 may extract Co-occurrence Keywords from the group of Tweets classified as Tweets reporting phishing attacks. The collection device 10 may then use the extracted Co-occurrence Keywords to collect Tweets that may be reports of phishing attacks. In this way, the system can dynamically expand/reduce the keywords for collecting Tweets that may be reports of phishing attacks, and collect Tweets that should be collected at the appropriate time.
- the system can also accurately extract reports of phishing attacks from the large amount of collected Tweets. Furthermore, the system extracts information about phishing attacks from both the text and images contained in Tweets, making it possible to extract useful information that could not be obtained by simply analyzing the text of Tweets.
- This system provides the following benefits in countering phishing attacks: (1) It becomes possible to collect threat information from a wider range than the limited monitoring targets of conventional technology, making it possible to provide threat information from a new perspective.
- the collection device 10 includes, for example, an input/output unit 11, a storage unit 12, and a control unit 13.
- the input/output unit 11 is an interface that handles the input and output of various data. For example, the input/output unit 11 accepts input of Tweets collected from Twitter. In addition, the input/output unit 11 outputs Tweets that may be reports of phishing attacks extracted by the control unit 13 (Screened Tweets in FIG. 1 ).
- the storage unit 12 stores data, programs, etc. that are referenced when the control unit 13 executes various processes.
- the storage unit 12 is realized, for example, by a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or by a storage device such as a hard disk or an optical disk.
- the storage unit 12 stores, for example, the Security Keywords, Co-occurrence Keywords, etc. extracted by the control unit 13.
- the control unit 13 is responsible for controlling the entire collection device 10.
- the functions of the control unit 13 are realized, for example, by a CPU (Central Processing Unit) executing a program stored in the storage unit 12.
- the control unit 13 includes, for example, a first collection unit 131, a keyword extraction unit 132, a second collection unit 133, and a data collection unit 134.
- the URL/domain name extraction unit 135 and the selection unit 136, shown by dashed lines, may or may not be provided; the case in which they are provided will be described later.
- the first collection unit 131 uses Security Keywords, which are keywords related to security threats, to collect Tweets reporting phishing attacks from each user's Tweets.
- the keyword extraction unit 132 extracts co-occurrence keywords, which are keywords that co-occur with more than a predetermined frequency, from tweets reporting phishing attacks collected by the first collection unit 131. Note that these co-occurrence keywords may be extracted from tweets classified by the classification device 20 as tweets reporting phishing attacks.
- the second collection unit 133 uses the Co-occurrence Keywords to collect Tweets that may be reports of phishing attacks from the Tweets of each user. For example, the second collection unit 133 collects, from the Tweets of each user, Tweets that contain Security Keywords and Co-occurrence Keywords in the text of the Tweet or in images linked to the Tweet. The collected Tweets are stored, for example, in the storage unit 12.
- the data collection unit 134 collects data necessary for input to the classification device 20.
- the data collection unit 134 collects the following data from Tweets collected by the second collection unit 133: (1) Tweet character strings (e.g., hashtags, number of characters, etc.), (2) meta information linked to the Tweet (e.g., application information, presence or absence of defang, etc.), (3) information related to the Tweet's account (e.g., number of followers of the account, period of account registration, etc.), and (4) images included in the Tweet (e.g., up to four images linked to the Tweet, etc.).
- the collected data is stored, for example, in the storage unit 12.
- the first collection unit 131 of the collection device 10 collects tweets reporting phishing attacks using, for example, security keywords (S1: collection of tweets using security keywords). Then, the keyword extraction unit 132 extracts co-occurrence keywords, which are keywords that co-occur with a predetermined frequency or more, from the tweets reporting phishing attacks collected in S1 (S2: extraction of co-occurrence keywords).
- the second collection unit 133 uses the Security Keywords and Co-occurrence Keywords to collect Tweets that may be reports of phishing attacks from each user's Tweets (S3).
- the data collection unit 134 collects data necessary for input to the classification device 20 from the Tweets collected in S3 (S4).
- the collection device 10 can collect tweets that may be reports of phishing attacks.
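The two-stage keyword screening described above (S1 to S3) can be sketched as follows. The tweet structure (`text` and `image_text` fields) and the keyword lists are illustrative assumptions for this sketch, not the patent's actual data; whether a match requires both keyword types or either one is also an implementation choice, and this sketch keeps a Tweet that matches either list.

```python
def screen_tweets(tweets, security_keywords, co_occurrence_keywords):
    """Keep tweets whose text (or text associated with linked images)
    contains at least one Security Keyword or Co-occurrence Keyword.

    `tweets` is a list of dicts with 'text' and optional 'image_text'
    fields; the field names are assumptions for this sketch.
    """
    keywords = {kw.lower() for kw in security_keywords} | \
               {kw.lower() for kw in co_occurrence_keywords}
    screened = []
    for tweet in tweets:
        haystack = (tweet.get("text", "") + " " +
                    tweet.get("image_text", "")).lower()
        if any(kw in haystack for kw in keywords):
            screened.append(tweet)
    return screened
```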
- the collection device 10 may also include a URL/domain name extraction unit 135 and a selection unit 136 as shown in FIG. 2A.
- the URL/domain name extraction unit 135 extracts URLs and domain names from the text and images of the Tweets collected by the second collection unit 133.
- the selection unit 136 selects Tweets that are likely to be reports of phishing attacks from the Tweets collected by the second collection unit 133, based on the URLs or domain names extracted by the URL/domain name extraction unit 135.
- For example, if a Tweet contains a domain name that has been registered in WHOIS for less than a predetermined number of days, the selection unit 136 selects that Tweet as likely to be a report of a phishing attack.
- the data collection unit 134 collects data (e.g., Tweet character strings, etc.) necessary for input to the classification device 20 from the Tweets selected by the selection unit 136.
- the collection device 10 can collect tweets and their data that are more likely to be reports of phishing attacks from the collected tweets.
- the collection device 10 generates two types of keywords (Security Keywords and Co-occurrence Keywords) for searching for Tweets containing reports of phishing attacks.
- the collection device 10 generates, as security keywords, keywords related to security threats and the media through which they are spread, such as "SMS” and “fake site,” and keywords for sharing security threat information, such as "#phishing” and "#fraud” (see FIG. 4). Note that existing keywords related to security threats may be used as the security keywords.
- the collection device 10 extracts co-occurring keywords (co-occurrence keywords) with a frequency exceeding a predetermined value only from reports of phishing attacks collected using security keywords as keys.
- the first collection unit 131 of the collection device 10 uses Security Keywords to collect Tweets reporting phishing attacks from each user's Tweets.
- the keyword extraction unit 132 then extracts Co-occurrence Keywords from the collected Tweets.
- the keyword extraction unit 132 newly extracts Co-occurrence Keywords from the Tweets collected during each specified period.
- the keyword extraction unit 132 extracts proper nouns from the character strings of tweets for a given period of time, and calculates PMI (Pointwise Mutual Information) using the following formula (1). Note that X and Y in formula (1) are proper nouns contained in the tweets.
- the keyword extraction unit 132 then calculates the SoA (Strength of Association) using formula (2).
- W is a proper noun contained in the Tweet
- L is a label (security threat information or other).
- the keyword extraction unit 132 extracts proper nouns whose SoA exceeds a predetermined threshold.
- tweets containing the security keyword "fraud” include tweets related to phishing reports shown in FIG. 5 (1) and tweets unrelated to phishing reports shown in FIG. 5 (2).
- the keyword extraction unit 132 extracts "Company d” and "SMS,” proper nouns that appear frequently (whose SoA exceeds a predetermined threshold) only in tweets ((1)) related to phishing reports that contain "fraud,” as co-occurrence keywords.
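Formulas (1) and (2) themselves are not reproduced in this text. The standard definitions they reference are PMI(X, Y) = log(p(X, Y) / (p(X) p(Y))) and, for a word W and label L, SoA(W, L) = PMI(W, L) - PMI(W, not L). A minimal sketch of the co-occurrence keyword scoring under those definitions follows; the additive smoothing is an implementation choice added here to avoid log(0) for words seen under only one label, not something stated in the text.

```python
import math
from collections import Counter

def soa_scores(tweets, smoothing=1.0):
    """Score each word by Strength of Association with the phishing label:
    SoA(W, L) = PMI(W, L) - PMI(W, not L).

    `tweets` is a list of (words, label) pairs, label True for a tweet
    related to a phishing report. Additive smoothing avoids log(0).
    """
    n = len(tweets)
    n_pos = sum(1 for _, lab in tweets if lab)
    n_neg = n - n_pos
    word_total, word_pos = Counter(), Counter()
    for words, lab in tweets:
        for w in set(words):
            word_total[w] += 1
            if lab:
                word_pos[w] += 1
    scores = {}
    for w, cnt in word_total.items():
        p_w = cnt / n
        # PMI(W, L) with additive smoothing on the joint and label counts
        pmi_pos = math.log(((word_pos[w] + smoothing) / (n + 2 * smoothing)) /
                           (p_w * ((n_pos + smoothing) / (n + 2 * smoothing))))
        neg_cnt = cnt - word_pos[w]
        pmi_neg = math.log(((neg_cnt + smoothing) / (n + 2 * smoothing)) /
                           (p_w * ((n_neg + smoothing) / (n + 2 * smoothing))))
        scores[w] = pmi_pos - pmi_neg
    return scores
```

Words whose SoA exceeds a threshold (e.g. "SMS" and "Company d" in the FIG. 5 example) become Co-occurrence Keywords.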
- the collection device 10 collects data necessary for input to the classification device 20 from Twitter.
- the second collection unit 133 collects Tweets that may be reports of phishing attacks from Tweets of each user by using the co-occurrence keywords extracted by the keyword extraction unit 132. In this way, the second collection unit 133 can collect Tweets that include URLs and domains of Potentially Phishing Sites, for example, as shown in FIG. 3.
- the second collection unit 133 can collect Tweets (Screened Tweets) from among the Tweets of each user, excluding Tweets (Unrelated Tweets) related to Legitimate Sites.
- the data collection unit 134 collects the following data related to the Tweets collected by the second collection unit 133 (see FIG. 6).
- (1) Tweet character strings (e.g., hashtags, number of characters, etc.)
- (2) meta information associated with the Tweet (e.g., application information, whether or not it is defanged, etc.)
- (3) information about the Tweet's account (e.g., number of followers, period of account registration, etc.)
- (4) images included in the Tweet (e.g., up to four images associated with the Tweet)
- the URL/domain name extraction unit 135 of the collection device 10 extracts URLs and domain names from the text and images of the Tweets (Screened Tweets) collected by the second collection unit 133 .
- the URL/domain name extraction unit 135 applies optical character recognition to the images of the Tweet to extract character strings. If a character string has been defanged (e.g., https -> ttps), the URL/domain name extraction unit 135 restores it to its original form. The URL/domain name extraction unit 135 then extracts URLs and domain names from the character strings in the text and images of the Tweet using regular expressions.
- the URL/domain name extraction unit 135 checks whether the extracted domain name exists in the Public Suffix List (see Reference 1) or the like.
- If the URL/domain name extraction unit 135 confirms that the extracted domain name exists, it extracts the domain name and a URL that includes the domain name. For example, the URL/domain name extraction unit 135 extracts the following URL and domain name from the Tweet shown in FIG. 7.
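The defang restoration and regex extraction steps can be sketched as follows. The defang notations handled and the tiny stand-in suffix set are illustrative assumptions; a real implementation would consult the actual Public Suffix List the text cites.

```python
import re

# Common defang notations seen in threat-sharing posts; the exact set the
# extraction unit handles is not specified in the text, so this is illustrative.
DEFANG_RULES = [
    (re.compile(r"\bhxxps?://", re.I), lambda m: m.group(0).lower().replace("xx", "tt")),
    (re.compile(r"\bttps?://", re.I), lambda m: "h" + m.group(0)),
    (re.compile(r"\[\.\]|\(\.\)|\{\.\}"), lambda m: "."),
]

URL_RE = re.compile(r"https?://[^\s\"'<>]+")
DOMAIN_RE = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.I)

def refang(text):
    """Restore defanged strings (e.g. ttps -> https, [.] -> .)."""
    for pattern, repl in DEFANG_RULES:
        text = pattern.sub(repl, text)
    return text

def extract_urls_and_domains(text, known_suffixes=("com", "jp", "net")):
    """Restore defanged strings, pull URLs/domains via regex, and keep only
    domains whose suffix appears in a (stand-in) Public Suffix List."""
    text = refang(text)
    urls = URL_RE.findall(text)
    domains = [d for d in DOMAIN_RE.findall(text)
               if d.rsplit(".", 1)[-1].lower() in known_suffixes]
    return urls, domains
```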
- the selection unit 136 screens the URLs and domain names extracted by the URL/domain name extraction unit 135 for URLs and domain names related to phishing.
- If an extracted URL or domain name passes this screening, the selection unit 136 determines that it is a Potentially Phishing Site. The selection unit 136 then selects Tweets that include URLs or domain names determined to be Potentially Phishing Sites as Tweets that are likely to be reports of phishing attacks.
- If an extracted URL or domain name matches an Allowlist (e.g., a list of URLs or domain names of legitimate websites) or is a Long-lived Domain Name (e.g., a domain name that has been registered in WHOIS for a predetermined number of days or more), the selection unit 136 determines that the URL or domain name belongs to a Legitimate Site.
- Specifically, if the extracted domain name does not match the Allowlist, the selection unit 136 passes the domain name. In addition, if the extracted domain name matches the Tranco List (see Reference 2), the selection unit 136 excludes it as a domain name that is not related to phishing attacks.
- the selection unit 136 also queries WHOIS for the extracted domain name, and if no information can be obtained, passes the domain name. Furthermore, based on the WHOIS information, the selection unit 136 excludes a domain name if more than 365 days have passed since it was registered, and passes the domain name if fewer than 365 days have passed. The selection unit 136 then selects, for example, a Tweet that contains at least one URL or domain name passed in the above process as a Tweet that is likely to be a report of a phishing attack.
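The per-domain screening logic can be sketched as follows. Only the 365-day threshold comes from the text; the WHOIS lookup is injected as a plain dictionary for illustration, and the wiring around it is an assumption of this sketch.

```python
from datetime import datetime

def screen_domain(domain, allowlist, tranco_set, whois_created, now,
                  max_age_days=365):
    """Return True if `domain` should be kept as a Potentially Phishing Site.

    `whois_created` maps domain -> registration datetime, with missing
    entries standing in for a WHOIS query that returned no information.
    """
    if domain in allowlist or domain in tranco_set:
        return False                  # matches known-legitimate lists: exclude
    created = whois_created.get(domain)
    if created is None:
        return True                   # no WHOIS information: pass the domain
    # Recently registered domains pass; long-lived ones are excluded.
    return (now - created).days <= max_age_days
```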
- the collection device 10 can extract tweets from each user that are likely to be reports of phishing attacks.
- the classification device 20 includes, for example, an input/output unit 21, a storage unit 22, and a control unit 23.
- the input/output unit 21 is an interface that handles the input and output of various data.
- the input/output unit 21 accepts input of tweets that may be reports of phishing attacks collected by the collection device 10 and the associated data.
- the input/output unit 21 also outputs the classification results obtained by the control unit 23.
- the storage unit 22 stores data, programs, etc. referenced when the control unit 23 executes various processes.
- the storage unit 22 is realized by a semiconductor memory element such as a RAM or a flash memory, or a storage device such as a hard disk or an optical disk.
- the storage unit 22 stores tweets that are likely to be reports of phishing attacks received by the input/output unit 21 and the data (collected data), etc.
- the storage unit 22 stores parameters of the classification model after the control unit 23 has learned the classification model.
- the control unit 23 is responsible for controlling the entire classification device 20.
- the functions of the control unit 23 are realized, for example, by the CPU executing a program stored in the storage unit 22.
- the control unit 23 includes, for example, a data acquisition unit 231, a feature extraction unit 232, a feature selection unit 233, a learning unit 234, a classification unit 235, and an output processing unit 236.
- the data acquisition unit 231 acquires tweets and their data that are likely to be reports of phishing attacks from the collection device 10.
- the feature extraction unit 232 extracts features from the Tweet and its data acquired by the data acquisition unit 231. For example, the feature extraction unit 232 extracts features from the text and image of the Tweet acquired by the data acquisition unit 231.
- the feature extraction unit 232 extracts, from a Tweet acquired by the data acquisition unit 231, features of the account of the Tweet, features of the content of the Tweet, features of the URL or domain name included in the Tweet, features of a character string obtained by optical character recognition of an image included in the post, features of an image included in the Tweet, features of the context of the text included in the Tweet, etc. Details of the extraction of Tweet features by the feature extraction unit 232 will be described later using specific examples.
- the feature selection unit 233 selects, from among the features extracted by the feature extraction unit 232, features that are effective in classifying whether or not a Tweet is a report of a phishing attack.
- For feature selection, Boruta-SHAP (see References 3 and 4) is used.
- Specifically, the feature selection unit 233 selects effective features using the following procedure.
- (1) the feature selection unit 233 generates false (shadow) features containing random values, in addition to the features to be selected.
- (2) the feature selection unit 233 classifies using both the features to be selected and the false features with a decision tree-based algorithm, and calculates the variable importance of each feature.
- (3) the feature selection unit 233 counts a hit for a feature to be selected if its variable importance calculated in (2) is greater than that of the false features.
- the feature selection unit 233 repeats processes (1) to (3) multiple times and selects the features determined to be statistically significant as features that are effective for classification.
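The loop of steps (1) to (3) can be sketched as below. The text names Boruta-SHAP; this sketch substitutes the random forest's built-in impurity importances for SHAP values and a fixed hit-fraction threshold for the statistical test, so it illustrates the shape of the procedure rather than the exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def boruta_style_select(X, y, n_rounds=20, hit_fraction=0.75, seed=0):
    """Boruta-style feature selection:
    (1) append shuffled 'shadow' copies of every feature,
    (2) fit a tree ensemble and read variable importances,
    (3) count a hit when a real feature beats the best shadow importance,
    then keep features hit in at least `hit_fraction` of the rounds.
    Returns the indices of the selected features.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    hits = np.zeros(n_features)
    for r in range(n_rounds):
        shadow = rng.permuted(X, axis=0)       # destroys feature-label link
        clf = RandomForestClassifier(n_estimators=50, random_state=r)
        clf.fit(np.hstack([X, shadow]), y)
        imp = clf.feature_importances_
        best_shadow = imp[n_features:].max()
        hits += imp[:n_features] > best_shadow
    return np.flatnonzero(hits / n_rounds >= hit_fraction)
```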
- the learning unit 234 learns a machine learning model (classification model) for classifying whether or not an input Tweet is a Tweet reporting a phishing attack, through supervised learning using the features selected by the feature selection unit 233.
- Specifically, the learning unit 234 learns the classification model through supervised learning, applying the features selected by the feature selection unit 233 to teacher data related to phishing attacks (data in which each Tweet is assigned a correct-answer label indicating whether or not it reports a phishing attack).
- the classification unit 235 uses the classification model learned by the learning unit 234 to classify whether the input Tweet is a Tweet reporting a phishing attack.
- the output processing unit 236 outputs the result of the classification of the Tweet by the classification unit 235.
- the data acquisition unit 231 of the classification device 20 acquires Tweets and their data that are likely to be reports of phishing attacks collected by the collection device 10 (S11: Acquisition of collected data).
- the feature extraction unit 232 extracts features from the Tweets and their data acquired by the data acquisition unit 231 (S12: Extraction of Tweet features).
- the feature selection unit 233 selects, from the features extracted in S12, features that are effective for classifying whether or not a Tweet is a report of a phishing attack (S13). Then, the learning unit 234 uses the features selected in S13 for the teacher data related to phishing attacks to learn a classification model for classifying whether or not an input Tweet is a report of a phishing attack (S14).
- the classification unit 235 uses the classification model learned in S14 to classify whether the input Tweet is a Tweet reporting a phishing attack (S15). Then, the output processing unit 236 outputs the result of the classification in S15 (S16).
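Steps S14 and S15 reduce to standard supervised training and prediction. Random Forest is one of the decision tree-based algorithms the description mentions and is used here as a stand-in; the numeric matrices represent the selected Tweet features, which in this sketch are assumed to be already extracted.

```python
from sklearn.ensemble import RandomForestClassifier

def train_classification_model(X_train, y_train):
    """S14: learn the classification model from teacher data
    (y = 1 for a Tweet reporting a phishing attack, 0 otherwise)."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    return model.fit(X_train, y_train)

def classify_tweets(model, X_new):
    """S15: classify input Tweets with the trained model."""
    return model.predict(X_new)
```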
- the data acquisition unit 231 of the classification device 20 acquires Tweets (Screened Tweets) and their data collected by the collection device 10. Then, the feature extraction unit 232 extracts features from the Tweets and their data acquired by the data acquisition unit 231.
- the feature extraction unit 232 generates a total of 27 features of six types: an Account Feature (1) from the account of the Tweet, a Content Feature (2) from information linked to the Tweet, a URL Feature (3) from the extracted URL, a Context Feature (4) from the context of the Tweet, an OCR Feature (5) from character strings extracted by OCR, and a Visual Feature (6) from the appearance of the image.
- (5-1) Account Feature: In order to capture the characteristics of a Twitter user, the feature extraction unit 232 generates an Account Feature for each Tweet from information about the user's account (e.g., number of followings, number of followers, number of Tweets, number of media, number of lists, account registration date, etc.), as shown in FIG. 11.
- (5-2) Content Feature: In order to capture the characteristics of content that frequently appears in Tweets reporting phishing attacks, the feature extraction unit 232 generates a Content Feature for each Tweet from information linked to the Tweet itself (e.g., a character string, mentioned users, hashtags, images, a URL or domain name, the application used for the Tweet, the defang type, etc.), as shown in FIG. 12.
- (5-3) URL Feature: In order to capture features related to the abuse of subdomains and of specific top-level domains that is characteristic of phishing URLs, the feature extraction unit 232 generates a URL Feature for each Tweet from the URLs (or domain names) extracted from both the character strings and images of the Tweet, as shown in FIG. 13.
- the URL Feature is, for example, the character string of the URL, the domain name, the path, the numbers included in the URL, the top-level domain, etc.
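The subdomain- and TLD-related features described above can be sketched as follows. The specific feature set is an assumption for illustration, and the simple subdomain count naively treats the last two labels as the registrable domain, ignoring multi-part TLDs such as co.jp:

```python
from urllib.parse import urlsplit

def url_features(url: str) -> dict:
    """Derive simple URL Features from one extracted URL."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    labels = host.split(".") if host else []
    return {
        "host_len": len(host),
        # labels to the left of the (assumed two-label) registrable domain
        "num_subdomains": max(len(labels) - 2, 0),
        "tld": labels[-1] if labels else "",
        "path_len": len(parts.path),
        "num_digits": sum(ch.isdigit() for ch in url),
    }
```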
- In order to capture characteristics of similar character strings in Tweets related to phishing attacks, the feature extraction unit 232 generates an OCR Feature for each Tweet from character strings extracted by optical character recognition (OCR), as shown in FIG. 14.
- the OCR feature is, for example, a character string, a word, a symbol, a number, a URL, a domain name, etc.
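Since URLs shown in reported screenshots are often defanged, an OCR-recognized character string typically needs to be refanged before URLs and domain names can be extracted from it. The refanging conventions below (hxxp, [.]) are common community practice and assumed here; they are not taken from the patent:

```python
import re

def refang(text: str) -> str:
    """Undo common defanging styles before URL extraction."""
    return (text.replace("hxxp", "http")
                .replace("[.]", ".")
                .replace("(.)", "."))

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def urls_from_ocr(ocr_text: str) -> list:
    """Extract URLs from an OCR-recognized character string."""
    return URL_RE.findall(refang(ocr_text))
```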
- In order to capture the commonality in the appearance of images contained in Tweets related to reports of phishing attacks, the feature extraction unit 232 generates a Visual Feature for each Tweet from the images associated with the Tweet.
- the feature extraction unit 232 uses the EfficientNet model (see Reference 5), which has produced excellent results in image classification, to generate a fixed-dimensional vector of the image linked to the Tweet.
- the feature extraction unit 232 then compresses the dimension of the vector using Truncated SVD (see Reference 6), which converts a sparse vector into a dense vector.
- the feature extraction unit 232 then treats the compressed vector as the Visual Feature of the image included in the Tweet.
- the feature extraction unit 232 converts images associated with Tweets into fixed-dimensional vectors using an EfficientNet model that has been pre-trained on a large number of images from ImageNet, as shown in FIG. 15, for example.
- the feature extraction unit 232 then uses Truncated SVD to compress the converted vectors to the number of dimensions that achieves a cumulative contribution rate of 99% on the training data.
- In order to grasp the commonality of context in Tweets related to reports of phishing attacks, the feature extraction unit 232 generates a Context Feature for each Tweet from the character strings in the Tweet.
- the feature extraction unit 232 generates a fixed-dimensional vector from the character strings in the Tweet, for example, using the BERT model, which has shown excellent results in sentence classification.
- the feature extraction unit 232 then compresses the dimension of the vector using Truncated SVD.
- the feature extraction unit 232 then sets the compressed vector as the Context Feature of the Tweet.
- the feature extraction unit 232 converts the character strings in the Tweet into fixed-dimensional vectors using a BERT model that has been pre-trained on a large number of strings from Wikipedia in English and Japanese, as shown in FIG. 16. The feature extraction unit 232 then uses Truncated SVD to compress the converted vectors to the number of dimensions that achieves a cumulative contribution rate of 99% on the training data.
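The compression step shared by the Visual Feature and the Context Feature, reducing the EfficientNet or BERT embeddings to the number of dimensions that reaches a 99% cumulative contribution rate on the training data, can be sketched with a numpy-only stand-in for scikit-learn's TruncatedSVD. The function names are illustrative:

```python
import numpy as np

def fit_truncated_svd(X: np.ndarray, ratio: float = 0.99) -> np.ndarray:
    """Fit a truncated SVD on training vectors X (n_samples x n_dims),
    keeping the smallest number of components whose cumulative
    contribution rate (explained variance) reaches `ratio`."""
    # plain SVD without mean-centering, matching TruncatedSVD's behavior
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    contrib = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(contrib, ratio)) + 1
    return Vt[:k]  # one row per retained component

def transform(X: np.ndarray, components: np.ndarray) -> np.ndarray:
    """Project vectors onto the retained components (dimension compression)."""
    return X @ components.T
```

Fitting is done once on the training data; new Tweet embeddings are then projected with `transform`.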
- the feature selection unit 233 selects, from the group of features generated by the feature extraction unit 232 in (5), features that are effective (important) for classifying tweets reporting phishing attacks from other tweets.
- Figure 17 shows examples of features that were determined to be important for classification as a result of feature selection.
- the learning unit 234 learns a classification model (machine learning model) using the features (feature vectors) selected by the feature selection unit 233 in (6) and training data (Ground-Truth Dataset) to which correct labels indicating whether or not each Tweet reports a phishing attack have been assigned.
- Algorithms that can be used to train classification models include, for example, Random Forest, Neural Network, Decision Tree, Support Vector Machine, Logistic Regression, Naive Bayes, Gradient Boosting, and Stochastic Gradient Descent. After evaluating these algorithms against training data, it was confirmed that it is preferable to use Random Forest for the following three reasons.
- Random Forest had better classification accuracy than any other algorithm.
- Random Forest performed at a stable speed in both the learning and estimation (classification) phases.
- Random Forest had its feature importance distributed across all six types of features.
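To illustrate the ensemble idea behind Random Forest (bootstrap resampling plus majority voting), here is a deliberately tiny, dependency-free sketch that uses depth-1 trees (decision stumps) as the base learners. A real system would use a library implementation such as scikit-learn's RandomForestClassifier; this sketch only demonstrates the mechanism:

```python
import random

def train_stump(X, y):
    """Pick the single-feature threshold rule with the best training accuracy."""
    best = (0, 0.0, 0, 0.0)  # (feature, threshold, polarity, accuracy)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for pol in (0, 1):  # predict `pol` when x[f] <= t, else the other class
                preds = [pol if row[f] <= t else 1 - pol for row in X]
                acc = sum(p == label for p, label in zip(preds, y)) / len(y)
                if acc > best[3]:
                    best = (f, t, pol, acc)
    return best[:3]

def train_forest(X, y, n_trees=25, seed=0):
    """Random-forest-style ensemble: decision stumps on bootstrap resamples."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict(forest, x):
    """Majority vote over all stumps."""
    votes = sum(pol if x[f] <= t else 1 - pol for f, t, pol in forest)
    return 1 if 2 * votes >= len(forest) else 0
```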
- the classification unit 235 classifies the Tweets collected by the collection device 10 into Tweets related to reports of phishing attacks (positive) or not (negative) using the machine learning model (classification model) learned in (7). Then, the output processing unit 236 outputs the result of the classification.
- the classification device 20 may extract proper nouns that appear in tweets classified as reports of phishing attacks, and the collection device 10 may use the proper nouns when extracting co-occurrence keywords.
- each component of each part shown in the figure is a functional concept, and does not necessarily have to be physically configured as shown in the figure.
- the specific form of distribution and integration of each device is not limited to that shown in the figure, and all or a part of it can be functionally or physically distributed and integrated in any unit depending on various loads, usage conditions, etc.
- each processing function performed by each device can be realized in whole or in any part by a CPU and a program executed by the CPU, or can be realized as hardware using wired logic.
- the above-mentioned system can be implemented by installing a program as package software or online software on a desired computer.
- the above-mentioned program can be executed by an information processing device to function as the above-mentioned system.
- the information processing device referred to here includes mobile communication terminals such as smartphones, mobile phones, and PHS (Personal Handyphone System), as well as terminals such as PDAs (Personal Digital Assistants).
- FIG. 24 is a diagram showing an example of a computer that executes a program.
- the computer 1000 has, for example, a memory 1010 and a CPU 1020.
- the computer 1000 also has a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. Each of these components is connected by a bus 1080.
- the memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM (Random Access Memory) 1012.
- the ROM 1011 stores a boot program such as a BIOS (Basic Input Output System).
- the hard disk drive interface 1030 is connected to a hard disk drive 1090.
- the disk drive interface 1040 is connected to a disk drive 1100.
- a removable storage medium such as a magnetic disk or optical disk is inserted into the disk drive 1100.
- the serial port interface 1050 is connected to a mouse 1110 and a keyboard 1120, for example.
- the video adapter 1060 is connected to a display 1130, for example.
- the hard disk drive 1090 stores, for example, an OS 1091, an application program 1092, a program module 1093, and program data 1094. That is, the programs that define each process executed by the above-mentioned system are implemented as program modules 1093 in which computer-executable code is written.
- the program modules 1093 are stored, for example, in the hard disk drive 1090.
- a program module 1093 for executing processes similar to the functional configuration of the system is stored in the hard disk drive 1090.
- the hard disk drive 1090 may be replaced by an SSD (Solid State Drive).
- the data used in the processing of the above-described embodiment is stored as program data 1094, for example, in memory 1010 or hard disk drive 1090. Then, the CPU 1020 reads the program module 1093 or program data 1094 stored in memory 1010 or hard disk drive 1090 into RAM 1012 as necessary and executes it.
- the program module 1093 and program data 1094 are not limited to being stored in the hard disk drive 1090, but may be stored in, for example, a removable storage medium and read by the CPU 1020 via the disk drive 1100 or the like. Alternatively, the program module 1093 and program data 1094 may be stored in another computer connected via a network (such as a LAN (Local Area Network), WAN (Wide Area Network)). The program module 1093 and program data 1094 may then be read by the CPU 1020 from the other computer via the network interface 1070.
Abstract
This classification device extracts, from Tweets pertaining to reports of phishing attacks that are collected by a collection device, feature amounts for each of text and an image included in the Tweets. The classification device subsequently carries out learning, using the feature amounts, with respect to teaching data labeled with a correct-answer label indicating whether the Tweets pertain to reports of phishing attacks, thereby training a classification model for classifying inputted Tweets with regard to whether the Tweets pertain to reports of phishing attacks. The classification device subsequently classifies the Tweets with regard to whether the Tweets pertain to reports of phishing attacks by using the trained classification model. The classification device then outputs the result of classifying the Tweets with regard to whether the Tweets pertain to reports of phishing attacks.
Description
The present invention relates to a classification device, a classification method, and a classification program for classifying posts related to security threat information.
On social platforms, security experts as well as well-intentioned general users are sharing images (e.g. screenshots) of suspicious phishing attacks they have observed as a warning. If this information can be collected, analyzed, and extracted as quickly and accurately as possible, it will be useful in preventing phishing attacks.
Security blogs, security reports, social platforms, etc. are sources from which information on security threats such as phishing attacks can be extracted.
For example, as in non-patent documents 3 and 4, natural language processing technology can be applied to blogs and reports that summarize threat information analyzed by security experts, and the data can be extracted as formatted data, making it possible to use it mechanically.
In addition, Non-Patent Document 5 compares and evaluates Twitter (registered trademark), Facebook (registered trademark), news sites, security blogs, security forums, etc. as sources of threat information, and reports that Twitter is superior in terms of both the quantity and quality of information that can be collected.
Non-Patent Documents 6, 7, and 8 propose technology that focuses on specific users and keywords on Twitter and extracts threat-related URLs, domain names, hash values, IP addresses, vulnerability information, and other information from each user's tweets. It has been reported that this technology can obtain a large amount of useful threat information.
However, the above conventional technologies have the following problems:
(1) The Tweets targeted for information collection are limited
Conventional technology limits the targets of information collection to specific user accounts, so it cannot collect reports of phishing attacks from a wide variety of users. In addition, conventional technology collects only Tweets matching limited keywords such as "#phishing" and "#warning", so it can collect only a limited range of Tweets.
(2) Information extraction targets only text in a certain format contained in Tweets
Reports of phishing attacks via Tweets also include images such as screenshots, but conventional technology extracts information only from the text in Tweets. Therefore, conventional technology cannot extract information contained in images. In addition, since users post information in various formats, conventional technology, which is specialized for a certain format, can extract only limited information.
As a result, the conventional technology had the problem of being unable to extract useful security threat information. Therefore, the objective of the present invention is to solve the above-mentioned problem and extract useful security threat information.
In order to solve the above problems, the present invention is characterized by comprising: a feature extraction unit that extracts features of each of the text and images contained in posts related to security threats on an SNS (Social Networking Service); a learning unit that performs learning using the features on training data in which each post is labeled with a correct answer indicating whether or not it is a post related to a security threat, thereby training a machine learning model for classifying whether or not an input post is a post related to a security threat; a classification unit that uses the trained machine learning model to classify whether or not an input post is a post related to a security threat; and an output processing unit that outputs the results of the classification.
The present invention makes it possible to extract useful security threat information.
Below, a form (embodiment) for carrying out the present invention will be described with reference to the drawings. The present invention is not limited to this embodiment.
[Overview]
First, an overview of a system including a collection device and a classification device according to the present embodiment will be described with reference to FIG. 1.
Note that the SNS (Social Networking Service) posts handled by the system will be described as Twitter posts (Tweets) as an example, but are not limited to this. Also, SNS posts may be in either Japanese or English.
In addition, in this embodiment, the system will be described taking as an example a case where posts reporting phishing attacks are collected from SNS posts, but posts reporting security threats other than phishing attacks may also be collected.
The system, for example, quickly and accurately extracts tweets reporting phishing attacks from each user's tweets. For example, the system includes a collection device 10 and a classification device 20. The collection device 10 and the classification device 20 may be connected to each other so as to be able to communicate with each other via a network such as the Internet, or may be installed in the same device.
(1) Collection device 10: Collects a wide range of tweets that may be reports of phishing attacks. For example, the collection device 10 extracts keywords that co-occur in reports of phishing attacks (Co-occurrence Keywords). The collection device 10 then uses keywords related to security threats (Security Keywords) and the above-mentioned Co-occurrence Keywords to collect a wide range of tweets that may be reports of phishing attacks (Screened Tweets in Figure 1).
(2) Classification device 20: Classifies tweets reporting phishing attacks from among the tweets collected by collection device 10. For example, classification device 20 extracts text and image features of tweets reporting phishing attacks through machine learning, and uses the extracted features to classify each tweet as either a tweet reporting a phishing attack or another tweet.
In addition, after the classification device 20 classifies the Tweets, the collection device 10 may extract Co-occurrence Keywords from the group of Tweets classified as Tweets reporting phishing attacks. The collection device 10 may then use the extracted Co-occurrence Keywords to collect Tweets that may be reports of phishing attacks. In this way, the system can dynamically expand/reduce the keywords for collecting Tweets that may be reports of phishing attacks, and collect Tweets that should be collected at the appropriate time.
With such a system, it is possible to collect tweets reporting phishing attacks not only from security experts but also from well-intentioned general users. In addition, because the system collects tweets using a large number of keywords, it is possible to analyze reports of phishing attacks on a large scale.
The system can also accurately extract reports of phishing attacks from the large amount of collected Tweets. Furthermore, the system extracts information about phishing attacks from both the text and images contained in Tweets, making it possible to extract useful information that could not be obtained by simply analyzing the text of Tweets.
This system provides the following benefits in countering phishing attacks:
(1) It becomes possible to collect threat information from a wider range than the limited monitoring targets of conventional technology, making it possible to provide threat information from a new perspective.
(2) In particular, it will be possible to quickly provide threat information that can be used to counter phishing attacks targeting Japanese people, which has been in short supply until now.
(3) Applying the data obtained by this system to telecommunications carriers' filtering rules, etc., will lead to a reduction in the number of victims of phishing attacks, etc.
[Collection Device]
[Configuration Example]
Next, the collection device 10 will be described in detail. First, a configuration example of the collection device 10 will be described with reference to FIG. 2A. The collection device 10 includes, for example, an input/output unit 11, a storage unit 12, and a control unit 13.
The input/output unit 11 is an interface that handles the input and output of various data. For example, the input/output unit 11 accepts input of Tweets collected from Twitter. In addition, the input/output unit 11 outputs Tweets that may be reports of phishing attacks extracted by the control unit 13 (Screened Tweets in FIG. 1 ).
The memory unit 12 stores data, programs, etc. that are referenced when the control unit 13 executes various processes. The memory unit 12 is realized, for example, by a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk. The memory unit 12 stores, for example, security keywords, co-occurrence keywords, etc. extracted by the control unit 13.
The control unit 13 is responsible for controlling the entire collection device 10. The functions of the control unit 13 are realized, for example, by a CPU (Central Processing Unit) executing a program stored in the memory unit 12.
The control unit 13 includes, for example, a first collection unit 131, a keyword extraction unit 132, a second collection unit 133, and a data collection unit 134. Note that a URL/domain name extraction unit 135 and a selection unit 136, shown by dashed lines, may or may not be provided, and cases in which they are provided will be described later.
The first collection unit 131 uses Security Keywords, which are keywords related to security threats, to collect Tweets reporting phishing attacks from each user's Tweets.
The keyword extraction unit 132 extracts co-occurrence keywords, which are keywords that co-occur with more than a predetermined frequency, from tweets reporting phishing attacks collected by the first collection unit 131. Note that these co-occurrence keywords may be extracted from tweets classified by the classification device 20 as tweets reporting phishing attacks.
The second collection unit 133 uses the Co-occurrence Keywords to collect Tweets that may be reports of phishing attacks from the Tweets of each user. For example, the second collection unit 133 collects Tweets that contain Security Keywords and Co-occurrence Keywords in the text of the Tweet or in images linked to the Tweet from the Tweets of each user. The collected Tweets are stored, for example, in the memory unit 12.
The data collection unit 134 collects data necessary for input to the classification device 20. For example, the data collection unit 134 collects the following data from Tweets collected by the second collection unit 133: (1) Tweet character strings (e.g., hashtags, number of characters, etc.), (2) meta information linked to the Tweet (e.g., application information, presence or absence of defang, etc.), (3) information related to the Tweet's account (e.g., number of followers of the account, period of account registration, etc.), and (4) images included in the Tweet (e.g., up to four images linked to the Tweet, etc.). The collected data (collected data) is stored, for example, in the memory unit 12.
[Example of Processing Procedure]
Next, an example of the processing procedure executed by the collection device 10 will be described with reference to FIG. 2B. First, the first collection unit 131 of the collection device 10 collects Tweets reporting phishing attacks using, for example, Security Keywords (S1: collection of Tweets using Security Keywords). Then, the keyword extraction unit 132 extracts Co-occurrence Keywords, which are keywords that co-occur with more than a predetermined frequency, from the Tweets reporting phishing attacks collected in S1 (S2: extraction of Co-occurrence Keywords).
After S2, the second collection unit 133 uses the Security Keywords and Co-occurrence Keywords to collect Tweets that may be reports of phishing attacks from each user's Tweets (S3). After that, the data collection unit 134 collects data necessary for input to the classification device 20 from the Tweets collected in S3 (S4).
By performing the above process, the collection device 10 can collect tweets that may be reports of phishing attacks.
The collection device 10 may also include a URL/domain name extraction unit 135 and a selection unit 136 as shown in FIG. 2A.
The URL/domain name extraction unit 135 extracts URLs and domain names from the text and images of the Tweets collected by the second collection unit 133. The selection unit 136 selects Tweets that are likely to be reports of phishing attacks from the Tweets collected by the second collection unit 133, based on the URLs or domain names extracted by the URL/domain name extraction unit 135.
For example, if a URL or domain included in a Tweet collected by the second collection unit 133 is not included in the list of URLs or domain names of legitimate websites, the selection unit 136 selects the Tweet as likely to be a report of a phishing attack. In addition, if the domain name of the URL included in the Tweet has been in use for less than a predetermined period, the selection unit 136 selects the Tweet as likely to be a report of a phishing attack. For example, the selection unit 136 selects a domain name that has been registered in WHOIS for less than a predetermined number of days as a Tweet that is likely to be a report of a phishing attack.
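The two triage rules described above (absence from the legitimate-site list, and a young WHOIS registration) can be sketched as follows. The allowlist contents and the 30-day threshold are illustrative assumptions; the patent only specifies "a predetermined period":

```python
from datetime import date
from typing import Optional

# Stand-in for the list of URLs/domain names of legitimate websites
ALLOWLIST = {"example.com", "google.com"}

def is_suspicious(domain: str, registered_on: Optional[date],
                  today: date, max_age_days: int = 30) -> bool:
    """Flag a Tweet's domain as a likely phishing report when it is absent
    from the legitimate-site list, or when its WHOIS registration date is
    younger than a threshold (30 days here is an assumed value)."""
    if domain not in ALLOWLIST:
        return True
    if registered_on is not None and (today - registered_on).days < max_age_days:
        return True
    return False
```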
Then, the data collection unit 134 collects data (e.g., Tweet character strings, etc.) necessary for input to the classification device 20 from the Tweets selected by the selection unit 136.
In this way, the collection device 10 can collect tweets and their data that are more likely to be reports of phishing attacks from the collected tweets.
[Specific Example of the Processing Procedure]
Next, a specific example of the processing procedure executed by the collection device 10 will be described with reference to FIG. 3. The following description assumes that the collection device 10 is equipped with the URL/domain name extraction unit 135 and the selection unit 136.
(1) Generating Keywords
The collection device 10 generates two types of keywords (Security Keywords and Co-occurrence Keywords) for searching for Tweets containing reports of phishing attacks.
(1-1) Security Keywords
First, the Security Keywords will be described. For example, the collection device 10 generates, as Security Keywords, keywords related to security threats and the media through which they spread, such as "SMS" and "fake site," and keywords used to share security threat information, such as "#phishing" and "#fraud" (see FIG. 4). Note that existing keywords related to security threats may be used as the Security Keywords.
(1-2) Co-occurrence Keywords
Next, the Co-occurrence Keywords will be described. For example, the collection device 10 extracts keywords (Co-occurrence Keywords) that co-occur, with a frequency exceeding a predetermined value, only in the reports of phishing attacks collected using the Security Keywords as keys.
For example, the first collection unit 131 of the collection device 10 uses Security Keywords to collect Tweets reporting phishing attacks from each user's Tweets. The keyword extraction unit 132 then extracts Co-occurrence Keywords from the collected Tweets. For example, the keyword extraction unit 132 newly extracts Co-occurrence Keywords from the Tweets collected during each specified period.
For example, the keyword extraction unit 132 extracts proper nouns from the character strings of tweets for a given period of time, and calculates PMI (Pointwise Mutual Information) using the following formula (1). Note that X and Y in formula (1) are proper nouns contained in the tweets.
PMI(X,Y) = log(P(X,Y)/(P(X)P(Y))) … Equation (1)
Next, the keyword extraction unit 132 calculates the SoA using formula (2). In formula (2), W is a proper noun contained in the Tweet, and L is a label (security threat information or other).
SoA(W,L)=PMI(W,L)-PMI(W,¬L)...Equation (2)
Then, the keyword extraction unit 132 extracts proper nouns whose SoA exceeds a predetermined threshold. For example, tweets containing the security keyword "fraud" include tweets related to phishing reports shown in FIG. 5 (1) and tweets unrelated to phishing reports shown in FIG. 5 (2). The keyword extraction unit 132 extracts "Company d" and "SMS," proper nouns that appear frequently (whose SoA exceeds a predetermined threshold) only in tweets ((1)) related to phishing reports that contain "fraud," as co-occurrence keywords.
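Equations (1) and (2) can be sketched in code as follows. This is an illustrative sketch: the toy corpus, the ε-smoothing (added so that zero co-occurrence probabilities stay defined), and the threshold value are assumptions for exposition, not part of the specification.

```python
import math

EPS = 1e-9  # smoothing so that zero probabilities do not break the logarithm

def pmi(p_xy, p_x, p_y):
    """Equation (1): PMI(X, Y) = log(P(X, Y) / (P(X) P(Y)))."""
    return math.log((p_xy + EPS) / ((p_x + EPS) * (p_y + EPS)))

def soa(noun, tweets, label):
    """Equation (2): SoA(W, L) = PMI(W, L) - PMI(W, not L).

    `tweets` is a list of (proper_nouns_in_tweet, label) pairs.
    """
    n = len(tweets)
    p_w = sum(1 for nouns, _ in tweets if noun in nouns) / n
    p_l = sum(1 for _, lab in tweets if lab == label) / n
    p_wl = sum(1 for nouns, lab in tweets if noun in nouns and lab == label) / n
    return pmi(p_wl, p_w, p_l) - pmi(p_w - p_wl, p_w, 1 - p_l)

# Toy corpus: proper nouns per Tweet, labeled "threat" (phishing report) or "other".
corpus = [
    ({"d社", "SMS"}, "threat"),
    ({"d社", "SMS"}, "threat"),
    ({"d社"}, "threat"),
    ({"映画"}, "other"),
    ({"SMS", "映画"}, "other"),
    ({"映画"}, "other"),
]

# Proper nouns whose SoA exceeds the threshold become Co-occurrence Keywords.
threshold = 0.5
keywords = {w for w in {"d社", "SMS", "映画"} if soa(w, corpus, "threat") > threshold}
# keywords == {"d社", "SMS"}
```

On this toy corpus, "d社" and "SMS" co-occur predominantly with the "threat" label and are extracted, while "映画" has a negative SoA and is rejected.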
(2) Searching Tweets
Next, the collection device 10 collects the data necessary for input to the classification device 20 from Twitter. For example, the second collection unit 133 uses the Co-occurrence Keywords extracted by the keyword extraction unit 132 to collect Tweets that may be reports of phishing attacks from each user's Tweets. In this way, the second collection unit 133 can collect Tweets that include the URLs and domains of Potentially Phishing Sites, for example, as shown in FIG. 3.
In other words, the second collection unit 133 can collect Tweets (Screened Tweets) from among the Tweets of each user, excluding Tweets (Unrelated Tweets) related to Legitimate Sites. The data collection unit 134 collects the following data related to the Tweets collected by the second collection unit 133 (see FIG. 6).
The Tweet character string (e.g., hashtags, number of characters), meta information associated with the Tweet (e.g., application information, presence or absence of defanging), information about the Tweet's account (e.g., number of followers, account registration period), and images included in the Tweet (e.g., up to four images associated with the Tweet).
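As one illustration, the items collected by the data collection unit 134 could be grouped into a record such as the following; the field names, types, and sample values are assumptions for exposition, not part of the specification.

```python
from dataclasses import dataclass, field

@dataclass
class CollectedTweet:
    """Illustrative container for the data items collected per Tweet."""
    text: str                  # Tweet character string (hashtags, length, ...)
    application: str           # meta information: application used to post
    is_defanged: bool          # meta information: whether URLs are defanged
    follower_count: int        # account information
    account_age_days: int      # account information: registration period
    image_urls: list = field(default_factory=list)  # up to four attached images

record = CollectedTweet(
    text="偽サイトに注意 #phishing",
    application="Twitter Web App",
    is_defanged=True,
    follower_count=120,
    account_age_days=400,
)
```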
(3) Extracting URLs and Domain Names
Next, the URL/domain name extraction unit 135 of the collection device 10 extracts URLs and domain names from the text and images of the Tweets (Screened Tweets) collected by the second collection unit 133.
For example, the URL/domain name extraction unit 135 applies optical character recognition to the image of the Tweet to extract a character string. In addition, if a defang (e.g., https -> ttps) is present in the character string of the Tweet, the URL/domain name extraction unit 135 restores it to its original state. The URL/domain name extraction unit 135 then extracts URLs and domain names from the character strings in the text and image of the Tweet using regular expressions. The URL/domain name extraction unit 135 then checks whether the extracted domain name exists in the Public Suffix List (see Reference 1) or the like.
- Reference 1: “Public Suffix List”, https://publicsuffix.org/
Then, when the URL/domain name extraction unit 135 confirms that the extracted domain name exists, it extracts the domain name and a URL that includes the domain name. For example, the URL/domain name extraction unit 135 extracts the following URL and domain name from the Tweet shown in FIG. 7.
- URL: https://tinyurl.com/yph6pswp, https://atavollwei.duckdns.org/
- Domain names: tinyurl.com, atavollwei.duckdns.org
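The refang-and-extract flow described above might be sketched as follows. The defang patterns, the regular expressions, and the stub suffix set standing in for a real Public Suffix List lookup (Reference 1) are all illustrative assumptions.

```python
import re

def refang(text):
    """Restore common defanged notations (e.g., hxxp -> http, ttps -> https, [.] -> .)."""
    text = re.sub(r"\bhxxp", "http", text, flags=re.IGNORECASE)
    text = re.sub(r"\bttps?://", "https://", text)
    return text.replace("[.]", ".").replace("(.)", ".")

URL_RE = re.compile(r"https?://[^\s\"'<>]+")
DOMAIN_RE = re.compile(r"\b((?:[a-z0-9-]+\.)+[a-z]{2,})", re.IGNORECASE)

# Stand-in for a real Public Suffix List lookup.
KNOWN_SUFFIXES = {"com", "org", "net", "jp"}

def extract_urls_and_domains(tweet_text):
    """Refang the text, then pull out URLs and plausible domain names."""
    text = refang(tweet_text)
    urls = URL_RE.findall(text)
    domains = sorted({d for d in DOMAIN_RE.findall(text)
                      if d.rsplit(".", 1)[-1].lower() in KNOWN_SUFFIXES})
    return urls, domains

urls, domains = extract_urls_and_domains(
    "偽サイト hxxps://atavollwei.duckdns[.]org/ ttps://tinyurl.com/yph6pswp")
# domains == ["atavollwei.duckdns.org", "tinyurl.com"]
```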
(4) Screening Phishing-related URLs and Domain Names
Next, the selection unit 136 screens the URLs and domain names extracted by the URL/domain name extraction unit 135 for URLs and domain names related to phishing.
For example, if the extracted URL or domain name does not match the Allowlist (e.g., a list of URLs or domain names of legitimate websites) and is not a Long-lived Domain Name (e.g., a domain name that has been registered in WHOIS for a predetermined number of days or more), the selection unit 136 determines that the extracted URL and domain name are Potentially Phishing Sites. The selection unit 136 then selects Tweets that include URLs or domain names determined to be Potentially Phishing Sites as Tweets that are likely to be reports of phishing attacks.
On the other hand, if the extracted URL and domain name match the Allowlist or are Long-lived Domain Names, the selection unit 136 determines that the URL and domain name are Legitimate Sites.
For example, if the extracted domain name corresponds to a domain name of a predefined URL shortening service, the selection unit 136 passes the domain name. In addition, if the extracted domain name matches the Tranco List (see Reference 2), the selection unit 136 excludes the domain name as a domain name that is not related to phishing attacks.
- Reference 2: "A research-oriented top sites ranking hardened against manipulation - Tranco", https://tranco-list.eu/
The selection unit 136 also queries WHOIS for each extracted domain name and, if no information can be obtained, passes the domain name. Furthermore, based on the WHOIS information, the selection unit 136 excludes a domain name if 365 days or more have passed since its registration, and passes it if fewer than 365 days have passed. The selection unit 136 then selects, for example, any Tweet containing at least one URL or domain name that passed the above checks as a Tweet likely to be a report of a phishing attack.
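Putting the checks in (4) together, the screening decision could be sketched as follows. The shortener list, allowlist, Tranco subset, and WHOIS registration dates below are illustrative stand-ins for the real data sources, not values from the specification.

```python
from datetime import date

SHORTENER_DOMAINS = {"tinyurl.com", "bit.ly"}                     # predefined URL shorteners
ALLOWLIST = {"example.com"}                                        # legitimate-site list
TRANCO_TOP = {"google.com", "example.com"}                         # Tranco List stand-in
WHOIS_REGISTERED = {"atavollwei.duckdns.org": date(2022, 10, 1)}   # hypothetical WHOIS data

def passes_screening(domain, today=date(2022, 10, 20)):
    """True if the domain survives screening (i.e., is a Potentially Phishing Site)."""
    if domain in SHORTENER_DOMAINS:       # shortener domains are passed through
        return True
    if domain in ALLOWLIST or domain in TRANCO_TOP:
        return False                      # matches a legitimate site -> exclude
    registered = WHOIS_REGISTERED.get(domain)
    if registered is None:                # no WHOIS information -> pass
        return True
    return (today - registered).days < 365  # long-lived domains are excluded

def is_likely_phishing_report(domains):
    """A Tweet is selected if at least one extracted domain passes the screening."""
    return any(passes_screening(d) for d in domains)
```

In this hypothetical data, a Tweet containing only google.com is discarded, while one containing atavollwei.duckdns.org (registered 19 days earlier) is kept.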
In this way, the collection device 10 can extract tweets from each user that are likely to be reports of phishing attacks.
[Classification device]
[Configuration example]
Next, the classification device 20 will be described in detail. First, a configuration example of the classification device 20 will be described with reference to Fig. 8A. The classification device 20 includes, for example, an input/output unit 21, a storage unit 22, and a control unit 23.
The input/output unit 21 is an interface that handles the input and output of various data. For example, the input/output unit 21 accepts input of tweets that may be reports of phishing attacks collected by the collection device 10 and the associated data. The input/output unit 21 also outputs the classification results obtained by the control unit 23.
The storage unit 22 stores data, programs, etc. referenced when the control unit 23 executes various processes. The storage unit 22 is realized by a semiconductor memory element such as a RAM or a flash memory, or a storage device such as a hard disk or an optical disk. For example, the storage unit 22 stores tweets that are likely to be reports of phishing attacks received by the input/output unit 21 and the data (collected data), etc. In addition, the storage unit 22 stores parameters of the classification model after the control unit 23 has learned the classification model.
The control unit 23 is responsible for controlling the entire classification device 20. The functions of the control unit 23 are realized, for example, by the CPU executing a program stored in the storage unit 22.
The control unit 23 includes, for example, a data acquisition unit 231, a feature extraction unit 232, a feature selection unit 233, a learning unit 234, a classification unit 235, and an output processing unit 236.
The data acquisition unit 231 acquires tweets and their data that are likely to be reports of phishing attacks from the collection device 10.
The feature extraction unit 232 extracts features from the Tweet and its data acquired by the data acquisition unit 231. For example, the feature extraction unit 232 extracts features from the text and image of the Tweet acquired by the data acquisition unit 231.
For example, the feature extraction unit 232 extracts, from a Tweet acquired by the data acquisition unit 231, features of the account of the Tweet, features of the content of the Tweet, features of the URL or domain name included in the Tweet, features of a character string obtained by optical character recognition of an image included in the post, features of an image included in the Tweet, features of the context of the text included in the Tweet, etc. Details of the extraction of Tweet features by the feature extraction unit 232 will be described later using specific examples.
The feature selection unit 233 selects, from among the features extracted by the feature extraction unit 232, features that are effective in classifying whether or not a tweet is related to a report of a phishing attack. For example, the feature selection method uses Boruta-SHAP (see References 3 and 4).
- Reference 3: Kursa, Miron B. and Rudnicki, Witold R., “Feature Selection with the Boruta Package,” Journal of Statistical Software, 2010.
- Reference 4: “BorutaShap: A wrapper feature selection method which combines the Boruta feature selection algorithm with Shapley values,” https://zenodo.org/badge/latestdoi/255354538
For example, the feature selection unit 233 selects, from among the features extracted by the feature extraction unit 232, features that are effective for classifying whether or not a tweet is related to a report of a phishing attack, using the following procedure.
(1) First, the feature selection unit 233 generates false features consisting of random values in addition to the features to be selected.
(2) Next, the feature selection unit 233 performs classification with the candidate features and the false features using a decision-tree-based algorithm, and calculates the variable importance of each feature.
(3) Next, the feature selection unit 233 counts a candidate feature whenever the variable importance calculated in (2) is greater than that of the false features.
(4) The feature selection unit 233 repeats steps (1) to (3) multiple times and selects the features determined to be statistically significant as features effective for classification.
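The steps above can be sketched as follows. This is a deliberately simplified stand-in: for brevity it scores features by absolute correlation with the label rather than by the variable importance of a decision-tree-based model, and the toy data, trial count, and majority threshold are illustrative assumptions.

```python
import random

def importance(col, labels):
    """Stand-in importance score: |Pearson correlation| with the labels."""
    n = len(col)
    mx, my = sum(col) / n, sum(labels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(col, labels))
    sx = sum((x - mx) ** 2 for x in col) ** 0.5
    sy = sum((y - my) ** 2 for y in labels) ** 0.5
    return abs(cov / (sx * sy)) if sx and sy else 0.0

def select_features(features, labels, trials=50, seed=0):
    rng = random.Random(seed)
    hits = {name: 0 for name in features}
    for _ in range(trials):
        # (1) false features: each real column with its values shuffled at random
        shadows = []
        for col in features.values():
            s = list(col)
            rng.shuffle(s)
            shadows.append(s)
        # (2) score real and false features; (3) count wins over the best false feature
        best_false = max(importance(s, labels) for s in shadows)
        for name, col in features.items():
            if importance(col, labels) > best_false:
                hits[name] += 1
    # (4) keep features that win in a clear majority of the repeated trials
    return {name for name, h in hits.items() if h > trials * 0.6}

labels = [1, 1, 1, 1, 0, 0, 0, 0]
features = {
    "informative": [5, 6, 5, 7, 1, 2, 1, 2],  # separates the two classes
    "noise": [3, 1, 4, 1, 3, 1, 4, 1],        # unrelated to the label
}
selected = select_features(features, labels)
```

The informative column consistently beats its shuffled copies, while the noise column does not, so only the former survives the majority vote.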
The learning unit 234 learns a machine learning model (classification model) for classifying whether or not an input Tweet is a Tweet reporting a phishing attack, through supervised learning using the features selected by the feature selection unit 233. For example, the learning unit 234 trains the classification model through supervised learning on labeled training data related to phishing attacks (data in which each Tweet is given a correct label indicating whether or not it reports a phishing attack), using the features selected by the feature selection unit 233.
The classification unit 235 uses the classification model learned by the learning unit 234 to classify whether the input Tweet is a Tweet reporting a phishing attack. The output processing unit 236 outputs the result of the classification of the Tweet by the classification unit 235.
[Example of processing procedure]
Next, an example of the processing procedure executed by the classification device 20 will be described with reference to Fig. 8B. First, the data acquisition unit 231 of the classification device 20 acquires the Tweets, and their associated data, that were collected by the collection device 10 as likely reports of phishing attacks (S11: Acquisition of collected data). After that, the feature extraction unit 232 extracts features from the Tweets and data acquired by the data acquisition unit 231 (S12: Extraction of Tweet features).
After S12, the feature selection unit 233 selects, from the features extracted in S12, the features that are effective for classifying whether or not a Tweet reports a phishing attack (S13). The learning unit 234 then uses the features selected in S13 to train, on the labeled training data related to phishing attacks, a classification model for classifying whether or not an input Tweet is a report of a phishing attack (S14).
After S14, the classification unit 235 uses the classification model learned in S14 to classify whether or not the input Tweet is a Tweet reporting a phishing attack (S15). The output processing unit 236 then outputs the result of the classification in S15 (S16).
[Specific example of processing procedure]
Next, a specific example of the processing procedure executed by the classification device 20 will be described with reference to FIG. 9.
(5) Feature Engineering
First, the data acquisition unit 231 of the classification device 20 acquires the Tweets (Screened Tweets) and their data collected by the collection device 10. Then, the feature extraction unit 232 extracts features from the Tweets and data acquired by the data acquisition unit 231.
For example, as shown in FIG. 10, the feature extraction unit 232 generates a total of 27 features of six types: Account Feature (1) from the account of the Tweet, Content Feature (2) from information linked to the Tweet, URL Feature (3) from the extracted URL, OCR Feature (5) from character strings extracted by OCR, Visual Feature (6) from the appearance of the image, and Context Feature (4) from the context of the Tweet. Each feature is explained in detail below.
(5-1) Account Feature
In order to capture the characteristics of a Twitter user, the feature extraction unit 232 generates an Account Feature for each Tweet from information about the user's account (e.g., number of accounts followed, number of followers, number of Tweets, number of media items, number of lists, account registration date, etc.), as shown in FIG. 11.
(5-2) Content Feature
In order to capture the characteristics of content that frequently appears in Tweets reporting phishing attacks, the feature extraction unit 232 generates a Content Feature for each Tweet from information linked to the Tweet itself (e.g., the character string, mentioned users, hashtags, images, URLs or domain names, the application used to post the Tweet, the defang type, etc.), as shown in FIG. 12.
(5-3) URL Feature
In order to capture features related to the abuse of subdomains and of specific top-level domains that is characteristic of phishing URLs, the feature extraction unit 232 generates a URL Feature for each Tweet from the URLs (or domain names) extracted from both the character string and the images of the Tweet, as shown in FIG. 13. The URL Feature includes, for example, the URL character string, the domain name, the path, the numbers included in the URL, the top-level domain, etc.
(5-4) OCR Feature
In order to capture the characteristics of similar character strings in Tweets related to phishing attacks, the feature extraction unit 232 generates an OCR Feature for each Tweet from the character strings extracted by optical character recognition (OCR), as shown in FIG. 14. The OCR Feature includes, for example, character strings, words, symbols, numbers, URLs or domain names, etc.
(5-5) Visual Feature
In order to capture the commonality in the appearance of images contained in Tweets related to reports of phishing attacks, the feature extraction unit 232 generates a Visual Feature for each Tweet from the images associated with the Tweet.
The feature extraction unit 232 uses the EfficientNet model (see Reference 5), which has produced excellent results in image classification, to generate a fixed-dimensional vector for each image linked to the Tweet. The feature extraction unit 232 then compresses the dimensionality of the vector using Truncated SVD (see Reference 6), which converts a sparse vector into a dense one. The feature extraction unit 232 then treats the compressed vector as the Visual Feature of the images included in the Tweet.
- Reference 5: Tan, Mingxing and Le, Quoc, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” ICML 2019.
- Reference 6: “The truncated SVD as a method for regularization,” BIT Numerical Mathematics.
The feature extraction unit 232 converts the images associated with a Tweet into fixed-dimensional vectors using an EfficientNet model pre-trained on the large image corpus of ImageNet, as shown in FIG. 15, for example. The feature extraction unit 232 then uses Truncated SVD to compress the converted vectors so that they retain a cumulative contribution ratio of 99% on the training data.
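The "99% cumulative contribution ratio" criterion can be illustrated with a small helper that picks how many Truncated SVD components to keep; the singular-value spectrum below is hypothetical, and a real implementation would obtain the singular values from the SVD of the embedding matrix.

```python
def components_for_ratio(singular_values, target=0.99):
    """Smallest number of components whose cumulative contribution
    (explained-variance) ratio reaches `target`. The contribution of a
    component is proportional to its squared singular value."""
    energies = [s * s for s in singular_values]
    total = sum(energies)
    cumulative = 0.0
    for k, e in enumerate(energies, start=1):
        cumulative += e
        if cumulative / total >= target:
            return k
    return len(energies)

# Hypothetical singular values of an embedding matrix, in descending order.
sv = [100.0, 40.0, 10.0, 3.0, 1.0, 0.5]
k = components_for_ratio(sv)  # the first two components already explain >= 99%
```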
(5-6) Context Feature
In order to capture the commonality of context in Tweets related to reports of phishing attacks, the feature extraction unit 232 generates a Context Feature for each Tweet from the character strings in the Tweet.
The feature extraction unit 232 generates a fixed-dimensional vector from the character string in a Tweet using, for example, the BERT model, which has shown excellent results in text classification. The feature extraction unit 232 then compresses the dimensionality of the vector using Truncated SVD, and treats the compressed vector as the Context Feature of the Tweet.
The feature extraction unit 232 converts the character strings in a Tweet into fixed-dimensional vectors using a BERT model pre-trained on large corpora of English and Japanese Wikipedia text, as shown in FIG. 16. The feature extraction unit 232 then uses Truncated SVD to compress the converted vectors so that they retain a cumulative contribution ratio of 99% on the training data.
(6)Feature Selection
特徴量選定部233は、(5)において特徴量抽出部232により生成された特徴量群から、フィッシング攻撃の報告のTweetとその他のTweetとの分類に有効な(重要な)特徴量を選定する。 (6) Feature Selection
The feature selection unit 233 selects, from the group of features generated by the feature extraction unit 232 in (5), features that are effective (important) for distinguishing tweets reporting phishing attacks from other tweets.
In addition, FIG. 17 shows examples of features that were determined to be important for the classification as a result of feature selection.
Account Feature: 6 types in English (6 dimensions), 5 types in Japanese (5 dimensions)
Content Feature: 6 types in English (9 dimensions), 4 types in Japanese (7 dimensions)
URL Feature: 2 types in English (2 dimensions), 3 types in Japanese (3 dimensions)
OCR Feature: 3 types in English (3 dimensions), 3 types in Japanese (3 dimensions)
Visual Feature: 9 dimensions in English, 5 dimensions in Japanese
Context Feature: 58 dimensions in English, 33 dimensions in Japanese
Among the Context Features shown in FIG. 17, for App source (14), Twitter Web App, Twitter for iPhone (registered trademark), and Twitter for Android (registered trademark) were important in both languages, while PhishingPicker was important only for English. For Defanged type (15), example[.]com was important in both languages, while hxxp was important only for Japanese. Furthermore, among the URL Features shown in FIG. 17, for Top-level domain (20), .xyz was important only for Japanese.
Ultimately, it was confirmed that 87 feature dimensions for English and 56 for Japanese are important for distinguishing tweets reporting phishing attacks from other tweets.
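A simplified sketch of such importance-based feature selection follows. Claim 6 names Boruta-SHAP as the embodiment's method; here, as an illustrative stand-in, scikit-learn's SelectFromModel with Random Forest importances is applied to synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Toy labeled data standing in for the feature groups above: 20 candidate
# dimensions, of which only 5 are informative for the positive class.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)

# Simplified stand-in for Boruta-SHAP: keep the features whose Random Forest
# importance is at least the mean importance over all candidates.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=100, random_state=0),
    threshold="mean")
X_selected = selector.fit_transform(X, y)
print(X_selected.shape[1], "features kept out of", X.shape[1])
```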
(7) Offline Training
The learning unit 234 trains a classification model (Machine Learning Model) using the features (feature vectors) selected by the feature selection unit 233 in (6) and training data (Ground-Truth Dataset) in which each tweet is assigned a correct label indicating whether or not it reports a phishing attack.
Algorithms that can be used to train the classification model include, for example, Random Forest, Neural Network, Decision Tree, Support Vector Machine, Logistic Regression, Naive Bayes, Gradient Boosting, and Stochastic Gradient Descent. Evaluating these algorithms on the training data confirmed that Random Forest is preferable, for the following three reasons.
- Random Forest achieved better classification accuracy than any of the other algorithms.
- Random Forest operated at a stable speed in both the training and estimation (classification) phases.
- Random Forest's feature importance was distributed across all six types of features.
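The Offline Training step with Random Forest can be sketched as follows. The dataset here is synthetic, standing in for the Ground-Truth Dataset of labeled tweets; this is an illustrative sketch, not the patented implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Ground-Truth Dataset: rows are selected feature
# vectors, labels mark phishing-report tweets (1) versus other tweets (0).
X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Train the classification model (Machine Learning Model).
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
# feature_importances_ shows how importance spreads over the feature types,
# one of the stated reasons for choosing Random Forest.
print(len(model.feature_importances_), "importance values")
```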
(8) Online Classification
The classification unit 235 uses the Machine Learning Model (classification model) trained in (7) to classify the Tweets collected by the collection device 10 as Tweets related to reports of phishing attacks (Positive) or not (Negative). The output processing unit 236 then outputs the result of the classification.
The classification device 20 may also extract proper nouns that appear in tweets classified as reports of phishing attacks, and the collection device 10 may use those proper nouns when extracting Co-occurrence Keywords.
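The Online Classification step can be sketched as follows, assuming a model trained as in (7). The feature vectors here are synthetic stand-ins for those of newly collected tweets, so the predictions themselves are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a stand-in model as in the Offline Training step.
X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           random_state=1)
model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Feature vectors of newly collected tweets (here: fresh synthetic rows).
X_new, _ = make_classification(n_samples=5, n_features=30, n_informative=10,
                               random_state=2)
labels = model.predict(X_new)             # 1 = phishing report, 0 = other
scores = model.predict_proba(X_new)[:, 1]  # probability of the positive class
for label, score in zip(labels, scores):
    print("Positive" if label == 1 else "Negative", f"(score={score:.2f})")
```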
[Evaluation Results]
Next, the evaluation results of the system of this embodiment will be described. For example, it was confirmed that by using the features selected by the system, it is possible to classify tweets reporting phishing attacks with an accuracy of approximately 95% in both English and Japanese (see FIG. 18).
In addition, during the experimental period (August 1, 2021 to September 30, 2021), the system of this embodiment was able to extract 77,004 phishing attack reports (User Reports) and 85,027 phishing URLs (Phishing URLs), as shown in FIG. 19.
Furthermore, when phishing URLs collected by the system of this embodiment were compared with those collected by the existing data feed OpenPhish (see Reference 7) (see FIG. 20), it was found that of the 4,802 phishing URLs common to both, the system of this embodiment was able to collect 2,686 phishing URLs (55.9% of the total) more quickly.
- Reference 7: “OpenPhish - Phishing Intelligence”, https://openphish.com
Similarly, when phishing URLs collected by the system of this embodiment were compared with those collected by the existing data feed PhishTank (see Reference 8) (see FIG. 21), it was found that of the 5,323 phishing URLs common to both, the system of this embodiment was able to collect 3,183 phishing URLs (59.8% of the total) more quickly.
- Reference 8: “PhishTank | Join the fight against phishing”, https://www.phishtank.com/.
Furthermore, an investigation of the number of times users reported each phishing attack and the number of phishing URLs confirmed that phishing attacks reported only once accounted for 49.8% of all phishing URLs (see FIG. 22). In other words, reports of phishing attacks from a wide range of users are highly likely to contain unique phishing URLs. This confirms that collecting phishing attack reports from a wide range of users, as the system of this embodiment does, is extremely effective.
The effect of using not only fixed keywords (Security Keywords) but also dynamic keywords (Co-occurrence Keywords) to collect tweets reporting phishing attacks was also confirmed (see FIG. 23). Using both kinds of keywords extracted 23.3% more User Reports (tweets reporting phishing attacks) and 24.1% more phishing URLs than using Security Keywords alone.
From this, it was confirmed that collecting tweets using not only fixed keywords (Security Keywords) but also dynamic keywords (Co-occurrence Keywords), as in the system of this embodiment, is extremely effective in collecting information on phishing attacks.
[System Configuration, etc.]
In addition, each component of each part shown in the figure is a functional concept, and does not necessarily have to be physically configured as shown in the figure. In other words, the specific form of distribution and integration of each device is not limited to that shown in the figure, and all or a part of it can be functionally or physically distributed and integrated in any unit depending on various loads, usage conditions, etc. Furthermore, each processing function performed by each device can be realized in whole or in any part by a CPU and a program executed by the CPU, or can be realized as hardware using wired logic.
Furthermore, among the processes described in the above embodiments, all or part of the processes described as being performed automatically can be performed manually, or all or part of the processes described as being performed manually can be performed automatically using known methods. In addition, the information including the processing procedures, control procedures, specific names, various data and parameters shown in the above documents and drawings can be changed as desired unless otherwise specified.
[Program]
The above-described system can be implemented by installing a program, as packaged software or online software, on a desired computer. For example, by causing an information processing device to execute the above program, the information processing device can be made to function as the above-described system. The information processing devices referred to here include mobile communication terminals such as smartphones, mobile phones, and PHS (Personal Handyphone System) devices, as well as terminals such as PDAs (Personal Digital Assistants).
FIG. 24 is a diagram showing an example of a computer that executes a program. The computer 1000 has, for example, a memory 1010 and a CPU 1020. The computer 1000 also has a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. Each of these components is connected by a bus 1080.
The memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM (Random Access Memory) 1012. The ROM 1011 stores a boot program such as a BIOS (Basic Input Output System). The hard disk drive interface 1030 is connected to a hard disk drive 1090. The disk drive interface 1040 is connected to a disk drive 1100. A removable storage medium such as a magnetic disk or optical disk is inserted into the disk drive 1100. The serial port interface 1050 is connected to a mouse 1110 and a keyboard 1120, for example. The video adapter 1060 is connected to a display 1130, for example.
The hard disk drive 1090 stores, for example, an OS 1091, an application program 1092, a program module 1093, and program data 1094. That is, the programs that define each process executed by the above-mentioned system are implemented as program modules 1093 in which computer-executable code is written. The program modules 1093 are stored, for example, in the hard disk drive 1090. For example, a program module 1093 for executing processes similar to the functional configuration of the system is stored in the hard disk drive 1090. The hard disk drive 1090 may be replaced by an SSD (Solid State Drive).
The data used in the processing of the above-described embodiment is stored as program data 1094, for example, in memory 1010 or hard disk drive 1090. Then, the CPU 1020 reads the program module 1093 or program data 1094 stored in memory 1010 or hard disk drive 1090 into RAM 1012 as necessary and executes it.
The program module 1093 and program data 1094 are not limited to being stored in the hard disk drive 1090, but may be stored in, for example, a removable storage medium and read by the CPU 1020 via the disk drive 1100 or the like. Alternatively, the program module 1093 and program data 1094 may be stored in another computer connected via a network (such as a LAN (Local Area Network), WAN (Wide Area Network)). The program module 1093 and program data 1094 may then be read by the CPU 1020 from the other computer via the network interface 1070.
REFERENCE SIGNS LIST
10 Collection device
11, 21 Input/output unit
12, 22 Memory unit
13, 23 Control unit
20 Classification device
131 First collection unit
132 Keyword extraction unit
133 Second collection unit
134 Data collection unit
135 URL/domain name extraction unit
136 Selection unit
231 Data acquisition unit
232 Feature extraction unit
233 Feature selection unit
234 Learning unit
235 Classification unit
236 Output processing unit
Claims (8)
1. A classification device comprising:
a feature extraction unit that extracts features of each of the text and images included in a post about a security threat on an SNS (Social Networking Service);
a learning unit that trains a machine learning model for classifying whether or not an input post is a post related to a security threat, by performing learning using the features on training data in which each post is assigned a correct label indicating whether or not the post is related to a security threat;
a classification unit that classifies, using the trained machine learning model, whether or not an input post is a post related to a security threat; and
an output processing unit that outputs a result of the classification.

2. The classification device according to claim 1, wherein the features of an image included in the post include a feature of the image and a feature of a character string obtained by optical character recognition of the image.

3. The classification device according to claim 1, wherein the features include a feature of a URL or domain name extracted from the text or images of the post.

4. The classification device according to claim 1, wherein the features are at least one of: a feature of the account of the poster of the post, a feature of the content of the post, a feature of a URL or domain name extracted from the text or images of the post, a feature of a character string obtained by optical character recognition of an image included in the post, a feature of an image included in the post, and a feature of the context of the text included in the post.

5. The classification device according to claim 1, further comprising a feature selection unit that selects, from among the features extracted by the feature extraction unit, features effective for classifying whether or not a post is related to a security threat, wherein the learning unit trains the machine learning model using the selected features.

6. The classification device according to claim 5, wherein the feature selection unit selects the features effective for classifying whether or not a post is related to a security threat by using Boruta-SHAP.

7. A classification method executed by a classification device, the method comprising:
extracting, from a post about a security threat on an SNS (Social Networking Service), features of each of the text and images included in the post;
training a machine learning model for classifying whether or not an input post is a post related to a security threat, by performing learning using the features on training data in which each post is assigned a correct label indicating whether or not the post is related to a security threat;
classifying, using the trained machine learning model, whether or not an input post is a post related to a security threat; and
outputting a result of the classification.

8. A classification program for causing a computer to execute:
extracting, from a post about a security threat on an SNS (Social Networking Service), features of each of the text and images included in the post;
training a machine learning model for classifying whether or not an input post is a post related to a security threat, by performing learning using the features on training data in which each post is assigned a correct label indicating whether or not the post is related to a security threat;
classifying, using the trained machine learning model, whether or not an input post is a post related to a security threat; and
outputting a result of the classification.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2022/040260 WO2024089860A1 (en) | 2022-10-27 | 2022-10-27 | Classification device, classification method, and classification program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024089860A1 true WO2024089860A1 (en) | 2024-05-02 |
Family
ID=90830373
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024089860A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015072614A (en) * | 2013-10-03 | 2015-04-16 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | Method for detecting expression capable of becoming dangerous expression by relying on specific theme and electronic device and program for electronic device for detecting the same expression |
WO2020240834A1 (en) * | 2019-05-31 | 2020-12-03 | 楽天株式会社 | Illicit activity inference system, illicit activity inference method, and program |
JP2021193545A (en) * | 2020-06-08 | 2021-12-23 | 旭化成ホームズ株式会社 | Information linking server, information linking system, information linking method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22963507 Country of ref document: EP Kind code of ref document: A1 |