CN114186269A - Big data information safety protection method based on artificial intelligence and artificial intelligence system - Google Patents


Info

Publication number
CN114186269A
CN114186269A (application CN202111477385.7A)
Authority
CN
China
Prior art keywords
data
sent out
outgoing
sent
vector
Prior art date
Legal status
Withdrawn
Application number
CN202111477385.7A
Other languages
Chinese (zh)
Inventor
黄昌源
Current Assignee
Zibo Yunke Internet Information Technology Co ltd
Original Assignee
Zibo Yunke Internet Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zibo Yunke Internet Information Technology Co ltd filed Critical Zibo Yunke Internet Information Technology Co ltd
Priority to CN202111477385.7A priority Critical patent/CN114186269A/en
Publication of CN114186269A publication Critical patent/CN114186269A/en
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218: Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. a local or distributed file system or database
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/14: Network architectures or network communication protocols for detecting or protecting against malicious traffic
    • H04L 63/1408: Detecting or protecting against malicious traffic by monitoring network traffic
    • H04L 63/1416: Event detection, e.g. attack signature detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an artificial-intelligence-based big data information safety protection method and an artificial intelligence system. After abnormal disturbance request data is acquired while safe outgoing data is being distributed, the request data is analyzed to obtain an abnormal disturbance intention; when that intention is associated with the safe outgoing data, the distribution operation of the safe outgoing data is stopped, so that information safety can be effectively ensured.

Description

Big data information safety protection method based on artificial intelligence and artificial intelligence system
Technical Field
The application relates to the field of data transmission, and in particular to an artificial-intelligence-based big data information safety protection method and an artificial intelligence system.
Background
In the management of an internet service provider, to ensure the security of the sensitive data it sends out, detection is usually required during the data outgoing process: data may be sent out only after the outgoing conditions set by the user are met, thereby protecting the commercial confidentiality, account property, and the like of the provider's users. In the related art, abnormal disturbances can occur during the distribution operation, that is, after data has passed security detection but while it is being distributed, and such disturbances may pose security risks that cannot be detected in time.
Disclosure of Invention
The application provides a big data information safety protection method based on artificial intelligence and an artificial intelligence system.
In a first aspect, an embodiment of the present application provides a big data information security protection method based on artificial intelligence, which is applied to an artificial intelligence system, and includes:
acquiring safe outgoing data obtained by carrying out security detection on to-be-outgoing original data uploaded by a user terminal, and after acquiring abnormal disturbance request data in the process of distributing the safe outgoing data, analyzing the abnormal disturbance request data to obtain an abnormal disturbance intention;
stopping the distribution operation of the safely outgoing data when the abnormal disturbance intention is associated with the safely outgoing data.
In a possible implementation manner of the first aspect, the step of obtaining the safely outgoing data obtained by performing security detection on the to-be-outgoing original data uploaded by the user terminal includes:
acquiring to-be-sent-out original data uploaded by a user terminal, and carrying out security detection on the to-be-sent-out data stored in an outgoing data storage space; the outgoing data storage space stores a target data group to be outgoing, the target data group to be outgoing comprises at least one data to be outgoing, and different data to be outgoing are generated by different service servers respectively;
if first to-be-sent-out data in the at least one to-be-sent-out data passes security detection and the first to-be-sent-out data is to-be-sent-out data with the highest priority in the target to-be-sent-out data group, acquiring a security vector associated with the first to-be-sent-out data, generating second to-be-sent-out data according to the to-be-sent-out original data and the security vector, and adding the second to-be-sent-out data to the target to-be-sent-out data group to obtain an updated outgoing data storage space;
broadcasting the second data to be sent out in an outgoing network so as to enable other service servers in the outgoing network except the service server generating the second data to be sent out to cache the second data to be sent out to the storage space to which the second data to be sent out belongs;
updating the safety factor respectively associated with each piece of data to be sent out in the updated outgoing data storage space, and determining the data to be sent out with the updated safety factor larger than a preset safety factor threshold value as the data which can be sent out safely;
if the target data group to be sent out has data to be sent out which does not pass the security detection, and the first data to be sent out is the data to be sent out with the highest priority in the data to be sent out which passes the security detection in the target data group to be sent out, acquiring a security vector corresponding to the first data to be sent out, and generating second data to be sent out according to the original data to be sent out and the security vector;
and all the data to be sent out which pass the security detection in the target data group to be sent out and the second data to be sent out form a new data group to be sent out, and the new data group to be sent out and the target data group to be sent out are determined as an updated data storage space to be sent out.
In a possible implementation manner of the first aspect, the generating second outgoing data according to the original data to be outgoing and the security vector includes:
acquiring a security identifier carried by the original data to be sent out, and acquiring a security identifier generation algorithm corresponding to the user terminal;
processing the safety identification based on the safety identification generating algorithm to obtain first safety characteristic information corresponding to the safety identification;
performing an MD5 operation on the to-be-sent-out original data using the MD5 algorithm to obtain second safety feature information corresponding to the to-be-sent-out original data;
if the first safety characteristic information is the same as the second safety characteristic information, the to-be-sent-out original data passes verification, and a data main body is generated based on the to-be-sent-out original data passing verification;
and generating a data label according to the safety vector, and generating second data to be sent out according to the data label and the data main body.
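As an illustrative sketch only, the verification and packaging flow above might look as follows; the identifier-generation algorithm, field names, and dictionary layout are assumptions, since the patent leaves them unspecified (here the terminal's algorithm is modeled as the identity function, so the carried identifier is simply the expected MD5 digest of the raw data):

```python
import hashlib

def security_id_algorithm(security_id: str) -> str:
    # Assumption: the terminal's identifier-generation algorithm is modeled
    # as the identity function, so the carried identifier is already the
    # expected digest of the raw data.
    return security_id

def build_outgoing_data(raw_data: bytes, security_id: str, security_vector: str):
    """Verify the raw data, then assemble second outgoing data (tag + body)."""
    first_feature = security_id_algorithm(security_id)   # from the identifier
    second_feature = hashlib.md5(raw_data).hexdigest()   # from the raw data
    if first_feature != second_feature:
        return None                                      # verification failed
    # data tag generated from the security vector, data body from the
    # verified to-be-sent-out original data
    return {"tag": {"security_vector": security_vector}, "body": raw_data}
```

A mismatch between the two feature strings simply yields no outgoing data, matching the patent's requirement that only verified raw data produces a data body.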
In a possible implementation manner of the first aspect, the updating the safety factor corresponding to each piece of data to be sent out in the updated outgoing data storage space, and determining the data to be sent out, whose updated safety factor is greater than the preset safety factor threshold, as the data to be sent out safely includes:
acquiring the number of data of to-be-sent data contained in the updated outgoing data storage space, determining an artificial intelligence system corresponding to each to-be-sent data in the updated outgoing data storage space, and acquiring a server identity weight matched with the artificial intelligence system;
updating the safety factor corresponding to each data to be sent out in the updated outgoing data storage space respectively based on the number of the data to be sent out contained in the updated outgoing data storage space and the server identity weight;
determining the to-be-sent-out data whose updated safety factor is greater than the preset safety factor threshold as safe outgoing data, and adding the to-be-sent-out data in the safe outgoing data to the target outgoing data group; the target outgoing data group is used for storing all the to-be-sent-out data that pass verification.
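The update step above names its inputs (the number of data in the updated storage space and the server identity weight) but not a formula; a minimal sketch under an assumed scaling rule could be:

```python
def update_security_factors(storage, server_weights, threshold=0.5):
    """storage maps data_id -> {"server": server_id, "factor": float}.
    Hypothetical update rule: the new factor is the generating server's
    identity weight scaled by how full the storage space is. The patent
    names only the inputs, so this formula is an assumption."""
    n = len(storage)
    safe = []
    for data_id, item in storage.items():
        item["factor"] = server_weights[item["server"]] * n / (n + 1)
        if item["factor"] > threshold:
            safe.append(data_id)   # exceeds the preset threshold: safe to send
    return safe
```

Only the identifiers returned in `safe` would then be added to the target outgoing data group.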
In a possible implementation manner of the first aspect, adding the to-be-sent-out data in the safe outgoing data to the target outgoing data group includes:
acquiring the current priority corresponding to the data to be sent out which can safely send out the data;
if the current priority and the priority corresponding to the target to-be-sent-out data with the highest priority in the target outgoing data group are in a preset priority range, adding the to-be-sent-out data in the safe outgoing data to the target outgoing data group;
and if the current priority and the priority corresponding to the target to-be-sent-out data with the highest priority in the target outgoing data group are in a non-preset priority range, updating the priority of the to-be-sent-out data in the safe outgoing data, and adding the updated to-be-sent-out data in the safe outgoing data to the target outgoing data group.
For example, in one possible implementation of the first aspect, the outgoing data storage space stores a plurality of outgoing data sets, the plurality of outgoing data sets including the target outgoing data set; the security detection of the data to be sent out stored in the data storage space to be sent out comprises the following steps:
acquiring the multiple data groups to be sent out from the data storage space, and acquiring initial data numbers corresponding to the multiple data groups to be sent out respectively;
and sequencing the plurality of to-be-sent-out data groups based on their initial data numbers, and sequentially carrying out security detection on the to-be-sent-out data contained in each group according to the resulting order.
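A possible reading of this ordering step, assuming the "initial data number" is each group's current item count and that detection proceeds in descending order of that count (both assumptions):

```python
def detect_groups(groups, detect):
    """groups maps group_id -> [data, ...]; detect(data) -> bool.
    Groups are ordered by item count (descending, an assumption), then the
    data in each group are security-detected in turn."""
    ordered = sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)
    results = {}
    for gid, items in ordered:
        results[gid] = [detect(d) for d in items]
    return results
```

Any predicate can stand in for the unspecified security detection, e.g. a parity check in the test below.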
For example, in a possible implementation manner of the first aspect, if a first data to be sent out in the at least one data to be sent out passes security detection and the first data to be sent out is a data to be sent out with a highest priority in the target data group to be sent out, a security vector corresponding to the first data to be sent out is obtained, a second data to be sent out is generated according to the original data to be sent out and the security vector, and the second data to be sent out is added to the target data group to be sent out, so as to obtain an updated data storage space to be sent out, including:
if a target data group to be sent out exists in the multiple data groups to be sent out, wherein the data to be sent out passes the security detection, the data to be sent out with the highest priority in the target data group to be sent out is taken as the first data to be sent out, and a security vector corresponding to the first data to be sent out is obtained;
generating second data to be sent out according to the original data to be sent out and the safety vector, adding the second data to be sent out to the target data group to be sent out, and determining the updated target data group to be sent out and the rest data groups to be sent out as an updated data storage space to be sent out; and the residual data group to be sent out is a data group to be sent out except the target data group to be sent out in the data storage space to be sent out.
For example, in one possible implementation manner of the first aspect, the method further includes:
if the data to be sent out which does not pass the security detection exists in the data groups to be sent out, respectively counting the number of target data of the data to be sent out which passes the security detection in each data group to be sent out, and determining the data group to be sent out with the highest number of target data as the target data group to be sent out;
acquiring data to be sent out with the highest priority from data to be sent out which passes security detection and is contained in the target data group to be sent out, wherein the data to be sent out is used as the first data to be sent out, and acquiring a security vector corresponding to the first data to be sent out;
generating second data to be sent out according to the original data to be sent out and the safety vector, and forming a new data group to be sent out by all data to be sent out which pass safety detection in the target data group to be sent out and the second data to be sent out;
and determining the new data group to be sent out and the plurality of data groups to be sent out as the updated outgoing data storage space.
For example, in a possible implementation manner of the first aspect, the updating the security factor corresponding to each piece of data to be sent out in the updated outgoing data storage space includes:
acquiring the number of outgoing data corresponding to the updated target outgoing data group and the remaining outgoing data groups from the updated outgoing data storage space, and respectively counting the frequency of each outgoing data in the updated target outgoing data group and the remaining outgoing data groups;
and based on the number of the outgoing data and the occurrence frequency, re-counting the safety factors corresponding to each data to be outgoing in the updated outgoing data storage space.
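As a sketch, the re-counted security factor is taken here to be each item's relative occurrence frequency (an assumed formula; the patent names only the outgoing-data count and occurrence frequency as inputs):

```python
from collections import Counter

def recount_security_factors(all_outgoing):
    """all_outgoing: a flat list of outgoing-data identifiers across the
    updated target group and the remaining groups. Assumption: each
    identifier's factor is its occurrence frequency divided by the total
    number of outgoing data."""
    total = len(all_outgoing)
    freq = Counter(all_outgoing)
    return {d: freq[d] / total for d in freq}
```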
Compared with the prior art, after the abnormal disturbance request data is obtained in the process of distributing the safe outgoing data, the abnormal disturbance intention is obtained by analyzing the abnormal disturbance request data, and when the abnormal disturbance intention is associated with the safe outgoing data, the distribution operation of the safe outgoing data is stopped, so that the information safety can be effectively ensured.
Drawings
Fig. 1 is a schematic flowchart illustrating steps of a big data information security protection method based on artificial intelligence according to an embodiment of the present application.
Detailed Description
Step S110, acquiring safe outgoing data obtained by carrying out safety detection on to-be-outgoing original data uploaded by a user terminal, and after acquiring abnormal disturbance request data in the process of distributing the safe outgoing data, analyzing the abnormal disturbance request data to obtain an abnormal disturbance intention.
In this embodiment, after security detection is performed on the to-be-sent-out original data uploaded by a user terminal to obtain safe outgoing data, the corresponding distribution operation may be performed. However, the inventor found that abnormal disturbances occurring during the distribution operation, that is, after the data has passed security detection, pose a safety risk that may not be detected in time. Therefore, after abnormal disturbance request data is obtained while the safe outgoing data is being distributed, the request data needs to be analyzed to obtain an abnormal disturbance intention. The abnormal disturbance intention may include a plurality of related intention components, and each intention component may be used to characterize a targeted request field.
And step S120, when the abnormal disturbance intention is associated with the safely outgoing data, stopping distributing the safely outgoing data.
In this embodiment, if the abnormal disturbance intention is associated with the safe outgoing data, the request field identified by each intention component is associated with a data header field in the safe outgoing data; at this time, to ensure information safety, the distribution operation on the safe outgoing data must be stopped in time.
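The association check described here can be sketched as a simple field-overlap test; the component structure and field names are assumptions:

```python
def should_stop_distribution(intent_components, data_header_fields):
    """intent_components: list of {"request_field": str} parsed from the
    abnormal disturbance request data. Distribution stops as soon as any
    component's targeted request field matches a header field of the safe
    outgoing data."""
    header_set = set(data_header_fields)
    return any(c["request_field"] in header_set for c in intent_components)
```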
Therefore, based on the above steps, in the embodiment, after the abnormal disturbance request data is obtained in the process of distributing the safely outgoing data, the abnormal disturbance intention is obtained by analyzing the abnormal disturbance request data, and when the abnormal disturbance intention is associated with the safely outgoing data, the distribution operation of the safely outgoing data is stopped, so that the information safety can be effectively ensured.
In step S110, in the process of analyzing the abnormal disturbance request data to obtain an abnormal disturbance intention, for example, the abnormal disturbance request data may be configured in an abnormal disturbance intention prediction network to generate an abnormal disturbance intention of the abnormal disturbance request data;
the training steps of the abnormal disturbance intention prediction network are as follows, that is, the embodiment of the application provides an abnormal disturbance intention prediction method based on artificial intelligence, and the method comprises the following steps:
step W10, obtaining a teacher AI training unit and a student AI training unit. Each unit has a data transfer node, at least three intention vector extraction nodes, and a prediction node, and the intention target domain range of the intention vector extraction nodes in the teacher AI training unit is larger than that in the student AI training unit. One or more of the at least three intention vector extraction nodes in the teacher AI training unit are configured as teacher extraction nodes, and one or more of the at least three intention vector extraction nodes in the student AI training unit are configured as student extraction nodes. The teacher AI training unit further has a vector compression node connected to the teacher extraction node, and the student AI training unit further has a vector expansion node connected to the student extraction node;
step W20, acquiring reference abnormal disturbance request data, configuring the reference abnormal disturbance request data into a teacher AI training unit, generating teacher intention vectors in teacher extraction nodes, configuring the reference abnormal disturbance request data into a student AI training unit, and generating student intention vectors in student extraction nodes;
step W30, a teacher intention vector is configured in a vector compression node to obtain a first intention vector, a student intention vector is configured in a vector expansion node to obtain a second intention vector, wherein the first intention vector is consistent with the intention target domain range in the second intention vector;
step W40, determining a first cost coefficient based on the first intention vector and the second intention vector, optimizing extraction weight information in the student extraction node based on the first cost coefficient, and obtaining a student AI training unit after teaching;
and step W50, configuring the reference abnormal disturbance request data into a student AI training unit after teaching to generate a target learning abnormal intention, determining a second cost coefficient based on the target learning abnormal intention and the actual abnormal intention of the reference abnormal disturbance request data, optimizing extraction weight information in the student AI training unit according to the second cost coefficient, and obtaining an abnormal disturbance intention prediction network to perform abnormal disturbance intention prediction based on the abnormal disturbance intention prediction network.
Based on the above steps, the teacher AI training unit provided in this embodiment has a relatively large intention target domain range in its intention vector extraction nodes, giving it stronger feature learning capability. The reference abnormal disturbance request data is input into the teacher AI training unit and the student AI training unit respectively; the teacher extraction node generates the teacher intention vector and the student extraction node generates the student intention vector. The teacher intention vector is compressed and the student intention vector is expanded so that the resulting first and second intention vectors have consistent intention target domain ranges, which makes the first cost coefficient computable. The extraction weight information in the student extraction node is then optimized according to the first cost coefficient, so that the student extraction node learns by reference from the teacher extraction node, improving the learning efficiency and performance of the student AI training unit. In addition, because the intention target domain range of the student AI training unit's intention vector extraction nodes is relatively small, using the trained student AI training unit as the abnormal disturbance intention prediction network improves prediction efficiency.
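The two cost coefficients of steps W30 through W50 can be sketched numerically; the compression (truncation), expansion (zero-padding), and MSE losses below are all assumptions standing in for the unspecified operations:

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def compress(vec, keep):
    # vector compression node: keep the first `keep` dimensions (assumption)
    return vec[:keep]

def expand(vec, target_len):
    # vector expansion node: zero-pad the student vector (assumption)
    return vec + [0.0] * (target_len - len(vec))

def first_cost(teacher_vec, student_vec, shared_dim):
    # steps W30-W40: align both vectors to the same target-domain range,
    # then measure their distance as the first cost coefficient
    return mse(compress(teacher_vec, shared_dim), expand(student_vec, shared_dim))

def second_cost(predicted_intent, actual_intent):
    # step W50: task loss between the taught student's prediction and the
    # actual abnormal intention of the reference request data
    return mse(predicted_intent, actual_intent)
```

In a full implementation both costs would drive gradient updates of the student's extraction weight information; only the loss computation is sketched here.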
Configuring the teacher intention vector into the vector compression node to obtain the first intention vector comprises:
step W301, configuring the teacher intention vector into a vector compression node, and obtaining the support degree of each intention extraction target domain related to the teacher intention vector;
step W302, sorting the intention extraction target domains in descending order of their associated support degrees to obtain an intention vector list;
step W303, selecting at least N final intention extraction target domains from the intention vector list to form a dimension reduction intention vector;
step W304, selecting a part of intention extraction target domains from the dimension reduction intention vectors based on a set strategy to form a noise intention vector;
step W305, separating the noise intention vector out of the teacher intention vector to obtain the first intention vector.
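Steps W301 through W305 might be sketched as follows; the "set strategy" for choosing noise domains (here, the lowest-support quarter of the kept domains) and the final separation of the noise vector from the teacher vector are interpretive assumptions:

```python
def compress_teacher_vector(teacher_vec, supports, n_keep, noise_fraction=0.25):
    """teacher_vec maps each intention extraction target domain to its value;
    supports maps each domain to its support degree (step W301)."""
    # W302: sort the domains by support degree, descending
    ranked = sorted(teacher_vec, key=lambda d: supports[d], reverse=True)
    # W303: keep the top-N domains as the dimension-reduced intention vector
    reduced = ranked[:n_keep]
    # W304: the lowest-support part of the reduced vector forms the noise
    # intention vector (the "set strategy" is an assumption)
    n_noise = max(1, int(len(reduced) * noise_fraction))
    noise = set(reduced[-n_noise:])
    # W305: remove the noise domains to obtain the first intention vector
    return {d: teacher_vec[d] for d in reduced if d not in noise}
```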
For example, while the extraction weight information in the student extraction node is optimized, the extraction weight information in the vector expansion node may also be optimized. That is, after determining the first cost coefficient based on the first intention vector and the second intention vector, the method may further comprise: optimizing the extraction weight information in the vector expansion node based on the first cost coefficient, i.e., updating the weight information in the vector expansion node in the direction in which the first cost coefficient decreases.
In step W40, a first cost factor is determined based on the first intention vector and the second intention vector, and the student AI training unit is optimized according to the first cost factor, thereby obtaining a student AI training unit after teaching.
In an embodiment that may be based on an independent concept, an embodiment of the present application further provides a security protection configuration method based on an abnormal disturbance source, including the following steps.
Step R110, obtaining an abnormal disturbance source associated with an abnormal disturbance intention associated with each piece of safely outgoing data;
and step R120, based on the abnormal disturbance source, performing security protection configuration on the service distribution channel associated with the safely outgoing data.
In another embodiment that may be based on an independent concept, an embodiment of the present application further provides a security protection configuration method based on an abnormal disturbance source, including the following steps.
And step Q110, acquiring a historical attack behavior vector associated with the abnormal disturbance source, wherein the historical attack behavior vector is obtained by carrying out attack event mining on the attack event of the abnormal disturbance source.
And step Q120, based on the target type clustering strategy of the abnormal disturbance source, adding the historical attack behavior vector associated with the abnormal disturbance source matched with the same clustering label to an attack behavior vector cluster.
Step Q130, a first preset security protection configuration instruction set of the clustered label and a second preset security protection configuration instruction set of each historical attack behavior vector in the attack behavior vector cluster associated with the clustered label are obtained.
And step Q140, carrying out linkage protection instruction set mining on a first preset safety protection configuration instruction set and a second preset safety protection configuration instruction set to obtain a linkage protection instruction set associated with the abnormal disturbance source under the clustering label, and carrying out safety protection configuration on a service distribution channel associated with the safety outgoing data based on the linkage protection instruction set.
Based on the above steps, the historical attack behavior vectors associated with the abnormal disturbance source are obtained from the attack event mining results for its attack events. Based on the target type clustering strategy for the abnormal disturbance source, the historical attack behavior vectors associated with abnormal disturbance sources matching the same clustering label are added to an attack behavior vector cluster. By obtaining the first preset security protection configuration instruction set of the clustering label and the second preset security protection configuration instruction set of each historical attack behavior vector in the cluster associated with that label, linkage protection instruction set mining is performed on the two instruction sets to obtain the linkage protection instruction set associated with the abnormal disturbance source under the clustering label, whereby a security protection configuration can be made based on the abnormal disturbance source under that label.
In an exemplary design idea, the mining of a linkage protection instruction set for a first preset safety protection configuration instruction set and a second preset safety protection configuration instruction set, and the obtaining of the linkage protection instruction set associated with the abnormal disturbance source under the clustering label includes:
and obtaining the historical attack behavior vector of which the second preset security protection configuration instruction set matches preset matching requirements from the attack behavior vector cluster associated with the clustering label, and adding the historical attack behavior vector to a candidate attack behavior vector cluster.
And determining whether the abnormal disturbance source has linkage protection attributes or not based on a first preset safety protection configuration instruction set of the clustering label and a second preset safety protection configuration instruction set of the historical attack behavior vectors in the candidate attack behavior vector cluster.
And if the linkage protection attribute exists, carrying out linkage protection instruction mining on a second preset safety protection configuration instruction set of the historical attack behavior vectors in the candidate attack behavior vector group to obtain a linkage protection instruction set in the abnormal disturbance source.
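A hedged sketch of this linkage mining: the "preset matching requirement" is modeled as a non-empty overlap with the cluster label's first instruction set, and the mined linkage set as the instructions shared by at least `min_support` candidate vectors; both choices are assumptions:

```python
def mine_linkage_instructions(first_set, behavior_vectors, min_support=2):
    """first_set: the cluster label's first preset instruction set.
    behavior_vectors: list of {"instructions": set} historical attack
    behavior vectors from the cluster."""
    # keep vectors whose second preset instruction set meets the (assumed)
    # matching requirement: non-empty overlap with the first preset set
    candidates = [v for v in behavior_vectors if first_set & v["instructions"]]
    if not candidates:
        return set()  # no linkage protection attribute for this source
    # mine instructions shared by enough candidates as the linkage set
    counts = {}
    for v in candidates:
        for ins in v["instructions"]:
            counts[ins] = counts.get(ins, 0) + 1
    return {ins for ins, c in counts.items() if c >= min_support}
```

The returned set would then drive the security protection configuration of the service distribution channel.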
In an exemplary design idea, for step S110, an embodiment of the present application provides a scenario for the artificial-intelligence-based big data information safety protection method. In this scenario, every artificial intelligence system in the outgoing network holds the same to-be-outgoing data group; the group comprises a plurality of to-be-outgoing data, and each artificial intelligence system in the outgoing network can read all data recorded in the group. Each artificial intelligence system can also generate new to-be-outgoing data from newly generated data, and when the new data passes safety verification in all artificial intelligence systems, it can be submitted to the to-be-outgoing data group.

The verification process for new to-be-outgoing data is explained below. Suppose the artificial intelligence systems in the outgoing network hold a to-be-outgoing data group of 10 to-be-outgoing data; every item except the initial one contains the security vector of the previous item, forming a chained data structure among the to-be-outgoing data. For example, after the server 10a receives new data 1 uploaded by a client, it may package data 1 into a new to-be-outgoing data 11 composed of a data tag and a data body: the data tag may include the security vector corresponding to the highest-priority target to-be-outgoing data in the group (i.e., the 10th item), and the data body mainly stores the received data 1.
After the server 10a generates the data to be sent out 11, it may broadcast the data to be sent out 11 to the remaining artificial intelligence systems in the outgoing network, and store the data to be sent out 11 in the local cache area corresponding to the server 10a.
Since the server 10a broadcasts the data to be sent out 11 in the outgoing network, the data to be sent out 11 is stored in the local cache areas corresponding to the server 10b and the server 10c. After the server 10b receives new data 2, and before it packages data 2 into the data to be sent out 12, it needs to verify the data to be sent out 11 stored in its local cache area, for example by detecting whether the data 1 stored in the data to be sent out 11 is valid data, whether the security vector stored in the data tag is legal, and so on. After the data to be sent out 11 passes verification, the server 10b may obtain the security vector corresponding to the data to be sent out 11 and generate the data to be sent out 12 based on that security vector and data 2; similarly, the server 10b may broadcast the data to be sent out 12 to the other artificial intelligence systems in the outgoing network and store the data to be sent out 12 in its local cache area. In other words, the data tag of the data to be sent out 12 includes the security vector corresponding to the data to be sent out 11, which indicates that the verification result of the server 10b on the data to be sent out 11 is: verification passed, i.e., the data to be sent out 11 is approved by the server 10b. At this point the data to be sent out 11 has been approved by two artificial intelligence systems in the outgoing network, the server 10a and the server 10b (the data to be sent out 11 was generated by the server 10a, so the server 10a necessarily regards the data to be sent out 11 as legitimate).
After the server 10b broadcasts the data to be sent out 12 across the whole network, the data to be sent out 11 generated by the server 10a and the data to be sent out 12 generated by the server 10b are already stored in the local cache area 20a corresponding to the server 10c. If user 1 with account bb wants to obtain sensitive outgoing data from user 2 with account aa, user 1 can establish an outgoing service with user 2. User 2 may create the sensitive raw data to be sent out (i.e., data 3) through the terminal device 10e and upload data 3 to the outgoing network, that is, send data 3 to the server 10c. Data 3 may include the sensitive data, the corresponding initiator (i.e., the account information of user 2, who created data 3 on the terminal device 10e), the recipient (i.e., the account information of user 1, who obtains the sensitive outgoing data), and the transmission mode; data 3 instructs the outgoing network to send the corresponding sensitive data from the account of user 2 to the account of user 1. Before the server 10c packages the data 3 uploaded by the terminal device 10e into the data to be sent out 13, it must also verify the data to be sent out 11 and the data to be sent out 12 stored in the local cache area 20a. For example, if data 1 and data 2 are sensitive raw data to be sent out, the server 10c may detect whether the account information of the initiator and the recipient in data 1 and data 2 is correct, whether the transmission mode has expired (expiry may mean that the transmission mode has already completed the sending-out process, or that it has exceeded the time limit for sending out, etc.), and whether the user level of the initiator can support the current outgoing service.
When the server 10c detects that the account information in the data to be sent out is correct, the transmission mode has not expired, and the user level of the initiator reaches the standard, the verification result for the data to be sent out is: verification passed.
For the outgoing data 11 and the outgoing data 12, the verification result of the server 10c may include the following cases: when both the data to be sent out 11 and the data to be sent out 12 pass the verification, the server 10c may generate the data to be sent out 13 based on the security vector corresponding to the data to be sent out 12 and the received data 3. In other words, when the data tag of the data to be sent out 13 includes the security vector of the data to be sent out 12, it indicates that the server 10c approves the data to be sent out 12 and the data to be sent out 11 (because the data tag of the data to be sent out 12 includes the security vector corresponding to the data to be sent out 11). At this time, the data to be sent out 11 has passed the approval of 3 artificial intelligence systems (i.e. the server 10a, the server 10b and the server 10 c), and the safety factor is 3; the data to be sent out 12 passes the approval of 2 artificial intelligence systems (namely the server 10b and the server 10 c), and the safety factor is 2; the outgoing data 13 passes the approval of 1 artificial intelligence system (i.e., the server 10 c), with a security factor of 1.
For example, when the outgoing data 11 is authenticated and the outgoing data 12 is not authenticated, the server 10c may generate the outgoing data 13 based on the security vector corresponding to the outgoing data 11 and the received data 3. In other words, when the data tag of the outgoing data 13 includes the security vector of the outgoing data 11, it indicates that only the outgoing data 11 is approved by the server 10 c. At this time, the data to be sent out 11 has passed the approval of 3 artificial intelligence systems (i.e. the server 10a, the server 10b and the server 10 c), and the safety factor is 3; the data to be sent out 12 passes the approval of 1 artificial intelligence system (namely the server 10 b), and the safety factor is 1; the outgoing data 13 passes the approval of 1 artificial intelligence system (i.e., the server 10 c), with a security factor of 1.
For example, when both the outgoing data 11 and the outgoing data 12 are not authenticated, the server 10c may generate the outgoing data 13 based on the security vector corresponding to the outgoing data 10 and the received data 3. At this time, the data to be sent out 11 has passed the approval of 2 artificial intelligence systems (i.e. the server 10a and the server 10 b), and the safety factor is 2; the data to be sent out 12 passes the approval of 1 artificial intelligence system (namely the server 10 b), and the safety factor is 1; the outgoing data 13 passes the approval of 1 artificial intelligence system (i.e., the server 10 c), with a security factor of 1.
After the server 10c generates the data to be sent out 13, it may likewise broadcast the data to be sent out 13 to the remaining artificial intelligence systems in the outgoing network and store the data to be sent out 13 in its local cache area. The data to be sent out 11, the data to be sent out 12, and the data to be sent out 13 are all provisional and have not yet been formally sent out. By analogy, the safety factor of each newly generated data to be sent out can be obtained; when the safety factor of a datum reaches the preset safety factor threshold, that datum can be formally sent out. For example, if there are 5 artificial intelligence systems in the outgoing network and the preset safety factor threshold is 51%, then once the data to be sent out 11 has been approved by 3 artificial intelligence systems, the data to be sent out 11 can be formally sent out, and the formally sent-out data to be sent out 11 can be deleted from the local cache area of each artificial intelligence system. In this verification process, each artificial intelligence system only needs to broadcast the data to be sent out it generates to the other artificial intelligence systems; broadcasting of the verification results themselves is avoided, which improves verification efficiency.
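The safety-factor bookkeeping for a single chain can be sketched as follows. This is a simplified model under the assumptions of the example above: each record in the chain is produced by a distinct artificial intelligence system, each later record counts as one approval of all its predecessors, and `ready_to_send` is a hypothetical helper applying the 51% threshold.

```python
def safety_factors(chain):
    # Each later record in the chain was produced by a different system
    # and implicitly approves all of its predecessors, so a record's
    # safety factor equals the number of records from it to the chain tip.
    n = len(chain)
    return {rec: n - i for i, rec in enumerate(chain)}

def ready_to_send(chain, total_systems, ratio=0.51):
    # A record may be formally sent out once its safety factor exceeds
    # the preset threshold (here: 51% of the systems in the network).
    threshold = total_systems * ratio
    factors = safety_factors(chain)
    return [rec for rec in chain if factors[rec] > threshold]

chain = ["out11", "out12", "out13"]
# 5 systems, 51% threshold: a record needs a factor above 2.55, i.e. >= 3.
assert safety_factors(chain) == {"out11": 3, "out12": 2, "out13": 1}
assert ready_to_send(chain, 5) == ["out11"]
```

With 5 systems and a 51% threshold, only the data to be sent out 11 (factor 3) qualifies, matching the worked example.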
Please refer to fig. 1, which is a flowchart illustrating a big data information security protection method based on artificial intelligence according to an embodiment of the present application. As shown in fig. 1, the method may include the steps of:
step S101, acquiring to-be-sent-out original data uploaded by a user terminal, and carrying out security detection on to-be-sent-out data stored in an outgoing data storage space; the outgoing data storage space stores a target data group to be outgoing, the target data group to be outgoing comprises at least one data to be outgoing, and different data to be outgoing are generated by different artificial intelligence systems respectively;
for example, after the user terminal (e.g., the terminal device 10e in the foregoing embodiment) uploads the raw data to be sent out to the outgoing network, an artificial intelligence system in the outgoing network (e.g., the server 10c in the foregoing embodiment) may obtain the raw data to be sent out uploaded by the user terminal. Before packaging the raw data into a datum to be sent out, the artificial intelligence system may perform security detection on all the data to be sent out (e.g., the data to be sent out 11 and the data to be sent out 12 in the foregoing embodiment) stored in the outgoing data storage space (e.g., the local cache area 20a in the foregoing embodiment). The outgoing data storage space stores the data to be sent out that have not yet been formally sent out in the outgoing network, that is, data that have not yet passed verification. The outgoing data storage space may include a target data group to be sent out, which may include at least one data to be sent out; within the target group, different data to be sent out are generated by different artificial intelligence systems, the data tag of each later datum includes the security vector corresponding to the preceding datum, and the data tag of the first datum includes the security vector corresponding to the highest-priority target data to be sent out in the target outgoing data group (the data group formed by the data to be sent out that have already passed verification).
It should be understood that the priority of the first data to be sent out in the outgoing data storage space is 1 greater than the priority corresponding to the highest priority target data to be sent out in the target outgoing data group, and the priority of the second data to be sent out is 1 greater than the priority of the first data to be sent out. In other words, in the present embodiment, the priority of the data to be sent out in the outgoing data storage space is based on the highest priority of the target outgoing data group, and the priority of the data to be sent out is associated with the generation order of the corresponding data to be sent out. As in the corresponding embodiment, since the data tag of the data to be sent out 12 includes the security vector corresponding to the data to be sent out 11, the target data group to be sent out included in the local cache 20a is: the data to be sent out 11-data to be sent out 12, when the highest priority in the target data group to be sent out is a, the priority corresponding to the data to be sent out 11 is a +1, and the priority corresponding to the data to be sent out 12 is a + 2.
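The priority numbering just described can be expressed directly; this sketch assumes a hypothetical helper `assign_priorities` that continues numbering from the highest priority of the confirmed target outgoing data group.

```python
def assign_priorities(pending, base_priority):
    # Priorities continue from the highest priority in the confirmed
    # target outgoing data group: the first pending record gets base+1,
    # the next base+2, and so on, following generation order.
    return {rec: base_priority + i + 1 for i, rec in enumerate(pending)}

# With highest confirmed priority a = 10, out11 gets a+1 and out12 gets a+2.
assert assign_priorities(["out11", "out12"], base_priority=10) == {
    "out11": 11,
    "out12": 12,
}
```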
The artificial intelligence systems in the outgoing network can verify the data to be sent out stored in the outgoing data storage space based on a verification mechanism, where the verification mechanism includes but is not limited to: Proof of Work (PoW), Proof of Stake (PoS), mixed Proof of Work and Proof of Stake (PoW + PoS), Delegated Proof of Stake (DPoS), Practical Byzantine Fault Tolerance (PBFT), and the Ripple Consensus Protocol (RCP). It should be noted that the security detection refers to the verification process performed by the current artificial intelligence system on the data to be sent out stored in the outgoing data storage space.
It is to be understood that the execution order of the two method steps, obtaining the raw data to be sent out and performing security detection on the data to be sent out, is not limited by the order in which they are described; for example, the two steps may be executed in either order.
It should be understood that after the user terminal uploads the raw data to be sent out to the outgoing network, the outgoing network may internally determine the artificial intelligence system that packages the raw data into new data to be sent out, according to a preset ordering of the artificial intelligence systems and the artificial intelligence system that generated the previous data to be sent out; the previous data to be sent out and the new data to be sent out are both data to be sent out. For example, the outgoing network includes 5 artificial intelligence systems in total, ordered as: artificial intelligence system A, artificial intelligence system B, artificial intelligence system C, artificial intelligence system D, artificial intelligence system E. After the outgoing network receives the raw data to be sent out uploaded by the user terminal, the artificial intelligence system that generates the new data to be sent out (the new data to be sent out being the datum that stores the raw data) can be determined from the position, in the ordering, of the artificial intelligence system that generated the previous data to be sent out. If the artificial intelligence system that generated the previous data to be sent out is artificial intelligence system A, then artificial intelligence system B packages the raw data into the new data to be sent out; if it is artificial intelligence system B, then artificial intelligence system C packages the raw data into the new data to be sent out; and so on, so that if it is artificial intelligence system E, then artificial intelligence system A packages the raw data into the new data to be sent out. In other words, the artificial intelligence systems are selected round-robin, in their ordering, to generate new data to be sent out. The ordering can be determined by each artificial intelligence system's contribution to the outgoing network: the artificial intelligence systems are sorted by contribution, and when new raw data to be sent out is received, the artificial intelligence system is selected according to this ordering. For example, if artificial intelligence system A has generated 10 historical data to be sent out and all 10 passed the consistency approval of the outgoing network and finally completed formal sending-out, while artificial intelligence system B has also generated 10 but only 5 of them passed the consistency approval and completed formal sending-out, then the contribution of artificial intelligence system A to the outgoing network is larger than that of artificial intelligence system B, so artificial intelligence system A should be ranked before artificial intelligence system B.
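The contribution-based ordering and round-robin packer selection can be sketched as below. The helper names (`order_by_contribution`, `next_packer`) and the tie-breaking rule (alphabetical on equal contribution) are illustrative assumptions; the patent only requires sorting by contribution and polling in order.

```python
def order_by_contribution(stats):
    # stats maps system name -> number of its records that completed
    # formal sending-out; higher contribution sorts first (ties are
    # broken by name, an assumption not fixed by the text).
    return sorted(stats, key=lambda s: (-stats[s], s))

def next_packer(order, previous_packer):
    # Round-robin: the system after the one that packaged the
    # previous data to be sent out, wrapping around at the end.
    i = order.index(previous_packer)
    return order[(i + 1) % len(order)]

order = order_by_contribution({"A": 10, "B": 5, "C": 7})
assert order == ["A", "C", "B"]        # A contributed most, then C, then B
assert next_packer(order, "A") == "C"  # the system after A packs next
assert next_packer(order, "B") == "A"  # the last system wraps to the front
```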
Step S102, if first to-be-sent-out data in at least one to-be-sent-out data passes security detection and the first to-be-sent-out data is to-be-sent-out data with the highest priority in a target to-be-sent-out data group, obtaining a security vector corresponding to the first to-be-sent-out data, generating second to-be-sent-out data according to-be-sent-out original data and the security vector, adding the second to-be-sent-out data to the target to-be-sent-out data group, and obtaining an updated outgoing data storage space;
for example, the artificial intelligence system can perform security detection on each piece of data to be sent out in turn based on the priority of each piece of data to be sent out in the outgoing data storage space. If the first to-be-sent-out data in the target to-be-sent-out data group passes the security detection, and the first to-be-sent-out data is to-be-sent-out data with the highest priority in the target to-be-sent-out data group, a security vector corresponding to the first to-be-sent-out data can be obtained, the security vector is used as input data of a data tag, the to-be-sent-out original data is used as data main body data to generate second to-be-sent-out data, the second to-be-sent-out data is added to the target to-be-sent-out data group, and the newly generated second to-be-sent-out data is cached in an outgoing data storage space to obtain an updated outgoing data storage space. In other words, if the data to be sent out contained in the target data group to be sent out all passes the security detection, the data to be sent out with the highest priority in the target data group to be sent out is called as the first data to be sent out, the second data to be sent out is generated based on the security vector corresponding to the first data to be sent out and the original data to be sent out uploaded by the user terminal, the updated target data group to be sent out is obtained, and the second data to be sent out is stored in the data sending out storage space. It can be understood that the first data to be sent out and the second data to be sent out both belong to the target data group to be sent out, and the first data to be sent out and the second data to be sent out in the target data group to be sent out are adjacent data to be sent out, that is, the priority of the second data to be sent out is the priority of the first data to be sent out plus 1.
For example, the target outgoing data set is: the data to be sent out 1-data to be sent out 2-data to be sent out 3-data to be sent out 4, the artificial intelligence system carries out security detection on the data to be sent out 1, the data to be sent out 2, the data to be sent out 3 and the data to be sent out 4 in sequence, if the data to be sent out 1, the data to be sent out 2, the data to be sent out 3 and the data to be sent out 4 all pass the security detection, a security vector corresponding to the data to be sent out 4 can be obtained (the data to be sent out 4 at this moment is the first data to be sent out), the data to be sent out 5 (the second data to be sent out) is generated based on the security vector corresponding to the data to be sent out 4 and the original data to be sent out, and the updated target data to be sent out is: the data to be sent out 1, the data to be sent out 2, the data to be sent out 3, the data to be sent out 4 and the data to be sent out 5, wherein the data to be sent out 5 can be added to an outgoing data storage space for storage, and the updated outgoing data storage space is obtained.
For example, if the target data group to be sent out contains data that fails the security detection, and the first data to be sent out is the highest-priority datum among those in the group that passed the security detection, the security vector corresponding to the first data to be sent out is obtained and the second data to be sent out is generated from the raw data to be sent out and that security vector. All the data in the target group that passed the security detection, together with the second data to be sent out, form a new data group to be sent out, and the new group together with the target group is determined as the updated outgoing data storage space. In other words, when the data contained in the target data group to be sent out are verified one by one and some datum fails verification, the verification of the remaining data in the group can be stopped; the highest-priority datum among those in the target group that passed the security detection is called the first data to be sent out, and the second data to be sent out is generated from the first data to be sent out and the raw data uploaded by the user terminal. From all the data in the target group that passed the security detection plus the second data to be sent out, a new data group to be sent out can be constructed; the second data to be sent out is added to the outgoing data storage space to obtain the updated outgoing data storage space, which then contains the newly constructed data group to be sent out in addition to the existing target data group to be sent out.
As in the previous example, the target outgoing data set is: the method comprises the steps that data to be sent out 1, data to be sent out 2, data to be sent out 3, data to be sent out 4 are sent out, an artificial intelligence system carries out safety detection on the data to be sent out 1, the data to be sent out 2 is subjected to safety detection after the data to be sent out 1 passes verification, the data to be sent out 3 is verified after the data to be sent out 2 passes verification, if the data to be sent out 3 does not pass verification, the safety detection process of the data to be sent out 4 can be stopped (a data label of the data to be sent out 4 comprises a safety vector corresponding to the data to be sent out 3, when the data to be sent out 3 does not pass safety detection, the data to be sent out 4 does not pass safety detection, otherwise, when the data to be sent out 4 passes safety detection, the data to be sent out 3 also passes safety detection), a safety vector corresponding to the data to be sent out 2 is obtained (the data to be sent out 2 is the first data to be sent out at the moment), and generating the data to be sent out 5 (namely second data to be sent out) based on the safety vector corresponding to the data to be sent out 2 and the original data to be sent out. Based on the data to be sent out 1, the data to be sent out 2 and the data to be sent out 5, a new data group to be sent out can be constructed, and the new data group to be sent out is as follows: data to be sent out 1-data to be sent out 2-data to be sent out 5; the outgoing data 5 can be added to the outgoing data storage space to obtain an updated outgoing data storage space.
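The branch construction from the example above can be sketched as a single function. `extend_with_fork` and the `verified` predicate are hypothetical names; the logic is simply: verify in priority order, stop at the first failure, and append the new record to the verified prefix.

```python
def extend_with_fork(group, verified, new_record):
    # Verify records in priority order; stop at the first failure
    # (later records chain onto the failed one, so they cannot pass
    # either) and build a new group from the verified prefix plus
    # the newly generated record.
    prefix = []
    for rec in group:
        if not verified(rec):
            break
        prefix.append(rec)
    return prefix + [new_record]

group = ["out1", "out2", "out3", "out4"]
# out3 fails verification, so out4 is never checked and the new
# group becomes out1 - out2 - out5, as in the worked example.
new_group = extend_with_fork(group, lambda r: r in {"out1", "out2"}, "out5")
assert new_group == ["out1", "out2", "out5"]
```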
For example, before generating the second data to be sent out based on the security vector corresponding to the first data to be sent out and the original data to be sent out, the artificial intelligence system may also verify the received original data to be sent out, and package the verified original data to be sent out and the security vector into the second data to be sent out. For example, the verification process is: the artificial intelligence system can acquire a safety identification carried by the original data to be sent out and acquire a safety identification generation algorithm corresponding to the user terminal; processing the safety identification based on a safety identification generation algorithm to obtain first safety characteristic information corresponding to the safety identification; performing MD5 operation on the original data to be sent out based on an MD5 model to obtain second safety feature information corresponding to the original data to be sent out; if the first safety characteristic information is the same as the second safety characteristic information, the to-be-sent original data passes verification, and a data main body is generated based on the to-be-sent original data passing verification; and generating a data label according to the safety vector corresponding to the first data to be sent out, and generating second data to be sent out according to the data label and the data main body. 
In other words, to prevent the raw data to be sent out from being maliciously tampered with in transit, the user terminal may generate a key pair (comprising a private key and a security identifier generation algorithm; the private key is kept by the user terminal itself, while the security identifier generation algorithm may be distributed to all artificial intelligence systems in the outgoing network). The user terminal may perform an MD5 operation on the raw data to be sent out using an MD5 model to generate first security feature information corresponding to the raw data, and encrypt the first security feature information with the generated private key; the encrypted first security feature information is the security identifier corresponding to the raw data to be sent out. The user terminal uploads the raw data carrying the security identifier to the outgoing network. After an artificial intelligence system in the outgoing network receives the raw data carrying the security identifier, it can obtain the security identifier generation algorithm corresponding to the user terminal and process the security identifier with that algorithm to obtain the first security feature information corresponding to the security identifier; it then performs an MD5 operation on the received raw data according to the MD5 model (i.e., the MD5 model the user terminal used to generate the security identifier) to obtain second security feature information corresponding to the received raw data. If the first security feature information is the same as the second security feature information, the raw data to be sent out was not tampered with during uploading and the verification passes; if the first security feature information differs from the second security feature information, the raw data may have been tampered with during uploading and the verification fails.
It should be understood that, before the user terminal uploads the raw data to be sent out, it notifies the artificial intelligence systems in the outgoing network of the security identifier generation algorithm and of the MD5 model used to generate the security identifier. If the raw data is tampered with during uploading, or the security identifier received by an artificial intelligence system is not the one originally generated by the user terminal, the artificial intelligence system cannot recover valid first security feature information when processing the security identifier with the security identifier generation algorithm corresponding to the user terminal.
After the original data to be sent out passes the verification, the artificial intelligence system can pack the safety vector corresponding to the first data to be sent out and the original data to be sent out into second data to be sent out, the data label of the second data to be sent out can comprise the safety vector corresponding to the first data to be sent out, and the data main body of the second data to be sent out can be used for recording the original data to be sent out.
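The hash-then-verify flow of steps above can be sketched as follows. Two assumptions are worth flagging loudly: the Python standard library has no asymmetric encryption, so HMAC-MD5 with a key stands in for the "encrypt the digest with the private key" step (a real deployment would use an actual signature scheme where the verifier holds only the public half), and the key and function names are hypothetical.

```python
import hashlib
import hmac

KEY = b"terminal-private-key"  # hypothetical; stands in for the key pair

def make_security_identifier(raw: bytes) -> bytes:
    # Terminal side: MD5 digest of the raw data (the "first security
    # feature information"), then "encrypted"; HMAC-MD5 stands in here
    # for the private-key encryption step described in the text.
    digest = hashlib.md5(raw).digest()
    return hmac.new(KEY, digest, hashlib.md5).digest()

def verify_outgoing(raw: bytes, identifier: bytes) -> bool:
    # System side: recompute the second security feature information
    # from the received raw data and compare it against the identifier.
    digest = hashlib.md5(raw).digest()
    expected = hmac.new(KEY, digest, hashlib.md5).digest()
    return hmac.compare_digest(expected, identifier)

raw = b"data 3: transfer from account aa to account bb"
sid = make_security_identifier(raw)
assert verify_outgoing(raw, sid)                    # untampered: passes
assert not verify_outgoing(raw + b" tampered", sid) # tampered: fails
```

Note that MD5 is what the source text names; it is considered cryptographically broken, and a modern implementation would substitute SHA-256 or stronger.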
Step S103, broadcasting the second data to be sent out in the outgoing network, so that other artificial intelligence systems except the artificial intelligence system generating the second data to be sent out in the outgoing network respectively cache the second data to be sent out to the storage spaces to which the artificial intelligence systems belong;
for example, after the artificial intelligence system generates the second data to be sent out, it may broadcast the second data to be sent out in the outgoing network, that is, send the second data to be sent out to the other artificial intelligence systems in the outgoing network, so that the artificial intelligence systems other than the one that generated it each cache the second data to be sent out in their own storage spaces. In other words, in the outgoing network, the data to be sent out generated by any artificial intelligence system needs to be broadcast in the outgoing network. It can be understood that the cache areas corresponding to the other artificial intelligence systems in the outgoing network have the same function as the outgoing data storage space corresponding to the current artificial intelligence system, and can be used to store the data to be sent out generated by all the artificial intelligence systems.
And step S104, updating the safety factor corresponding to each data to be sent out in the updated outgoing data storage space, and determining the data to be sent out with the updated safety factor larger than a preset safety factor threshold value as the data which can be sent out safely.
For example, after adding the generated second data to be sent out to the outgoing data storage space, the artificial intelligence system obtains the updated outgoing data storage space. The artificial intelligence system can obtain the number of data to be sent out contained in the updated outgoing data storage space, and then count the safety factor of each data to be sent out in the storage space at the current moment; the data to be sent out whose current safety factor is larger than the preset safety factor threshold is determined as data that can be sent out safely, and such data is added to the target outgoing data group, that is, formally sent out. The preset safety factor threshold is related to the verification mechanism adopted for the data group to be sent out; different verification mechanisms may have different preset safety factor thresholds, and the embodiment of the present application does not specifically limit the verification mechanism adopted. For example, the updated outgoing data storage space includes the updated target data group to be sent out: data to be sent out 1 - data to be sent out 2 - data to be sent out 3 - data to be sent out 4 - data to be sent out 5. Since different data to be sent out are generated by different artificial intelligence systems, statistics yield the safety factor corresponding to the data to be sent out 1: 5 standard security units; the safety factor corresponding to the data to be sent out 2: 4 standard security units; the safety factor corresponding to the data to be sent out 3: 3 standard security units; the safety factor corresponding to the data to be sent out 4: 2 standard security units; the safety factor corresponding to the data to be sent out 5: 1 standard security unit.
Assuming that the outgoing network comprises 9 artificial intelligence systems in total and the preset safety coefficient threshold is 51% of the number of the artificial intelligence systems, it indicates that the safety coefficient corresponding to the data to be outgoing 1 exceeds the preset safety coefficient threshold, that is, the data to be outgoing 1 passes verification, and the data to be outgoing 1 can be added to the target outgoing data group for formal outgoing.
For example, the updated outgoing data storage space includes the target to-be-outgoing data set: data to be sent out 1-data to be sent out 2-data to be sent out 3-data to be sent out 4, and a new data group to be sent out: the data to be sent out 1, the data to be sent out 2 and the data to be sent out 5 can be counted to obtain the safety factor corresponding to the data to be sent out 1 as follows: 5 standard security units; the safety factor corresponding to the outgoing data 2 is as follows: 4 standard security units; the safety factor corresponding to the outgoing data 3 is as follows: 2 standard security units; the safety factor corresponding to the outgoing data 4 is as follows: 1 standard security unit; the safety factor corresponding to the outgoing data 5 is as follows: 1 standard security unit. Assuming that the outgoing network comprises 9 artificial intelligence systems in total and the preset safety coefficient threshold is 51% of the number of the artificial intelligence systems, it indicates that the safety coefficient corresponding to the data to be outgoing 1 exceeds the preset safety coefficient threshold, that is, the data to be outgoing 1 passes verification, and the data to be outgoing 1 can be added to the target outgoing data group for formal outgoing.
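The counting rule illustrated in the two examples above can be sketched in Python. This is our own illustrative model, not code from the application: a data item's safety factor is taken to be the number of distinct pending items (itself included) that sit at or after its position in any data group containing it, and the group contents, item names and 51% threshold are from the example.

```python
def safety_factors(groups):
    """Safety factor of each pending data item: the number of distinct
    pending items (itself included) that confirm it, i.e. that sit at
    or after its position in some data group containing it."""
    confirmers = {}
    for group in groups:
        for i, item in enumerate(group):
            # every item from position i onward endorses `item`
            confirmers.setdefault(item, set()).update(group[i:])
    return {item: len(s) for item, s in confirmers.items()}

# Example from the text: the target pending group plus a newer fork
groups = [["d1", "d2", "d3", "d4"],   # target data group to be sent out
          ["d1", "d2", "d5"]]         # new data group to be sent out
factors = safety_factors(groups)      # d1:5, d2:4, d3:2, d4:1, d5:1

# 9 artificial intelligence systems, threshold = 51% of them
threshold = 0.51 * 9                  # 4.59 standard security units
safe = [d for d, f in factors.items() if f > threshold]   # only d1 passes
```

Under this model only the data to be sent out 1 exceeds the threshold, matching the conclusion in the text.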
For example, in the outgoing network, a server identity weight may further be assigned to each artificial intelligence system based on its historical verification record. When the verification result of a certain artificial intelligence system on the data to be sent out within a period of time is consistent with the final verification result of that data in the outgoing network (i.e., when the data to be sent out passes the verification in the outgoing network, the verification result of the artificial intelligence system on that data is also verification pass), the server identity weight of the artificial intelligence system may be set higher, for example, to 1.2; when the verification result of a certain artificial intelligence system on most of the data to be sent out within a period of time is inconsistent with the final verification result in the outgoing network (namely, when the data to be sent out passes the verification in the outgoing network, the verification result of the artificial intelligence system is verification failure, or when the data to be sent out fails the verification in the outgoing network, the verification result of the artificial intelligence system is verification pass), the server identity weight of the artificial intelligence system may be set lower, for example, to 0.8.
After each artificial intelligence system is provided with the corresponding server identity weight, the server can acquire the number of pieces of data of the data to be sent out contained in the updated outgoing data storage space, determine the artificial intelligence system corresponding to each piece of data to be sent out in the updated outgoing data storage space, and acquire the server identity weight matched with the artificial intelligence system; updating the safety coefficient corresponding to each data to be sent out in the updated outgoing data storage space respectively based on the number of the data to be sent out contained in the updated outgoing data storage space and the server identity weight; determining the to-be-sent data with the updated safety factor larger than the preset safety factor threshold value as safe-to-be-sent data, and adding the to-be-sent data in the safe-to-be-sent data to the target sending-out data group; the target outgoing data group is used for storing all the data to be outgoing which pass the verification. In other words, after each artificial intelligence system sets the corresponding server identity weight, the safety factor corresponding to the data to be sent out is related to not only the number of data pieces of the artificial intelligence system confirming that the data to be sent out is legal, but also the server identity weight of the artificial intelligence system confirming that the data to be sent out is legal. 
For example, the updated outgoing data storage space includes the target to-be-outgoing data set: data to be sent out 1-data to be sent out 2-data to be sent out 3; the data to be sent out 1 is generated by an artificial intelligence system A, and the server identity weight corresponding to the artificial intelligence system A is 1.2; the data to be sent out 2 is generated by an artificial intelligence system B, and the server identity weight corresponding to the artificial intelligence system B is 1.0; the data to be sent out 3 is generated by an artificial intelligence system C, and the server identity weight corresponding to the artificial intelligence system C is 0.8; the current safety factor of the data to be sent out 1 can be obtained by statistics as follows: 1.2 × 1+1.0 × 1+0.8 × 1=3 standard security units.
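The weighted statistic above can be sketched as follows; the system names and weight values are taken from the example, and the dict-based representation is our own assumption:

```python
def weighted_safety_factor(confirming_systems, identity_weights):
    """Weighted safety factor: instead of one standard security unit
    per confirming artificial intelligence system, each confirmation
    contributes that system's server identity weight."""
    return sum(identity_weights[s] for s in confirming_systems)

# Server identity weights derived from historical verification records
identity_weights = {"A": 1.2, "B": 1.0, "C": 0.8}

# Systems A, B and C each generated data confirming pending data 1:
factor = weighted_safety_factor(["A", "B", "C"], identity_weights)
# 1.2*1 + 1.0*1 + 0.8*1 = 3.0 standard security units
```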
When the artificial intelligence system adds the verified data to be sent out to the target outgoing data group for formal sending out, the artificial intelligence system can acquire the current priority corresponding to the data to be sent out that can be safely sent out. If the current priority and the priority corresponding to the target data to be sent out with the highest priority in the target outgoing data group are within a preset priority range, the data to be sent out that can be safely sent out is added to the target outgoing data group; if they are outside the preset priority range, the priority of the data to be sent out that can be safely sent out is updated first, and the updated data is then added to the target outgoing data group. In the verification process, there may be data to be sent out that cannot pass verification; for example, the data to be sent out 1 cannot reach consensus among the artificial intelligence systems in the outgoing network, while the data to be sent out 2 passes verification in the outgoing network (assuming by default that the data to be sent out 2 is generated later than the data to be sent out 1), so the data to be sent out 2 can be formally sent out.
If the data tag of the data to be sent out 2 contains the security vector corresponding to the target data to be sent out with the highest priority in the target outgoing data group, the data to be sent out 2 can be added directly to the target outgoing data group for formal sending out; if it does not (for example, if the data tag of the data to be sent out 2 contains the security vector corresponding to the data to be sent out 1), the priority of the data to be sent out 2 needs to be updated first, that is, the original security vector in its data tag is updated to the security vector corresponding to the target data to be sent out with the highest priority in the target outgoing data group, and the updated data to be sent out 2 is then added to the target outgoing data group for formal sending out. After the data to be sent out 2 completes the formal sending out process, it can be deleted from the outgoing data storage space of each artificial intelligence system.
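The priority update just described, re-pointing a verified item's security vector at the current head of the target outgoing data group before formal sending out, might look like this sketch; the dict layout and field names are our own assumptions:

```python
def add_to_target_group(pending, target_group):
    """Append a verified pending item to the target outgoing data group.
    If its data tag does not already reference the security vector of
    the highest-priority item in the group, update the tag first."""
    head_vector = target_group[-1]["vector"]
    if pending["tag"] != head_vector:
        pending = {**pending, "tag": head_vector}   # priority update
    target_group.append(pending)
    return target_group

target = [{"id": "out1", "vector": "v1", "tag": None}]
verified = {"id": "d2", "vector": "v2", "tag": "v_stale"}  # tag points elsewhere
add_to_target_group(verified, target)
# the appended item's tag now references "v1", the head's security vector
```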
It should be understood that after the other artificial intelligence systems in the outgoing network receive the second data to be sent out broadcast by the current artificial intelligence system and cache it to their storage spaces, they may verify the second data to be sent out, generate new data to be sent out according to the verification result, and count the safety factors corresponding to each data to be sent out in the cache region. For example, if the next artificial intelligence system passes the verification of the second data to be sent out, third data to be sent out can be generated according to the security vector of the second data to be sent out and the received data; if the next artificial intelligence system does not pass the verification of the second data to be sent out, the third data to be sent out is generated according to the security vector corresponding to the highest-priority verified data to be sent out (namely the data to be sent out with the highest priority among all verified data to be sent out) and the received data. An updated cache region can then be obtained according to the third data to be sent out, and the safety factor corresponding to each data to be sent out in the updated cache region is counted; the implementation process is the same as steps S101 to S104.
The embodiment of the application also provides an example of verification of the to-be-sent data group. The outgoing network comprises 7 artificial intelligence systems, each artificial intelligence system has the same data group to be outgoing, namely a target outgoing data group 30a, all data to be outgoing (such as data to be outgoing 1, data to be outgoing 2 and data to be outgoing 3, the data to be outgoing 3 is the data to be outgoing with the highest priority in the target outgoing data group 30 a) contained in the target outgoing data group 30a are the data to be outgoing which passes the verification, and the data recorded in each data to be outgoing are different. In the outgoing network, the data to be outgoing generated by each artificial intelligence system needs to be broadcast in the whole network, that is, the cache area of each artificial intelligence system can store all the data to be outgoing generated by the artificial intelligence system (the data to be outgoing which has not passed the verification temporarily), and the generated data to be outgoing can be verified according to the sequence from the artificial intelligence system 1 to the artificial intelligence system 7 in the verification process. 
For example, the outgoing data storage space 30c corresponding to the artificial intelligence system 4 may include: the data to be sent out 4 generated by the artificial intelligence system 1, the data to be sent out 5 generated by the artificial intelligence system 2, and the data to be sent out 6 generated by the artificial intelligence system 3 (the data to be sent out 4, the data to be sent out 5, and the data to be sent out 6 are all data to be sent out that have not passed verification temporarily), because the data label of the data to be sent out 5 includes the security vector corresponding to the data to be sent out 4, and the data label of the data to be sent out 6 includes the security vector corresponding to the data to be sent out 5, the data to be sent out 4, the data to be sent out 5, and the data to be sent out 6 can be regarded as a data group to be sent out, which can be referred to as a target data group to be sent out 30 b. When the artificial intelligence system 4 receives the data 7 uploaded by the user terminal 1, the artificial intelligence system 4 can sequentially verify the data to be sent out contained in the target data group to be sent out 30b, if the data to be sent out 4, the data to be sent out 5 and the data to be sent out 6 all pass the verification, that is, all the data to be sent out contained in the target data group to be sent out 30b all pass the verification, the security vector corresponding to the data to be sent out 6 and the data 7 uploaded by the user terminal 1 can be packaged into the data to be sent out 7, the data to be sent out 7 is added into the data storage space to be sent out 30c, and the target data group to be sent out 30b at the moment is updated as follows: data to be sent out 4-data to be sent out 5-data to be sent out 6-data to be sent out 7. 
The artificial intelligence system 4 may count the safety factor corresponding to each data to be sent out based on the updated target data group to be sent out 30b, and the statistical result is as follows: the safety factor corresponding to the data to be sent out 4 is 4 standard safety units, the safety factor corresponding to the data to be sent out 5 is 3 standard safety units, the safety factor corresponding to the data to be sent out 6 is 2 standard safety units, and the safety factor corresponding to the data to be sent out 7 is 1 standard safety unit. If more than 51% of the artificial intelligence systems in the outgoing network approve a certain data to be outgoing, the data to be outgoing is verified, so that the data 4 to be outgoing can be determined to pass the verification, the data 4 to be outgoing can be added to the target outgoing data group 30a for formal outgoing, and the data label of the data 4 to be outgoing contains the security vector corresponding to the data 3 to be outgoing.
It should be understood that, when the data tag of the data to be sent out generated by the artificial intelligence system contains the security vector corresponding to another data to be sent out, it indicates that the artificial intelligence system verifies that all data recorded in the another data to be sent out passes.
For example, if the artificial intelligence system 4 verifies the data to be sent out included in the target data group to be sent out 30b and only the data to be sent out 4 passes the verification, while neither the data to be sent out 5 nor the data to be sent out 6 passes, the security vector corresponding to the data to be sent out 4 and the data 7 uploaded by the user terminal 1 may be packaged into the data to be sent out 7, and the data to be sent out 7 is added to the outgoing data storage space 30c. The outgoing data storage space 30c then includes not only the original target data group to be sent out 30b: data to be sent out 4-data to be sent out 5-data to be sent out 6, but also a new data group to be sent out 30d: data to be sent out 4-data to be sent out 7. The artificial intelligence system 4 can count the safety factors corresponding to each data to be sent out respectively based on the target data group to be sent out 30b and the new data group to be sent out 30d, and the statistical result is as follows: the safety factor corresponding to the data to be sent out 4 is 4 standard security units, the safety factor corresponding to the data to be sent out 5 is 2 standard security units, the safety factor corresponding to the data to be sent out 6 is 1 standard security unit, and the safety factor corresponding to the data to be sent out 7 is 1 standard security unit. Therefore, it can be determined that the data to be sent out 4 passes the verification, and the data to be sent out 4 can then be added to the target outgoing data group 30a for formal sending out, with the data tag of the data to be sent out 4 containing the security vector corresponding to the data to be sent out 3.
In this embodiment, each artificial intelligence system in the outgoing network may broadcast the data to be sent out it generates to the other artificial intelligence systems in the outgoing network for caching. After receiving original data to be sent out, the next artificial intelligence system may verify all the locally cached data to be sent out, select the data to be sent out with the highest priority from all the data to be sent out that passes verification as the first data to be sent out, and generate new data to be sent out (the second data to be sent out) based on the security vector corresponding to the first data to be sent out and the received original data. In other words, the data to be sent out that an artificial intelligence system has verified can be determined from the security vector contained in the data to be sent out it most recently generated; for example, if the new data to be sent out generated by an artificial intelligence system contains the security vector of the data to be sent out 3, it can be confirmed that the artificial intelligence system passes the verification of the data to be sent out 3, of the data to be sent out corresponding to the security vector contained in the data to be sent out 3, and so on. The safety factor corresponding to each data to be sent out is determined based on the number of pieces of data to be sent out and the security vector contained in each data to be sent out, so that an artificial intelligence system does not need to separately broadcast its verification result for each data to be sent out, which reduces the number of messages broadcasting verification results and improves verification efficiency.
The embodiment of the application provides another example of a big data information security protection method based on artificial intelligence. The method may comprise the steps of:
step S201, acquiring to-be-sent-out original data uploaded by a user terminal, acquiring a plurality of to-be-sent-out data groups from an outgoing data storage space, and acquiring initial outgoing data numbers corresponding to the plurality of to-be-sent-out data groups respectively;
for example, after receiving the original data to be sent out uploaded by the user terminal, the artificial intelligence system may obtain all the data to be sent out stored in the outgoing data storage space, and determine the chain relationship among them based on the security vector included in the data tag of each data to be sent out, that is, determine how many data groups to be sent out exist in the outgoing data storage space and the number of pieces of data to be sent out included in each group, which may also be referred to as the number of initial pieces of data to be sent out corresponding to each data group. Different data groups to be sent out necessarily differ in at least one data to be sent out, but they may contain some of the same data to be sent out; for example, the data group to be sent out 1 may be: data to be sent out 1-data to be sent out 2-data to be sent out 3, and the data group to be sent out 2 may be: data to be sent out 1-data to be sent out 2-data to be sent out 4-data to be sent out 5.
Step S202, sequencing a plurality of data groups to be sent out based on the number of the initial data groups to be sent out, and sequentially carrying out security detection on the data to be sent out contained in each data group to be sent out based on the sequencing sequence of each data group to be sent out;
for example, the artificial intelligence system may sort all the data groups to be sent out contained in the outgoing data storage space in descending order of the number of initial pieces of data to be sent out corresponding to each group, and then sequentially perform security detection on the data to be sent out contained in each group based on this sorting order. In other words, the artificial intelligence system can preferentially perform security detection on the data group to be sent out with the largest number of initial pieces of data to be sent out; if all the data to be sent out in that group pass the security detection, the artificial intelligence system can stop the verification operation on the remaining data groups to be sent out; if some data to be sent out in that group fails the security detection, security detection continues on the data groups ranked behind it, and so on, until the security detection process for the outgoing data storage space is completed.
It should be noted that the larger the number of initial pieces of data to be sent out corresponding to a data group to be sent out, the more artificial intelligence systems in the outgoing network recognize that data group, and the higher the possibility that the data group passes the security detection; the artificial intelligence system can therefore preferentially verify the data group with the largest number of initial pieces of data to be sent out, thereby reducing verification time and saving resources. For the multiple data groups to be sent out in the outgoing data storage space, if the artificial intelligence system verifies that a certain data group passes (that is, all data to be sent out included in that group pass verification), the remaining data groups inevitably fail verification (that is, each remaining data group inevitably contains data to be sent out that fails verification). Therefore, when the data group with the largest number of initial pieces of data to be sent out passes the security detection, the verification operation on the remaining data groups can be stopped.
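The longest-group-first detection order of steps S202 and S203 can be sketched as follows; this is an illustrative model only, and the group contents and the `verified` set are hypothetical detection results, not values from the application:

```python
def detect_groups(groups, passes_check):
    """Run security detection group by group, longest first; a longer
    group is recognized by more systems and is more likely to pass.
    Stop at the first group whose items all pass, since the remaining
    groups must then contain an item that fails detection."""
    for group in sorted(groups, key=len, reverse=True):
        if all(passes_check(item) for item in group):
            return group          # verification of the rest is skipped
    return None

groups = [["d1", "d4"], ["d1", "d2", "d3"], ["d1", "d2", "d5"]]
verified = {"d1", "d2", "d3"}     # hypothetical detection results
target = detect_groups(groups, verified.__contains__)
# the longest fully-verified group becomes the target group
```

`sorted` is stable, so groups with equal length keep their original relative order while still being checked before any shorter group.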
Step S203, if there is a target data group to be sent out, in which the data to be sent out all pass the security detection, in the multiple data groups to be sent out, the data to be sent out with the highest priority in the target data group to be sent out is used as the first data to be sent out, and a security vector corresponding to the first data to be sent out is obtained.
For example, after the artificial intelligence system performs security detection on the data to be sent out stored in the data storage space, if there is a data group to be sent out in which all the data to be sent out pass verification in the multiple data groups to be sent out, the data group to be sent out may be referred to as a target data group to be sent out, and then the data to be sent out having the highest priority in the target data group to be sent out may be used as the first data to be sent out, and a security vector corresponding to the first data to be sent out is obtained. For example, the outgoing data storage space includes 3 data sets to be outgoing, which are data set 1 to be outgoing (data 1 to be outgoing-data 2-data 3 to be outgoing), data set 2 to be outgoing (data 1 to be outgoing-data 2-data 5 to be outgoing), and data set 3 to be outgoing (data 1 to be outgoing-data 4), and if the data 1 to be outgoing, the data 2 to be outgoing, and the data 3 to be outgoing, which are included in the data set 1 to be outgoing, all pass security detection, the data set 1 to be outgoing may be referred to as a target data set to be outgoing, and a security vector corresponding to the data 3 to be outgoing in the data set 1 to be outgoing is acquired.
Step S204, generating second data to be sent out according to the original data to be sent out and the safety vector, adding the second data to be sent out to a target data group to be sent out, and determining the updated target data group to be sent out and the rest data groups to be sent out as an updated data storage space to be sent out;
for example, after the artificial intelligence system obtains the security vector corresponding to the first data to be sent out, the artificial intelligence system may generate second data to be sent out based on the original data to be sent out and the security vector uploaded by the user terminal, where the security vector may be used as input data of a data tag of the second data to be sent out, and the original data to be sent out is data recorded by a data main body of the second data to be sent out. And adding the second data to be sent out to the data sending out storage space for storage, namely adding the second data to be sent out to the target data set to be sent out to obtain an updated target data set to be sent out. As in the foregoing example, the data to be sent out 6 (i.e., the second data to be sent out) may be generated based on the security vector corresponding to the original data to be sent out and the data to be sent out 3, and the data to be sent out 6 is added to the data storage space to be sent out for storage, where the data storage space to be sent out includes the updated data group to be sent out 1 (data to be sent out 1-data to be sent out 2-data to be sent out 3-data to be sent out 6), the data group to be sent out 2 (data to be sent out 1-data to be sent out 2-data to be sent out 5) and the data group to be sent out 3 (data to be sent out 1-data to be sent out 4). After receiving the to-be-sent original data uploaded by the user terminal, the artificial intelligence system may verify the to-be-sent original data, and after the verification passes, may package the to-be-sent original data that passes the verification and the security vector corresponding to the to-be-sent data 3 into to-be-sent data 6.
Step S205, if there are data to be sent out which do not pass the security detection in the multiple data groups to be sent out, respectively counting the number of target data of the data to be sent out which pass the security detection in each data group to be sent out, and determining the data group to be sent out with the highest number of target data as the target data group to be sent out;
for example, if every data group to be sent out in the outgoing data storage space contains data to be sent out that does not pass the security detection, that is, no data group passes the security detection in its entirety, the number of target data pieces that pass the security detection in each data group can be counted respectively, and the data group with the highest number of target data pieces is determined as the target data group to be sent out. For example, the 3 data groups to be sent out in the outgoing data storage space are respectively: the data group to be sent out 1 comprises 4 pieces of data to be sent out, of which 3 pass the security detection; the data group to be sent out 2 comprises 4 pieces of data to be sent out, of which 2 pass the security detection; the data group to be sent out 3 comprises 2 pieces of data to be sent out, of which 0 pass the security detection. The artificial intelligence system can determine the data group to be sent out 1 as the target data group to be sent out.
When several data groups to be sent out have an equal, highest number of target data pieces that pass the security detection, the data to be sent out that passes the security detection is the same across those groups, and one of them can be randomly selected as the target data group to be sent out.
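The selection in steps S205 and the tie-break above can be sketched as follows; group contents and detection results are hypothetical:

```python
import random

def pick_target_group(groups, verified):
    """No group passed completely: take the group with the most items
    that passed security detection; a tie means the passing items are
    the same across the tied groups, so ties may be broken randomly."""
    counts = [sum(1 for item in g if item in verified) for g in groups]
    best = max(counts)
    candidates = [g for g, c in zip(groups, counts) if c == best]
    return random.choice(candidates)

groups = [["d1", "d2", "d3", "d6"],
          ["d1", "d2", "d5", "d7"],
          ["d1", "d4"]]
verified = {"d1", "d2", "d3"}     # passing counts per group: 3, 2, 1
target = pick_target_group(groups, verified)   # the first group wins
```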
Step S206, acquiring the data to be sent out with the highest priority as the first data to be sent out from the data to be sent out which passes the security detection and is contained in the target data group to be sent out, and acquiring a security vector corresponding to the first data to be sent out;
for example, the artificial intelligence system may select the data to be sent out with the highest priority as the first data to be sent out from all the data to be sent out in the target data group that passes the security detection, and obtain the security vector corresponding to the first data to be sent out. For example, the target data group to be sent out is: data to be sent out A-data to be sent out B-data to be sent out C-data to be sent out D-data to be sent out E; if the data to be sent out in the target data group that passes the security detection are the data to be sent out A, the data to be sent out B and the data to be sent out C, the data to be sent out C is taken as the first data to be sent out and the security vector corresponding to the data to be sent out C is acquired.
Step S207, generating second data to be sent out according to the original data to be sent out and the safety vector, and forming a new data group to be sent out by all the data to be sent out which pass the safety detection in the target data group to be sent out and the second data to be sent out;
for example, the artificial intelligence system may package the security vector corresponding to the first data to be sent out and the original data to be sent out uploaded by the user terminal into the second data to be sent out, and form a new data group to be sent out from the data to be sent out that passes the security detection in the target data group and the second data to be sent out. As in the foregoing example, the data to be sent out that passes the security detection in the target data group are the data to be sent out A, the data to be sent out B and the data to be sent out C; the data tag of the second data to be sent out contains the security vector corresponding to the data to be sent out C, so the data to be sent out A, the data to be sent out B, the data to be sent out C and the second data to be sent out can form a new data group to be sent out: data to be sent out A-data to be sent out B-data to be sent out C-second data to be sent out. The artificial intelligence system may verify the received original data to be sent out first and, after the verification passes, package the verified original data and the security vector corresponding to the data to be sent out C into the second data to be sent out; the verification process may refer to the description of step S102 in the embodiment corresponding to fig. 1 and is not repeated here. The artificial intelligence system also needs to broadcast the generated second data to be sent out in the outgoing network, that is, send it to the remaining artificial intelligence systems in the outgoing network, so that they can cache it.
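The packaging step can be sketched as follows. The application does not name a concrete security-vector function, so hashing the serialized body with SHA-256 is our own stand-in, and the field names are assumptions:

```python
import hashlib
import json

def package_pending_data(parent_vector, original_data):
    """Package the original data uploaded by the terminal together
    with the security vector of the first (highest-priority, verified)
    pending item into a new pending item. SHA-256 over the serialized
    body is an assumed stand-in for the unspecified security vector."""
    body = {"tag": parent_vector, "data": original_data}
    vector = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "vector": vector}

item_c = package_pending_data(None, "data C")          # first pending item
second = package_pending_data(item_c["vector"],        # second pending item
                              "original data from terminal")
```

Because the tag of `second` equals the security vector of `item_c`, the verified items and the second item chain into a new data group to be sent out.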
Step S208, determining a new data group to be sent out and a plurality of data groups to be sent out as an updated data storage space to be sent out;
for example, the artificial intelligence system may add the newly generated second data to be sent out to the outgoing data storage space for caching, where the outgoing data storage space at this time includes the constructed new data group to be sent out in addition to the previous multiple data groups to be sent out. In other words, the second outgoing data to be sent is added to the outgoing data storage space, so that the updated outgoing data storage space can be obtained.
Step S209, updating the safety factor corresponding to each data to be sent out in the updated outgoing data storage space, and determining the data to be sent out, whose updated safety factor is greater than the preset safety factor threshold, as the data that can be sent out safely.
For example, if the second data to be sent out is added to the target data group to be sent out to obtain an updated target data group, the number of data to be sent out in the updated target data group and in each of the remaining data groups can be obtained from the updated outgoing data storage space, and the occurrence frequency of each data to be sent out across these groups can be counted. Based on the number of data to be sent out and the occurrence frequencies, the safety factor corresponding to each data to be sent out in the updated outgoing data storage space is re-counted; the data to be sent out whose updated safety factor is greater than the preset safety factor threshold is determined as data that can be sent out safely and is added to the target outgoing data group.
For example, suppose the updated target outgoing data group is: data to be sent out 1 - data to be sent out 2 - data to be sent out 3 - data to be sent out 6, and the remaining data groups to be sent out are: data to be sent out 1 - data to be sent out 2 - data to be sent out 5, and data to be sent out 1 - data to be sent out 4. The safety factor of each data to be sent out is related to its priority within each data group and to the number of data in those groups. Data to be sent out 1 exists in all 3 data groups (an occurrence frequency of 3) and is the first data to be sent out in each of them, so its safety factor is the sum of the sizes of the 3 data groups minus the repeated occurrences, namely 4+3+2-2-1=6. Data to be sent out 2 exists in 2 data groups (an occurrence frequency of 2) and is not the first data to be sent out in either, so its safety factor is the number of data whose priority is greater than or equal to that of data to be sent out 2 in those 2 groups, minus the repeated occurrence, namely 3+2-1=4.
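One plausible reading of this counting rule can be sketched in Python: a data item's safety factor sums, over every group containing it, the number of items at its position or later (priority greater than or equal to its own), then subtracts the repeated occurrences. This reading reproduces the patent's second worked number (3+2-1=4); the first worked number (6) appears to include an additional deduction that the translation leaves ambiguous, so the formula below should be treated as an assumption rather than the definitive rule.

```python
def safety_factor(item: str, groups: list[list[str]]) -> int:
    """Assumed reading of the patent's safety-factor count: for each group
    containing `item`, count the items from `item`'s position to the end
    (priority >= item), then subtract the repeated occurrences."""
    containing = [g for g in groups if item in g]
    tail_counts = sum(len(g) - g.index(item) for g in containing)
    repeats = len(containing) - 1  # extra occurrences beyond the first
    return tail_counts - repeats

groups = [
    ["d1", "d2", "d3", "d6"],  # updated target outgoing data group
    ["d1", "d2", "d5"],        # remaining data group to be sent out
    ["d1", "d4"],              # remaining data group to be sent out
]
print(safety_factor("d2", groups))  # 3 + 2 - 1 = 4, matching the worked example
```

Under this reading, `safety_factor("d1", groups)` evaluates to 4+3+2-2 = 7 rather than the 6 shown in the text, which is why the exact deduction rule is flagged as ambiguous above.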
For example, if the second data to be sent out and the data that passed security detection in the target data group to be sent out form a new data group, the safety factor corresponding to each data to be sent out in the outgoing data storage space is re-counted based on the original plurality of data groups together with the new data group, using the statistical manner described above.
For example, the artificial intelligence system can compare the currently counted safety factor with the preset safety factor threshold; data to be sent out whose safety factor is greater than the threshold is determined as data that can be sent out safely and is added to the target data group to be sent out. Before adding such data, however, the system needs to judge whether its priority falls within a preset priority range of the priority of the highest-priority data in the target data group to be sent out. If it does, the data is added directly to the target data group for formal sending out; if it does not, the system first updates the priority of the data to be sent out and then adds the updated data to the target data group to be sent out.
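The threshold comparison and priority-range check just described might look like the following sketch. The numeric threshold, the allowed priority gap, and the rule for updating an out-of-range priority are all illustrative assumptions rather than values fixed by the patent.

```python
def admit_to_target_group(candidates: dict, target_group: list,
                          factor_threshold: int = 3, max_gap: int = 2) -> None:
    """Add data whose safety factor exceeds the threshold to the target
    outgoing data group, adjusting its priority when it falls outside the
    preset range of the current highest priority. Constants are assumed."""
    highest = max(p for p, _ in target_group) if target_group else 0
    for name, (priority, factor) in candidates.items():
        if factor <= factor_threshold:
            continue  # safety factor too low: not yet safely outgoing data
        if highest - priority <= max_gap:
            target_group.append((priority, name))        # within preset range
        else:
            target_group.append((highest - max_gap, name))  # update priority first

# Target group currently holds d3 with the highest priority (5).
target = [(5, "d3")]
# candidates: name -> (priority, safety factor)
admit_to_target_group({"d5": (4, 4), "d9": (1, 5), "d7": (2, 2)}, target)
```

After the call, `d5` is admitted at its own priority, `d9` is admitted with its priority raised into range, and `d7` is rejected for an insufficient safety factor.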
The embodiment of the application provides another example scenario of the big data information security protection method based on artificial intelligence. The scenario may include: an outgoing network comprising 7 artificial intelligence systems, each holding the same data group to be sent out, namely the target outgoing data group 40a. All data to be sent out contained in the target outgoing data group 40a (such as data to be sent out 1, data to be sent out 2, and data to be sent out 3, where data to be sent out 3 has the highest priority in group 40a) has passed verification, and the data recorded in each data to be sent out differs. In the outgoing network, the data to be sent out generated by each artificial intelligence system is broadcast to the whole network, that is, the cache area of each artificial intelligence system can store all generated data to be sent out that has not yet passed verification; during verification, the generated data is verified in order from artificial intelligence system 1 to artificial intelligence system 7.
For example, the outgoing data storage space 40b of artificial intelligence system 6 may include: data to be sent out 4 generated by artificial intelligence system 1, data to be sent out 5 generated by system 2, data to be sent out 6 generated by system 3, data to be sent out 7 generated by system 4, and data to be sent out 8 generated by system 5 (data to be sent out 4 through 8 have not yet passed verification). The data label of data to be sent out 6 contains the security vector corresponding to data to be sent out 4, so data to be sent out 4 and data to be sent out 6 can be regarded as one data group to be sent out, referred to as data group 40c. The data label of data to be sent out 7 contains the security vector corresponding to data to be sent out 5, and the data label of data to be sent out 8 contains the security vector corresponding to data to be sent out 7, so data to be sent out 5, data to be sent out 7, and data to be sent out 8 can be regarded as another data group to be sent out, referred to as data group 40d.
When artificial intelligence system 6 receives the data 9 uploaded by user terminal 2, it may verify the data to be sent out stored in the outgoing data storage space 40b. Specifically, the verification order may be determined by the number of data to be sent out in each data group, so system 6 preferentially verifies the data to be sent out in data group 40d. If data to be sent out 5, 7, and 8 all pass verification, that is, every data to be sent out in group 40d passes, the data in group 40c need not be verified again; the security vector corresponding to data to be sent out 8 and the data 9 uploaded by user terminal 2 are packaged directly into data to be sent out 9, which is added to the outgoing data storage space 40b, and data group 40d is updated to: data to be sent out 5 - data to be sent out 7 - data to be sent out 8 - data to be sent out 9, while data group 40c remains unchanged.
The artificial intelligence system 6 may count the safety factor corresponding to each data to be sent out based on the updated data group to be sent out 40d and the data group to be sent out 40c, and the statistical result is as follows: the safety factor corresponding to the data to be sent out 4 is 2 standard safety units, the safety factor corresponding to the data to be sent out 5 is 4 standard safety units, the safety factor corresponding to the data to be sent out 6 is 1 standard safety unit, the safety factor corresponding to the data to be sent out 7 is 3 standard safety units, the safety factor corresponding to the data to be sent out 8 is 2 standard safety units, and the safety factor corresponding to the data to be sent out 9 is 1 standard safety unit.
If more than 51% of the artificial intelligence systems in the outgoing network approve a given data to be sent out, that data passes verification. Data to be sent out 5 can therefore be determined to have passed verification and can be added to the target outgoing data group 40a for formal sending out, and its data label should contain the security vector corresponding to data to be sent out 3. At this point, the verification result for data to be sent out 4 in the outgoing network can also be determined as failed: even if artificial intelligence system 7 subsequently passes data to be sent out 4 in its security detection, its approval rate still cannot exceed 51%, so its verification result is failure. Artificial intelligence system 6 may then clear the data 4 recorded in data to be sent out 4, or delete data to be sent out 4 from the outgoing data storage space 40b, which is not limited here.
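The 51% majority rule, including the early-failure case where even unanimous approval from the remaining systems could no longer reach a majority, might be sketched as follows; the function name and the three-state result are illustrative assumptions.

```python
def verification_status(approvals: int, remaining_voters: int,
                        total_systems: int) -> str:
    """Majority-rule sketch: verified once approvals exceed 51% of all
    systems; failed once even unanimous remaining approvals cannot."""
    needed = total_systems * 51 / 100
    if approvals > needed:
        return "verified"
    if approvals + remaining_voters <= needed:
        return "failed"  # e.g. clear the payload or delete the data
    return "pending"

# 7 systems: data 5 approved by 4 of them -> 4 > 3.57, verified.
print(verification_status(approvals=4, remaining_voters=1, total_systems=7))
# data 4 approved by only 2 with system 7 still to vote -> 3 <= 3.57, failed.
print(verification_status(approvals=2, remaining_voters=1, total_systems=7))
```

The second call shows why system 6 can declare data to be sent out 4 failed early: one more approval would still leave it short of the majority.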
It should be understood that, when the data tag of the data to be sent out generated by the artificial intelligence system contains the security vector corresponding to another data to be sent out, it indicates that the artificial intelligence system verifies that all data recorded in the another data to be sent out passes.
For example, if data group to be sent out 40d does not pass verification by artificial intelligence system 6 (that is, group 40d contains data to be sent out that did not pass), system 6 may verify data group 40c instead. When group 40c passes verification, the security vector corresponding to data to be sent out 6 and the data 9 uploaded by user terminal 2 may be packaged into data to be sent out 9, which is added to the outgoing data storage space 40b; data group 40c is then updated to: data to be sent out 4 - data to be sent out 6 - data to be sent out 9, while data group 40d remains unchanged. The safety factor corresponding to each data to be sent out in the updated outgoing data storage space 40b is then counted in the same statistical manner. Of course, if neither data group 40d nor data group 40c passes verification by system 6, a new data group to be sent out may be constructed; the specific construction process may refer to steps S205 to S208 in the corresponding embodiments above, which is not repeated here.
In this embodiment, each artificial intelligence system in the outgoing network broadcasts the data to be sent out that it generates to the other systems for caching. After receiving original data to be sent out, the next artificial intelligence system verifies all locally cached data to be sent out, selects the highest-priority data among those that pass verification as the first data to be sent out, and generates new data to be sent out (the second data to be sent out) from the security vector corresponding to the first data and the received original data. In other words, the security vector contained in the data newly generated by an artificial intelligence system indicates which data that system has verified: for example, if the new data contains the security vector of data to be sent out 3, the system is confirmed to have passed data to be sent out 3 and, transitively, the data corresponding to the security vector contained in data to be sent out 3. Because the safety factor of each data to be sent out is determined from the number of data to be sent out and the security vector contained in each one, an artificial intelligence system no longer needs to broadcast explicit verification results, which reduces the number of verification-result messages and improves verification efficiency.
The embodiment of the present application provides an artificial intelligence system 100, which includes a processor and a non-volatile memory storing computer instructions; when the computer instructions are executed by the processor, the artificial intelligence system 100 executes the above-mentioned big data information security protection method based on artificial intelligence. The artificial intelligence system 100 comprises an artificial-intelligence-based big data information security protection system 110, a memory 111, a processor 112, and a communication unit 113.
To facilitate the transfer or interaction of data, the memory 111, the processor 112, and the communication unit 113 are electrically connected to one another, directly or indirectly, for example via one or more communication buses or signal lines. The big data information security protection system 110 includes at least one software function module that can be stored in the memory 111 in the form of software or firmware, or solidified in the operating system (OS) of the artificial intelligence system 100. The processor 112 executes the protection system 110 stored in the memory 111, such as the software function modules and computer programs included in it.
The embodiment of the application provides a readable storage medium, the readable storage medium comprises a computer program, and when the computer program runs, the computer program controls an artificial intelligence system where the readable storage medium is located to execute the above method for protecting safety of big data information based on artificial intelligence.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A big data information safety protection method based on artificial intelligence is applied to an artificial intelligence system and is characterized by comprising the following steps:
acquiring safe outgoing data obtained by carrying out security detection on to-be-outgoing original data uploaded by a user terminal, and after acquiring abnormal disturbance request data in the process of distributing the safe outgoing data, analyzing the abnormal disturbance request data to obtain an abnormal disturbance intention;
stopping the distribution operation of the safely outgoing data when the abnormal disturbance intention is associated with the safely outgoing data.
2. The big data information safety protection method based on artificial intelligence as claimed in claim 1, wherein said step of analyzing said abnormal disturbance request data to obtain an abnormal disturbance intention includes:
configuring the abnormal disturbance request data into an abnormal disturbance intention prediction network to generate an abnormal disturbance intention of the abnormal disturbance request data;
the network convergence optimization step of the abnormal disturbance intention prediction network comprises the following steps:
obtaining a teacher AI training unit and a student AI training unit, the teacher AI training unit and the student AI training unit having a data transfer node, at least three intention vector extraction nodes, and a prediction node, an intention target domain range of the intention vector extraction node in the teacher AI training unit is larger than an intention target domain range of the intention vector extraction nodes in the student AI training unit, one or at least two of the at least three intention vector extraction nodes in the teacher AI training unit are configured as teacher extraction nodes, one or at least two of the at least three intention vector extraction nodes in the student AI training unit are configured as student extraction nodes, the teacher AI training element also has a vector compression node in communication with the teacher extraction node, the student AI training unit is also provided with a vector extension node which is connected with the student extraction node;
acquiring reference abnormal disturbance request data, configuring the reference abnormal disturbance request data into the teacher AI training unit, generating a teacher intention vector in a teacher extraction node, configuring the reference abnormal disturbance request data into the student AI training unit, and generating a student intention vector in a student extraction node;
configuring the teacher intention vector into the vector compression node to obtain a first intention vector, and configuring the student intention vector into the vector expansion node to obtain a second intention vector, wherein the first intention vector is consistent with the intention target domain range in the second intention vector;
determining a first cost coefficient based on the first intention vector and the second intention vector, optimizing extraction weight information in the student extraction node according to the first cost coefficient, and obtaining a student AI training unit after teaching;
and configuring the reference abnormal disturbance request data into the student AI training unit after teaching to generate a target learning abnormal intention, determining a second cost coefficient according to the target learning abnormal intention and the actual abnormal intention of the reference abnormal disturbance request data, and optimizing the extraction weight information in the student AI training unit based on the second cost coefficient to obtain the abnormal disturbance intention prediction network.
3. The artificial intelligence based big data information security protection method according to claim 2, wherein the configuring the teacher intent vector into the vector compression node to obtain a first intent vector comprises:
configuring the teacher intention vector into the vector compression node to obtain the support degree associated with each intention extraction target domain in the teacher intention vector;
sorting the order of the intention extraction target domains based on the descending order of the support degree associated with each intention extraction target domain to obtain an intention vector list;
selecting at least N final intention extraction target domains from the intention vector list to form a dimension reduction intention vector;
selecting a part of intention extraction target domains from the dimensionality reduction intention vectors to form noise intention vectors based on a set strategy;
and dividing the teacher intention vector and the noise intention vector to obtain a first intention vector.
4. The artificial intelligence based big data information security protection method according to claim 2, further comprising, after determining the first cost coefficient according to the first intention vector and the second intention vector:
and optimizing the extraction weight information in the vector expansion node according to the first cost coefficient.
5. The big data information security protection method based on artificial intelligence, as claimed in claim 1, wherein said step of configuring security protection for the service distribution channel associated with said securely outgoing data based on said anomalous perturbation source comprises:
acquiring a historical attack behavior vector associated with the abnormal disturbance source, wherein the historical attack behavior vector is obtained by carrying out attack event mining on an attack event of the abnormal disturbance source;
based on a target type clustering strategy for the abnormal disturbance source, adding a historical attack behavior vector associated with the abnormal disturbance source matched with the same clustering label to an attack behavior vector cluster;
acquiring a first preset safety protection configuration instruction set of the clustering label and a second preset safety protection configuration instruction set of each historical attack behavior vector in the attack behavior vector cluster associated with the clustering label;
and carrying out linkage protection instruction set mining on a first preset safety protection configuration instruction set and a second preset safety protection configuration instruction set to obtain a linkage protection instruction set associated with the abnormal disturbance source under the clustering label, and carrying out safety protection configuration on a service distribution channel associated with the safety outgoing data based on the linkage protection instruction set.
6. The big data information safety protection method based on artificial intelligence according to claim 5, wherein the performing linkage protection instruction set mining on a first preset safety protection configuration instruction set and a second preset safety protection configuration instruction set, and obtaining the linkage protection instruction set associated with the abnormal disturbance source under the clustering label comprises:
obtaining the historical attack behavior vector of which the second preset security protection configuration instruction set matches preset matching requirements from the attack behavior vector cluster associated with the clustered label, and adding the historical attack behavior vector to a candidate attack behavior vector cluster;
determining whether the abnormal disturbance source has linkage protection attributes or not based on a first preset safety protection configuration instruction set of the clustering label and a second preset safety protection configuration instruction set of the historical attack behavior vectors in the candidate attack behavior vector cluster;
and if the linkage protection attribute exists, carrying out linkage protection instruction mining on a second preset safety protection configuration instruction set of the historical attack behavior vectors in the candidate attack behavior vector group to obtain a linkage protection instruction set in the abnormal disturbance source.
7. The big data information safety protection method based on artificial intelligence as claimed in claim 1, wherein said step of obtaining the safety outgoing data obtained by performing safety detection on the original data to be outgoing uploaded by the user terminal comprises:
acquiring to-be-sent-out original data uploaded by a user terminal, and carrying out security detection on the to-be-sent-out data stored in an outgoing data storage space; the outgoing data storage space stores a target data group to be outgoing, the target data group to be outgoing comprises at least one data to be outgoing, and different data to be outgoing are generated by different service servers respectively;
if first to-be-sent-out data in the at least one to-be-sent-out data passes security detection and the first to-be-sent-out data is to-be-sent-out data with the highest priority in the target to-be-sent-out data group, acquiring a security vector associated with the first to-be-sent-out data, generating second to-be-sent-out data according to the to-be-sent-out original data and the security vector, and adding the second to-be-sent-out data to the target to-be-sent-out data group to obtain an updated outgoing data storage space;
broadcasting the second data to be sent out in an outgoing network so as to enable other service servers in the outgoing network except the service server generating the second data to be sent out to cache the second data to be sent out to the storage space to which the second data to be sent out belongs;
updating the safety factor respectively associated with each piece of data to be sent out in the updated outgoing data storage space, and determining the data to be sent out with the updated safety factor larger than a preset safety factor threshold value as the data which can be sent out safely;
if the target data group to be sent out has data to be sent out which does not pass the security detection, and the first data to be sent out is the data to be sent out with the highest priority in the data to be sent out which passes the security detection in the target data group to be sent out, acquiring a security vector corresponding to the first data to be sent out, and generating second data to be sent out according to the original data to be sent out and the security vector;
and all the data to be sent out which pass the security detection in the target data group to be sent out and the second data to be sent out form a new data group to be sent out, and the new data group to be sent out and the target data group to be sent out are determined as an updated data storage space to be sent out.
8. The big data information security protection method based on artificial intelligence, as claimed in claim 7, wherein said generating second outgoing data according to said outgoing original data and said security vector comprises:
acquiring a security identifier carried by the original data to be sent out, and acquiring a security identifier generation algorithm corresponding to the user terminal;
processing the safety identification based on the safety identification generating algorithm to obtain first safety characteristic information corresponding to the safety identification;
performing an MD5 operation on the original data to be sent out based on the MD5 algorithm to obtain second safety feature information corresponding to the original data to be sent out;
if the first safety characteristic information is the same as the second safety characteristic information, the to-be-sent-out original data passes verification, and a data main body is generated based on the to-be-sent-out original data passing verification;
and generating a data label according to the safety vector, and generating second data to be sent out according to the data label and the data main body.
9. The big data information safety protection method based on artificial intelligence, as claimed in claim 7, wherein said updating the safety factor corresponding to each data to be sent out in said updated outgoing data storage space, and determining the data to be sent out whose updated safety factor is greater than a preset safety factor threshold as the data to be sent out safely comprises:
acquiring the number of data of to-be-sent data contained in the updated outgoing data storage space, determining an artificial intelligence system corresponding to each to-be-sent data in the updated outgoing data storage space, and acquiring a server identity weight matched with the artificial intelligence system;
updating the safety factor corresponding to each data to be sent out in the updated outgoing data storage space respectively based on the number of the data to be sent out contained in the updated outgoing data storage space and the server identity weight;
determining the to-be-sent data with the updated safety factor larger than the preset safety factor threshold value as safe-to-be-sent data, and adding the to-be-sent data in the safe-to-be-sent data to the target sending-out data group; the target outgoing data group is used for storing all the data to be outgoing which pass the verification;
wherein, the adding the data to be sent out to the target data group which can be sent out safely comprises:
acquiring the current priority corresponding to the data to be sent out which can safely send out the data;
if the current priority and the priority corresponding to the target to-be-sent-out data with the highest priority in the target outgoing data group are in a preset priority range, adding the to-be-sent-out data in the safe outgoing data to the target outgoing data group;
and if the current priority and the priority corresponding to the target to-be-sent-out data with the highest priority in the target outgoing data group are in a non-preset priority range, updating the priority of the to-be-sent-out data in the safe outgoing data, and adding the updated to-be-sent-out data in the safe outgoing data to the target outgoing data group.
10. An artificial intelligence system, comprising:
a processor;
a memory having stored therein a computer program that, when executed, implements the artificial intelligence based big data information security protection method of any one of claims 1 to 9.
CN202111477385.7A 2021-12-06 2021-12-06 Big data information safety protection method based on artificial intelligence and artificial intelligence system Withdrawn CN114186269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111477385.7A CN114186269A (en) 2021-12-06 2021-12-06 Big data information safety protection method based on artificial intelligence and artificial intelligence system

Publications (1)

Publication Number Publication Date
CN114186269A true CN114186269A (en) 2022-03-15

Family

ID=80603468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111477385.7A Withdrawn CN114186269A (en) 2021-12-06 2021-12-06 Big data information safety protection method based on artificial intelligence and artificial intelligence system

Country Status (1)

Country Link
CN (1) CN114186269A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101754221A (en) * 2008-12-19 2010-06-23 中国移动通信集团山东有限公司 Data transmission method between heterogeneous systems and data transmission system
CN111131335A (en) * 2020-03-30 2020-05-08 腾讯科技(深圳)有限公司 Network security protection method and device based on artificial intelligence and electronic equipment
WO2021008028A1 (en) * 2019-07-18 2021-01-21 平安科技(深圳)有限公司 Network attack source tracing and protection method, electronic device and computer storage medium
CN113688383A (en) * 2021-08-31 2021-11-23 林楠 Attack defense testing method based on artificial intelligence and artificial intelligence analysis system
CN113688382A (en) * 2021-08-31 2021-11-23 林楠 Attack intention mining method based on information security and artificial intelligence analysis system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Guanlin et al., "Research on an Intelligent Network Intrusion Detection Model", Computer Engineering and Applications *

Similar Documents

Publication Publication Date Title
CN106230851B (en) Data security method and system based on block chain
CN110708171B (en) Block chain consensus voting method, device, equipment and storage medium
US20190074962A1 (en) Multiple-Phase Rewritable Blockchain
US10938896B2 (en) Peer-to-peer communication system and peer-to-peer processing apparatus
US9959065B2 (en) Hybrid blockchain
CN110941859A (en) Method, apparatus, computer-readable storage medium, and computer program product for block chain formation consensus
Kumar et al. A survey on the blockchain techniques for the Internet of Vehicles security
EP4216077A1 (en) Blockchain network-based method and apparatus for data processing, and computer device
CN116405187B (en) Distributed node intrusion situation sensing method based on block chain
CN111523890A (en) Data processing method and device based on block chain, storage medium and equipment
CN113055188A (en) Data processing method, device, equipment and storage medium
CN112631550A (en) Block chain random number generation method, device, equipment and computer storage medium
CN114245323B (en) Message processing method and device, computer equipment and storage medium
CN111340483A (en) Data management method based on block chain and related equipment
CN111367923A (en) Data processing method, data processing device, node equipment and storage medium
CN114139203A (en) Block chain-based heterogeneous identity alliance risk assessment system and method and terminal
CN115065503B (en) Method for preventing replay attack of API gateway
CN114528565B (en) Sensitive data efficient uplink algorithm based on blockchain
CN115941691A (en) Method, device, equipment and medium for modifying data on block chain
CN112037055B (en) Transaction processing method, device, electronic equipment and readable storage medium
CN114115748B (en) Intelligent management method based on big data information safety and big data information system
CN114186269A (en) Big data information safety protection method based on artificial intelligence and artificial intelligence system
CN109657447B (en) Equipment fingerprint generation method and device
CN113592638A (en) Transaction request processing method and device and alliance chain
CN112995988B (en) Network port distribution method and device based on multiple network ports of wireless network equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220315
