CN116866916A - Security verification method of intelligent device and intelligent device - Google Patents

Security verification method of intelligent device and intelligent device

Info

Publication number
CN116866916A
Authority
CN
China
Prior art keywords
measurement
measurement result
security
program
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310719418.7A
Other languages
Chinese (zh)
Inventor
赵戈
胡津铭
孙永清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Third Research Institute of the Ministry of Public Security
Original Assignee
Third Research Institute of the Ministry of Public Security
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Third Research Institute of the Ministry of Public Security filed Critical Third Research Institute of the Ministry of Public Security
Priority to CN202310719418.7A priority Critical patent/CN116866916A/en
Publication of CN116866916A publication Critical patent/CN116866916A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W12/00Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/08Access security
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/51Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems at application loading time, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or source reliability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F21/575Secure boot
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W12/00Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/06Authentication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W12/00Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/60Context-dependent security
    • H04W12/66Trust-dependent, e.g. using trust scores or trust relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Storage Device Security (AREA)

Abstract

The invention provides a security verification method for an intelligent device and the intelligent device, relating to the technical field of security verification, and comprising the following steps: when powered on, the intelligent device performs an integrity measurement on the boot process of its preconfigured operating system, and when the integrity measurement result indicates that the boot process is complete, stores the result as a first metric value and completes startup; when the operating system is about to start an executable file, the device performs a static measurement to obtain a security measurement result, stores it as a second metric value and starts the executable file; the device then generates a security report from the first metric value, the second metric value and the basic attributes of the executable file, and sends the security report to a network center for remote trusted attestation, and the network center decides according to the attestation result whether the intelligent device is allowed to access the network. The beneficial effect is that the intelligent device accesses the network only after multi-faceted measurement and remote trusted attestation, which improves the integrity and security of the device.

Description

Security verification method of intelligent device and intelligent device
Technical Field
The invention relates to the technical field of security verification, in particular to a security verification method of intelligent equipment and the intelligent equipment.
Background
At present, wearable intelligent devices such as the smart watches on the government-affairs market are mostly functional extensions of a mobile phone, and most smart watches can operate independently and provide government-affair service assistance, motion tracking, calls, positioning, physical-state detection, connection to the Internet and the government intranet, and other functions. While smart watches can provide valuable data about human health and the environment, they also open the door to deeper invasions of privacy, because users may store sensitive information on these devices, such as work-related data or even data for important government services. A hacker only needs to place a disguised application on the smart watch to obtain information from mail, communication records or confidential files. As smart watches become more popular in the government industry, they will also become a highly attractive target for attack.
Existing smart-watch protection measures are mostly based on identity authentication and restricted application authorization. These measures are highly limited: an attacker can bypass the security detection mechanism through attacks on vulnerable communication protocols, phishing, system vulnerabilities, program vulnerabilities and other means, escalate privileges on the system, and steal user data.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a security verification method of intelligent equipment, which comprises the following steps:
step S1, the intelligent device performs integrity measurement on a boot process of an operating system which is preconfigured by the intelligent device when the intelligent device is powered on, and judges whether the boot process is complete according to a corresponding integrity measurement result:
if yes, the integrity measurement result is stored as a first measurement value, the power-on starting is completed, and then the step S2 is carried out;
if not, giving an abnormal prompt, and then exiting;
step S2, when the operating system prepares to start the executable file, the intelligent device performs static measurement on the security of the executable file to obtain a security measurement result, and judges whether the executable file is secure according to the security measurement result:
if yes, the safety measurement result is stored as a second measurement value, the executable file is started, and then the step S3 is performed;
if not, stopping starting the executable file and recording, and then exiting;
step S3, the intelligent device generates a security report according to the first metric value, the second metric value and the basic attribute of the executable file and sends the security report to a network center for remote trusted proving, and the network center judges whether the intelligent device is trusted according to a trusted result of the remote trusted proving:
if yes, allowing the intelligent equipment to access a network;
and if not, preventing the intelligent equipment from accessing the network.
Preferably, the integrity measurement result comprises a program measurement result, a kernel measurement result and an application measurement result; the step S1 includes:
step S11, the intelligent device performs integrity measurement on the bootstrap program of the operating system when the intelligent device is powered on to obtain the program measurement result, and determines whether the bootstrap program is complete according to the program measurement result:
if yes, saving the program measurement result, and then turning to step S12;
if not, giving an abnormal prompt, and then exiting;
step S12, the intelligent device carries out integrity measurement on the kernel file of the operating system to obtain the kernel measurement result, and judges whether the kernel file is complete or not according to the kernel measurement result:
if yes, saving the sum of the kernel measurement result and the program measurement result, and then turning to step S13;
if not, taking the program measurement result as the first measurement value, giving an abnormal prompt, and then exiting;
step S13, the intelligent device performs integrity measurement on the key application of the operating system to obtain the application measurement result, and determines whether the key application is complete according to the application measurement result:
if yes, taking the sum of the kernel measurement result, the program measurement result and the application measurement result as the first measurement value, and then completing the power-on starting and turning to the step S2;
if not, taking the sum of the kernel measurement result and the program measurement result as the first measurement value, giving an abnormal prompt, and then exiting.
Preferably, the step S2 includes:
step S21, the intelligent device calculates a corresponding file reference value according to the basic attribute of the executable file, and takes the file reference value as the security measurement result;
step S22, matching the file reference value in a pre-stored standard reference value library and judging whether a corresponding standard reference value exists according to a matching result:
if yes, the security measurement result is stored as the second measurement value, the executable file is started, and then the step S3 is performed;
if not, the executable file is prevented from being started and recorded, and then the process exits.
Preferably, the network center comprises a network controller, a security management center and a service system; the step S3 includes:
step S31, the intelligent device generates the security report according to the first metric value, the second metric value and the basic attribute of the executable file;
step S32, the intelligent equipment signs the security report and sends the security report to the security management center, and simultaneously initiates a network connection request;
step S33, the security management center performs signature verification on the security report, and then performs remote trusted certification according to the security report to obtain the trusted result and sends the trusted result to the network controller;
step S34, the network controller determines, according to the trusted result, whether the intelligent device is trusted:
if yes, allowing the intelligent equipment to connect with a network to access the service system;
and if not, preventing the intelligent equipment from connecting with a network.
The invention also provides an intelligent device, and the intelligent device comprises:
the first measurement module is used for carrying out integrity measurement on a boot process of an operating system which is preconfigured by the first measurement module when the power-on is started, storing the integrity measurement result as a first measurement value according to the corresponding integrity measurement result when the boot process is judged to be complete, completing the power-on starting, then generating a first completion signal, and giving an abnormal prompt when the boot process is judged to be incomplete;
the second measurement module is connected with the first measurement module and is used for carrying out static measurement on the security of the executable file to obtain a security measurement result when the operating system prepares to start the executable file according to the first completion signal, storing the security measurement result as a second measurement value when the executable file is judged to be secure according to the security measurement result, starting the executable file, generating a second completion signal, and preventing starting the executable file and recording when the executable file is judged to be unsafe;
the trust proving module is respectively connected with the first measurement module and the second measurement module and is used for generating a security report according to the first measurement value, the second measurement value and the basic attribute of the executable file and sending the security report to a network center for remote trust proving, and the network center allows the intelligent equipment to access the network when judging that the intelligent equipment is trusted according to the trust result of the remote trust proving and prevents the intelligent equipment from accessing the network when judging that the intelligent equipment is not trusted.
Preferably, the integrity measurement result comprises a program measurement result, a kernel measurement result and an application measurement result; the first metrology module comprises:
the program measurement unit is used for carrying out integrity measurement on the bootstrap program of the operating system to obtain a program measurement result when the bootstrap program is started up, storing the program measurement result when the bootstrap program is judged to be complete according to the program measurement result, then generating a kernel measurement signal, and giving an abnormal prompt when the bootstrap program is judged to be incomplete;
the kernel measurement unit is connected with the program measurement unit and is used for carrying out integrity measurement on a kernel file of the operating system to obtain a kernel measurement result, storing the sum of the kernel measurement result and the program measurement result according to the kernel measurement result when judging that the kernel file is complete, then generating an application measurement signal, taking the program measurement result as the first measurement value when judging that the kernel file is incomplete, and giving an exception prompt;
and the application measurement unit is connected with the kernel measurement unit and is used for carrying out integrity measurement on the key application of the operating system to obtain an application measurement result, taking the sum of the kernel measurement result, the program measurement result and the application measurement result as the first measurement value when judging that the key application is complete according to the application measurement result, then completing the power-on starting, taking the sum of the kernel measurement result and the program measurement result as the first measurement value when judging that the key application is incomplete, and giving an abnormal prompt.
Preferably, the second metrology module includes:
the calculation unit is used for calculating a corresponding file reference value according to the basic attribute of the executable file, and taking the file reference value as the security measurement result;
and the matching unit is connected with the calculating unit and is used for matching the file reference value in a pre-stored standard reference value library, storing the safety measurement result as the second measurement value and starting the executable file when judging according to the matching result that the corresponding standard reference value exists, and preventing the executable file from being started and recording it when judging that the corresponding standard reference value does not exist.
Preferably, the network center comprises a network controller, a security management center and a service system; the trusted attestation module includes:
the report generation unit is used for generating a security report according to the first metric value, the second metric value and the basic attribute of the executable file;
a network request unit connected with the report generation unit and used for signing the security report data packet through a certificate and sending the security report data packet to the security management center, and initiating a network connection request at the same time;
the trusted proving unit is connected with the network request unit and is used for the security management center to verify the signature of the data packet through the certificate, then perform remote trusted attestation according to the security report to obtain a trusted result and send the trusted result to the network controller;
the network connection unit is connected with the credibility proving unit and is used for allowing the intelligent equipment to be connected with a network to access the service system when the intelligent equipment is judged to be credible according to the credibility result by the network controller and preventing the intelligent equipment from being connected with the network when the intelligent equipment is judged to be not credible.
The above technical scheme has the following advantages or beneficial effects: when the intelligent device is powered on and its preconfigured operating system begins its boot process, trusted computing technology is used to perform an integrity measurement of that boot process; in addition, during operation after the operating system has started, trusted-computing static measurement technology is used to perform a security measurement before each executable file of the operating system is started, and the security of the intelligent device can be verified by remote attestation when it accesses the network center, thereby ensuring the integrity and security of the system.
Drawings
FIG. 1 is a flow chart of a security verification method for an intelligent device according to a preferred embodiment of the present invention;
FIG. 2 is a schematic flow chart of step S1 in a preferred embodiment of the present invention;
FIG. 3 is a schematic flow chart of step S2 in the preferred embodiment of the present invention;
FIG. 4 is a schematic flow chart of step S3 according to the preferred embodiment of the present invention;
fig. 5 is a schematic structural diagram of an intelligent device according to a preferred embodiment of the present invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific embodiments. The invention is not limited to these embodiments; other embodiments also fall within the scope of the invention as long as they conform to its gist.
In a preferred embodiment of the present invention, based on the above-mentioned problems existing in the prior art, a security verification method for an intelligent device is now provided, as shown in fig. 1, including:
step S1, the intelligent device performs integrity measurement on a boot process of an operating system which is pre-configured by the intelligent device when the intelligent device is powered on, and judges whether the boot process is complete according to a corresponding integrity measurement result:
if yes, the integrity measurement result is stored as a first measurement value, the power-on starting is completed, and then the step S2 is carried out;
if not, giving an abnormal prompt, and then exiting;
step S2, when the operating system prepares to start the executable file, the intelligent device performs static measurement on the security of the executable file to obtain a security measurement result, and judges whether the executable file is secure according to the security measurement result:
if yes, the safety measurement result is stored as a second measurement value, an executable file is started, and then the step S3 is performed;
if not, stopping starting the executable file and recording, and then exiting;
step S3, the intelligent device generates a security report according to the first metric value, the second metric value and the basic attribute of the executable file and sends the security report to the network center for remote trusted proving, and the network center judges whether the intelligent device is trusted according to the trusted result of the remote trusted proving:
if yes, allowing the intelligent equipment to access the network;
and if not, preventing the intelligent equipment from accessing the network.
Specifically, in this embodiment, existing smart-watch protection measures are mostly based on identity authentication and restricted application authorization. These measures are highly limited: an attacker can bypass the security detection mechanism through attacks on vulnerable communication protocols, phishing, system vulnerabilities, program vulnerabilities and other means, escalate privileges on the system, and steal user data.
To address these shortcomings of the prior art, the invention provides a security verification method for an intelligent device. Taking a smart watch as an example, the security hardware of the watch and trusted computing technology are used to perform an integrity measurement of the boot process of the operating system configured in the watch; during operation after the operating system has started, trusted-computing static measurement technology is used to perform a security measurement before each executable file of the operating system is started; and access to the network center is allowed only when the smart watch is proven trustworthy by remote attestation, thereby ensuring the integrity and security of the system.
In a preferred embodiment of the present invention, the integrity measurement result includes a program measurement result, a kernel measurement result, and an application measurement result; then, as shown in fig. 2, step S1 includes:
step S11, the intelligent device carries out integrity measurement on the bootstrap program of the operating system when the intelligent device is powered on to obtain a program measurement result, and judges whether the bootstrap program is complete or not according to the program measurement result:
if yes, saving the program measurement result, and then turning to step S12;
if not, giving an abnormal prompt, and then exiting;
step S12, the intelligent device carries out integrity measurement on the kernel file of the operating system to obtain a kernel measurement result, and judges whether the kernel file is complete or not according to the kernel measurement result:
if yes, saving the sum of the kernel measurement result and the program measurement result, and then turning to step S13;
if not, taking the program measurement result as a first measurement value, giving an abnormal prompt, and then exiting;
step S13, the intelligent equipment carries out integrity measurement on the key application of the operating system to obtain an application measurement result, and judges whether the key application is complete or not according to the application measurement result:
if yes, taking the sum of the kernel measurement result, the program measurement result and the application measurement result as a first measurement value, then completing power-on starting and turning to a step S2;
if not, taking the sum of the kernel measurement result and the program measurement result as a first measurement value, giving an abnormal prompt, and then exiting.
Specifically, in this embodiment, the integrity measurements of the bootstrap program, the kernel file and the key applications are all computed with a hash algorithm, and the resulting hash value is used as the corresponding measurement result.
A hash algorithm is a method of mapping a message of arbitrary length to a fixed-length hash value. Such algorithms are commonly used for encryption, data integrity verification, information authentication and similar purposes.
The basic idea is to pass the message through a series of processing steps to generate a fixed-length hash value. Typical hash algorithms include MD5, SHA-1 and SHA-256.
For encryption-related uses, a hash algorithm can verify the integrity of a message: the message is sent to the recipient together with its hash value, and the recipient recomputes the hash and compares it with the received value to ensure that the message has not been tampered with.
For information authentication, hash algorithms can be used for identity authentication, digital signatures and similar purposes. For example, a piece of identity information can be hashed, the hash value used as the unique identifier of that information, and the hash value encrypted or digitally signed to ensure the security and authenticity of the information.
It should be noted that although a hash algorithm maps messages of arbitrary length to fixed-length hash values, it cannot guarantee that no hash collision occurs (i.e., that two different messages never hash to the same value). An appropriate algorithm should therefore be selected, and the data structures and algorithms designed reasonably, to reduce the risk of hash collisions.
Accumulation algorithm: for i = 1 to m do PCR_Ext[n] += PCR_Extend(n, data[i]). This accumulation is based on a loop structure, where m is the size of the data set, PCR_Ext is an array storing the extend results of the platform configuration registers (PCR), n denotes the register currently being processed, and data[i] is the i-th data item.
In each iteration of the loop, the data item data[i] is extended into the currently processed register n, and the extend result is accumulated into PCR_Ext[n]. Because the loop body uses +=, each iteration combines the previous result with the new extend result, so that after the loop PCR_Ext[n] holds the accumulated extend result of register n over all data items.
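A minimal sketch of this accumulation is given below. It interprets the "+=" accumulation as the usual trusted-computing extend operation (hash chaining of the old register value with the new measurement); that interpretation, the choice of SHA-256, the variable names and the sample data are assumptions made for illustration, not details taken from the patent.

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """Extend: new register value = Hash(old register value || new measurement)."""
    return hashlib.sha256(pcr_value + measurement).digest()

# Accumulate the measurements of all data items into register slot n.
n = 0
pcr_ext = {n: bytes(32)}                       # register bank; slot n starts at all zeros
data = [b"bootloader image", b"kernel image", b"key application"]
for item in data:                              # "for i = 1 to m" in the pseudocode above
    measurement = hashlib.sha256(item).digest()
    pcr_ext[n] = pcr_extend(pcr_ext[n], measurement)

print(pcr_ext[n].hex())                        # accumulated measurement held in slot n
```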
When the integrity measurement of the boot process of the operating system is performed, the bootstrap program, the kernel file and the key applications of the operating system are measured in turn, and a hash algorithm is used to compute the hash value that serves as the corresponding measurement result.
The bootstrap program is measured first; if its measurement result indicates that it is incomplete, an abnormal prompt is given, and if it is complete, the measurement result is saved and measurement of the kernel file begins. If the kernel measurement result indicates that the kernel file is incomplete, an abnormal prompt is given; if it is complete, the kernel measurement result is accumulated with the bootstrap measurement result and saved, and measurement of the key applications begins. If the application measurement result indicates that a key application is incomplete, an abnormal prompt is given; if it is complete, all measurement results are accumulated to form the first measurement value.
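A minimal sketch of this chained boot-time measurement (step S1) follows. It assumes each stage is compared against a reference hash held in secure storage and that the first metric value is the accumulated extend of all stages that verify; the file paths, function names and error handling are illustrative assumptions rather than the patent's exact mechanism.

```python
import hashlib

def measure(path: str) -> bytes:
    """Hash a file's contents to obtain its integrity measurement."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

def boot_measurement(stages: list) -> bytes:
    """Measure bootloader, kernel and key applications in turn.

    `stages` is a list of (path, expected_hash) pairs; the first metric value is
    the accumulated extend of every stage that verifies as complete."""
    first_metric = bytes(32)
    for path, expected in stages:
        result = measure(path)
        if result != expected:
            raise RuntimeError(f"integrity check failed: {path}")  # abnormal prompt
        first_metric = hashlib.sha256(first_metric + result).digest()
    return first_metric

# Hypothetical stage list; real reference hashes would come from secure storage.
# stages = [("/boot/bootloader.bin", b"..."), ("/boot/kernel.img", b"..."),
#           ("/apps/key_app", b"...")]
# first_metric = boot_measurement(stages)
```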
In a preferred embodiment of the present invention, as shown in fig. 3, step S2 includes:
step S21, the intelligent equipment calculates a corresponding file reference value according to the basic attribute of the executable file, and takes the file reference value as a security measurement result;
step S22, matching the file reference value in a pre-stored standard reference value library and judging whether a corresponding standard reference value exists according to a matching result:
if yes, the safety measurement result is stored as a second measurement value, an executable file is started, and then the step S3 is performed;
if not, the executable file is prevented from being started and recorded, and then the process exits.
Specifically, in this embodiment, before the operating system loads an executable file (ELF) into memory, the launch is intercepted with the LSM security framework, and the reference value library is used to determine whether a standard reference value corresponding to the file reference value of the ELF file exists. If it does not exist, loading of the ELF file into memory is intercepted and an audit record is written; if it exists, the executable file is started and then loaded into memory.
The reference value of a file is a value derived from basic attributes of the file, such as the file size, file name, creation time and modification time, that do not change within a certain range. The reference value needs to be determined at the time the file is created or at some later point. In general, a piece of file metadata is associated with each file when the file is created or modified, containing basic attribute information such as the file size, the complete path and the modification time; this metadata is used to compute the reference value. When computing the reference value, the system hashes the metadata to generate a unique reference value.
To determine the reference value of a file, a reference value library is created and the standard reference values of all files are saved in it. The basic file attribute information in the reference value library is maintained by the file management system; typically, a file's reference value is generated when the file is created and recorded in the library. To check whether a file has been modified, its reference value is recomputed when the file is loaded and compared with the value stored in the reference value library; if the two values differ, the file has been modified. This mechanism enhances the security of the system.
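The sketch below illustrates computing a file reference value from basic attributes and matching it against a pre-stored standard reference value library (steps S21 and S22). The exact attribute set, the in-memory dictionary standing in for the library, and the audit message are illustrative assumptions.

```python
import hashlib
import os

def file_reference_value(path: str) -> str:
    """Hash basic attributes (full path, size, modification time) into a reference value."""
    st = os.stat(path)
    metadata = f"{os.path.abspath(path)}|{st.st_size}|{int(st.st_mtime)}"
    return hashlib.sha256(metadata.encode()).hexdigest()

# Hypothetical standard reference value library, populated when files are approved.
reference_library = {}  # maps file path -> expected reference value

def check_executable(path: str) -> bool:
    """Allow the executable to start only if its reference value matches the library."""
    value = file_reference_value(path)
    if reference_library.get(path) != value:
        print(f"blocked and audited: {path}")   # stand-in for the audit record
        return False
    return True                                  # matching value becomes the second metric
```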
In the preferred embodiment of the invention, the network center comprises a network controller, a security management center and a service system; as shown in fig. 4, step S3 includes:
step S31, the intelligent equipment generates a security report according to the first metric value, the second metric value and the basic attribute of the executable file;
step S32, the intelligent equipment signs the security report and sends the security report to a security management center, and simultaneously initiates a network connection request;
step S33, the security management center performs signature verification on the security report, and then performs remote trusted certification according to the security report to obtain a trusted result and sends the trusted result to the network controller;
step S34, the network controller judges whether the intelligent equipment is trusted according to the trusted result:
if yes, allowing the intelligent equipment to connect with a network to access a service system;
if not, the intelligent equipment is prevented from connecting with the network.
Specifically, in this embodiment, in the government industry the application scenarios of the smart watch are mainly law enforcement and office environments, and important data is involved during use; therefore, the smart watch needs to access the government intranet through the network controller and be managed uniformly by the security management center to ensure the security of data transmission.
The security report is divided into two parts. The first part is the first metric value, read back from the hardware register after the boot-process measurement at power-on has completed. The second part is the second metric value of each successfully measured executable program together with its corresponding basic attributes (including the file name and a timestamp). The two parts are combined to generate the security report.
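A minimal sketch of assembling this two-part security report (step S31) follows; the field names, the JSON encoding and the timestamp source are illustrative assumptions.

```python
import json
import time

def build_security_report(first_metric: bytes, executables: list) -> bytes:
    """Combine the boot-time first metric with the per-executable second metrics.

    `executables` holds entries such as {"file_name": ..., "second_metric": ...}."""
    report = {
        "first_metric": first_metric.hex(),      # read back from the hardware register
        "executables": [
            {
                "file_name": e["file_name"],
                "second_metric": e["second_metric"],
                "timestamp": int(time.time()),
            }
            for e in executables
        ],
    }
    return json.dumps(report).encode()

# Example usage with placeholder values.
# report = build_security_report(bytes(32),
#                                [{"file_name": "gov_app", "second_metric": "abc..."}])
```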
The security report is sent to the security management center over a dedicated channel; the security management center evaluates the trustworthiness of the security report and sends the trusted result to the boundary device of the government network, namely the network controller.
After receiving the trusted-state report from the security management center, the network controller decides the government-intranet access state of the smart watch according to the trusted result in the report. One possible decision rule is that if the trusted state is 1 the smart watch may continue to access the government intranet, and if the trusted state is 0 the network controller blocks the smart watch's access to the government intranet.
When the smart watch sends the security report, the security report data packet is signed with a certificate; after receiving the data packet, the security management center verifies the signature with the certificate, thereby ensuring the integrity and tamper resistance of the security report data.
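The exchange below is a minimal sketch of protecting the security report in transit (steps S32 and S33). The patent describes certificate-based signing and signature verification; as a simplified, self-contained stand-in this sketch uses an HMAC over a shared key, which is an assumption and not the certificate mechanism itself.

```python
import hashlib
import hmac

SHARED_KEY = b"provisioned-during-enrollment"    # hypothetical key material

def sign_report(report: bytes) -> bytes:
    """Device side: produce a tag over the security report before sending."""
    return hmac.new(SHARED_KEY, report, hashlib.sha256).digest()

def verify_report(report: bytes, tag: bytes) -> bool:
    """Management-center side: verify the tag before remote attestation."""
    expected = hmac.new(SHARED_KEY, report, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

report = b'{"first_metric": "...", "executables": []}'
tag = sign_report(report)
assert verify_report(report, tag)                # integrity and tamper resistance
```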
The present invention also provides an intelligent device to which the security verification method described above is applied. As shown in fig. 5, the intelligent device includes:
the first measurement module 1 is used for carrying out integrity measurement on a boot process of an operating system which is preconfigured by the first measurement module when the power-on is started, storing the integrity measurement result as a first measurement value when the boot process is judged to be complete according to the corresponding integrity measurement result, completing the power-on starting, generating a first completion signal, and giving an abnormal prompt when the boot process is judged to be incomplete;
the second measurement module 2 is connected with the first measurement module 1 and is used for carrying out static measurement on the security of the executable file to obtain a security measurement result when the operating system prepares to start the executable file according to the first completion signal, storing the security measurement result as a second measurement value when the executable file is judged to be secure according to the security measurement result, starting the executable file, generating a second completion signal, and preventing the executable file from being started and recorded when the executable file is judged to be unsafe;
the trusted proving module 3 is respectively connected with the first measuring module 1 and the second measuring module 2, and is used for generating a security report according to the first measuring value, the second measuring value and the basic attribute of the executable file and sending the security report to the network center 4 for remote trusted proving, and the network center 4 allows the intelligent device to access the network when the intelligent device is judged to be trusted according to the trusted result of the remote trusted proving and prevents the intelligent device from accessing the network when the intelligent device is judged to be untrusted.
In a preferred embodiment of the present invention, the integrity measurement result includes a program measurement result, a kernel measurement result, and an application measurement result; then, as shown in fig. 5, the first metrology module 1 comprises:
the program measurement unit 11 is configured to perform integrity measurement on a boot program of the operating system when the boot program is powered on to obtain a program measurement result, save the program measurement result when the boot program is determined to be complete according to the program measurement result, then generate a kernel measurement signal, and give an exception prompt when the boot program is determined to be incomplete;
the kernel measurement unit 12 is connected with the program measurement unit 11 and is used for carrying out integrity measurement on a kernel file of the operating system to obtain a kernel measurement result, storing the sum of the kernel measurement result and the program measurement result when judging that the kernel file is complete according to the kernel measurement result, then generating an application measurement signal, taking the program measurement result as a first measurement value when judging that the kernel file is incomplete, and giving an exception prompt;
the application measurement unit 13 is connected to the kernel measurement unit 12, and is configured to perform integrity measurement on a critical application of the operating system to obtain an application measurement result, take the sum of the kernel measurement result, the program measurement result and the application measurement result as the first measurement value when the critical application is determined to be complete according to the application measurement result, then complete the power-on startup, and take the sum of the kernel measurement result and the program measurement result as the first measurement value and give an exception prompt when the critical application is determined to be incomplete.
In a preferred embodiment of the present invention, as shown in fig. 5, the second metrology module 2 includes:
a calculating unit 21, configured to calculate a corresponding file reference value according to the basic attribute of the executable file, and use the file reference value as a security measurement result;
a matching unit 22, connected to the calculating unit 21, for matching the file reference value against a pre-stored standard reference value library, storing the security measurement result as the second measurement value and starting the executable file when the matching result indicates that a corresponding standard reference value exists, and preventing the executable file from being started and recording it when no corresponding standard reference value exists.
In the preferred embodiment of the present invention, the network center 4 includes a network controller 41, a security management center 42 and a service system 43; then, as shown in fig. 5, the proof of trust module 3 comprises:
a report generating unit 31 for generating a security report according to the first metric value, the second metric value and the basic attribute of the executable file;
a network request unit 32, which is used for signing the security report and sending the security report to the security management center 42, and initiating a network connection request;
the trusted proof unit 33 is connected with the network request unit 32 and is used for the security management center 42 to verify the signature of the security report, then perform remote trusted attestation according to the security report to obtain a trusted result and send the trusted result to the network controller 41;
the network connection unit 34 is connected to the trust proving unit 33, and is configured to allow the intelligent device to connect to the network to access the service system 43 when the intelligent device is determined to be trusted according to the trust result by the network controller 41, and prevent the intelligent device from connecting to the network when the intelligent device is determined to be not trusted.
The foregoing is merely illustrative of the preferred embodiments of the present invention and is not intended to limit the embodiments and scope of the present invention, and it should be appreciated by those skilled in the art that equivalent substitutions and obvious variations may be made using the description and illustrations herein, which should be included in the scope of the present invention.

Claims (8)

1. A security verification method for an intelligent device, comprising:
step S1, the intelligent device performs integrity measurement on a boot process of an operating system which is preconfigured by the intelligent device when the intelligent device is powered on, and judges whether the boot process is complete according to a corresponding integrity measurement result:
if yes, the integrity measurement result is stored as a first measurement value, the power-on starting is completed, and then the step S2 is carried out;
if not, giving an abnormal prompt, and then exiting;
step S2, when the operating system prepares to start the executable file, the intelligent device performs static measurement on the security of the executable file to obtain a security measurement result, and judges whether the executable file is secure according to the security measurement result:
if yes, the safety measurement result is stored as a second measurement value, the executable file is started, and then the step S3 is performed;
if not, stopping starting the executable file and recording, and then exiting;
step S3, the intelligent device generates a security report according to the first metric value, the second metric value and the basic attribute of the executable file and sends the security report to a network center for remote trusted proving, and the network center judges whether the intelligent device is trusted according to a trusted result of the remote trusted proving:
if yes, allowing the intelligent equipment to access a network;
and if not, preventing the intelligent equipment from accessing the network.
2. The security verification method of claim 1, wherein the integrity measurement result comprises a program measurement result, a kernel measurement result and an application measurement result; the step S1 includes:
step S11, the intelligent device performs integrity measurement on the bootstrap program of the operating system when the intelligent device is powered on to obtain the program measurement result, and determines whether the bootstrap program is complete according to the program measurement result:
if yes, saving the program measurement result, and then turning to step S12;
if not, giving an abnormal prompt, and then exiting;
step S12, the intelligent device carries out integrity measurement on the kernel file of the operating system to obtain the kernel measurement result, and judges whether the kernel file is complete or not according to the kernel measurement result:
if yes, saving the sum of the kernel measurement result and the program measurement result, and then turning to step S13;
if not, taking the program measurement result as the first measurement value, giving an abnormal prompt, and then exiting;
step S13, the intelligent device performs integrity measurement on the key application of the operating system to obtain the application measurement result, and determines whether the key application is complete according to the application measurement result:
if yes, taking the sum of the kernel measurement result, the program measurement result and the application measurement result as the first measurement value, and then completing the power-on starting and turning to the step S2;
if not, taking the sum of the kernel measurement result and the program measurement result as the first measurement value, giving an abnormal prompt, and then exiting.
3. The security verification method according to claim 1, wherein the step S2 includes:
step S21, the intelligent device calculates a corresponding file reference value according to the basic attribute of the executable file, and takes the file reference value as the security measurement result;
step S22, matching the file reference value in a pre-stored standard reference value library and judging whether a corresponding standard reference value exists according to a matching result:
if yes, the security measurement result is stored as the second measurement value, the executable file is started, and then the step S3 is performed;
if not, the executable file is prevented from being started and recorded, and then the process exits.
4. The security verification method according to claim 1, wherein the network center comprises a network controller, a security management center, and a service system; the step S3 includes:
step S31, the intelligent device generates the security report according to the first metric value, the second metric value and the basic attribute of the executable file;
step S32, the intelligent equipment signs the security report and sends the security report to the security management center, and simultaneously initiates a network connection request;
step S33, the security management center performs signature verification on the security report, and then performs remote trusted certification according to the security report to obtain the trusted result and sends the trusted result to the network controller;
step S34, the network controller determines, according to the trusted result, whether the intelligent device is trusted:
if yes, allowing the intelligent equipment to connect with a network to access the service system;
and if not, preventing the intelligent equipment from connecting with a network.
5. An intelligent device, characterized in that the security verification method according to any one of claims 1-4 is applied, the intelligent device comprising:
the first measurement module is used for carrying out integrity measurement on a boot process of an operating system which is preconfigured by the first measurement module when the power-on is started, storing the integrity measurement result as a first measurement value according to the corresponding integrity measurement result when the boot process is judged to be complete, completing the power-on starting, then generating a first completion signal, and giving an abnormal prompt when the boot process is judged to be incomplete;
the second measurement module is connected with the first measurement module and is used for carrying out static measurement on the security of the executable file to obtain a security measurement result when the operating system prepares to start the executable file according to the first completion signal, storing the security measurement result as a second measurement value when the executable file is judged to be secure according to the security measurement result, starting the executable file, generating a second completion signal, and preventing starting the executable file and recording when the executable file is judged to be unsafe;
the trust proving module is respectively connected with the first measurement module and the second measurement module and is used for generating a security report according to the first measurement value, the second measurement value and the basic attribute of the executable file and sending the security report to a network center for remote trust proving, and the network center allows the intelligent equipment to access the network when judging that the intelligent equipment is trusted according to the trust result of the remote trust proving and prevents the intelligent equipment from accessing the network when judging that the intelligent equipment is not trusted.
6. The intelligent device of claim 5, wherein the integrity measurement result comprises a program measurement result, a kernel measurement result and an application measurement result; the first metrology module comprises:
the program measurement unit is used for carrying out integrity measurement on the bootstrap program of the operating system to obtain a program measurement result when the bootstrap program is started up, storing the program measurement result when the bootstrap program is judged to be complete according to the program measurement result, then generating a kernel measurement signal, and giving an abnormal prompt when the bootstrap program is judged to be incomplete;
the kernel measurement unit is connected with the program measurement unit and is used for carrying out integrity measurement on a kernel file of the operating system to obtain a kernel measurement result, storing the sum of the kernel measurement result and the program measurement result according to the kernel measurement result when judging that the kernel file is complete, then generating an application measurement signal, taking the program measurement result as the first measurement value when judging that the kernel file is incomplete, and giving an exception prompt;
and the application measurement unit is connected with the kernel measurement unit and is used for carrying out integrity measurement on the key application of the operating system to obtain an application measurement result, taking the sum of the kernel measurement result, the program measurement result and the application measurement result as the first measurement value when judging that the key application is complete according to the application measurement result, then completing the power-on starting, taking the sum of the kernel measurement result and the program measurement result as the first measurement value when judging that the key application is incomplete, and giving an abnormal prompt.
7. The intelligent device of claim 5, wherein the second metric module comprises:
the calculation unit is used for calculating a corresponding file reference value according to the basic attribute of the executable file, and taking the file reference value as the security measurement result;
and the matching unit is connected with the calculating unit and is used for matching the file reference value in a pre-stored standard reference value library, storing the safety measurement result as the second measurement value and starting the executable file when judging according to the matching result that the corresponding standard reference value exists, and preventing the executable file from being started and recording it when judging that the corresponding standard reference value does not exist.
8. The intelligent device of claim 5, wherein the network center comprises a network controller, a security management center, and a business system; the trusted attestation module includes:
the report generation unit is used for generating a security report according to the first metric value, the second metric value and the basic attribute of the executable file;
a network request unit connected with the report generation unit and used for signing the security report and sending the security report to the security management center, and initiating a network connection request;
the trusted proving unit is connected with the network request unit and is used for the security management center to verify the signature of the security report, then perform remote trusted attestation according to the security report to obtain a trusted result and send the trusted result to the network controller;
the network connection unit is connected with the credibility proving unit and is used for allowing the intelligent equipment to be connected with a network to access the service system when the intelligent equipment is judged to be credible according to the credibility result by the network controller and preventing the intelligent equipment from being connected with the network when the intelligent equipment is judged to be not credible.
CN202310719418.7A 2023-06-16 2023-06-16 Security verification method of intelligent device and intelligent device Pending CN116866916A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310719418.7A CN116866916A (en) 2023-06-16 2023-06-16 Security verification method of intelligent device and intelligent device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310719418.7A CN116866916A (en) 2023-06-16 2023-06-16 Security verification method of intelligent device and intelligent device

Publications (1)

Publication Number Publication Date
CN116866916A true CN116866916A (en) 2023-10-10

Family

ID=88220715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310719418.7A Pending CN116866916A (en) 2023-06-16 2023-06-16 Security verification method of intelligent device and intelligent device

Country Status (1)

Country Link
CN (1) CN116866916A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination