WO2022156609A1 - Keyboard encryption method, device, storage medium and computer program product - Google Patents

Keyboard encryption method, device, storage medium and computer program product

Info

Publication number
WO2022156609A1
Authority
WO
WIPO (PCT)
Prior art keywords
character
image
keyboard
character image
wrong
Prior art date
Application number
PCT/CN2022/072040
Other languages
English (en)
French (fr)
Inventor
汪昊
张天明
薛韬略
王智恒
周士奇
Original Assignee
北京嘀嘀无限科技发展有限公司 (Beijing Didi Infinity Technology Development Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京嘀嘀无限科技发展有限公司
Publication of WO2022156609A1 publication Critical patent/WO2022156609A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82 Protecting input, output or interconnection devices
    • G06F21/83 Protecting input, output or interconnection devices input devices, e.g. keyboards, mice or controllers thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21 Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2107 File encryption

Definitions

  • the embodiments of the present disclosure relate to the field of communication technologies, and in particular, to a keyboard encryption method, device, storage medium, and computer program product.
  • Smartphones can be installed with various applications (applications, APPs) to provide users with various Internet services.
  • when users register or log in to an APP, they usually use a numeric keyboard to input a user name, password, and verification code.
  • the embodiments of the present disclosure provide a keyboard encryption method, device, storage medium and computer program product, which can improve the security of the keyboard and effectively block malicious registration behaviors.
  • an embodiment of the present disclosure provides a keyboard encryption method, which includes:
  • An encrypted keyboard is constructed based on the encrypted keyboard character image; the neural network recognition result of the encrypted keyboard character image is an incorrect keyboard character, while the visual recognition results of the encrypted keyboard character image and the original keyboard character image are the same.
  • an embodiment of the present disclosure provides a computer device, where the computer device includes:
  • the encryption unit is used to perform feature extraction on the keyboard character image to determine the features of the keyboard character image, determine a fusion feature according to the features of the keyboard character image and the features of the wrong keyboard character, and determine an encrypted keyboard character image according to the fusion feature; the wrong keyboard character is different from the character corresponding to the keyboard character image;
  • the construction unit is used for constructing an encrypted keyboard based on the encrypted keyboard character image; the neural network recognition result of the encrypted keyboard character image is an incorrect keyboard character, while the visual recognition results of the encrypted keyboard character image and the original keyboard character image are the same.
  • embodiments of the present disclosure provide a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method of the first aspect when executing the computer program.
  • an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the method of the first aspect.
  • an embodiment of the present disclosure provides a computer program product, where the computer program product includes a computer program, which implements the method of the first aspect when the computer program is executed by a processor.
  • the keyboard encryption method, computer device, and storage medium can add the features of erroneous characters (eg, the aforementioned wrong keyboard characters) to the original character image (eg, the aforementioned keyboard character image) to interfere with the recognition results of the attacker's neural network model, which helps the server identify machine registration behavior, effectively blocks malicious registration behavior, and improves the security of the keyboard.
  • keeping the loss value between the encrypted character image and the original character image below the preset value ensures that the visual recognition results of the encrypted character image and the original character image are the same; that is, the encrypted character image still looks like the original character image to the human eye and does not affect the user's recognition.
  • FIG. 1 is an application environment diagram of a keyboard encryption method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a keyboard application provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of a keyboard encryption method provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a character image provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of transmission of keyboard image information according to an embodiment of the present disclosure.
  • FIG. 6 is another schematic diagram of transmission of keyboard image information provided by an embodiment of the present disclosure.
  • FIG. 7 is another schematic flowchart of a keyboard encryption method provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of an image encryption model provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of an encryption effect provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a model training process provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of a neural network model provided by an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of a loss function provided by an embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram of model training provided by an embodiment of the present disclosure.
  • FIG. 14 is another schematic diagram of a model training process provided by an embodiment of the present disclosure.
  • FIG. 15 is a schematic diagram of a training data set provided by an embodiment of the present disclosure.
  • FIG. 16 is a schematic diagram of an encryption effect provided by an embodiment of the present disclosure.
  • FIG. 17 is a structural block diagram of a computer device provided by an embodiment of the present disclosure.
  • FIG. 18 is another structural block diagram of a computer device provided by an embodiment of the present disclosure.
  • FIG. 19 is an internal structural diagram of a computer device provided by an embodiment of the present disclosure.
  • the keyboard encryption method provided by the embodiment of the present disclosure can be applied to the application environment shown in FIG. 1 .
  • the electronic device 10 communicates with the server 20 through a network.
  • Electronic devices can install APPs to provide users with various Internet services, such as providing online car-hailing services, online shopping services, and so on.
  • the server 20 may be an application server of the APP, and is used to support the background implementation of the services provided by the APP.
  • the electronic device 10 can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices
  • the server 20 can be implemented by an independent server or a server cluster composed of multiple servers.
  • a user name, a password and a verification code can be input through a keyboard (eg, a numeric keyboard) of the electronic device 10 .
  • the electronic device 10 may send the user name, password and verification code input by the user to the server 20, and the server 20 may complete the user registration according to the received user name, password and verification code.
  • an embodiment of the present disclosure provides a keyboard encryption method.
  • the method is applicable to the system shown in FIG. 1 , and the execution subject may be the server 20 in the system shown in FIG. 1 .
  • the method includes the following steps:
  • Step 301 Perform feature extraction on the keyboard character image, and determine the features of the keyboard character image.
  • the keyboard character image is a character image included in the virtual keyboard, for example, it may be a character image of a numeric keyboard.
  • the embodiment of the present disclosure aims to add interference features to the features of the original character image, and then generate an encrypted character image according to the features after adding the interference.
  • the server can first perform feature extraction on the character image so as to fuse with the interference feature.
  • each keyboard character image included in the keyboard may be encrypted to improve the security of the keyboard.
  • the keyboard character image corresponds to a character, and the character may be an Arabic numeral, an English letter, or a punctuation mark, which is not limited in this embodiment of the present disclosure.
  • character image 1 corresponds to the character "1", and character image 2 corresponds to the character "2". It should be noted that the character corresponding to a character image may be called the label of the character image.
  • an autoencoder can be used to obtain features of keyboard character images.
  • An AE includes an encoder and a decoder. The input of the encoder is an image, and its output is the features of the image; the input of the decoder is the features output by the encoder, and its output is an image reconstructed from the input features.
  • the server may use an encoder to perform feature extraction on the keyboard character image to obtain the features of the keyboard character image. Specifically, the keyboard character image is input into the encoder, and the output of the encoder is the feature of the keyboard character image.
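The autoencoder round trip described above (image to features, features back to a reconstructed image) can be sketched with toy stand-ins. This is a minimal illustration, not the disclosure's trained networks: the "encoder" here is plain 2x2 average pooling and the "decoder" is nearest-neighbour upsampling, so only the encode/decode data flow is shown.

```python
def encoder(image):
    """Toy encoder: 2x2 average pooling maps an image to its features."""
    return [
        [(image[i][j] + image[i][j + 1] + image[i + 1][j] + image[i + 1][j + 1]) / 4
         for j in range(0, len(image[0]), 2)]
        for i in range(0, len(image), 2)
    ]

def decoder(features):
    """Toy decoder: nearest-neighbour upsampling reconstructs an image."""
    out = []
    for row in features:
        expanded = [v for v in row for _ in range(2)]  # repeat each value horizontally
        out.append(expanded)
        out.append(list(expanded))                     # and vertically
    return out

# A 4x4 "character image"; the round trip reproduces it exactly because the
# image happens to be constant within each 2x2 block.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]
reconstructed = decoder(encoder(image))  # same content as the input image
```

A learned AE approximates this round trip for arbitrary images by minimizing the reconstruction loss, rather than relying on block structure.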
  • Step 302 Determine the fusion feature according to the features of the keyboard character image and the features of the wrong keyboard characters.
  • the wrong keyboard character is different from the character actually corresponding to the keyboard character image.
  • the server may specify an incorrect keyboard character, so as to direct the recognition result of the attacker's neural network model to the incorrect keyboard character.
  • for example, when encrypting a keyboard character image, the server specifies a wrong keyboard character that is different from the character corresponding to the keyboard character image.
  • a fusion feature can then be determined according to the features of the keyboard character image and the features of the wrong keyboard character, that is, the features of the keyboard character image and the features of the wrong keyboard character are fused.
  • the server may process the feature of the keyboard character image and the feature of the wrong keyboard character by using a feature fusion algorithm to obtain the above-mentioned fusion feature.
  • the feature fusion algorithm includes, but is not limited to, addition and multiplication of features, which are not limited in this embodiment of the present disclosure.
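The fusion step can be sketched directly from the options named above. The disclosure says only that the fusion algorithm includes, but is not limited to, addition and multiplication of features; both variants are shown below on small illustrative vectors, assuming equal lengths.

```python
def fuse_add(img_feat, char_feat):
    """Element-wise addition of image features and wrong-character features."""
    return [a + b for a, b in zip(img_feat, char_feat)]

def fuse_mul(img_feat, char_feat):
    """Element-wise multiplication of the two feature vectors."""
    return [a * b for a, b in zip(img_feat, char_feat)]

img_feat = [0.2, 0.8, 0.5]    # features of the keyboard character image (illustrative)
char_feat = [0.1, -0.3, 0.4]  # features of the wrong keyboard character (illustrative)

fused = fuse_add(img_feat, char_feat)  # ≈ [0.3, 0.5, 0.9]
```

The fused vector is what Step 303 passes to the image decoder.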
  • Step 303 Determine an encrypted keyboard character image according to the fusion feature, and construct an encrypted keyboard based on the encrypted keyboard character image.
  • the malicious registration behavior of the attacker is effectively resisted by interfering with the identification result of the neural network model of the attacker.
  • the encrypted keyboard character image is visually indistinguishable from the original; that is, the encrypted keyboard character image and the original keyboard character image appear identical to the human eye.
  • however, the neural network recognition result of the encrypted keyboard character image is different from the original character.
  • for example, the neural network recognition result of the encrypted keyboard character image is the second character (the wrong character), while the visual recognition result is still the first character (the original character).
  • the preset threshold is a loss value small enough to ensure that the visual recognition results of the original keyboard character image and the encrypted keyboard character image are the same.
  • the preset threshold may be a threshold set according to experience, or may be a threshold obtained by iteratively training a neural network model.
  • the server may further construct a keyboard based on the encrypted keyboard character image.
  • the keyboard includes a number of different keyboard character images.
  • the image information of the keyboard may also be sent to the electronic device.
  • the electronic device can display the encrypted keyboard after decoding the image information of the keyboard.
  • the numeric keyboard includes different keyboard character images, for example, a keyboard character image corresponding to the character "1", a keyboard character image corresponding to the character "2", and so on.
  • the keyboard character images included in the numeric keyboard are all encrypted character images.
  • the encrypted character images contain the features of wrong characters, which are enough to interfere with the recognition results of the attacker's neural network model, but their visual recognition results are still the true corresponding characters; the user's human-eye recognition is not affected by the interference introduced by the wrong keyboard characters.
  • the server after encrypting each keyboard character image, the server sends the image information of the character image to the electronic device.
  • the electronic device can determine a single character image according to the image information of each character image, and combine multiple encrypted character images, either in a specific arrangement or in a random arrangement, to construct the encrypted keyboard.
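Keyboard construction on the device can be sketched as follows. The image data is represented by placeholder strings and the function name is illustrative; the point is only that the encrypted character images may be laid out in a fixed arrangement or shuffled into a random one, as described above.

```python
import random

def build_keyboard(char_images, randomize=False, seed=None):
    """Return an ordered list of character images forming the keyboard layout."""
    layout = list(char_images)
    if randomize:
        # A seeded Random instance keeps the shuffle reproducible for testing;
        # a real keyboard would shuffle freshly on each display.
        random.Random(seed).shuffle(layout)
    return layout

# Placeholder stand-ins for the encrypted images of digits 0-9.
encrypted_images = [f"enc_img_{c}" for c in "0123456789"]

fixed_keyboard = build_keyboard(encrypted_images)                   # fixed layout
random_keyboard = build_keyboard(encrypted_images, randomize=True,  # random layout
                                 seed=42)
```

Either way, every encrypted character image appears exactly once on the keyboard.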
  • the server can add the features of wrong keyboard characters to the original keyboard character image to interfere with the recognition result of the attacker's neural network model, which helps the server identify machine registration behavior, effectively blocks malicious registration behavior, and improves the security of the keyboard.
  • the visual recognition result of the encrypted keyboard character image is the same as that of the original keyboard character image, that is, the encrypted character image still looks like the original character image to the human eye, which will not affect the user's human eye recognition.
  • the server may encrypt the original keyboard character image by using an encryption model.
  • when an encryption model is used, the method specifically includes the following steps:
  • Step 701 Input the keyboard character image into the image encoder of the image encryption model to obtain the features of the keyboard character image, and input the wrong keyboard character into the character encoder of the image encryption model to obtain the feature of the wrong keyboard character.
  • an incorrect keyboard character can be specified.
  • the image encoder is used to extract the features of the keyboard character image, and the character encoder can also be used to extract the features of the wrong keyboard characters.
  • the server can also train an image encryption model, and the image encryption model includes an image encoder, an image decoder, and a character encoder.
  • the image encryption model is used to add the features of wrong characters in the original character image, so as to interfere with the recognition result of the character image by the neural network model.
  • the keyboard character image can be encrypted to interfere with the recognition results of the neural network models used by malicious actors.
  • the image encoder is used to extract the features of a character image: its input is a character image, and its output is the features of that character image;
  • the character encoder is used to extract the feature of a character, and its input is a character, and the output is a feature of the character;
  • the image decoder is used to generate the encrypted character image: its input is the fusion of the features of the original character image and the features of the wrong character, and its output is the encrypted character image.
  • the image encoder and the image decoder are trained with the goal that the visual recognition results of the original character image and the encrypted character image are the same; the character encoder is trained with the goal that the neural network recognition result of the encrypted character image is the wrong character. The wrong character is different from the character corresponding to the original character image.
  • Step 702 Fuse the features of the keyboard character image with the features of the wrong keyboard character, input the fused features into the image decoder of the image encryption model, and obtain the encrypted keyboard character image.
  • for example, the feature of "original keyboard character image 4" is extracted, fused with the feature of the wrong keyboard character "6", and then input to the image decoder, which outputs the encrypted keyboard character image.
  • the encrypted keyboard character image still looks like the character "4" to the human eye, but when it is recognized by a deep learning method the recognition result is "6", which causes the attacker's automatic recognition algorithm to misrecognize it and thereby achieves the encryption effect.
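The data flow of Steps 701-702 can be sketched end to end with toy stand-ins for the three trained components. None of these functions is the disclosure's actual model; each is a placeholder (flattening, a fabricated character embedding, and a reshape) so that only the encode, fuse, decode pipeline is visible.

```python
def image_encoder(image):
    """Placeholder image encoder: flatten a 2x2 image into a feature vector."""
    return [float(p) for row in image for p in row]

def character_encoder(char):
    """Placeholder character encoder: a fabricated scalar embedding, repeated."""
    return [0.01 * ord(char)] * 4  # illustrative only, not a trained embedding

def image_decoder(features):
    """Placeholder image decoder: reshape the fused features back to 2x2."""
    return [features[0:2], features[2:4]]

def encrypt(image, wrong_char):
    """Step 701: encode image and wrong character; Step 702: fuse and decode."""
    fused = [a + b for a, b in zip(image_encoder(image),
                                   character_encoder(wrong_char))]
    return image_decoder(fused)

# Encrypting a toy "character image" with wrong character "6".
encrypted = encrypt([[0.0, 1.0], [1.0, 0.0]], "6")  # ≈ [[0.54, 1.54], [1.54, 0.54]]
```

In the disclosure, the three components are jointly trained neural networks; here the pipeline shape is the only faithful part.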
  • a dedicated neural network model is trained to generate encrypted character images; the encrypted character images output by this model can interfere with the recognition results of the neural network model used by the attacker without affecting the user's naked-eye recognition, and each encrypted character image is visually identical to the original character image.
  • with the goal that the visual recognition results of the original character image and the encrypted character image are the same, the process of training the image encoder and the image decoder is shown in FIG. 10.
  • as shown in FIG. 10, it specifically includes the following steps:
  • Step 1001 Train the self-encoding learning module.
  • the self-encoding learning module includes an encoding module (encoder) and a decoding module (decoder), which are used to reconstruct the image.
  • an image is input into the self-encoding learning module, and the encoding module first encodes the image to obtain the features of the image.
  • the features of the image can also be input into the decoding module, and the input features are decoded to reconstruct the image.
  • the training goal of the self-encoding learning module is that the loss value of the input image and the reconstructed image is small enough to ensure that there is not much difference between the input image and the reconstructed image.
  • the encoding module includes a convolution layer and a pooling layer
  • the decoding module includes an upsampling layer and a convolution layer.
  • the input image passes through convolution layers and pooling layers to obtain the features of the image (referred to as intermediate features for short).
  • the features of the image go through an upsampling layer and a convolutional layer to output a reconstructed image.
  • the loss function between the reconstructed image and the input image is denoted as Loss 1.
  • the parameters of the auto-encoding network are updated using the loss function Loss 1. After the training is completed, given an input image, the self-encoding network can output a reconstructed image with the same content as the original image.
  • the loss function is used to represent the functional relationship between the loss value of the input image compared to the reconstructed image and the neural network parameters of the encoding module, and the goal of training the self-encoding module is to minimize the loss function.
  • for example, if the neural network parameter of the encoding module is x, training seeks the value of x for which the loss function Loss 1 is smallest.
  • the neural network parameter x may represent a set of parameters or a single parameter, which is not limited in this embodiment of the present disclosure. It should be noted that the neural network parameters in the embodiments of the present disclosure include but are not limited to the weights, biases, and gradient values of the neural network.
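The reconstruction loss Loss 1 is not given a concrete form in the text; one common choice, assumed here for illustration, is the mean squared error between the input image and the reconstructed image, which training then minimizes over the network parameters x.

```python
def mse_loss(input_image, reconstructed):
    """Mean squared pixel error between two images of identical shape.

    A stand-in for Loss 1: zero for a perfect reconstruction, larger as
    the reconstructed image drifts from the input.
    """
    diffs = [(a - b) ** 2
             for row_in, row_out in zip(input_image, reconstructed)
             for a, b in zip(row_in, row_out)]
    return sum(diffs) / len(diffs)

original = [[0.0, 1.0], [1.0, 0.0]]
perfect  = [[0.0, 1.0], [1.0, 0.0]]  # ideal reconstruction
blurry   = [[0.5, 0.5], [0.5, 0.5]]  # poor reconstruction

loss_good = mse_loss(original, perfect)  # 0.0
loss_bad = mse_loss(original, blurry)    # 0.25
```

Training the self-encoding module drives this value toward its minimum over all training images.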
  • Step 1002 Determine that the encoding module is the above-mentioned image encoder.
  • the encoding module is used to extract the features of the character image, which meets the requirements of the encryption model, so the encoding module is determined as the above-mentioned image encoder.
  • Step 1003 Adjust the neural network parameters of the decoding module to obtain the above-mentioned image decoding model.
  • the neural network parameters of the decoding module can also be adjusted, and the decoding module after adjusting the parameters is the image decoding model described in the embodiments of the present disclosure.
  • the input of the image decoding model is the feature of the original character image and the feature of the wrong character fused, and the output of the image decoding model is the encrypted character image of the original character image.
  • the neural network parameters of the decoding module are adjusted to ensure that the loss value between the encrypted character image and the original character image is small enough that there is little visual difference between the original character image and the encrypted character image.
  • the adjustment of the neural network parameters of the decoding module includes the following steps:
  • the fused features of the original character image and the wrong character are input into the decoding module, and the decoding module decodes an image from the input features, that is, the above-mentioned reconstructed image.
  • for example, denote the fused feature of the original character image and the wrong character as v; v is input into the decoding module to obtain the reconstructed image.
  • the loss value of the reconstructed image compared to the original character image is denoted Loss(x).
  • when the neural network parameters of the decoding module change, Loss(x) changes as well.
  • when Loss(x) reaches its minimum value, that is, the above-mentioned preset threshold, the current decoding module is determined to be the above-mentioned image decoding model, which is used to output encrypted character images.
  • the server may train the character encoder to ensure that the feature fused with the feature of the original character image will interfere with the recognition of the attacker's neural network model.
  • as shown in FIG. 14, it specifically includes the following steps:
  • Step 1401 Train a character encoding module.
  • the character encoding model includes an embedding layer for encoding characters as features.
  • a deep learning method can be used to train the initial character encoding module.
  • the training data set includes a large number of character images and labels, wherein the labels can be considered as characters.
  • the character image is used as the input of the neural network model, and the cross-entropy loss between the output result and the real label of the character image is calculated.
  • the parameters of the neural network model are adjusted to minimize the cross-entropy loss to obtain a stable character encoding module that can accurately identify the characters corresponding to the character images.
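The cross-entropy training signal for the character classification step above can be sketched as follows, assuming the model outputs a probability distribution over the character classes (e.g., via softmax); the probability values below are illustrative.

```python
import math

def cross_entropy(probs, true_index):
    """Negative log-probability the classifier assigns to the true character.

    Small when the model is confident in the correct label, large otherwise;
    minimizing it over the training set yields a stable character recognizer.
    """
    return -math.log(probs[true_index])

# Illustrative classifier output for a character image of "4" over classes 0-9.
probs = [0.01, 0.01, 0.02, 0.05, 0.80, 0.03, 0.04, 0.02, 0.01, 0.01]
loss = cross_entropy(probs, 4)  # -log(0.80) ≈ 0.223
```

A confident correct prediction (probability near 1) drives this loss toward zero.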
  • Step 1402 Input the erroneous character into the character encoding module to obtain the feature of the erroneous character, fuse the feature of the erroneous character with the feature of the original character image and then input it into the above-mentioned image decoder to obtain a reconstructed image.
  • in order to ensure that the encrypted character image is recognized as the wrong character by the neural network model, the image can be reconstructed from the features fused with the wrong character during iterative training, so that the neural network recognition result of the reconstructed image keeps approaching the wrong character.
  • Step 1403 Adjust the neural network parameters of the character encoding module according to the difference between the neural network recognition result of the reconstructed image and the wrong character, until the neural network recognition result of the reconstructed image is the wrong character.
  • the reconstructed image is input into the character image classification model, the second loss function is determined according to the output of the character image classification model, and the neural network parameters of the initial character encoding model are adjusted until the second loss function reaches its minimum value, yielding a stable character encoder.
  • the character image classification model is used to identify the characters corresponding to the character images, the input is the character image, and the output is the character.
  • the reconstructed image is an image obtained by inputting the above-mentioned image decoding model after fusing the features of the original character image and the features of the wrong characters.
  • the second loss function is used to characterize the functional relationship between the loss value of the character image classification model's output compared to the wrong character and the neural network parameters of the character encoding model.
  • the neural network parameters of the character encoding model can be embedding layer parameters.
  • when the neural network parameters of the character encoding model change, the value of the second loss function also changes; that is, the loss value between the output of the character image classification model and the wrong character also changes.
  • the attacker usually uses the character image classification model to identify the characters corresponding to the character images.
  • by adjusting the neural network parameters of the character encoding model, the loss value between the output of the character image classification model and the wrong character is reduced.
  • the recognition result of the encrypted character image by the character image classification model is close to the wrong character, so as to effectively interfere with the recognition result of the character image classification model used by the attacker.
  • when the recognition result of the encrypted character image by the character image classification model is closest to the wrong character, the final character encoding model can be determined from the current neural network parameters; it is used to encode wrong characters into character features.
  • the second character (the wrong character) is input into the character encoding model to obtain the features of the second character; this ensures that the fusion of the features of the second character with the features of the above keyboard character image will effectively interfere with the character image classification model used by the attacker.
  • Step 1404 Determine that the adjusted character encoding module is the character encoder.
  • the character features output by the adjusted character encoding module help steer the neural network recognition result of the reconstructed image toward the wrong character, which meets the requirements of the above-mentioned image encryption model.
  • the features of the wrong characters are fused with the features of the original character images.
  • the character encoding model can also be trained, and the neural network parameters of the character encoding model can be adjusted to ensure that the recognition result of the neural network model used by the attacker is different from the real characters corresponding to the character image.
  • the following describes the keyboard encryption method provided by the embodiments of the present disclosure with reference to specific examples. Specifically, it includes the model training process and the application process.
  • the training process includes the following steps:
  • Step a: "Original character image 4" is input into the encoder to obtain the feature v1 of the character image.
  • Step b: a wrong character "6" is determined and input into the character encoding model to obtain the feature v2 of the character "6".
  • Step c: the feature v3 is obtained by fusing the feature v1 and the feature v2.
  • Step d: the feature v3 is input into the decoder to obtain a reconstructed image.
  • Step e: the reconstructed image is input into the character image classification model to obtain a predicted character.
  • Step f: iterative training updates the embedding layer parameters of the character encoding model and the neural network parameters of the decoder to obtain a stable character encoding model and decoder.
  • the above application process includes: inputting the original character image into the encoder to obtain the feature of the original character image, and inputting the wrong character into the character encoding model to obtain the feature of the wrong character.
  • the feature of the original character image and the feature of the wrong character are fused and input to the image decoder, and the encrypted character image is output.
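The application process above (encode the original image, encode the wrong character, fuse, decode) can be sketched as a toy pipeline. Everything here — the flatten-and-scale "encoder", the fixed per-character "embedding", fusion by element-wise addition, and the reshape "decoder" — is an illustrative assumption standing in for the patent's trained networks, not the actual model.

```python
# Toy sketch of the application process: encode the original character image,
# encode the wrong character, fuse the two feature vectors, and "decode" the
# fusion into an encrypted image. All components are illustrative stand-ins.

def image_encoder(image):
    # Stand-in for the CNN image encoder: flatten and scale the pixels.
    return [px / 255.0 for row in image for px in row]

def char_encoder(char, feat_dim):
    # Stand-in for the embedding layer: a fixed per-character vector.
    base = (ord(char) % 7) / 10.0
    return [base] * feat_dim

def fuse(v1, v2):
    # Element-wise addition, one of the fusion rules mentioned in the text.
    return [a + b for a, b in zip(v1, v2)]

def image_decoder(feat, height, width):
    # Stand-in for the image decoder: reshape features back into a pixel grid.
    vals = [min(255, max(0, round(f * 255))) for f in feat]
    return [vals[r * width:(r + 1) * width] for r in range(height)]

def encrypt(image, wrong_char):
    v1 = image_encoder(image)                 # features of the original image
    v2 = char_encoder(wrong_char, len(v1))    # features of the wrong character
    v3 = fuse(v1, v2)                         # fused feature
    return image_decoder(v3, len(image), len(image[0]))

original = [[0, 255], [255, 0]]      # a 2x2 stand-in for "original character image 4"
encrypted = encrypt(original, "6")   # inject the wrong character "6"
```

In the real model the decoder is trained so that `encrypted` stays visually identical to `original` while a classifier reads it as "6"; this sketch only shows the data flow.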
  • the verification code set by the server during APP registration is 480
  • the electronic device displays a keyboard constructed from character images encrypted by the server.
  • the user can accurately identify the characters "4", "8" and "0", and enter the verification code "480" exactly.
  • the neural network model used by the black industry (for example, a character image classification model) cannot recognize the real characters "4", "8" and "0".
  • the character "4" is recognized by the attacker as the character "6"
  • the character "6" is incorrectly recognized as the character "4"
  • the character "3" is recognized as the character "8"
  • the character "9" is recognized as the character "0"
  • the verification code entered by the attacker-controlled clicker is "639", so registration cannot succeed. It can be seen that the present disclosure can effectively block malicious registration behavior.
  • although the steps in the flowcharts of FIGS. 3, 7, 10 and 14 are displayed sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in FIGS. 3, 7, 10 and 14 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same time and may be executed at different times, and their execution order is not necessarily sequential: they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
  • the embodiment of the present disclosure provides a computer device, which may be the server 20 described in the embodiment of the present disclosure.
  • the computer device includes: an encryption unit 1701 and a construction unit 1702 .
  • the encryption unit 1701 is used to perform feature extraction on the keyboard character image, determine the features of the keyboard character image, determine the fusion feature according to the feature of the keyboard character image and the feature of the wrong keyboard character, and determine the encrypted keyboard character image according to the fusion feature;
  • the character is different from the character corresponding to the keyboard character image;
  • the construction unit 1702 is configured to construct an encrypted keyboard based on the encrypted keyboard character image; the neural network recognition result of the encrypted keyboard character image is an incorrect keyboard character, and the encrypted keyboard character image and the visual recognition result of the keyboard character image are the same.
  • the encryption unit 1701 is specifically used to input the keyboard character image into the image encoder of the image encryption model to obtain the characteristics of the keyboard character image, and input the wrong keyboard character into the character encoder of the image encryption model to obtain the feature of the wrong keyboard character;
  • the features of the keyboard character image and the features of the wrong keyboard character are fused and input into the image decoder of the image encryption model to obtain the encrypted keyboard character image.
  • the computer device further includes a training unit 1703 .
  • the training unit 1703 is configured to train an image encryption model, where the image encryption model includes an image encoder, an image decoder and a character encoder.
  • the training process of the image encryption model includes: training the image encoder and the image decoder with the model training target that the visual recognition results of the original character image and the encrypted character image are the same; and training the character encoder with the model training target that the neural network recognition result of the encrypted character image is a wrong character; the wrong character is different from the character corresponding to the original character image.
  • the training unit 1703 is used to train the image self-encoder;
  • the image self-encoder includes an encoding module and a decoding module;
  • the encoding module is used to determine the features of the character image input into the encoder; the decoding module is used to reconstruct the character image from the features input into the decoding module;
  • the image decoder is used to output the encrypted character image of the original character image; the difference between the encrypted character image of the original character image and the original character image is less than or equal to a preset threshold, and the preset threshold is used to ensure that the visual recognition results of the encrypted character image and the original character image are the same.
  • the training unit 1703 is specifically used to fuse the features of the original character image and the features of the wrong characters and then input them into the decoder to obtain a reconstructed image;
  • the first loss function is used to characterize the functional relationship between the loss value of the original character image relative to the reconstructed image and the neural network parameters of the decoding model.
  • the training unit 1703 is also used to train the character encoding module, input the wrong character into the character encoding module, and obtain the feature of the wrong character; the character encoding module is used to extract the feature of the character;
  • the feature of the wrong character is fused with the feature of the original character image and then input to the image decoder to obtain the reconstructed image;
  • the adjusted character encoding module is a character encoder.
  • the training unit 1703 is specifically configured to input the reconstructed image into the character image classification model and determine a second loss function from the output of the character image classification model; the second loss function is used to characterize the functional relationship between the loss value of the output of the character image classification model relative to the wrong character and the neural network parameters of the character encoder;
  • the input of the character image classification model is a character image, and the output of the character image classification model is a character; the neural network parameters of the character encoder are adjusted until the second loss function attains its minimum value.
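The second-loss training step above can be sketched numerically: freeze a toy classifier and update only the wrong character's embedding (the character encoder's parameters) by gradient descent, so the classifier's output on the fused feature moves toward the wrong label. The 3-class weight matrix, feature sizes, and learning rate are illustrative assumptions, not the patent's actual network.

```python
# Minimal numeric sketch of the second loss: cross-entropy between the frozen
# classifier's output and the *wrong* label, minimized over the embedding only.
import math

W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # frozen classifier weights (3 classes)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def loss_and_grad(img_feat, emb, wrong_idx):
    fused = [a + b for a, b in zip(img_feat, emb)]   # feature fusion
    logits = [sum(w * f for w, f in zip(row, fused)) for row in W]
    probs = softmax(logits)
    loss = -math.log(probs[wrong_idx])               # CE to the wrong label
    # Gradient of the loss w.r.t. the embedding (chain rule through W).
    dlogits = [p - (1.0 if k == wrong_idx else 0.0) for k, p in enumerate(probs)]
    grad = [sum(dlogits[k] * W[k][j] for k in range(len(W))) for j in range(len(emb))]
    return loss, grad

img_feat = [1.0, 0.0]      # stand-in for the original image's features
emb = [0.0, 0.0]           # wrong character's embedding, the trainable part
before, grad = loss_and_grad(img_feat, emb, wrong_idx=1)
emb = [e - 0.5 * g for e, g in zip(emb, grad)]       # one SGD step on the embedding
after, _ = loss_and_grad(img_feat, emb, wrong_idx=1)
```

After the step, `after < before`: the classifier's probability of the wrong character has increased, which is exactly the direction the second loss pushes the character encoder.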
  • the training of the image encryption model can be implemented by computer equipment, for example, by the above-mentioned training unit 1703, or other equipment can train the image encryption model and send the trained image encryption model to the computer equipment.
  • FIG. 19 is a block diagram of a server 1900 according to an exemplary embodiment.
  • server 1900 includes processing component 1920, which further includes one or more processors, and memory resources represented by memory 1922 for storing instructions or computer programs, such as application programs, executable by processing component 1920.
  • An application program stored in memory 1922 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 1920 is configured to execute instructions to perform the keyboard encryption method described above.
  • the server 1900 may also include a power component 1924 configured to perform power management of the device 1900, a wired or wireless network interface 1926 configured to connect the device 1900 to a network, and an input output (I/O) interface 1928.
  • Server 1900 may operate based on an operating system stored in memory 1922, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
  • the present disclosure also provides a computer program product that, when executed by a processor, can implement the above method.
  • the computer program product includes one or more computer instructions. When these computer instructions are loaded and executed on a computer, some or all of the above methods can be implemented in whole or in part according to the processes or functions described in the embodiments of the present disclosure.
  • Embodiments of the present disclosure also provide a non-transitory computer-readable storage medium including instructions, such as a memory including instructions; the instructions can be executed by a processor of a computer device (e.g., the aforementioned server) to complete the above method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • Embodiments of the present disclosure also provide a storage medium including instructions, for example, a memory including instructions, and the above-mentioned instructions can be executed by a processor of a server to complete the above-mentioned method.
  • the storage medium may be a non-transitory computer-readable storage medium such as ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • any reference to memory, storage, database, or other media used in the various embodiments provided by the embodiments of the present disclosure may include at least one of non-volatile and volatile memory.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioethics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Input From Keyboards Or The Like (AREA)
  • Character Discrimination (AREA)

Abstract

Embodiments of the present disclosure relate to a keyboard encryption method, device, storage medium and computer program product. The method includes: performing feature extraction on a keyboard character image to determine features of the keyboard character image, determining a fused feature from the features of the keyboard character image and the features of a wrong keyboard character, and determining an encrypted keyboard character image from the fused feature, where the wrong keyboard character is different from the character corresponding to the keyboard character image; and constructing an encrypted keyboard based on the encrypted keyboard character image. The neural network recognition result of the encrypted keyboard character image is the wrong keyboard character, while the visual recognition results of the encrypted keyboard character image and the keyboard character image are the same. This method improves keyboard security and effectively blocks malicious registration behavior.

Description

Keyboard encryption method, device, storage medium and computer program product
Cross-Reference
This application claims priority to Chinese Patent Application No. 202110069624.9, filed on January 19, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of communication technology, and in particular to a keyboard encryption method, device, storage medium and computer program product.
Background
With the rapid development of communication and electronic technology, smartphones have been widely adopted. A smartphone can install various applications (APPs) that provide users with diverse Internet services; when registering for or logging in to an APP, a user typically uses a numeric keyboard to enter a username, password and verification code.
However, malicious actors (the "black industry") can often control clickers to tap the numeric keyboard and perform malicious registrations, disrupting the normal operation of enterprises. Even if the smartphone uses a randomized numeric keyboard, a malicious attacker can use a character recognition algorithm to identify the characters on the numeric keyboard and register maliciously.
Summary
Embodiments of the present disclosure provide a keyboard encryption method, device, storage medium and computer program product that can improve keyboard security and effectively block malicious registration behavior.
In a first aspect, an embodiment of the present disclosure provides a keyboard encryption method, including:
performing feature extraction on a keyboard character image to determine features of the keyboard character image, determining a fused feature from the features of the keyboard character image and the features of a wrong keyboard character, and determining an encrypted keyboard character image from the fused feature; the wrong keyboard character is different from the character corresponding to the keyboard character image;
constructing an encrypted keyboard based on the encrypted keyboard character image; the neural network recognition result of the encrypted keyboard character image is the wrong keyboard character, and the visual recognition results of the encrypted keyboard character image and the keyboard character image are the same.
In a second aspect, an embodiment of the present disclosure provides a computer device, including:
an encryption unit configured to perform feature extraction on a keyboard character image, determine the features of the keyboard character image, determine a fused feature from the features of the keyboard character image and the features of a wrong keyboard character, and determine an encrypted keyboard character image from the fused feature; the wrong keyboard character is different from the character corresponding to the keyboard character image;
a construction unit configured to construct an encrypted keyboard based on the encrypted keyboard character image; the neural network recognition result of the encrypted keyboard character image is the wrong keyboard character, and the visual recognition results of the encrypted keyboard character image and the keyboard character image are the same.
In a third aspect, an embodiment of the present disclosure provides a server including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the method of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product including a computer program that, when executed by a processor, implements the method of the first aspect.
With the keyboard encryption method, computer device and storage medium provided by the embodiments of the present disclosure, the features of a wrong character (e.g., the aforementioned wrong keyboard character) can be injected into an original character image (e.g., the aforementioned keyboard character image) to interfere with the recognition result of the attacker's neural network model, helping the server identify machine registration behavior, effectively blocking malicious registration, and improving keyboard security. Meanwhile, the loss value between the encrypted character image and the original character image is kept below a preset value, ensuring that the visual recognition results of the two are the same, i.e., the encrypted character image still looks like the original character image to the human eye and does not affect the user's visual recognition.
Brief Description of the Drawings
FIG. 1 is a diagram of an application environment of the keyboard encryption method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of keyboard use provided by an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of the keyboard encryption method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of character images provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the transmission of keyboard image information provided by an embodiment of the present disclosure;
FIG. 6 is another schematic diagram of the transmission of keyboard image information provided by an embodiment of the present disclosure;
FIG. 7 is another schematic flowchart of the keyboard encryption method provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of the image encryption model provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the encryption effect provided by an embodiment of the present disclosure;
FIG. 10 is a schematic flowchart of model training provided by an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of the neural network model provided by an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of the loss function provided by an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of model training provided by an embodiment of the present disclosure;
FIG. 14 is another schematic flowchart of model training provided by an embodiment of the present disclosure;
FIG. 15 is a schematic diagram of the training data set provided by an embodiment of the present disclosure;
FIG. 16 is a schematic diagram of the encryption effect provided by an embodiment of the present disclosure;
FIG. 17 is a structural block diagram of the computer device provided by an embodiment of the present disclosure;
FIG. 18 is another structural block diagram of the computer device provided by an embodiment of the present disclosure;
FIG. 19 is an internal structure diagram of the computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, the embodiments of the present disclosure are described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain, not to limit, the embodiments of the present disclosure.
First, before the technical solutions of the embodiments of the present disclosure are introduced in detail, the technical background or technical evolution on which they are based is introduced. Normally, when registering for or logging in to an APP, a user needs to enter a username, password and verification code to help the server identify machine behavior and thereby block malicious machine registrations. Against this background, the applicant found that even using a randomized keyboard to shuffle the positions of the characters cannot resist malicious registration: the attacker can still use a character recognition algorithm to identify the character at each position and register maliciously. How to resist malicious registration has become an urgent problem. To solve this problem, the applicant made considerable creative effort and proposed the technical solutions introduced in the following embodiments.
The technical solutions of the embodiments of the present disclosure are introduced below in combination with the scenarios to which they are applied.
The keyboard encryption method provided by the embodiments of the present disclosure can be applied in the application environment shown in FIG. 1, in which an electronic device 10 communicates with a server 20 over a network. The electronic device can install APPs that provide users with various Internet services, e.g., ride-hailing services or online shopping services. The server 20 may be the APP's application server, supporting the back-end implementation of the services provided by the APP.
In specific implementations, the electronic device 10 may be, but is not limited to, a personal computer, laptop, smartphone, tablet or portable wearable device; the server 20 may be implemented as a standalone server or a server cluster composed of multiple servers.
Referring to FIG. 2, when registering in an APP or on a web page, the user can enter a username, password and verification code through the keyboard (e.g., a numeric keyboard) of the electronic device 10. The electronic device 10 can send the username, password and verification code entered by the user to the server 20, and the server 20 can complete the registration accordingly.
At present, an attacker can control a clicker to tap the numeric keyboard and enter a username, password and verification code in sequence for malicious registration. To improve keyboard security and effectively resist malicious registration, an embodiment of the present disclosure provides a keyboard encryption method. The method applies to the system shown in FIG. 1, and the executing entity may be the server 20 in that system. As shown in FIG. 3, the method includes the following steps:
Step 301: Perform feature extraction on a keyboard character image to determine the features of the keyboard character image.
The keyboard character image is a character image included in a virtual keyboard, e.g., a character image of a numeric keyboard. To effectively resist malicious registration, the embodiments of the present disclosure aim to add an interference feature to the features of the original character image and then generate the encrypted character image from the perturbed features. When encrypting a character image, the server can first extract its features so that they can be fused with the interference feature.
For example, each keyboard character image included in the keyboard can be encrypted to improve keyboard security. When encrypting each keyboard character image, feature extraction is first performed on it to determine its features.
Each keyboard character image corresponds to one character, which may be an Arabic numeral, an English letter or a punctuation mark; the embodiments of the present disclosure place no restriction on this. Referring to FIG. 4, "character image 1" corresponds to the character "1", and "character image 2" corresponds to the character "2". It should be noted that the character corresponding to a character image may be called the character image's label.
In one possible implementation, an autoencoder (AE) can be used to obtain the features of the keyboard character image. An AE includes an encoder and a decoder: the encoder's input is an image and its output is the image's features; the decoder's input can be the features output by the encoder, and its output is an image reconstructed from the input features. In step 301, the server can use the encoder to extract the features of the keyboard character image: the keyboard character image is input into the encoder, and the encoder's output is the features of the keyboard character image.
Step 302: Determine a fused feature from the features of the keyboard character image and the features of a wrong keyboard character.
The wrong keyboard character is different from the character the keyboard character image actually corresponds to.
When adding interference to the original keyboard character image, the server can designate a wrong keyboard character so as to steer the recognition result of the attacker's neural network model toward that wrong keyboard character. For example, when encrypting a keyboard character image, a wrong keyboard character different from the keyboard character image's character is designated. The fused feature can then be determined from the features of the keyboard character image and the features of the wrong keyboard character, i.e., the features of the keyboard character image and the features of the second character are fused.
For example, the server can use a feature fusion algorithm to process the features of the keyboard character image and the features of the wrong keyboard character to obtain the above fused feature. Feature fusion algorithms include, but are not limited to, element-wise addition, element-wise multiplication and the like; the embodiments of the present disclosure place no restriction on this.
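The fusion step above (addition, multiplication, and so on) can be sketched as simple element-wise operations on equal-length feature vectors. The vectors below are illustrative; the text does not fix a particular fusion rule.

```python
# Two of the fusion rules mentioned in the text, as element-wise operations.

def fuse_add(v1, v2):
    # Element-wise addition of the image feature and the character feature.
    return [a + b for a, b in zip(v1, v2)]

def fuse_mul(v1, v2):
    # Element-wise multiplication, another possible fusion rule.
    return [a * b for a, b in zip(v1, v2)]

image_feat = [0.2, 0.8, 0.5]   # stand-in features of the keyboard character image
char_feat = [0.1, 0.1, 0.1]    # stand-in features of the wrong keyboard character
fused = fuse_add(image_feat, char_feat)   # ≈ [0.3, 0.9, 0.6]
```

In practice the fusion could equally be concatenation followed by a learned projection; the patent leaves the choice open.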
Step 303: Determine the encrypted keyboard character image from the fused feature, and construct an encrypted keyboard based on the encrypted keyboard character images.
The embodiments of the present disclosure effectively resist the attacker's malicious registration behavior by interfering with the recognition result of the attacker's neural network model. The encrypted keyboard character image is visually indistinguishable from the original keyboard character image, i.e., to the human eye the two look the same. But when a neural network recognizes the encrypted keyboard character image, the recognition result is different from the original character.
For example, the neural network recognition result of the encrypted keyboard character image is the second character (the wrong character), while its visual recognition result is still the first character (the true character).
In one possible implementation, to ensure that the visual recognition result of the encrypted keyboard character image remains the character the image truly corresponds to, the difference between the encrypted keyboard character image and the original keyboard character image is less than or equal to a preset threshold, which is the minimum loss value that ensures the visual recognition results of the original and encrypted keyboard character images are the same. The preset threshold may be set empirically or obtained by iteratively training the neural network model.
After encrypting each keyboard character image, the server can also construct a keyboard based on the encrypted keyboard character images; the keyboard includes multiple different keyboard character images. For example, referring to FIG. 5, after constructing the keyboard, the server can send the keyboard's image information to the electronic device, which decodes it and displays the encrypted keyboard. Referring to FIG. 5, the numeric keyboard includes different keyboard character images, e.g., the keyboard character image corresponding to the character "1", the keyboard character image corresponding to the character "2", and so on. All the keyboard character images it contains are encrypted; the encrypted character images contain the features of wrong characters, which is enough to interfere with the recognition result of the attacker's neural network model, while the visual recognition result remains the true character, so introducing the wrong-keyboard-character interference does not affect recognition by the user's eyes.
For example, referring to FIG. 6, after encrypting each keyboard character image, the server sends the image information of the character images to the electronic device. The electronic device can determine individual character images from the image information and combine multiple character images in a specific arrangement, or combine multiple encrypted character images at random, to construct the encrypted keyboard.
In the method shown in FIG. 3, the server can inject the features of a wrong keyboard character into the original keyboard character image to interfere with the recognition result of the attacker's neural network model, helping the server identify machine registration behavior, effectively blocking malicious registration, and improving keyboard security. Meanwhile, the visual recognition results of the encrypted keyboard character image and the original keyboard character image are the same, i.e., the encrypted character image still looks like the original character image to the human eye and does not affect the user's visual recognition.
In the method provided by the embodiments of the present disclosure, the server can encrypt the original keyboard character image using an encryption model. Referring to FIG. 7, this specifically includes the following steps:
Step 701: Input the keyboard character image into the image encoder of the image encryption model to obtain the features of the keyboard character image, and input the wrong keyboard character into the character encoder of the image encryption model to obtain the features of the wrong keyboard character.
When encrypting the keyboard image, a wrong keyboard character can be designated. The image encoder is first used to extract the features of the keyboard character image, and the character encoder can be used to extract the features of the wrong keyboard character.
In specific implementations, the server can also train the image encryption model, which includes the image encoder, the image decoder and the character encoder.
The image encryption model is used to add the features of wrong characters to the original character image so as to interfere with the neural network model's recognition of the character image. For example, the keyboard character images can be encrypted to interfere with the recognition result of the neural network model used by the black industry.
For example, referring to FIG. 8, the image encoder is used to extract the features of a character image; its input is a character image and its output is the image's features. The character encoder is used to extract the features of a character; its input is a character and its output is the character's features. The image decoder is used to generate the encrypted character image; its input is the fusion of the original character image's features and the wrong character's features, and its output is the encrypted character image.
In one possible implementation, the image encoder and the image decoder are trained with the model training target that the visual recognition results of the original character image and the encrypted character image are the same; the character encoder is trained with the model training target that the neural network recognition result of the encrypted character image is a wrong character; the wrong character is different from the character corresponding to the original character image.
Step 702: Fuse the features of the keyboard character image with the features of the wrong keyboard character and input the result into the image decoder of the image encryption model to obtain the encrypted keyboard character image.
For example, referring to FIG. 9, "original keyboard character image 4" passes through the image encoder, which outputs the features of "original keyboard character image 4"; these features are fused with the features of the wrong keyboard character "6" and input into the image decoder, which outputs the encrypted keyboard character image. To the human eye, the encrypted keyboard character image still reads as the character "4", but when a deep learning method recognizes it, the result is "6". This causes the attacker's automatic recognition algorithm to misrecognize the character, achieving the encryption effect.
In the method shown in FIG. 7, a special neural network model is trained to generate encrypted character images. The encrypted character images output by this model can interfere with the recognition result of the neural network model used by the attacker without affecting recognition by the user's eyes: the encrypted character image and the original character image are visually identical.
In the method provided by the embodiments of the present disclosure, the image encoder and the image decoder are trained with the model training target that the visual recognition results of the original character image and the encrypted character image are the same; the procedure is shown in FIG. 10 and specifically includes the following steps:
Step 1001: Train the autoencoder learning module.
The autoencoder learning module includes an encoding module (encoder) and a decoding module (decoder) and is used to reconstruct images. For example, an image is input into the autoencoder learning module; the encoding module first encodes the image to obtain its features, and the image's features can then be input into the decoding module, which decodes the input features to reconstruct the image. The training target of the autoencoder learning module is that the loss value between the input image and the reconstructed image is sufficiently small, ensuring that the input image and the reconstructed image do not differ visually by much.
For example, referring to FIG. 11, the encoding module includes convolution and pooling layers, and the decoding module includes upsampling and convolution layers. The input image passes through the convolution and pooling layers, yielding the image's features (which may simply be called intermediate features). The image's features pass through the upsampling and convolution layers, producing the reconstructed image. The loss function between the reconstructed image and the input image is denoted Loss 1, and Loss 1 is used to update the parameters of the autoencoder network. After training is completed, given an input image, the autoencoder network can output a reconstructed image whose content is consistent with the original image.
In one possible implementation, the loss function is used to characterize the functional relationship between the loss value of the input image relative to the reconstructed image and the neural network parameters of the encoding module; the training target of the autoencoder module is to minimize the loss function. For example, referring to FIG. 12, the loss function Loss 1 can characterize the functional relationship between the encoding module's neural network parameters and the loss value; when the encoding module's neural network parameter is x, the value of Loss 1 is minimal. The neural network parameter x may denote a set of parameters or a single parameter; the embodiments of the present disclosure place no restriction on this. It should be noted that in the embodiments of the present disclosure, neural network parameters include but are not limited to the network's weights, biases, gradient values, etc.
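The Loss 1 objective above can be sketched numerically with a one-parameter "network" whose reconstruction is `w * x`: the mean squared error between the input image and its reconstruction shrinks as gradient descent adjusts the parameter. The toy pixels, learning rate, and single-weight model are illustrative stand-ins for the conv/pool/upsample stack.

```python
# Numeric sketch of Loss 1: MSE between the input image and its reconstruction,
# driven toward its minimum by gradient descent on the autoencoder parameter w.

def loss1(w, pixels):
    # MSE between the input pixels x and the reconstruction w * x.
    return sum((x - w * x) ** 2 for x in pixels) / len(pixels)

pixels = [0.1, 0.9, 0.4, 0.7]   # a flattened toy input image
w = 0.2                         # initial parameter, far from the optimum w = 1
for _ in range(50):             # gradient descent on Loss 1
    grad = sum(2 * (w * x - x) * x for x in pixels) / len(pixels)
    w -= 0.5 * grad
final = loss1(w, pixels)        # near zero: reconstruction ≈ input image
```

The minimum at `w = 1` plays the role of the parameter `x` in FIG. 12 at which Loss 1 bottoms out.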
Step 1002: Determine the encoding module as the above image encoder.
The encoding module is used to extract the features of character images and meets the requirements of the encryption model, so the encoding module is determined as the above image encoder.
Step 1003: Adjust the neural network parameters of the decoding module to obtain the above image decoding model.
After the autoencoder module is trained, the neural network parameters of the decoding module can be further adjusted; the decoding module with adjusted parameters is the image decoding model described in the embodiments of the present disclosure. The input of the image decoding model is the fusion of the original character image's features and the wrong character's features, and its output is the encrypted character image of the original character image.
The neural network parameters of the decoding module are adjusted to ensure that the loss value between the encrypted character image and the original character image is sufficiently small, guaranteeing that the original character image and the encrypted character image do not differ visually by much.
In one possible implementation, adjusting the neural network parameters of the decoding module includes the following steps:
S1: Fuse the features of the original character image with the features of the wrong character and input the result into the decoding module to obtain a reconstructed image.
Specifically, the fusion vector of the original character image's features and the wrong character's features is input into the decoding module, which can decode an image from the input features, i.e., the above reconstructed image.
S2: Determine a first loss function, which is used to characterize the functional relationship between the loss value of the original character image relative to the reconstructed image and the neural network parameters of the initial image decoding model.
The character corresponding to the original character image is different from the wrong character.
S3: Adjust the neural network parameters of the above image decoding module until the value of the first loss function reaches the above preset threshold, and determine the image decoding module as the image decoding model.
For example, referring to FIG. 13, the fusion of the original character image's features and the wrong character's features is v; the feature v is input into the image decoding module to obtain the reconstructed image. The loss value of the original character image relative to the reconstructed image is Loss(x); when the neural network parameters of the image decoding module change, Loss(x) changes accordingly. When Loss(x) attains its minimum value, i.e., the above preset threshold, the current image decoding module is determined as the above image decoding model, used to output encrypted character images.
In the embodiments of the present disclosure, the server can train the character encoder to ensure that the features fused with the features of the original character image interfere with the recognition of the attacker's neural network model. Referring to FIG. 14, this specifically includes the following steps:
Step 1401: Train the character encoding module.
The character encoding model includes an embedding layer used to encode characters into features.
In one possible implementation, a deep learning method can be used to train the initial character encoding module. Referring to FIG. 15, the training data set includes a large number of character images and labels, where a label can be regarded as a character. During training, the character images are used as input to the neural network model, and the cross-entropy loss between the output result and the character images' true labels is computed. The model's parameters are adjusted with minimizing the cross-entropy loss as the optimization target, yielding a stable character encoding module that can accurately identify the characters corresponding to character images.
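The cross-entropy objective mentioned above can be sketched numerically: the loss between a classifier's predicted distribution over characters and the image's true label is small when the model puts high probability on the correct character, and large otherwise. The probability vectors are illustrative, not outputs of the patent's model.

```python
# Cross-entropy (negative log-likelihood of the true label) for one training image.
import math

def cross_entropy(probs, label_idx):
    # Lower is better: the loss vanishes as probs[label_idx] approaches 1.
    return -math.log(probs[label_idx])

# Predicted distributions over three characters for an image whose true label
# is the second class (index 1).
good_model = [0.05, 0.90, 0.05]   # confident and correct
bad_model = [0.60, 0.20, 0.20]    # confident and wrong

low = cross_entropy(good_model, 1)    # ≈ 0.105
high = cross_entropy(bad_model, 1)    # ≈ 1.609
```

Minimizing this quantity over the training set is what drives the character encoding module toward accurate recognition.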
Step 1402: Input the wrong character into the character encoding module to obtain the wrong character's features; fuse the wrong character's features with the original character image's features and input the result into the above image decoder to obtain a reconstructed image.
In the embodiments of the present disclosure, to ensure that the neural network model recognizes the encrypted character image as the wrong character, an image can first be reconstructed from the features fused with the wrong character's features, and iterative training makes the neural network result for the reconstructed image approach the wrong character ever more closely.
Step 1403: Adjust the neural network parameters of the character encoding module according to the difference between the neural network recognition result of the reconstructed image and the wrong character, until the neural network recognition result of the reconstructed image is the wrong character.
In specific implementations, the reconstructed image is input into the character image classification model, a second loss function is determined from the output of the character image classification model, and the neural network parameters of the initial character encoding model are adjusted until the second loss function attains its minimum value, yielding a stable character encoder.
The character image classification model is used to identify the character corresponding to a character image; its input is a character image and its output is a character. The reconstructed image is obtained by fusing the features of the original character image with the features of the wrong character and inputting the result into the above image decoding model. The second loss function is used to characterize the functional relationship between the loss value of the classification model's output relative to the wrong character and the neural network parameters of the character encoding model. The neural network parameters of the character encoding model may be the embedding layer parameters.
It should be noted that when the neural network of the character encoding model changes, the value of the second loss function changes as well, i.e., the loss value between the output of the character image classification model and the wrong character also changes. An attacker typically uses a character image classification model to identify the characters corresponding to character images; in the method of the embodiments of the present disclosure, the neural network parameters of the character encoding model are adjusted to reduce the loss value between the classification model's output and the wrong character, so that the classification model's recognition of the encrypted character image approaches the wrong character, effectively interfering with the recognition result of the character image classification model used by the attacker.
When the second loss function attains its minimum value, the classification model's recognition of the encrypted character image is closest to the wrong character; the final character encoding model can be determined from the current neural network parameters and used to encode wrong characters and obtain their features.
For example, the second character is input into the character encoding model to obtain the second character's features, ensuring that once the second character's features are fused with the features of the above keyboard character image, they effectively interfere with the character image classification model used by the attacker.
Step 1404: Determine the adjusted character encoding module as the character encoder.
It should be noted that the character features output by the adjusted character encoding module help steer the recognition result of the reconstructed image toward the wrong character, meeting the requirements of the above image encryption model.
In the method shown in FIG. 14, to interfere with the recognition result of the neural network model used by the attacker, the features of the wrong character are fused into the features of the original character image. In addition, the character encoding model can be trained and its neural network parameters adjusted to ensure that the recognition result of the neural network model used by the attacker is different from the true character corresponding to the character image.
The keyboard encryption method provided by the embodiments of the present disclosure is described below with a specific example, covering the model training process and the application process. The training process includes the following steps:
Step a: "Original character image 4" is input into the encoder to obtain the character image's feature v1.
Step b: A wrong character "6" is determined, and the character "6" is input into the character encoding model to obtain the feature v2 of the character "6".
Step c: The feature v3 is obtained by fusing the feature v1 and the feature v2.
Step d: The feature v3 is input into the decoder to obtain a reconstructed image.
Step e: The reconstructed image is input into the character image classification model to obtain a predicted character.
Step f: Iterative training updates the embedding layer parameters of the character encoding model and the neural network parameters of the decoder, yielding a stable character encoding model and decoder.
For example, the loss function between the reconstructed image and "original character image 4" is computed; with minimizing this loss function as the optimization target, the decoder's neural network parameters are adjusted so that the loss value between the reconstructed image and "original character image 4" is minimal, i.e., to the human eye both look like the character image corresponding to the character "4".
The cross-entropy loss between the predicted character and the designated wrong character "6" is computed; with minimizing this loss function as the optimization target, the embedding layer parameters of the character encoding model are adjusted so that the recognition result of the character image classification model approaches the wrong character "6" as closely as possible, thereby interfering with the recognition result of the attacker's neural network model.
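The two optimization targets above can be sketched as one joint objective: a reconstruction term keeping the encrypted image close to "original character image 4", plus a cross-entropy term pulling the predicted character toward the wrong character "6". The weighting and the toy numbers below are illustrative assumptions; the patent does not state a specific combined formula.

```python
# Illustrative joint objective: reconstruction loss + wrong-character cross-entropy.
import math

def joint_loss(recon_mse, prob_of_wrong_char, alpha=1.0):
    # recon_mse: MSE between the reconstructed and original images (Loss 1 side).
    # prob_of_wrong_char: classifier probability assigned to the wrong character.
    return recon_mse + alpha * (-math.log(prob_of_wrong_char))

# Early in training: poor reconstruction, classifier far from the wrong label.
early = joint_loss(recon_mse=0.30, prob_of_wrong_char=0.10)
# Late in training: faithful reconstruction, classifier predicts "6" confidently.
late = joint_loss(recon_mse=0.01, prob_of_wrong_char=0.95)
```

As training alternates between updating the decoder (first term) and the embedding layer (second term), both terms shrink, so the joint value falls from `early` toward `late`.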
The above application process includes: inputting the original character image into the encoder to obtain the features of the original character image, and inputting the wrong character into the character encoding model to obtain the features of the wrong character; the features of the original character image and the features of the wrong character are fused and input into the image decoder, which outputs the encrypted character image.
For example, referring to FIG. 16, suppose the verification code set by the server during APP registration is 480, and the electronic device displays a keyboard constructed from character images encrypted by the server. The user can accurately identify the characters "4", "8" and "0" and enter the verification code "480" exactly. But if the current registration is a black-industry registration, since the encrypted character images carry the features of wrong characters, the neural network model used by the black industry (e.g., a character image classification model) cannot recognize the true characters "4", "8" and "0". For example, the character "4" is recognized by the attacker as the character "6", the character "6" is wrongly recognized as the character "4", the character "3" is recognized as the character "8", and the character "9" is recognized as the character "0"; the verification code entered by the attacker-controlled clicker is "639", and registration fails. It can be seen that the present disclosure can effectively block malicious registration behavior.
It should be understood that although the steps in the flowcharts of FIGS. 3, 7, 10 and 14 are displayed sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in FIGS. 3, 7, 10 and 14 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
An embodiment of the present disclosure provides a computer device, which may be the server 20 described in the embodiments of the present disclosure. As shown in FIG. 17, the computer device includes an encryption unit 1701 and a construction unit 1702.
The encryption unit 1701 is configured to perform feature extraction on a keyboard character image, determine the features of the keyboard character image, determine a fused feature from the features of the keyboard character image and the features of a wrong keyboard character, and determine an encrypted keyboard character image from the fused feature; the wrong keyboard character is different from the character corresponding to the keyboard character image.
The construction unit 1702 is configured to construct an encrypted keyboard based on the encrypted keyboard character image; the neural network recognition result of the encrypted keyboard character image is the wrong keyboard character, and the visual recognition results of the encrypted keyboard character image and the keyboard character image are the same.
The encryption unit 1701 is specifically configured to input the keyboard character image into the image encoder of the image encryption model to obtain the features of the keyboard character image, input the wrong keyboard character into the character encoder of the image encryption model to obtain the features of the wrong keyboard character, fuse the features of the keyboard character image with the features of the wrong keyboard character, and input the result into the image decoder of the image encryption model to obtain the encrypted keyboard character image.
In one possible implementation, referring to FIG. 18, the computer device further includes a training unit 1703. The training unit 1703 is configured to train the image encryption model, which includes the image encoder, the image decoder and the character encoder.
For example, the training process of the image encryption model includes: training the image encoder and the image decoder with the model training target that the visual recognition results of the original character image and the encrypted character image are the same; and training the character encoder with the model training target that the neural network recognition result of the encrypted character image is a wrong character; the wrong character is different from the character corresponding to the original character image.
In one possible implementation, the training unit 1703 is configured to train an image autoencoder; the image autoencoder includes an encoding module and a decoding module; the encoding module is used to determine the features of the character image input into the encoder; the decoding module is used to reconstruct the character image from the features input into the decoding module;
determine the encoding module as the image encoder; and
adjust the neural network parameters of the decoding module to obtain the image decoder; the image decoder is used to output the encrypted character image of the original character image; the difference between the encrypted character image of the original character image and the original character image is less than or equal to a preset threshold, and the preset threshold is used to ensure that the visual recognition results of the encrypted character image and the original character image are the same.
In one possible implementation, the training unit 1703 is specifically configured to fuse the features of the original character image with the features of the wrong character and then input the result into the decoder to obtain a reconstructed image; and
adjust the neural network parameters of the decoding module until the first loss function attains its minimum value, determining the adjusted decoding module as the image decoder; the first loss function is used to characterize the functional relationship between the loss value of the original character image relative to the reconstructed image and the neural network parameters of the decoding model.
In one possible implementation, the training unit 1703 is further configured to train a character encoding module, inputting the wrong character into the character encoding module to obtain the features of the wrong character; the character encoding module is used to extract the features of characters;
fuse the features of the wrong character with the features of the original character image and then input the result into the image decoder to obtain a reconstructed image;
adjust the neural network parameters of the character encoding module according to the difference between the neural network recognition result of the reconstructed image and the wrong character, until the neural network recognition result of the reconstructed image is the wrong character; and
determine the adjusted character encoding module as the character encoder.
In one possible implementation, the training unit 1703 is specifically configured to input the reconstructed image into the character image classification model and determine a second loss function from the output of the character image classification model; the second loss function is used to characterize the functional relationship between the loss value of the output of the character image classification model relative to the wrong character and the neural network parameters of the character encoder; the input of the character image classification model is a character image, and the output of the character image classification model is a character; and to adjust the neural network parameters of the character encoder until the second loss function attains its minimum value.
It should be noted that the training of the image encryption model may be performed by the computer device, for example by the above training unit 1703; alternatively, another device may train the image encryption model and send the trained image encryption model to the computer device.
FIG. 19 is a block diagram of a server 1900 according to an exemplary embodiment. Referring to FIG. 19, the server 1900 includes a processing component 1920, which further includes one or more processors, and memory resources represented by a memory 1922 for storing instructions or computer programs, e.g., application programs, executable by the processing component 1920. An application program stored in the memory 1922 may include one or more modules, each corresponding to a set of instructions. Furthermore, the processing component 1920 is configured to execute the instructions to perform the keyboard encryption method described above.
The server 1900 may also include a power component 1924 configured to perform power management of the device 1900, a wired or wireless network interface 1926 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1928. The server 1900 may operate based on an operating system stored in the memory 1922, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
In an exemplary embodiment, the present disclosure also provides a computer program product that, when executed by a processor, can implement the above method. The computer program product includes one or more computer instructions; when these computer instructions are loaded and executed on a computer, some or all of the above method can be implemented, in whole or in part, according to the processes or functions described in the embodiments of the present disclosure.
Embodiments of the present disclosure also provide a non-transitory computer-readable storage medium including instructions, e.g., a memory including instructions, which can be executed by a processor of a computer device (e.g., the aforementioned server) to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device or the like.
Embodiments of the present disclosure also provide a storage medium including instructions, e.g., a memory including instructions, which can be executed by a processor of a server to complete the above method. The storage medium may be a non-transitory computer-readable storage medium, e.g., a ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device or the like.
Those of ordinary skill in the art will understand that all or part of the processes of the above embodiment methods can be completed by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided by the embodiments of the present disclosure may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, optical memory or the like. Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration and not limitation, RAM may take various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as the combinations of these technical features are not contradictory, they should be considered within the scope of this specification.
The above embodiments express only several implementations of the embodiments of the present disclosure; their descriptions are relatively specific and detailed but should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the embodiments of the present disclosure, all of which fall within the scope of protection of the embodiments of the present disclosure. Therefore, the scope of protection of the patent of the embodiments of the present disclosure shall be subject to the appended claims.

Claims (18)

  1. A keyboard encryption method, characterized in that the method comprises:
    performing feature extraction on a keyboard character image to determine features of the keyboard character image, determining a fused feature according to the features of the keyboard character image and features of a wrong keyboard character, and determining an encrypted keyboard character image according to the fused feature; the wrong keyboard character being different from the character corresponding to the keyboard character image;
    constructing an encrypted keyboard based on the encrypted keyboard character image; a neural network recognition result of the encrypted keyboard character image being the wrong keyboard character, and visual recognition results of the encrypted keyboard character image and the keyboard character image being the same.
  2. The method according to claim 1, characterized in that performing feature extraction on the keyboard character image, determining the features of the keyboard character image, determining the fused feature according to the features of the keyboard character image and the features of the wrong keyboard character, and determining the encrypted keyboard character image according to the fused feature comprises:
    inputting the keyboard character image into an image encoder of an image encryption model to obtain the features of the keyboard character image, and inputting the wrong keyboard character into a character encoder of the image encryption model to obtain the features of the wrong keyboard character;
    fusing the features of the keyboard character image with the features of the wrong keyboard character and then inputting the result into an image decoder of the image encryption model to obtain the encrypted keyboard character image.
  3. The method according to claim 2, characterized in that the training process of the image encryption model comprises:
    training the image encoder and the image decoder with the model training target that visual recognition results of an original character image and an encrypted character image are the same;
    training the character encoder with the model training target that a neural network recognition result of the encrypted character image is a wrong character; the wrong character being different from the character corresponding to the original character image.
  4. The method according to claim 3, characterized in that training the image encoder and the image decoder with the model training target that the visual recognition results of the original character image and the encrypted character image are the same comprises: training an image autoencoder; the image autoencoder comprising an encoding module and a decoding module; the encoding module being used to determine the features of the character image input into the encoder; the decoding module being used to reconstruct the character image according to the features input into the decoding module;
    determining the encoding module as the image encoder;
    adjusting neural network parameters of the decoding module to obtain the image decoder; the image decoder being used to output the encrypted character image of the original character image, the difference between the encrypted character image of the original character image and the original character image being less than or equal to a preset threshold, and the preset threshold being used to ensure that the visual recognition results of the encrypted character image and the original character image are the same.
  5. The method according to claim 4, characterized in that adjusting the neural network parameters of the decoding module to obtain the image decoder comprises: fusing the features of the original character image with the features of the wrong character and then inputting the result into the decoder to obtain a reconstructed image;
    adjusting the neural network parameters of the decoding module until a first loss function attains a minimum value, and determining the adjusted decoding module as the image decoder; the first loss function being used to characterize the functional relationship between the loss value of the original character image relative to the reconstructed image and the neural network parameters of the decoding model.
  6. The method according to claim , characterized in that training the character encoder with the model training target that the neural network recognition result of the encrypted character image is a wrong character comprises: training a character encoding module, and inputting the wrong character into the character encoding module to obtain the features of the wrong character; the character encoding module being used to extract features of characters;
    fusing the features of the wrong character with the features of the original character image and then inputting the result into the image decoder to obtain a reconstructed image;
    adjusting neural network parameters of the character encoding module according to the difference between the neural network recognition result of the reconstructed image and the wrong character, until the neural network recognition result of the reconstructed image is the wrong character;
    determining the adjusted character encoding module as the character encoder.
  7. The method according to claim 6, characterized in that adjusting the neural network parameters of the character encoding module according to the difference between the neural network recognition result of the reconstructed image and the wrong character until the neural network recognition result of the reconstructed image is the wrong character comprises: inputting the reconstructed image into a character image classification model, and determining a second loss function according to the output of the character image classification model; the second loss function being used to characterize the functional relationship between the loss value of the output of the character image classification model relative to the wrong character and neural network parameters of the character encoder, the input of the character image classification model being a character image, and the output of the character image classification model being a character;
    adjusting the neural network parameters of the character encoder until the second loss function attains a minimum value.
  8. A computer device, characterized by comprising:
    an encryption unit configured to perform feature extraction on a keyboard character image, determine features of the keyboard character image, determine a fused feature according to the features of the keyboard character image and features of a wrong keyboard character, and determine an encrypted keyboard character image according to the fused feature; the wrong keyboard character being different from the character corresponding to the keyboard character image;
    a construction unit configured to construct an encrypted keyboard based on the encrypted keyboard character image; a neural network recognition result of the encrypted keyboard character image being the wrong keyboard character, and visual recognition results of the encrypted keyboard character image and the keyboard character image being the same.
  9. The computer device according to claim 8, characterized in that
    the encryption unit is specifically configured to input the keyboard character image into an image encoder of an image encryption model to obtain the features of the keyboard character image, input the wrong keyboard character into a character encoder of the image encryption model to obtain the features of the wrong keyboard character, fuse the features of the keyboard character image with the features of the wrong keyboard character, and input the result into an image decoder of the image encryption model to obtain the encrypted keyboard character image.
  10. The computer device according to claim 9, characterized in that the training process of the image encryption model comprises:
    training the image encoder and the image decoder with the model training target that visual recognition results of an original character image and an encrypted character image are the same;
    training the character encoder with the model training target that a neural network recognition result of the encrypted character image is a wrong character; the wrong character being different from the character corresponding to the original character image.
  11. The device according to claim 10, characterized in that the training process of training the image encoder and the image decoder with the model training target that the visual recognition results of the original character image and the encrypted character image are the same specifically comprises: training an image autoencoder; the image autoencoder comprising an encoding module and a decoding module; the encoding module being used to determine the features of the character image input into the encoder; the decoding module being used to reconstruct the character image according to the features input into the decoding module;
    determining the encoding module as the image encoder;
    adjusting neural network parameters of the decoding module to obtain the image decoder; the image decoder being used to output the encrypted character image of the original character image, the difference between the encrypted character image of the original character image and the original character image being less than or equal to a preset threshold, and the preset threshold being used to ensure that the visual recognition results of the encrypted character image and the original character image are the same.
  12. The device according to claim 11, characterized in that the training process of adjusting the neural network parameters of the decoding module to obtain the image decoder specifically comprises: fusing the features of the original character image with the features of the wrong character and then inputting the result into the decoder to obtain a reconstructed image;
    adjusting the neural network parameters of the decoding module until a first loss function attains a minimum value, and determining the adjusted decoding module as the image decoder; the first loss function being used to characterize the functional relationship between the loss value of the original character image relative to the reconstructed image and the neural network parameters of the decoding model.
  13. The device according to claim 10, characterized in that the training process of training the character encoder with the model training target that the neural network recognition result of the encrypted character image is a wrong character specifically comprises: training a character encoding module, and inputting the wrong character into the character encoding module to obtain the features of the wrong character; the character encoding module being used to extract features of characters;
    fusing the features of the wrong character with the features of the original character image and then inputting the result into the image decoder to obtain a reconstructed image;
    adjusting neural network parameters of the character encoding module according to the difference between the neural network recognition result of the reconstructed image and the wrong character, until the neural network recognition result of the reconstructed image is the wrong character;
    determining the adjusted character encoding module as the character encoder.
  14. The device according to claim 13, characterized in that the training process of adjusting the neural network parameters of the decoding module to obtain the image decoder specifically comprises: inputting the reconstructed image into a character image classification model, and determining a second loss function according to the output of the character image classification model; the second loss function being used to characterize the functional relationship between the loss value of the output of the character image classification model relative to the wrong character and neural network parameters of the character encoder, the input of the character image classification model being a character image, and the output of the character image classification model being a character;
    adjusting the neural network parameters of the character encoder until the second loss function attains a minimum value.
  15. A server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
  16. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
  17. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
  18. A method for displaying an encrypted keyboard character image generated by the method of any one of claims 1 to 7.
PCT/CN2022/072040 2021-01-19 2022-01-14 Keyboard encryption method, device, storage medium and computer program product WO2022156609A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110069624.9 2021-01-19
CN202110069624.9A CN114817937A (zh) 2021-01-19 2021-01-19 Keyboard encryption method, device, storage medium and computer program product

Publications (1)

Publication Number Publication Date
WO2022156609A1 true WO2022156609A1 (zh) 2022-07-28

Family

ID=82525010

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072040 WO2022156609A1 (zh) 2021-01-19 2022-01-14 Keyboard encryption method, device, storage medium and computer program product

Country Status (2)

Country Link
CN (1) CN114817937A (zh)
WO (1) WO2022156609A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114817937A (zh) * 2021-01-19 2022-07-29 北京嘀嘀无限科技发展有限公司 Keyboard encryption method, device, storage medium and computer program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996011A * 2014-06-05 2014-08-20 福建天晴数码有限公司 Method and device for protecting password input security
CN108446700A * 2018-03-07 2018-08-24 浙江工业大学 License plate attack generation method based on adversarial attack
CN110866238A * 2019-11-13 2020-03-06 北京工业大学 Method for generating verification code images based on adversarial examples
CN111507093A * 2020-04-03 2020-08-07 广州大学 Text attack method and device based on a similar dictionary, and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107360137A * 2017-06-15 2017-11-17 深圳市牛鼎丰科技有限公司 Method and device for constructing a neural network model for verification code recognition
CN111460837A * 2020-03-31 2020-07-28 广州大学 Character-level adversarial example generation method and device for neural machine translation
CN114817937A (zh) * 2021-01-19 2022-07-29 北京嘀嘀无限科技发展有限公司 Keyboard encryption method, device, storage medium and computer program product


Also Published As

Publication number Publication date
CN114817937A (zh) 2022-07-29

Similar Documents

Publication Publication Date Title
US10216923B2 (en) Dynamically updating CAPTCHA challenges
US11520923B2 (en) Privacy-preserving visual recognition via adversarial learning
KR102138289B1 (ko) 이미지 기반의 captcha 과제
US10305889B2 (en) Identity authentication method and device and storage medium
EP3803664A1 (en) Systems and methods for machine learning based application security testing
WO2021260398A1 (en) Determining risk metrics for access requests in network environments using multivariate modeling
US11847210B2 (en) Detecting device and detecting method
US20210200612A1 (en) Anomaly detection in data object text using natural language processing (nlp)
Cao et al. Generative steganography based on long readable text generation
WO2022156609A1 (zh) 键盘加密方法、设备、存储介质及计算机程序产品
WO2022241307A1 (en) Image steganography utilizing adversarial perturbations
Wang et al. DeepC2: Ai-powered covert command and control on OSNs
CN107623664A (zh) 一种密码输入方法及装置
EP3335144B1 (en) Browser attestation challenge and response system
CN113055153A (zh) 一种基于全同态加密算法的数据加密方法、系统和介质
Ivasenko et al. Information Transmission Protection Using Linguistic Steganography With Arithmetic Encoding And Decoding Approach
CN114817893A (zh) 验证码图像加密方法、设备、存储介质和计算机程序产品
KR102672181B1 (ko) 프라이버시 보호 애플리케이션 및 장치 오류 검출
Islam et al. Compact: Approximating complex activation functions for secure computation
CN113961962A (zh) 一种基于隐私保护的模型训练方法、系统及计算机设备
US20190356678A1 (en) Network security tool
US20240126899A1 (en) Method, apparatus, device and medium for protecting sensitive data
CN116956356B (zh) 一种基于数据脱敏处理的信息传输方法及设备
CN111373416B (zh) 通过离散神经网络输入来增强神经网络的安全性
US12033233B2 (en) Image steganography utilizing adversarial perturbations

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22742088

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30.10.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 22742088

Country of ref document: EP

Kind code of ref document: A1