WO2021031242A1 - 字符验证方法、装置、计算机设备及存储介质 - Google Patents

字符验证方法、装置、计算机设备及存储介质 Download PDF

Info

Publication number
WO2021031242A1
WO2021031242A1 (PCT/CN2019/103664, CN2019103664W)
Authority
WO
WIPO (PCT)
Prior art keywords
verification
image
character
input
model
Prior art date
Application number
PCT/CN2019/103664
Other languages
English (en)
French (fr)
Inventor
黎立桂
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021031242A1 publication Critical patent/WO2021031242A1/zh

Links

Images

Classifications

    • G06T3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/36 User authentication by graphic or iconic representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21 Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2133 Verifying human interaction, e.g., Captcha

Definitions

  • The embodiments of the present application relate to the field of data security, and in particular to a character verification method, device, computer equipment, and storage medium.
  • In the prior art, a verification code is usually used for verification.
  • When a terminal performs a verification operation, it first obtains the verification code from the server, then receives the verification information entered by the user according to the verification code, and finally sends the collected user information to the server; the server determines whether the verification passes by comparing the verification code with the text in the verification information.
  • The inventor of this application found in research that the verification code technology in the prior art simply places the verification code on a background image for display; the verification code can be recognized without obstacle through image recognition technology, and the recognized verification code can then be sent directly to the server for verification without any manual input. The verification code in the prior art is therefore easy to recognize, the verification security level is low, and network resources cannot be truly protected.
  • The embodiments of the present application provide a character verification method, device, computer equipment, and storage medium that use style transfer to increase the degree of confusion in the verification image and make image recognition more difficult.
  • To solve the above technical problem, one technical solution adopted by the embodiments of the present application is to provide a character verification method, including:
  • obtaining verification material to be synthesized, where the verification material includes a background image and verification characters;
  • inputting the verification material into a preset style transfer model to generate a verification image in the same style as a preset style mode, where the style transfer model is a neural network model pre-trained to a convergent state for converting an input image into the preset style mode;
  • reading the verification image output by the style transfer model, so that the verification image is used for character verification.
  • To solve the above technical problem, an embodiment of the present application also provides a character verification device, including:
  • an acquisition module, used to obtain verification material to be synthesized, where the verification material includes a background image and verification characters;
  • a processing module, used to input the verification material into a preset style transfer model to generate a verification image in the same style as a preset style mode, where the style transfer model is a neural network model pre-trained to a convergent state for converting an input image into the preset style mode;
  • an execution module, used to read the verification image output by the style transfer model, so that the verification image is used for character verification.
  • To solve the above technical problem, an embodiment of the present application further provides a computer device including a memory and a processor, where the memory stores computer-readable instructions that, when executed by the processor, cause the processor to execute the steps of the character verification method, and the steps of the character verification method include:
  • obtaining verification material to be synthesized, where the verification material includes a background image and verification characters;
  • inputting the verification material into a preset style transfer model to generate a verification image in the same style as a preset style mode, where the style transfer model is a neural network model pre-trained to a convergent state for converting an input image into the preset style mode;
  • reading the verification image output by the style transfer model, so that the verification image is used for character verification.
  • To solve the above technical problem, the embodiments of the present application also provide a non-volatile storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to execute the steps of the character verification method described above, where the steps of the character verification method include:
  • obtaining verification material to be synthesized, where the verification material includes a background image and verification characters;
  • inputting the verification material into a preset style transfer model to generate a verification image in the same style as a preset style mode, where the style transfer model is a neural network model pre-trained to a convergent state for converting an input image into the preset style mode;
  • reading the verification image output by the style transfer model, so that the verification image is used for character verification.
  • The beneficial effect of the embodiments of the present application is that the background image and the verification characters are input into the style transfer model together for style transfer, and both the background image and the verification characters in the resulting verification image are converted into the same style. Because the verification characters and the background image are deeply fused during style transfer, the degree of confusion between the background image and the verification characters is increased. At the same time, since the background image and the verification characters are converted into the same style, the texture changes across the whole verification image are coherent and smooth, and there is no sharp pixel contrast between the background image and the verification characters. This makes it harder to extract the verification characters with image processing technology, further increases the confusion between the background image and the verification characters, raises the recognition difficulty and error rate, and effectively guarantees the security of character verification.
  • FIG. 1 is a schematic diagram of the basic flow of a character verification method according to an embodiment of this application;
  • FIG. 2 is a schematic diagram of the process of pixel filling for verification characters according to an embodiment of this application;
  • FIG. 3 is a schematic diagram of the flow of vectorizing the background image and the verification characters according to an embodiment of this application;
  • FIG. 4 is a schematic diagram of the process of verifying user behavior according to an embodiment of this application;
  • FIG. 5 is a schematic diagram of the process of identifying abnormal behavior through a neural network model according to an embodiment of this application;
  • FIG. 6 is a schematic diagram of the process of screening verification images through a neural network model according to an embodiment of this application;
  • FIG. 7 is a schematic diagram of the process of obtaining the verification image in the display area according to an embodiment of this application;
  • FIG. 8 is a schematic diagram of the basic structure of a character verification device according to an embodiment of this application;
  • FIG. 9 is a block diagram of the basic structure of a computer device according to an embodiment of this application.
  • Please refer to FIG. 1, which is a schematic diagram of the basic flow of the character verification method in this embodiment.
  • As shown in FIG. 1, a character verification method includes:
  • S1100: Obtain verification material to be synthesized, where the verification material includes a background image and verification characters.
  • The content of the verification image includes a background image and verification characters.
  • In this embodiment, the background image and the verification characters are stored in corresponding databases, and during verification the background image and the verification characters are obtained by random extraction from their respective databases.
  • However, the storage method of the verification material is not limited to this.
  • In some embodiments, the background image and the verification characters are synthesized and stored in a database in advance, and a synthesized image is extracted from the database as the verification material during verification.
  • In this embodiment the verification characters consist of a limited number of characters; for example, 4 characters make up the verification characters. However, the length of the verification characters is not limited to this; according to different application scenarios, in some embodiments the length of the verification characters can be (but is not limited to) 2, 3, 5, 6, or more characters.
  • The character type making up the verification characters can be characters of a known written script, or a combination of characters from multiple scripts.
  • The character verification scenarios in this embodiment include (but are not limited to): the user enters the same characters as the verification characters for verification, the user selects some of the verification characters for input according to a verification prompt, or the user clicks some of the verification characters according to a verification prompt for verification. A minimal sketch of assembling such verification material is given below.
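  • The following is a minimal sketch of step S1100 under the first storage scheme described above (separate stores for backgrounds and characters); the directory name, the character set, and the 4-character length are illustrative assumptions rather than requirements of the disclosure.

```python
import random
import string
from pathlib import Path

def get_verification_material(background_dir: str = "backgrounds",
                              length: int = 4) -> tuple[Path, str]:
    """Randomly draw one background image and a string of verification characters."""
    backgrounds = list(Path(background_dir).glob("*.png"))   # stand-in for the background database
    background = random.choice(backgrounds)
    # Stand-in for the character database: uppercase letters and digits.
    characters = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    return background, characters
```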
  • S1200: Input the obtained verification material into a preset style transfer model, where the style transfer model is a neural network model pre-trained to a convergent state for converting an input image into a preset style mode.
  • That is, the style transfer model is a neural network model that has learned one or more style modes.
  • In this embodiment the style transfer model has learned one fixed style mode, but the style modes learned by the style transfer model are not limited to this.
  • According to different application scenarios, in some alternative embodiments the style transfer model learns multiple sets of style modes and converts the verification material into the corresponding style mode according to the user's selection.
  • The preset style mode is either an inherent style mode that the style transfer model has already learned, or a style mode selected by the user from multiple style modes.
  • A style mode essentially means that, after the style transfer model has learned a certain style, the weights of the convolutional layers in the style transfer model are recorded in order to preserve the model's ability to perform that style transfer.
  • When the style transfer model supports multiple style modes, changing the weights of the corresponding convolutional layers adjusts the style mode of the style transfer model.
  • The style transfer model can be a convolutional neural network (CNN) model that has been trained to a convergent state, but it is not limited to this; the style transfer model can also be a deep neural network (DNN) model, a recurrent neural network (RNN) model, or a variant of the above three network models.
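  • As a hedged illustration only, the sketch below shows how a feed-forward stylization network of this kind could be applied to a composite verification image; the tiny architecture, the weight file name, and the helper names are hypothetical placeholders and are not taken from the original disclosure.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

class StyleTransferNet(nn.Module):
    """Tiny placeholder for a feed-forward stylization CNN (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=9, padding=4), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)

to_tensor = transforms.ToTensor()
to_image = transforms.ToPILImage()

model = StyleTransferNet().eval()
# The converged style weights described above would be loaded here, e.g.:
# model.load_state_dict(torch.load("preset_style_weights.pth"))  # hypothetical file name

def stylize(composite: Image.Image) -> Image.Image:
    """Convert the composite (background + verification characters) into the preset style mode."""
    x = to_tensor(composite.convert("RGB")).unsqueeze(0)   # 1 x 3 x H x W
    with torch.no_grad():
        y = model(x).clamp(0.0, 1.0)
    return to_image(y.squeeze(0))
```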
  • S1300: Read the verification image output by the style transfer model, so that the verification image is used for character verification. In the verification image that is read, both the background image and the verification characters retain the same style.
  • In this embodiment, the generation of the verification image can be handled on the server side or locally on the terminal.
  • When processing is done on the server side, the generated verification image is sent to the terminal for verification.
  • The terminal obtains the verification information entered by the user and uploads the verification information to the server.
  • The server determines the verification result according to whether the verification characters are consistent with the verification information.
  • When processing is done locally on the terminal, after the verification characters are extracted they are uploaded to the server, and the verification image is then generated. After the verification image is generated, the verification information entered by the user is collected and sent to the server, and the server judges the verification result according to whether the verification characters are consistent with the verification information.
  • Generating the verification image locally on the terminal improves verification efficiency, and the verification image does not need to be transmitted during verification, so network resources are effectively saved.
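  • As a trivial illustration of the server-side comparison described above, the sketch below shows the final result check; whether the comparison ignores letter case is an assumption that would depend on the deployment.

```python
def verify(verification_chars: str, verification_info: str) -> bool:
    """Server-side check: verification passes only when the user's input matches the verification characters."""
    return verification_info.strip().upper() == verification_chars.strip().upper()
```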
  • In the above embodiment, the background image and the verification characters are input into the style transfer model together for style transfer, and both the background image and the verification characters in the resulting verification image are converted into the same style. Because the verification characters and the background image are deeply fused during style transfer, the degree of confusion between the background image and the verification characters is increased. At the same time, since the background image and the verification characters are converted into the same style, the texture changes across the whole verification image are coherent and smooth, and there is no sharp pixel contrast between the background image and the verification characters. This makes it harder to extract the verification characters with image processing technology, further increases the confusion between the background image and the verification characters, raises the recognition difficulty and error rate, and effectively guarantees the security of character verification.
  • In some embodiments, to deepen the fusion of the background image and the verification characters and further increase the difficulty of image recognition, the background image and the verification characters are preliminarily fused before the verification image is generated. Please refer to FIG. 2, which is a schematic diagram of the process of pixel filling for the verification characters in this embodiment.
  • S1111: Obtain the background pixel value in the background image. After the verification material is obtained, the background pixel value is extracted from the background image, where the background pixel value is the pixel value accounting for the largest proportion of pixels in the background image.
  • However, the choice of background pixel value is not limited to this; according to different application scenarios, in some embodiments the background pixel value is the pixel value accounting for the largest proportion of pixels in the area covered by the verification characters.
  • The background pixel value takes the form (R, G, B), where R, G, and B are integers greater than or equal to 0 and less than or equal to 255.
  • S1112: Calculate the fill pixel value corresponding to the background pixel value according to a preset pixel calculation rule. The pixel calculation rule is: calculate a fill pixel value whose color difference from the background pixel value is equal to the preset first color difference threshold.
  • In this embodiment the first color difference threshold is defined as 2, but its value is not limited to this; according to different application scenarios, in some embodiments the first color difference threshold can be 3, 4, or 5.
  • S1113: Call the image color mapped to the fill pixel value to fill the verification characters. Since the fill pixel value is also composed of three channel colors (R, G, and B), the fill pixel value likewise represents an image color.
  • Calculating the fill pixel value from the background pixel value keeps the color difference between the background image and the verification characters within a range that the human eye can recognize, while keeping that range small enough that the background image and the verification characters blend together more deeply, further increasing the difficulty of image recognition. A small sketch of this calculation is given below.
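  • The sketch below computes a fill pixel value at a fixed color offset from the dominant background pixel, following the rule described above; interpreting the color difference as a uniform per-channel offset, and the brightness test used to choose its sign, are assumptions made for illustration.

```python
from collections import Counter
from PIL import Image

def background_pixel(img: Image.Image) -> tuple[int, int, int]:
    """Background pixel value: the (R, G, B) value with the largest share of pixels in the image."""
    return Counter(img.convert("RGB").getdata()).most_common(1)[0][0]

def fill_pixel(bg: tuple[int, int, int], threshold: int = 2) -> tuple[int, int, int]:
    """Fill pixel value whose color difference from the background equals the first threshold."""
    r, g, b = bg
    sign = 1 if (r + g + b) / 3 <= 127 else -1   # assumed rule: brighten dark backgrounds, darken light ones
    clamp = lambda v: max(0, min(255, v))
    return (clamp(r + sign * threshold), clamp(g + sign * threshold), clamp(b + sign * threshold))
```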
  • In some embodiments, to further reduce the computational burden on the style transfer model and speed up its processing, the background image and the verification characters are subjected to image vectorization.
  • Please refer to FIG. 3, which is a schematic diagram of the flow of vectorizing the background image and the verification characters in this embodiment.
  • S1121: Set the verification characters on the background image. After the verification characters are color-filled, they are placed on the background image according to the spatial order of the characters within the verification characters.
  • In some embodiments, to increase the difficulty of image recognition, the verification characters are warped and distorted as they are placed.
  • S1122: Perform image synthesis based on the verification characters and the background image to generate a composite image. Once the verification characters are set on the background image, they cover the pixels of the area where they sit, and the background image and the verification characters together form a composite image; a minimal compositing sketch follows.
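  • The sketch below composites filled verification characters onto a background image with Pillow; the default font, the character spacing, and the vertical placement are illustrative assumptions, and the optional distortion step is omitted.

```python
from PIL import Image, ImageDraw, ImageFont

def compose(background_path: str, chars: str,
            fill_rgb: tuple[int, int, int]) -> Image.Image:
    """Place the verification characters on the background image, covering the pixels beneath them."""
    bg = Image.open(background_path).convert("RGB")
    draw = ImageDraw.Draw(bg)
    font = ImageFont.load_default()                 # any available font; a real system could vary fonts
    step = bg.width // (len(chars) + 1)
    for i, ch in enumerate(chars):                  # keep the spatial order of the characters
        draw.text(((i + 1) * step, bg.height // 3), ch, fill=fill_rgb, font=font)
    return bg
```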
  • S1123: Perform image vectorization on the composite image to generate a vector image, where the vector image is input into the style transfer model in place of the verification material.
  • Image vectorization converts the composite image from a bitmap into a vector graphic. The converted vector image is made up of line segments forming outline contours, and the color of each outline together with the color it encloses determines the displayed color of the pattern.
  • In this embodiment the vector image is what is input into the style transfer model for style transfer.
  • Because vector graphics can be computed from formulas, vector graphic files are generally small, which makes computation by the style transfer model easier and improves computational efficiency.
  • In some embodiments, in addition to verifying the result by comparing the verification characters with the verification information, behavior verification can also be performed on the user's behavior while entering the verification information, in order to determine whether the input is manual. Please refer to FIG. 4, which is a schematic diagram of the process of verifying user behavior in this embodiment.
  • After step S1300 shown in FIG. 1, the method includes:
  • S1310: Obtain the input node information recorded while the user inputs verification information according to the verification image, where the input node information includes the input time of each character entered by the user. When the user enters the verification information by referring to the verification characters in the verification image, the characters are typed one by one on the terminal's keyboard; the keyboard referred to in this embodiment is either a peripheral keyboard connected to the terminal or a soft keyboard displayed virtually in the terminal's display area.
  • The moment at which the user enters each character is defined as its input time, and the set of the input times of all characters entered by the user during verification constitutes the input node information.
  • S1320: Determine whether the user's input behavior is an abnormal input behavior according to the input node information.
  • The judgment method is to calculate whether the time differences between adjacent input times are all the same: when the time difference between every pair of adjacent input times is the same value, the user's input is judged to be an abnormal (non-human) input; otherwise the input is normal. A simple interval check of this kind is sketched below.
  • However, the judgment method is not limited to this.
  • To cope with more sophisticated cracking schemes that simulate real human input, a neural network model can also be used to judge the user behavior.
  • When a neural network model is used, its classification result is used to judge whether the user behavior is abnormal. S1330: When the user's input behavior is determined to be an abnormal input behavior, the verification result is confirmed as a verification failure, regardless of whether the verification characters and the verification information match.
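  • A minimal interval check under the first judgment method above; the tolerance used to decide that gaps count as "the same" is an assumed parameter.

```python
def is_abnormal_input(input_times: list[float], tolerance: float = 1e-3) -> bool:
    """Flag the input as abnormal when all gaps between adjacent keystrokes are (nearly) identical."""
    gaps = [later - earlier for earlier, later in zip(input_times, input_times[1:])]
    return len(gaps) > 1 and max(gaps) - min(gaps) <= tolerance
```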
  • In some embodiments, sophisticated recognition schemes crack the behavior check in a targeted way, for example by varying the interval at which each character is entered so that uniform input intervals are never flagged as abnormal behavior. Countering such cracking schemes requires more ways of identifying non-human operation, or of identifying traces of non-human operation from deeper dimensions.
  • Please refer to FIG. 5, which is a schematic diagram of the process of identifying abnormal behavior through a neural network model in this embodiment.
  • The step S1320 shown in FIG. 4 includes:
  • S1321: Arrange the input times in chronological order to generate a time matrix. S1322: Input the time matrix into a preset first verification model to determine whether the user's input behavior is an abnormal input behavior, where the first verification model is a neural network model pre-trained to a convergent state for judging, according to the input times, whether the user's input behavior is abnormal.
  • The first verification model can be a convolutional neural network (CNN) model that has been trained to a convergent state, but it is not limited to this; the first verification model can also be a deep neural network (DNN) model, a recurrent neural network (RNN) model, or a variant of the above three network models.
  • When the initial neural network model that will serve as the first verification model is trained, a large number of time matrices converted from collected input time information are used as training samples; after a person observes the subject of each data input (human input or non-human input), each training sample is calibrated, that is, labeled with its classification result. The training samples are then input into the initial neural network model.
  • The neural network model extracts the feature vector of each training sample and compares that feature vector with the classification categories of the classification layer to obtain a confidence between the feature vector and each category; the category with the highest confidence is the classification result.
  • The classification result output by the model (that is, the classification the model computes for the input time information) is then read, and the distance between the classification result and the calibrated label is computed through the loss function of the neural network model (for example, Euclidean distance, Mahalanobis distance, or cosine distance).
  • The computed result is compared with a set distance threshold (the value of the distance threshold is inversely proportional to the required accuracy of the verification model: the higher the accuracy requirement, the lower the threshold). If the result is less than or equal to the distance threshold, the sample passes and training continues with the next training sample.
  • If the result is greater than the distance threshold, the difference between the two is computed through the loss function and the weights inside the neural network model are corrected through back propagation, so that the neural network model increases the weight of the elements in the training sample that accurately express the input subject, thereby improving the accuracy and comprehensiveness of the extraction. After this procedure is repeated over a large number of training samples, the model is considered trained to convergence once its accuracy in classifying time matrices exceeds a set value, for example 95%, and the converged neural network is the first verification model.
  • The first verification model trained to the convergent state can accurately classify time matrices.
  • S1323: Read the judgment result output by the first verification model. The classification result output by the first verification model is read, and the information recorded in it is the first verification model's judgment of the user behavior represented by the time matrix.
  • When the judgment result is abnormal, the user behavior is abnormal; otherwise, the user behavior is normal.
  • The neural network model can judge user behavior quickly and accurately, and can also identify non-human operation that deliberately simulates human input, which improves both the convenience and the security of verification. An end-to-end sketch of this check follows.
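  • For illustration only, the sketch below wires the pieces together: a time matrix built from keystroke timestamps, a tiny stand-in classifier, one labeled training step, and inference. The architecture, the fixed matrix length, and the optimizer settings are assumptions, not details taken from the disclosure.

```python
import torch
import torch.nn as nn

class FirstVerificationModel(nn.Module):
    """Stand-in binary classifier over a fixed-length time matrix (normal vs. abnormal input)."""
    def __init__(self, n_keystrokes: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_keystrokes, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.net(x)

def time_matrix(input_times: list[float], n_keystrokes: int = 8) -> torch.Tensor:
    """Arrange input times chronologically and pad/truncate to a fixed length."""
    t = sorted(input_times)[:n_keystrokes]
    t = t + [0.0] * (n_keystrokes - len(t))
    return torch.tensor(t, dtype=torch.float32).unsqueeze(0)   # 1 x n_keystrokes

model = FirstVerificationModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One calibrated training sample: label 1 = abnormal (non-human), 0 = normal (human).
sample = time_matrix([0.0, 0.2, 0.4, 0.6, 0.8])
label = torch.tensor([1])
loss = loss_fn(model(sample), label)
loss.backward()                                    # back propagation corrects the weights
optimizer.step()

# Inference on new input node information.
model.eval()
with torch.no_grad():
    logits = model(time_matrix([0.00, 0.31, 0.52, 0.96, 1.40]))
    is_abnormal = bool(logits.argmax(dim=1).item() == 1)
```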
  • In some embodiments, to prevent the verification image from being cracked by an attacker using deep learning, after the verification image is generated it is screened with an existing character recognition model, and the recognition result determines whether the verification image needs to be replaced. Please refer to FIG. 6, which is a schematic diagram of the process of screening verification images through a neural network model in this embodiment.
  • After step S1300 shown in FIG. 1, the method includes:
  • S1410: Input the obtained verification image into a preset second verification model, where the second verification model is a neural network model pre-trained to a convergent state for extracting the character information in the verification image.
  • The second verification model is an existing character recognition model in the prior art that has already been trained to convergence.
  • S1420: Obtain the classification result output by the second verification model, where the classification result includes the character information that the second verification model extracted from the verification image. S1430: Compare the character information with the verification characters, and refresh the verification image when the character information is consistent with the verification characters.
  • The comparison uses the Hamming distance: the Hamming distance between the character information and the verification characters is calculated, and when it is 0 the character information is consistent with the verification characters; otherwise they are inconsistent.
  • When the character information is consistent with the verification characters, it means the verification characters in the verification image can be recognized and extracted by an existing AI model, so the verification image does not meet the verification requirements and needs to be replaced. Therefore, when the character information is consistent with the verification characters, the verification image is refreshed and regenerated, as in the sketch below.
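  • A minimal sketch of the refresh decision described above; treating a length mismatch between the recognizer output and the verification characters as a non-zero distance is an assumption.

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two strings of equal length differ."""
    if len(a) != len(b):
        return max(len(a), len(b))        # assumed handling when the recognizer returns a different length
    return sum(ca != cb for ca, cb in zip(a, b))

def needs_refresh(recognized_chars: str, verification_chars: str) -> bool:
    """Refresh the verification image when the recognizer reproduces the verification characters exactly."""
    return hamming_distance(recognized_chars, verification_chars) == 0
```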
  • Screening the verification images through the neural network model reduces the probability that a verification image can be recognized by AI image recognition, which effectively ensures the security of verification.
  • In some embodiments, certain non-manual verification schemes directly extract the verification image in the background and upload computed verification parameters to complete verification. To restrict this simulated verification behavior, the verification picture displayed at the moment verification completes is judged by image classification to determine whether a real character verification took place.
  • Please refer to FIG. 7, which is a schematic diagram of the process of obtaining the verification image in the display area in this embodiment.
  • S1401: Obtain the display data in the frame buffer memory. When the terminal displays the verification image, the verification page containing the verification image is stored in the frame buffer memory; that is, the frame buffer holds a direct image of the picture displayed on the screen, also called a bitmap, which constitutes the display data.
  • S1402: Extract, from the display data, the target data representing the verification image according to the preset display position of the verification image in the verification page.
  • Since the verification image occupies a set area in the bitmap, the data region representing the content of the verification area is extracted from the bitmap according to the information of that set area, generating a partial bitmap, that is, the target data representing the displayed content of the verification image.
  • S1403: Convert the target data into a picture format to generate the verification image. Finally, the target data is converted into a conventional picture format, such as (but not limited to) JPG, PNG, or TIF, to generate the verification image.
  • In some embodiments, when the verification image cannot be obtained from the frame buffer memory, it indicates that the verification was a virtual verification. Verifying the verification image in the verification page in this way effectively prevents the verification loophole of uploading data through virtual verification and greatly improves the security of verification.
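  • As a rough illustration of this check, the sketch below grabs the on-screen display data and crops the preset region of the verification image; Pillow's ImageGrab is used here as a stand-in for reading the frame buffer, and the region coordinates are hypothetical.

```python
from PIL import ImageGrab   # stand-in for reading the frame buffer (supported on Windows/macOS)

# Hypothetical preset display position of the verification image in the verification page.
DISPLAY_REGION = (100, 200, 300, 280)   # left, top, right, bottom in screen pixels

def capture_displayed_verification_image(path: str = "displayed_verification.png") -> None:
    """Extract the target data for the verification image from the display data and save it as a picture."""
    display_data = ImageGrab.grab()             # full-screen bitmap
    target = display_data.crop(DISPLAY_REGION)  # partial bitmap holding the verification image
    target.save(path, format="PNG")
```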
  • To solve the above technical problem, an embodiment of the present application also provides a character verification device.
  • FIG. 8 is a schematic diagram of the basic structure of the character verification device in this embodiment.
  • As shown in FIG. 8, a character verification device includes: an acquisition module 2100, a processing module 2200, and an execution module 2300.
  • The acquisition module 2100 is used to obtain the verification material to be synthesized, where the verification material includes a background image and verification characters;
  • the processing module 2200 is used to input the verification material into a preset style transfer model to generate a verification image in the same style as a preset style mode, where the style transfer model is a neural network model pre-trained to a convergent state for converting an input image into the preset style mode;
  • the execution module 2300 is used to read the verification image output by the style transfer model, so that the verification image is used for character verification.
  • The character verification device inputs the background image and the verification characters into the style transfer model together for style transfer, and both the background image and the verification characters in the resulting verification image are converted into the same style. Because the verification characters and the background image are deeply fused during style transfer, the degree of confusion between the background image and the verification characters is increased. At the same time, since the background image and the verification characters are converted into the same style, the texture changes across the whole verification image are coherent and smooth, and there is no sharp pixel contrast between the background image and the verification characters. This makes it harder to extract the verification characters with image processing technology, further increases the confusion between the background image and the verification characters, raises the recognition difficulty and error rate, and effectively guarantees the security of character verification.
  • In some embodiments, the character verification device further includes: a first acquisition submodule, a first processing submodule, and a first execution submodule.
  • The first acquisition submodule is used to obtain the background pixel value in the background image;
  • the first processing submodule is used to calculate the fill pixel value corresponding to the background pixel value according to a preset pixel calculation rule, where the color difference between the fill pixel value and the background pixel value is equal to the preset first color difference threshold;
  • the first execution submodule is used to call the image color mapped to the fill pixel value to fill the verification characters.
  • In some embodiments, the character verification device further includes: a second processing submodule, a first synthesis submodule, and a second execution submodule.
  • The second processing submodule is used to set the verification characters on the background image;
  • the first synthesis submodule is used to perform image synthesis based on the verification characters and the background image to generate a composite image;
  • the second execution submodule is used to perform image vectorization on the composite image to generate a vector image, where the vector image is input into the style transfer model in place of the verification material.
  • In some embodiments, the character verification device further includes: a second acquisition submodule, a third processing submodule, and a third execution submodule.
  • The second acquisition submodule is used to obtain the input node information recorded while the user inputs verification information according to the verification image, where the input node information includes the input time of each character entered by the user;
  • the third processing submodule is used to determine, according to the input node information, whether the user's input behavior is an abnormal input behavior;
  • the third execution submodule is used to confirm that the verification result is a verification failure when the user's input behavior is determined to be an abnormal input behavior.
  • In some embodiments, the character verification device further includes: a first generation submodule, a fourth processing submodule, and a fourth execution submodule.
  • The first generation submodule is used to arrange the input times in chronological order to generate a time matrix;
  • the fourth processing submodule is used to input the time matrix into the preset first verification model to determine whether the user's input behavior is an abnormal input behavior, where the first verification model is a neural network model pre-trained to a convergent state for judging, according to the input times, whether the user's input behavior is abnormal;
  • the fourth execution submodule is used to read the judgment result output by the first verification model.
  • In some embodiments, the character verification device further includes: a fifth processing submodule, a third acquisition submodule, and a fifth execution submodule.
  • The fifth processing submodule is used to input the verification image into a preset second verification model, where the second verification model is a neural network model pre-trained to a convergent state for extracting the character information in the verification image;
  • the third acquisition submodule is used to obtain the classification result output by the second verification model, where the classification result includes the character information that the second verification model extracted from the verification image;
  • the fifth execution submodule is used to compare the character information with the verification characters and to refresh the verification image when the character information is consistent with the verification characters.
  • In some embodiments, the character verification device further includes: a fourth acquisition submodule, a sixth processing submodule, and a sixth execution submodule.
  • The fourth acquisition submodule is used to obtain the display data in the frame buffer memory;
  • the sixth processing submodule is used to extract, from the display data, the target data representing the verification image according to the preset display position of the verification image in the verification page;
  • the sixth execution submodule is used to convert the target data into a picture format to generate the verification image.
  • FIG. 9 is a block diagram of the basic structure of the computer device in this embodiment.
  • The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected through a system bus.
  • The non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions.
  • The database may store a sequence of control information.
  • When the computer-readable instructions are executed by the processor, the processor can implement a character verification method.
  • The processor of the computer device is used to provide computing and control capabilities and supports the operation of the entire computer device.
  • Computer-readable instructions may be stored in the memory of the computer device, and when those instructions are executed by the processor, the processor may execute a character verification method.
  • The network interface of the computer device is used to connect and communicate with a terminal.
  • Those skilled in the art can understand that FIG. 9 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • In this embodiment, the processor is used to execute the specific functions of the acquisition module 2100, the processing module 2200, and the execution module 2300 in FIG. 8, and the memory stores the program code and the various data required to execute these modules.
  • The network interface is used for data transmission to and from user terminals or servers.
  • The memory in this embodiment stores the program code and data required to execute all the submodules of the character verification device, and the server can call the server's program code and data to execute the functions of all the submodules.
  • The computer device inputs the background image and the verification characters into the style transfer model together for style transfer, and both the background image and the verification characters in the resulting verification image are converted into the same style. Because the verification characters and the background image are deeply fused during style transfer, the degree of confusion between the background image and the verification characters is increased. At the same time, since the background image and the verification characters are converted into the same style, the texture changes across the whole verification image are coherent and smooth, and there is no sharp pixel contrast between the background image and the verification characters. This makes it harder to extract the verification characters with image processing technology, further increases the confusion between the background image and the verification characters, raises the recognition difficulty and error rate, and effectively guarantees the security of character verification.
  • The present application also provides a non-volatile storage medium storing computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors execute the steps of the character verification method in any of the above embodiments.
  • A person of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods.
  • The aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Abstract

A character verification method, device, computer equipment, and storage medium, including: obtaining verification material to be synthesized, where the verification material includes a background image and verification characters (S1100); inputting the verification material into a preset style transfer model to generate a verification image in the same style as a preset style mode, where the style transfer model is a neural network model pre-trained to a convergent state for converting an input image into the preset style mode (S1200); and reading the verification image output by the style transfer model, so that the verification image is used for character verification (S1300). The background image and the verification characters are converted into the same style, so the texture changes across the whole verification image are coherent and smooth and there is no sharp pixel contrast between the background image and the verification characters, which makes it harder to extract the verification characters with image processing technology, further increases the confusion between the background image and the verification characters, raises the recognition difficulty and error rate, and effectively guarantees the security of character verification.

Description

字符验证方法、装置、计算机设备及存储介质
本申请要求于2019年8月21日提交中国专利局、申请号为201910774964.4,发明名称为“字符验证方法、装置、计算机设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及数据安全领域,尤其是一种字符验证方法、装置、计算机设备及存储介质。
背景技术
伴随着科学技术的发展,信息化时代的来临为我们带来了很多的便利,同时,也为人们的生活带来诸多的困扰。例如,通过互联网进行网络购票时,常常有不法商贩通过开发应用程序快速的进行刷票,然后高价进行转卖获得暴利,而真正需要购买的用户却无法通过互联网接口进行购买,且现实生活中,类似的互联网资源抢夺发生在各个领域,通过应用程序快速刷票和领取佣金的行为难以被杜绝。为了限制上述行为的发生,信息验证应用而生。
现有技术中,通常使用验证码进行验证,终端在进行验证操作时,首先向服务器端获取验证码,然后,接收用户根据该验证码输入的验证信息,最终,由终端将采集的用户信息发送至服务器端,服务器端通过比对验证码与验证信息中的文字是否一致,确定验证是否通过。
本申请的发明人在研究中发现,现有技术中验证码技术简单的将验证码设置在背景图像上进行显示,通过图像识别技术能够无障碍的识别出验证码,然后,直接将识别出的验证码发送至服务器端进行验证,无需人工进行输入。因此,现有技术中验证码容易被识别,验证安全级别较低,无法真正保护网络资源被安全使用。
发明内容
本申请实施例提供一种通过风格转换提高验证图像的混淆度,增大图像识别难度的字符验证方法、装置、计算机设备及存储介质。
为解决上述技术问题,本申请创造的实施例采用的一个技术方案是:提供一种字符验证方法,包括:
获取待合成的验证素材,其中,所述验证素材包括背景图像和验证字符;
将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像,其中,所述风格转换模型为预先训练至收敛状态,用于将输入图像转换为预设的风格模式的神经网络模型;
读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证。
为解决上述技术问题,本申请实施例还提供一种字符验证装置,包括:
获取模块,用于获取待合成的验证素材,其中,所述验证素材包括背景图像和验证字符;
处理模块,用于将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像,其中,所述风格转换模型为预先训练至收敛状态,用于将输入图像转换为预设的风格模式的神经网络模型;
读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证。
为解决上述技术问题,本申请实施例还提供一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行上述所述字符验证方法的步骤,其中,所述字符验证方法的步骤,包括:
获取待合成的验证素材,其中,所述验证素材包括背景图像和验证字符;
将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像,其中,所述风格转换模型为预先训练至收敛状态,用于将输入图像转换为预设的风格模式的神经网络模型;
读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证。
为解决上述技术问题,本申请实施例还提供一种存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述所述字符验证方法的步骤,其中,所述字符验证方法的步骤,包括:
获取待合成的验证素材,其中,所述验证素材包括背景图像和验证字符;
将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像,其中,所述风格转换模型为预先训练至收敛状态,用于将输入图像转换为预设的风格模式的神经网络模型;
读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证。
本申请实施例的有益效果是:将背景图像和验证字符同时输入至风格转换模型中进行风格转换,转换得到的验证图像中背景图像和验证字符均被转换为同一种风格。由于,风格转换过程中能够使验证字符与背景图像之间进行深度融合,提高了背景图像与验证字符的混淆度。同时,背景图像与验证字符转换为同一种风格图像,使整个验证图像中纹路变化连贯且平滑,背景图像与验证字符之间不具有尖锐的像素比对,提高了通过图像处理技术提取验证字符的难度,进一步地提高了背景图像与验证字符之间的混淆度,增大了识别难度和错误率,有效的保障字符验证的安全性。
附图说明
图1为本申请实施例字符验证方法的基本流程示意图;
图2为本申请实施例对验证字符进行像素填充的流程示意图;
图3为本申请实施例对背景图像和验证字符进行矢量化处理的流程示意图;
图4为本申请实施例验证用户行为的流程示意图;
图5为本申请实施例通过神经网络模型识别异常行为的流程示意图;
图6为本申请实施例通过神经网络模型筛选验证图像的流程示意图;
图7为本申请实施例获取显示区域中验证图像的流程示意图;
图8为本申请实施例字符验证装置基本结构示意图;
图9为本申请实施例计算机设备基本结构框图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
具体请参阅图1,图1为本实施例字符验证方法的基本流程示意图。
如图1所示,一种字符验证方法,包括:
S1100、获取待合成的验证素材,其中,所述验证素材包括背景图像和验证字符;
验证图像中的内容包括背景图像和验证字符,其中,背景图像与验证字符分别存储在对应的数据库中,进行验证时分别在对应的数据库中通过随机抽取的方式获取背景图像与验证字符。但是验证素材的存储方式不局限于此,在一些实施方式中,预先将背景图像与验证字符进行合成存储在数据库中,进行验证时在数据库中抽取一张合成的图像作为验证素材。
在本实施方式中验证字符由有限个字符组成,例如,由4个字符组成验证字符,但是验证字符的长度不局限于此,根据具体应用场景的不同,在一些实施方式中,验证字符的长度能够为(不限于):2个、3个、5个、6个或者更多个字符。组成验证字符的字符类型能够为已知的具有文字记载的文字字符或者多种文字字符的组合。
本实施方式中字符验证的场景包括(不包括):用户按照验证字符输入相同的字符进行验证、用户根据验证提示在验证字符中选择部分字符进行输入验证或者用户根据验证提示在验证字符中点选部分字符进行输入验证。
S1200、将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像,其中,所述风格转换模型为预先训练至收敛状态,用于将输入图像转换为预设的风格模式的神经网络模型;
将获取的验证素材输入至预设的风格转换模型中,风格转换模型为预先训练至收敛状态,用于将输入图像转换为预设的风格模式的神经网络模型。即风格转换模型为已经学习某种或者多种风格模式的神经网络模型,本实施方式中风格转换模型固定的学习一种风格模式,但是风格转换模型学习到的风格模式不局限于此,根据具体应用场景的不同,在一些选择性实施例中,风格转换模型学习多套风格模式,根据用户的选择将验证素材转换为对应的风格模式。
其中,预设的风格模式为风格转换模型已经学习到的固有的风格模式,或者用户在多种风格模式中选定的一种风格模式。
风格模式实质上是指读取的当风格转换模型学习到某种风格模式后,对风格转换模型中卷积层的权值进行记录以此保存风格转换模型的风格转换能力。当风格转换模型中风格模式为多种时,对应变换 卷积层的权值就能够调节风格转换模型的风格模式。
将验证素材输入至风格转换模型中,对验证素材进行特征提取,然后对提取的特征进行求导,就能够使验证素材具有对应的风格。
风格转换模型能够为已经训练至收敛状态的卷积神经网络模型(CNN),但是,不局限于此,风格转换模型还能够是:深度神经网络模型(DNN)、循环神经网络模型(RNN)或者上述三种网络模型的变形模型。
S1300、读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证。
读取风格转换模型输出的验证图像,验证图像中背景图像和验证字符均保持有相同的风格。
本实施方式中,验证图像的生成能够由服务器端进行处理,也能够由终端本地进行处理。服务器端处理时,将生成的验证图像发送至终端进行验证,终端获取用户输入的验证信息,将验证信息上传至服务器端,服务器端根据验证字符与验证信息是否一致判断验证结果。终端本地处理时,在抽取验证字符后,将验证字符上传至服务器端,然后生成验证图像,验证图像生成后采集用户输入的验证信息,并将验证信息发送至服务器端,服务器端根据验证字符与验证信息是否一致判断验证结果。终端本地生成验证图像能够提高验证效率,且验证时不需要传输验证图像,因此,能够有效的节约网络资源。
上述实施方式中,将背景图像和验证字符同时输入至风格转换模型中进行风格转换,转换得到的验证图像中背景图像和验证字符均被转换为同一种风格。由于,风格转换过程中能够使验证字符与背景图像之间进行深度融合,提高了背景图像与验证字符的混淆度。同时,背景图像与验证字符转换为同一种风格图像,使整个验证图像中纹路变化连贯且平滑,背景图像与验证字符之间不具有尖锐的像素比对,提高了通过图像处理技术提取验证字符的难度,进一步地提高了背景图像与验证字符之间的混淆度,增大了识别难度和错误率,有效的保障字符验证的安全性。
在一些实施方式中,为加深背景图像与验证字符的融合深度,进一步地提高图像识别的难度,在进行验证图像生成之前,需要对背景图像和验证字符进行初步融合。请参阅图2,图2为本实施例对验证字符进行像素填充的流程示意图。
如图2所示,图1所示的S1200的步骤之前,包括:
S1111、获取所述背景图像中的背景像素值;
在获取验证素材后,提取背景图像中的背景像素值,其中,背景像素值为背景图相中像素值占比最大的像素值。但是,背景像素值的取值不局限于此,根据具体应用场景的不同,在一些实施方式中,背景像素值为验证字符已覆盖区域中像素值占比最大的像素值。
背景像素值的取值(R、G和B),其中,R、G和B均为大于等于0小于等于255的自然数。
S1112、根据预设的像素计算规则计算所述背景像素值对应的填充像素值,其中,所述填充像素值与所述背景像素值之间的色差值等于预设的第一色差阈值;
获取背景像素值后,根据预设的像素计算规则计算与该背景像素值对应的填充像素值,其中,像素计算规则为:计算与背景像素值之间色差值等于预设的第一色差阈值的填充像素值。第一色差阈值被定义为2,但是第一色差阈值的取值不局限于此,根据具体应用场景的不同,在一些实施方式中,第一色差阈值的取值能够为:3、4或5。
需要指出的时,当背景像素值的通道取值(R+G+B)/3≤255时,在背景像素值上加第一色差阈值得到填充像素值;当背景像素值的通道取值(R+G+B)/3>255时,在背景像素值上减去第一色差阈值得到填充像素值。
S1113、调用与所述填充像素值所映射的图像颜色对所述验证字符进行填充。
计算得到填充像素值后,调用与填充像素值对应的图像颜色对验证字符进行填充。由于填充像素值同样也由(R、G和B)三个通道颜色组成,因此,填充像素值所表征的也是一种图像颜色。
通过背景像素值计算得到填充像素值,能够使背景图像与验证字符之间的色差值位于人眼可识别的一个范围内,且该范围被限定的足够小,以使背景图像与验证字符之间融合的更深,进一步地提高图像识别的难度。
在一些实施方式中,为进一步地减少风格转换模型的运算难度,加快风格转换模型的处理速度,对背景图像和验证字符进行图像矢量化处理。请参阅图3,图3为本实施例对背景图像和验证字符进行矢量化处理的流程示意图。
如图3所示,图2所示的S1113的步骤之后,包括:
S1121、将所述验证字符设置在所述背景图像上;
对验证字符进行颜色填充后,根据验证字符中字符之间的空间顺序,将验证字符放置背景图像上。在一些实施方式中,为增加图像识别难度,在进行验证字符设置时对验证字符进行扭曲变形。
S1122、根据所述验证字符与所述背景图像进行图像合成生成合成图像;
将验证字符设置在背景图像上后,验证字符对其所在区域进行像素覆盖,此时,背景图像和验证字符生成合成图像。
S1123、对所述合成图像进行图像矢量化处理生成矢量图像,其中,所述矢量图像替代所述验证素材输入至所述风格转换模型中。
对合成图像进行图像矢量化处理,是将对合成图由位图转化为矢量图,转化后的矢量图像由线段形成外框轮廓,由外框的颜色以及外框所封闭的颜色决定图案显示出的颜色。
在本实施方式中矢量图像将被输入至风格转换模型中进行风格转换。
由于,由于矢量图形可通过公式计算获得,所以矢量图形文件体积一般较小。方便风格转换模型进行运算,提高了运算效率。
在一些实施方式中,除通过验证字符和验证信息比对进行结果验证之外,判断用户是否为人工输入时,还能够通过用户输入验证信息时的行为进行行为验证。请参阅图4,图4为本实施例验证用户行为的流程示意图。
如图4所示,图1所示的S1300步骤之后,包括:
S1310、获取用户根据所述验证图像输入验证信息时的输入节点信息,其中,所述输入节点信息包括用户输入各个字符时的输入时间;
用户参照验证图像中的验证字符输入验证信息时,需要通过在终端的键盘上进行依次输入。本实施方式所指的键盘为与终端连接的外设键盘或者在终端显示区域虚拟显示的软键盘。
定义用户输入每个字符时的时刻为输入时间,则用户进行验证时输入的所有字符的输入时间的集合为输入节点信息。
S1320、根据所述输入节点信息判断所述用户的输入行为是否为异常输入行为;
根据收集得到的输入节点信息判断用户的输入行为是否为异常输入行为,判断的方式为计算两个相邻输入时间之间的时间差是否一致。但是判断方法不局限于此,为应对一些更为复杂的模拟真人输入的验证破解方案,还能够采用神经网络模型对用户行为进行判断。
S1330、当所述用户的输入行为判定为异常输入行为时,确认验证结果为验证失败。
当两个相邻输入时间之间的时间差均为同一个数值时,判断用户输入为异常输入,否则,则用户的输入为正常输入。使用神经网络模型进行异常判断时,则根据神经网络模型的分类结果判断用户行为是否异常。
当确认用户行为验证为异常行为时,则无论验证字符与验证信息的比对是否一致,均确定此次字符验证的验证结果为验证失败。
通过采集用户输入验证信息时的时间信息,判断输入字符的时间是否为具有非人为输入的痕迹,如果存在则判定验证行为异常,有效的防止采用图像识别技术进行图像验证。
在一些实施方式中,为应对复杂的图像识别技术对用户行为进行针对性破解,例如,修改输入各个字符的输入时间间隔,以防止输入间隔一致被确定为异常行为的破解方案。需要掌握更多的识别非人为的操作的方法,或者从更深的维度识别非人为操作痕迹的方法。请参阅图5,图5为本实施例通过神经网络模型识别异常行为的流程示意图。
如图5所示,图4所示的S1320步骤包括:
S1321、将所述输入时间按时序排列生成时间矩阵;
将获取的输入时间按照时序的先后顺序排列生成时间矩阵。
S1322、将所述时间矩阵输入至预设的第一验证模型中,以判断所述用户的输入行为是否为异常输入行为,其中,所述第一验证模型为预先训练至收敛状态,用于根据所述输入时间判断用户输入行为是否异常的神经网络模型;
将时间矩阵输入至第一验证模型中进行特征提取与分类。第一验证模型能够为已经训练至收敛状态的卷积神经网络模型(CNN),但是,不局限于此,第一验证模型还能够是:深度神经网络模型(DNN)、循环神经网络模型(RNN)或者上述三种网络模型的变形模型。
作为第一验证模型的初始神经网络模型在训练时,通过收集大量的输入时间信息转换后的时间矩阵作为训练样本,通过人工在观察了数据的输入时的主体后(人为输入或者非人为输入),对各个训练样本进行标定(标定各个训练样本的分类结果)。然后将训练样本输入到初始的神经网络模型中,神经网络模型提取该训练样本的特征向量,并将该特征向量与分类层的分类类目进行比对,得到特征向量与各个 分类类目之间的置信度,置信度最高的分类类目即为分类结果。
获取模型输出的分类结果(分类结果为模型计算得到的输入时间信息的分类结果),并通过神经网络模型的损失函数计算该分类结果与标定结果之间的距离(例如:欧氏距离、马氏距离或余弦距离等),将计算结果与设定的距离阈值(距离阈值的取值与对言交互模型的准确率成反比,即准确率要求越高,距离阈值的取值越低)进行比对,若计算结果小于等于距离阈值则通过验证,继续进行下一个训练样本的训练,若计算结果大于距离阈值则通过损失函数计算二者之间的差值,并通过反向传播校正神经网络模型内的权值,使神经网络模型能够提高训练样本中准确表达输入主体的元素的权重,以此,增大提取的准确率和全面性。通过循环执行上述方案和大量的训练样本训练后,训练得到的神经网络模型对时间矩阵分类的准确率大于一定数值的,例如,95%,则该神经网络模型训练至收敛状态,则该训练至收敛的神经网络即为第一验证模型。
训练至收敛状态的第一验证模型能够准确的对时间矩阵进行分类。
S1323、读取所述第一验证模型输出的判断结果。
读取第一验证模型输出的分类结果,分类结果中记载的信息即为第一验证模型对时间矩阵所表征的用户行为的判断结果。当判断结果为异常时,则用户行为为异常行为;否则,则用户行为为正常行为。
通过神经网络模型能够快速的对用户行为进行准确判断,也能够对有意模拟人为输入的非人为操作行为进行辨识,提高了验证的便捷性和安全性。
在一些实施方式中,为防止验证图像被恶意者采用深度学习的方法进行破解,验证图像生成后采用现有技术中已经训练至收敛的用于字符识别的第二验证模型对验证图像进行识别,根据识别结果判断是否需要对验证图像进行替换。请参阅图6,图6为本实施例通过神经网络模型筛选验证图像的流程示意图。
如图6所示,图1所示的S1300步骤之后,包括:
S1410、将所述验证图像输入至预设的第二验证模型中,其中,所述第二验证模型为预先训练至收敛状态,用于提取所述验证图像中字符信息的神经网络模型;
将获取得到的验证图像输入至预设的第二验证模型中,其中,第二验证模型为预先训练至收敛状态,用于提取验证图像中字符信息的 神经网络模型。第二验证模型现有技术中已经训练至收敛的用于字符识别的第二验证模型。
S1420、获取所述第二验证模型输出的分类结果,其中,所述分类结果中包括所述第二验证模型提取的所述验证图像中的字符信息;
获取第二验证模型输出的分类结果,分类结果中包括第二验证模型提取的验证图像中的字符信息。
S1430、将所述字符信息与所述验证字符进行比对,当所述字符信息与所述验证字符一致时,对所述验证图像进行刷新。
将字符信息与验证字符进行比对,比对的方式为采用汉明距离或者海明距离进行比对,具体地,计算字符信息与验证字符之间的海明距离或汉明距离,当二者之间的海明距离或汉明距离为0时,表明字符信息与验证字符一致,否则,则表明字符信息与验证字符不一致。当字符信息与验证字符一致时,表明验证图像中的验证字符能够被现有技术中的AI模型识别并提取,该验证图像不符合验证的需求,需要被替换,因此,当字符信息与验证字符一致时对验证图像进行刷新,重新生成验证图像。
通过神经网络模型对验证图像进行筛选,降低了验证图像被AI图像识别的几率,有效的保证了验证的安全性。
在一些实施方式中,部分非人工验证的验证方式直接提取后台的验证图像,经过计算后将验证参数进行上传,完成验证。为限制该模拟验证的行为,需要通过图像分类的方法对验证完成时的验证画面进行判断,以确定是否进行了真实的字符验证。请参阅图7,图7为本实施例获取显示区域中验证图像的流程示意图。
S1401、获取帧缓冲存储器内的显示数据;
终端对验证图像进行显示时,需要将包括验证图像的验页面存储在帧缓冲存储器内,即帧缓冲存储器内是屏幕所显示画面的一个直接映像,又称为位映射图(Bit Map),也即显示数据。
S1402、根据所述验证图像在验证页面中预设的显示位置,在所述显示数据内提取表征所述验证图像的目标数据;
由于,验证图像在位映射图中的具有设定的区域,根据设定区域的信息,在位映射图提取表征验证区域内容的数据区域生成局部位映射图,即表征验证图像显示内容的目标数据。
S1403、将所述目标数据转换为图片格式生成所述验证图像。
最后将目标数据转换为常规的图片格式,例如(不限于)JPG、 PNG或者TIF等格式,生成验证图像。
在一些实施方式中,在帧缓冲存储器内无法获取到验证图像时,则表明该验证方式为虚拟验证。
通过对验证页面中验证图像进行验证,能够有效地防止通过虚拟验证的进行数据上传的验证漏洞,大大地提高了验证的安全性。
为解决上述技术问题,本申请实施例还提供一种字符验证装置。
具体请参阅图8,图8为本实施例字符验证装置基本结构示意图。
如图8所示,一种字符验证装置,包括:获取模块2100、处理模块2200和执行模块2300。其中,获取模块2100用于获取待合成的验证素材,其中,验证素材包括背景图像和验证字符;处理模块2200用于将验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像,其中,风格转换模型为预先训练至收敛状态,用于将输入图像转换为预设的风格模式的神经网络模型;执行模块2300用于读取风格转换模型输出的验证图像,以将验证图像用于字符验证。
字符验证装置将背景图像和验证字符同时输入至风格转换模型中进行风格转换,转换得到的验证图像中背景图像和验证字符均被转换为同一种风格。由于,风格转换过程中能够使验证字符与背景图像之间进行深度融合,提高了背景图像与验证字符的混淆度。同时,背景图像与验证字符转换为同一种风格图像,使整个验证图像中纹路变化连贯且平滑,背景图像与验证字符之间不具有尖锐的像素比对,提高了通过图像处理技术提取验证字符的难度,进一步地提高了背景图像与验证字符之间的混淆度,增大了识别难度和错误率,有效的保障字符验证的安全性。
在一些实施方式中,字符验证装置还包括:第一获取子模块、第一处理子模块和第一执行子模块。其中,第一获取子模块用于获取背景图像中的背景像素值;第一处理子模块用于根据预设的像素计算规则计算背景像素值对应的填充像素值,其中,填充像素值与背景像素值之间的色差值等于预设的第一色差阈值;第一执行子模块用于调用与填充像素值所映射的图像颜色对验证字符进行填充。
在一些实施方式中,字符验证装置还包括:第二处理子模块、第一合成子模块和第二执行子模块。其中,第二处理子模块用于将验证字符设置在背景图像上;第一合成子模块用于根据验证字符与背景图像进行图像合成生成合成图像;第二执行子模块用于对合成图像进行 图像矢量化处理生成矢量图像,其中,矢量图像替代验证素材输入至风格转换模型中。
在一些实施方式中,字符验证装置还包括:第二获取子模块、第三处理子模块和第三执行子模块。第二获取子模块用于获取用户根据验证图像输入验证信息时的输入节点信息,其中,输入节点信息包括用户输入各个字符时的输入时间;第三处理子模块用于根据输入节点信息判断用户的输入行为是否为异常输入行为;第三执行子模块用于当用户的输入行为判定为异常输入行为时,确认验证结果为验证失败。
在一些实施方式中,字符验证装置还包括:第一生成子模块、第四处理子模块和第四执行子模块。其中,第一生成子模块用于将输入时间按时序排列生成时间矩阵;第四处理子模块用于将时间矩阵输入至预设的第一验证模型中,以判断用户的输入行为是否为异常输入行为,其中,第一验证模型为预先训练至收敛状态,用于根据输入时间判断用户输入行为是否异常的神经网络模型;第四执行子模块用于读取第一验证模型输出的判断结果。
在一些实施方式中,字符验证装置还包括:第五处理子模块、第三获取子模块和第五执行子模块。其中,第五处理子模块用于将验证图像输入至预设的第二验证模型中,其中,第二验证模型为预先训练至收敛状态,用于提取验证图像中字符信息的神经网络模型;第三获取子模块用于获取第二验证模型输出的分类结果,其中,分类结果中包括第二验证模型提取的验证图像中的字符信息;第五执行子模块用于将字符信息与验证字符进行比对,当字符信息与验证字符一致时,对验证图像进行刷新。
在一些实施方式中,字符验证装置还包括:第四获取子模块、第六处理子模块和第六执行子模块。其中,第四获取子模块用于获取帧缓冲存储器内的显示数据;第六处理子模块用于根据验证图像在验证页面中预设的显示位置,在显示数据内提取表征验证图像的目标数据;第六执行子模块用于将目标数据转换为图片格式生成验证图像。
为解决上述技术问题,本申请实施例还提供计算机设备。具体请参阅图9,图9为本实施例计算机设备基本结构框图。
如图9所示,计算机设备的内部结构示意图。该计算机设备包括通过系统总线连接的处理器、非易失性存储介质、存储器和网络接口。其中,该计算机设备的非易失性存储介质存储有操作系统、数据库和计算机可读指令,数据库中可存储有控件信息序列,该计算机可读指 令被处理器执行时,可使得处理器实现一种字符验证方法。该计算机设备的处理器用于提供计算和控制能力,支撑整个计算机设备的运行。该计算机设备的存储器中可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行一种字符验证方法。该计算机设备的网络接口用于与终端连接通信。本领域技术人员可以理解,图9中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
本实施方式中处理器用于执行图8中获取模块2100、处理模块2200和执行模块2300的具体功能,存储器存储有执行上述模块所需的程序代码和各类数据。网络接口用于向用户终端或服务器之间的数据传输。本实施方式中的存储器存储有人脸图像关键点检测装置中执行所有子模块所需的程序代码及数据,服务器能够调用服务器的程序代码及数据执行所有子模块的功能。
计算机设备将背景图像和验证字符同时输入至风格转换模型中进行风格转换,转换得到的验证图像中背景图像和验证字符均被转换为同一种风格。由于,风格转换过程中能够使验证字符与背景图像之间进行深度融合,提高了背景图像与验证字符的混淆度。同时,背景图像与验证字符转换为同一种风格图像,使整个验证图像中纹路变化连贯且平滑,背景图像与验证字符之间不具有尖锐的像素比对,提高了通过图像处理技术提取验证字符的难度,进一步地提高了背景图像与验证字符之间的混淆度,增大了识别难度和错误率,有效的保障字符验证的安全性。
本申请还提供一种存储有计算机可读指令的非易失性存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述任一实施例字符验证方法的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,该计算机程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,前述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等非易失性存储介质,或随机存储记忆体(Random Access Memory,RAM)等。
应该理解的是,虽然附图的流程图中的各个步骤按照箭头的指示 依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,附图的流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。

Claims (20)

  1. 一种字符验证方法,包括:
    获取待合成的验证素材,其中,所述验证素材包括背景图像和验证字符;
    将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像,其中,所述风格转换模型为预先训练至收敛状态,用于将输入图像转换为预设的风格模式的神经网络模型;
    读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证。
  2. 根据权利要求1所述的字符验证方法,所述将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像之前,包括:
    获取所述背景图像中的背景像素值;
    根据预设的像素计算规则计算所述背景像素值对应的填充像素值,其中,所述填充像素值与所述背景像素值之间的色差值等于预设的第一色差阈值;
    调用与所述填充像素值所映射的图像颜色对所述验证字符进行填充。
  3. 根据权利要求2所述的字符验证方法,所述调用与所述填充像素值所映射的图像颜色对所述验证字符进行填充之后,包括:
    将所述验证字符设置在所述背景图像上;
    根据所述验证字符与所述背景图像进行图像合成生成合成图像;
    对所述合成图像进行图像矢量化处理生成矢量图像,其中,所述矢量图像替代所述验证素材输入至所述风格转换模型中。
  4. 根据权利要求1所述的字符验证方法,所述读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证之后,包括:
    获取用户根据所述验证图像输入验证信息时的输入节点信息,其中,所述输入节点信息包括用户输入各个字符时的输入时间;
    根据所述输入节点信息判断所述用户的输入行为是否为异常输入行为;
    当所述用户的输入行为判定为异常输入行为时,确认验证结果为验证失败。
  5. 根据权利要求4所述的字符验证方法,所述根据所述输入节点信息判断所述用户的输入行为是否为异常输入行为包括:
    将所述输入时间按时序排列生成时间矩阵;
    将所述时间矩阵输入至预设的第一验证模型中,以判断所述用户的输入行为是否为异常输入行为,其中,所述第一验证模型为预先训练至收敛状态,用于根据所述输入时间判断用户输入行为是否异常的神经网络模型;
    读取所述第一验证模型输出的判断结果。
  6. 根据权利要求1所述的字符验证方法,所述读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证之后,包括:
    将所述验证图像输入至预设的第二验证模型中,其中,所述第二验证模型为预先训练至收敛状态,用于提取所述验证图像中字符信息的神经网络模型;
    获取所述第二验证模型输出的分类结果,其中,所述分类结果中包括所述第二验证模型提取的所述验证图像中的字符信息;
    将所述字符信息与所述验证字符进行比对,当所述字符信息与所述验证字符一致时,对所述验证图像进行刷新。
  7. 根据权利要求6所述的字符验证方法,所述将所述验证图像输入至预设的第二验证模型中之前,包括:
    获取帧缓冲存储器内的显示数据;
    根据所述验证图像在验证页面中预设的显示位置,在所述显示数据内提取表征所述验证图像的目标数据;
    将所述目标数据转换为图片格式生成所述验证图像。
  8. 一种字符验证装置,包括:
    获取模块,用于获取待合成的验证素材,其中,所述验证素材包括背景图像和验证字符;
    处理模块,用于将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像,其中,所述风格转换模型为预先训练至收敛状态,用于将输入图像转换为预设的风格模式的神经网络模型;
    执行模块,用于读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证。
  9. 一种计算机设备,包括存储器和处理器,所述存储器中存储有 计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行上述权利要求所述字符验证方法的步骤。
  10. 根据权利要求9所述的计算机设备,所述将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像之前,包括:
    获取所述背景图像中的背景像素值;
    根据预设的像素计算规则计算所述背景像素值对应的填充像素值,其中,所述填充像素值与所述背景像素值之间的色差值等于预设的第一色差阈值;
    调用与所述填充像素值所映射的图像颜色对所述验证字符进行填充。
  11. 根据权利要求10所述的计算机设备,所述调用与所述填充像素值所映射的图像颜色对所述验证字符进行填充之后,包括:
    将所述验证字符设置在所述背景图像上;
    根据所述验证字符与所述背景图像进行图像合成生成合成图像;
    对所述合成图像进行图像矢量化处理生成矢量图像,其中,所述矢量图像替代所述验证素材输入至所述风格转换模型中。
  12. 根据权利要求9所述的计算机设备,所述读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证之后,包括:
    获取用户根据所述验证图像输入验证信息时的输入节点信息,其中,所述输入节点信息包括用户输入各个字符时的输入时间;
    根据所述输入节点信息判断所述用户的输入行为是否为异常输入行为;
    当所述用户的输入行为判定为异常输入行为时,确认验证结果为验证失败。
  13. 根据权利要求12所述的计算机设备,所述根据所述输入节点信息判断所述用户的输入行为是否为异常输入行为包括:
    将所述输入时间按时序排列生成时间矩阵;
    将所述时间矩阵输入至预设的第一验证模型中,以判断所述用户的输入行为是否为异常输入行为,其中,所述第一验证模型为预先训练至收敛状态,用于根据所述输入时间判断用户输入行为是否异常的神经网络模型;
    读取所述第一验证模型输出的判断结果。
  14. 根据权利要求9所述的计算机设备,所述读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证之后,包括:
    将所述验证图像输入至预设的第二验证模型中,其中,所述第二验证模型为预先训练至收敛状态,用于提取所述验证图像中字符信息的神经网络模型;
    获取所述第二验证模型输出的分类结果,其中,所述分类结果中包括所述第二验证模型提取的所述验证图像中的字符信息;
    将所述字符信息与所述验证字符进行比对,当所述字符信息与所述验证字符一致时,对所述验证图像进行刷新。
  15. 一种存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述权利要求所述字符验证方法的步骤。
  16. 根据权利要求15所述的非易失性存储介质,所述将所述验证素材输入至预设的风格转换模型中,以生成与预设的风格模式具有相同风格的验证图像之前,包括:
    获取所述背景图像中的背景像素值;
    根据预设的像素计算规则计算所述背景像素值对应的填充像素值,其中,所述填充像素值与所述背景像素值之间的色差值等于预设的第一色差阈值;
    调用与所述填充像素值所映射的图像颜色对所述验证字符进行填充。
  17. 根据权利要求16所述的非易失性存储介质,所述调用与所述填充像素值所映射的图像颜色对所述验证字符进行填充之后,包括:
    将所述验证字符设置在所述背景图像上;
    根据所述验证字符与所述背景图像进行图像合成生成合成图像;
    对所述合成图像进行图像矢量化处理生成矢量图像,其中,所述矢量图像替代所述验证素材输入至所述风格转换模型中。
  18. 根据权利要求15所述的非易失性存储介质,所述读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证之后,包括:
    获取用户根据所述验证图像输入验证信息时的输入节点信息,其中,所述输入节点信息包括用户输入各个字符时的输入时间;
    根据所述输入节点信息判断所述用户的输入行为是否为异常输 入行为;
    当所述用户的输入行为判定为异常输入行为时,确认验证结果为验证失败。
  19. 根据权利要求18所述的非易失性存储介质,所述根据所述输入节点信息判断所述用户的输入行为是否为异常输入行为包括:
    将所述输入时间按时序排列生成时间矩阵;
    将所述时间矩阵输入至预设的第一验证模型中,以判断所述用户的输入行为是否为异常输入行为,其中,所述第一验证模型为预先训练至收敛状态,用于根据所述输入时间判断用户输入行为是否异常的神经网络模型;
    读取所述第一验证模型输出的判断结果。
  20. 根据权利要求15所述的非易失性存储介质,所述读取所述风格转换模型输出的所述验证图像,以将所述验证图像用于字符验证之后,包括:
    将所述验证图像输入至预设的第二验证模型中,其中,所述第二验证模型为预先训练至收敛状态,用于提取所述验证图像中字符信息的神经网络模型;
    获取所述第二验证模型输出的分类结果,其中,所述分类结果中包括所述第二验证模型提取的所述验证图像中的字符信息;
    将所述字符信息与所述验证字符进行比对,当所述字符信息与所述验证字符一致时,对所述验证图像进行刷新。
PCT/CN2019/103664 2019-08-21 2019-08-30 字符验证方法、装置、计算机设备及存储介质 WO2021031242A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910774964.4A CN110675308B (zh) 2019-08-21 2019-08-21 字符验证方法、装置、计算机设备及存储介质
CN201910774964.4 2019-08-21

Publications (1)

Publication Number Publication Date
WO2021031242A1 true WO2021031242A1 (zh) 2021-02-25

Family

ID=69075429

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103664 WO2021031242A1 (zh) 2019-08-21 2019-08-30 字符验证方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN110675308B (zh)
WO (1) WO2021031242A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111953647B (zh) * 2020-06-22 2022-09-27 北京百度网讯科技有限公司 安全校验方法、装置、电子设备和存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170310653A1 (en) * 2016-04-22 2017-10-26 Sony Corporation Client, server, method and identity verification system
CN109711136A (zh) * 2017-10-26 2019-05-03 武汉极意网络科技有限公司 存储设备、验证码图片生成方法和装置
CN108229130A (zh) * 2018-01-30 2018-06-29 中国银联股份有限公司 一种验证方法及装置
CN108846274A (zh) * 2018-04-09 2018-11-20 腾讯科技(深圳)有限公司 一种安全验证方法、装置及终端
CN109918893A (zh) * 2019-02-13 2019-06-21 平安科技(深圳)有限公司 图片验证码生成方法、装置、存储介质和计算机设备

Also Published As

Publication number Publication date
CN110675308A (zh) 2020-01-10
CN110675308B (zh) 2024-04-26

Similar Documents

Publication Publication Date Title
US11899927B2 (en) Simulated handwriting image generator
US11068746B2 (en) Image realism predictor
CN109815924B (zh) 表情识别方法、装置及系统
WO2017193906A1 (zh) 一种图像处理方法及处理系统
CN107679466B (zh) 信息输出方法和装置
CN112150450B (zh) 一种基于双通道U-Net模型的图像篡改检测方法及装置
WO2022188697A1 (zh) 提取生物特征的方法、装置、设备、介质及程序产品
JP2021532434A (ja) 顔特徴抽出モデル訓練方法、顔特徴抽出方法、装置、機器および記憶媒体
Vieira et al. Learning good views through intelligent galleries
Duan et al. Face verification with local sparse representation
CN113963409A (zh) 一种人脸属性编辑模型的训练以及人脸属性编辑方法
CN115862120B (zh) 可分离变分自编码器解耦的面部动作单元识别方法及设备
KR102225356B1 (ko) Gui 디자인에 대한 피드백을 제공하는 방법 및 장치
WO2021031242A1 (zh) 字符验证方法、装置、计算机设备及存储介质
Nakanishi Approximate Inverse Model Explanations (AIME): Unveiling Local and Global Insights in Machine Learning Models
WO2021000407A1 (zh) 字符验证方法、装置、计算机设备及存储介质
CN113538254A (zh) 图像恢复方法、装置、电子设备及计算机可读存储介质
CN116823983A (zh) 基于风格收集机制的一对多风格书法图片生成方法
KR20200137129A (ko) 관계형 질의를 이용한 객체 검출방법 및 그 장치
Dapogny et al. On Automatically Assessing Children's Facial Expressions Quality: A Study, Database, and Protocol
Reddy et al. Effect of image colourspace on performance of convolution neural networks
CN113744158A (zh) 图像生成方法、装置、电子设备和存储介质
CN113610080A (zh) 基于跨模态感知的敏感图像识别方法、装置、设备及介质
Wilson et al. Towards mitigating uncann (eye) ness in face swaps via gaze-centric loss terms
CN117078942B (zh) 上下文感知的指称图像分割方法、系统、设备及存储介质

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19941960; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 19941960; Country of ref document: EP; Kind code of ref document: A1)