CN113282905A - Login test method and device - Google Patents

Login test method and device

Info

Publication number
CN113282905A
Authority
CN
China
Prior art keywords
image
area image
characters
character
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110439609.9A
Other languages
Chinese (zh)
Inventor
卜玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Security Technologies Co Ltd
Original Assignee
New H3C Security Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Security Technologies Co Ltd filed Critical New H3C Security Technologies Co Ltd
Priority to CN202110439609.9A
Publication of CN113282905A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/36 User authentication by graphic or iconic representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/14 Image acquisition
    • G06V 30/148 Segmentation of character regions
    • G06V 30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/41 Analysis of document content
    • G06V 30/413 Classification of content, e.g. text, photographs or tables

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Character Input (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a login test method and device. The method comprises: acquiring a verification code image from a login page; denoising the verification code image to obtain a target verification code image; performing image segmentation on the target verification code image to obtain a first area image containing characters and a second area image excluding the characters; identifying the object class of the object described by the characters in the first area image; performing object recognition on the second area image according to that object class, so as to determine the position information of the object matching the class; and performing a login test using the position information. This ensures complete login testing based on image verification codes and avoids blind spots in test coverage.

Description

Login test method and device
Technical Field
The present application relates to the field of security technologies, and in particular, to a login testing method and device.
Background
To improve login security, many web pages use verification codes (captchas) to block brute-force password cracking. For testing, however, automating the login of a web page protected by a verification code is a hard problem. Verification codes come in many types, such as digital captchas, arithmetic captchas, slide captchas, and image captchas. Of these, the image captcha is the most difficult, and completing its login verification accurately and quickly is the key.
In the prior art, login verification is usually tested by disabling the verification code function, by providing a universal verification code for test purposes, or by using cookies to bypass the verification code.
Therefore, how to perform complete test verification of a login method based on an image verification code, without leaving blind spots in the test process, is a notable technical problem.
Disclosure of Invention
In view of this, the present application provides a login test method and device for performing complete test verification of a login method based on an image verification code, so as to avoid blind spots in the testing process.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of the present application, there is provided a login testing method, comprising:
acquiring a verification code image in a login page;
denoising the verification code image to obtain a target verification code image;
performing image segmentation processing on the target verification code image to obtain a first area image comprising characters and a second area image except the characters;
identifying an object class of an object represented by characters in the first area image;
according to the object type of the object represented by the characters, carrying out object identification on the second area image so as to identify the position information of the object matched with the object type;
and performing login test by using the position information.
According to a second aspect of the present application, there is provided a login test device comprising:
the image acquisition module is used for acquiring a verification code image in the login page;
the de-noising processing module is used for de-noising the verification code image to obtain a target verification code image;
the image segmentation module is used for carrying out image segmentation processing on the target verification code image to obtain a first area image comprising characters and a second area image except the characters;
the character recognition module is used for recognizing the object category of the object represented by the characters in the first area image;
the object identification module is used for carrying out object identification on the second area image according to the object category of the object represented by the characters so as to identify the position information of the object matched with the object category;
and the login testing module is used for performing login testing by using the position information.
The beneficial effects of the embodiment of the application are as follows:
After the verification code image in the login page is denoised, the characters and the objects in the target verification code image are recognized separately, yielding the position information of the object whose class matches the object class described by the recognized characters; a login test is then performed based on that position information. This realizes login test verification based on image verification codes and avoids the test blind spots caused by the workaround schemes of the prior art.
Drawings
Fig. 1 is a schematic flowchart of a login testing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an acquired verification code image provided in an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating image segmentation of the target verification code image according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of identifying the object class of the object described by the characters in the first area image according to an embodiment of the present application;
fig. 5 is a schematic diagram of a grayscale image obtained by performing grayscale processing on the first area image according to an embodiment of the present application;
fig. 6 is a schematic diagram of the character widths obtained after vertically projecting the grayscale image according to an embodiment of the present application;
FIG. 7 is an image schematic diagram of a target area image provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of the training and processing logic of a character recognition model according to an embodiment of the present application;
FIG. 9 is a schematic processing diagram of a convolutional neural network model provided in an embodiment of the present application;
FIG. 10a is a schematic sliding view of a sliding window provided by an embodiment of the present application;
FIG. 10b is a diagram illustrating objects matched by object recognition provided by an embodiment of the present application;
fig. 11 is a schematic diagram illustrating calculation of position information of a calculation object in a second area image according to an embodiment of the present application;
fig. 12 is a schematic diagram of a center position of an object region image provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of a login testing device according to an embodiment of the present application;
fig. 14 is a schematic hardware structure diagram of an electronic device implementing a login test method according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the corresponding listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
The following describes the login test method provided in the present application in detail.
Referring to fig. 1, fig. 1 is a flowchart of a login testing method provided in the present application, which may include the following steps:
s101, acquiring a verification code image in a login page.
In this step, when the login process is tested, a verification code image appears on the login page; to complete the login test successfully, the correct image within the verification code image must be selected. For example, the verification code image in the login page may be obtained by crawling, as shown in fig. 2.
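As an illustrative sketch only (Selenium, the login URL, and the element id "captcha" below are assumptions, not specified by this application), the verification code image could be captured from the rendered login page like this:

```python
# Minimal acquisition sketch for step S101; locator and URL are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")              # placeholder login page
captcha_el = driver.find_element(By.ID, "captcha")   # hypothetical captcha <img> element
captcha_el.screenshot("captcha.png")                 # save the rendered verification code image
```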
S102, denoising the verification code image to obtain a target verification code image.
In this step, because verification code images are mostly blurred and noise has been added to increase security and recognition difficulty, a wavelet transform is used to denoise the verification code image: the noise is removed well while the image details are preserved, yielding the target verification code image.
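The application fixes only the technique (wavelet-transform denoising), not its parameters; a minimal sketch with PyWavelets, assuming a grayscale input, a db4 wavelet, and universal soft thresholding (all assumptions), might look like this:

```python
import numpy as np
import pywt

def wavelet_denoise(img_gray: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Step S102 sketch: remove noise while keeping image detail."""
    coeffs = pywt.wavedec2(img_gray.astype(np.float32), wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band, then
    # soft-threshold every detail coefficient with the universal threshold.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(img_gray.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    out = pywt.waverec2(denoised, wavelet)
    h, w = img_gray.shape
    return np.clip(out[:h, :w], 0, 255).astype(np.uint8)  # trim padding, restore uint8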
S103, carrying out image segmentation processing on the target verification code image to obtain a first area image comprising characters and a second area image except the characters.
In this step, note that a verification code image generally consists of two parts: an image carrying the textual description of the target object to be selected, and an image containing the candidate objects. Since the textual description and the objects are recognized differently, this embodiment performs image segmentation on the target verification code image to separate a first area image containing the characters from a second area image excluding them.
Alternatively, step S103 may be performed as follows: grid the target verification code image at a preset length; determine the position information of the characters within the target verification code image, together with the image height and image width they occupy; segment the first area image out of the target verification code image according to the determined position information, image width, and image height; and take the image other than the first area image in the target verification code image as the second area image.
Specifically, the layout of a verification code image is uniform and regular: there is a clear boundary between the region containing the text and the region containing the objects. When dividing the target verification code image, it may therefore be gridded at a fixed length and an image division range set in advance, i.e., the distribution range of the text portion — its position information within the target verification code image. Taking a top-left text position as an example, the image height and image width occupied by the text are then determined; for instance, the height is 3/20 of the target verification code image height and the width is the full image width. On this basis, with the top-left corner of the target verification code image as the origin (0,0), the image range of the text portion is recorded as: from (0,0) to (image width, image height of the target verification code × 3/20). This range is taken as the first area image containing the characters, and the remaining region is taken as the second area image excluding the characters, i.e., the region where the objects are located, as shown in fig. 3.
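Under the layout just described (text strip across the top, 3/20 of the image height; the fraction is the example value above, not fixed by the application), the split reduces to two array crops:

```python
import numpy as np

def split_captcha(target_img: np.ndarray):
    """Step S103 sketch: separate the text strip from the object area."""
    h, w = target_img.shape[:2]
    text_h = (h * 3) // 20                    # text occupies the top 3/20 of the height
    first_region = target_img[:text_h, :w]    # characters describing the target object
    second_region = target_img[text_h:, :w]   # remaining area containing the objects
    return first_region, second_region
```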
It should be noted that the position information, the image height, and the image width of the characters in the target verification code image can be adjusted according to the actual image layout. In addition, the preset length in the embodiment can be adjusted according to actual conditions, and the specific value of the preset length is not limited in the application.
And S104, identifying the object type of the object represented by the characters in the first area image.
In this step, character recognition of the first area image (step S104) may be performed with an existing character recognition algorithm; other algorithms may of course be used as well, which is not limited in this application.
For better understanding of the present embodiment, it is proposed that step S104 may be performed according to the procedure shown in fig. 4: s401, carrying out gray level processing on the first area image to obtain a gray level image; s402, carrying out segmentation processing on the characters in the gray level image to obtain a character image of each character; and S403, performing character recognition on the character image of each character by using a pre-trained character recognition model to obtain the object type of the object represented by the characters in the first area image.
Specifically, the first area image may be converted to grayscale, e.g., by grayscale binarization, to obtain the grayscale image shown in fig. 5, i.e., an image with only two colors, black and white. Since the characters are the only glyphs in the grayscale image, character segmentation amounts to segmenting those characters; for example, a method based on the maximum gap width between adjacent characters may be used, quickly yielding a character image for each character. Each character image can then be input into a pre-trained character recognition model, which first extracts features and then computes with the extracted character features, producing the object class of the object described by the characters in the first area image.
Optionally, interfering characters, such as special symbols added to hinder recognition, may be present in the first area image, and these reduce the recognition rate. To prevent such special characters from affecting the rate and to recognize the Chinese characters in the first area image accurately, this embodiment optimizes the grayscale image before segmenting it. This can be done as follows: perform grayscale processing on the first area image to obtain a grayscale image; project the grayscale image in the vertical direction to obtain the width of each character; and remove, from the grayscale image, characters whose width is smaller than a set width, obtaining a target area image, where the set width is half the average character width.
Specifically, the grayscale image may be optimized again to delete characters that obviously do not match real character characteristics. Since a special character is significantly narrower than a real character, the grayscale image can be projected in the vertical direction (i.e., vertically downward) to obtain the projection of each character, from which the character width wi of each character can be calculated, as shown in fig. 6. The average character width of the grayscale image is then computed by averaging the widths of all characters; for example, if the grayscale image contains n characters, the average width is (w1 + w2 + … + wn)/n. Half of this average width is taken as the set width, any region narrower than it is treated as noise, and every character in the grayscale image whose width is smaller than the set width is deleted. This eliminates all special characters and keeps only the real characters, so the resulting target area image contains characters only, as shown in fig. 7, greatly improving recognition accuracy.
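A sketch of this width filter, assuming the grayscale image has already been binarized with black strokes (0) on a white background (255):

```python
import numpy as np

def remove_narrow_chars(binary: np.ndarray) -> np.ndarray:
    """Delete marks narrower than half the average character width."""
    profile = (binary == 0).sum(axis=0)            # vertical (downward) projection
    runs, start = [], None
    for i, has_ink in enumerate(profile > 0):      # group consecutive inked columns
        if has_ink and start is None:
            start = i
        elif not has_ink and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(profile)))
    if not runs:
        return binary.copy()
    widths = [e - s for s, e in runs]              # w1 .. wn
    set_width = (sum(widths) / len(widths)) / 2    # half of the average width
    cleaned = binary.copy()
    for (s, e), w in zip(runs, widths):
        if w < set_width:
            cleaned[:, s:e] = 255                  # blank out the interfering character
    return cleaned
```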
On this basis, after the grayscale image has been optimized, segmentation proceeds as follows: the target area image is segmented to obtain a character image of each character it contains.
Specifically, after the special characters are removed from the grayscale image, a target area image is obtained, which is then segmented by characters; as before, the method of maximum gap width between adjacent characters may be used, yielding a character image for each character.
Optionally, the character recognition model in this embodiment may be trained according to the logic shown in fig. 8. The training process is roughly as follows: first, a character training sample library containing a large number of training samples (character sample images) is set up; the character recognition model is then trained with these images, extracting character features from each sample and computing with them. Training ends when the number of training iterations reaches a set count, or when the model converges; the trained character recognition model is then available. Thus, once the character images of the individual characters are obtained, features can be extracted from each character image and fed into the trained character recognition model, which outputs a category: the object class of the object described by the characters. For example, character recognition on the target area image obtained in fig. 7 yields the object class "rectangle".
Optionally, the extracted character features may be, but are not limited to, histogram of oriented gradients (HOG) features and the like.
It should be noted that the character recognition model in this embodiment may be a first convolutional neural network model composed of an Input Layer, a Convolutional Layer, a Pooling Layer, and a Fully Connected Layer, as shown in fig. 9. The convolutional layer mainly performs convolution on the input character image to extract character features. The pooling layer is mainly responsible for downsampling the input feature maps: it computes new features over small blocks according to the spatial position information of the feature matrix and the configured partitioning, replacing the information in the original blocks. The fully connected layer carries the largest amount of data; it converts the two-dimensional matrices produced by the input layer and the last pooling layer into a one-dimensional matrix as the final output.
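A minimal PyTorch sketch of such a network; the layer types follow fig. 9, while the channel counts and the 32×32 input size are illustrative assumptions:

```python
import torch.nn as nn

class CharRecognizer(nn.Module):
    """Input -> convolution -> pooling -> fully connected, as in fig. 9."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution: extract character features
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling: downsample the feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # The fully connected layer turns the final 2-D feature maps into a 1-D output.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):  # x: (N, 1, 32, 32) grayscale character images
        return self.classifier(self.features(x).flatten(1))
```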
The object class in this embodiment may be the major class to which the object belongs, or a subclass of that major class; this is not limited in this application and may be configured according to the actual situation. If the object class is the major class, objects matching the major class are identified in the second area image. For example, if the characters are "clock", the major class to which a clock belongs is "timepiece"; any timepiece in the second area image (a clock, a watch, and the like) then matches the class "timepiece", and the position information of the image where that timepiece is located is output. If the object class is a subclass, only objects matching the subclass are identified in the second area image: for the characters "clock", the corresponding subclass is "clock", and only the position information of the clock in the second area image is output.
And S105, carrying out object identification on the second area image according to the object type of the object represented by the characters so as to identify the position information of the object matched with the object type.
In this step, after the object class of the object described by the characters in the verification code image has been identified in step S104, the objects in the second area image can be matched against that class; the object consistent with the identified class is found, its position information is output, and the login test can then be performed based on that position information.
Alternatively, step S105 may be implemented as follows: perform object recognition on the second area image with a pre-trained object recognition model, so as to identify the position information and object class of each object in the second area image; then, according to the object class of the object described by the characters and the identified class and position information of each object, determine the position information of the object matching the character-derived class.
Specifically, the pre-trained object recognition model performs object recognition on the objects in the second area image and outputs the position information and object class of every object the image contains. That output can then be matched against the object class obtained from character recognition, yielding the position information of the object whose class corresponds to the character-derived one.
Alternatively, the character-recognized object class and the second area image may be input together into a pre-trained object recognition model, so that the model outputs only the position information of the object matching that class. Note that the structure of the object recognition model in this case differs from the object recognition model described above, and its training process and training samples also differ.
Optionally, performing object recognition on the second area image with the pre-trained object recognition model, so as to identify the position information and object class of each object, may also proceed as follows: enlarge the second area image to obtain a third area image; perform object positioning on the third area image with a sliding window; whenever an object is located, input the object area image where it lies into the pre-trained object recognition model to obtain the object's position information in the third area image and its object class. The position information in the third area image is then converted into position information in the second area image, using the image width and image height of the third area image and those of the second area image, as described below with reference to figs. 10a, 10b, and 11.
Specifically, object recognition resembles character recognition, and the second area image may likewise be denoised and corrected. Because the object recognition accuracy depends strongly on image resolution, this embodiment enlarges the second area image before recognition, e.g., by bilinear interpolation, to raise the resolution and obtain the third area image; the image width and image height of the third area image are recorded at the same time. Moreover, unlike images of natural scenes, a verification code image has a plain background, the objects to be recognized are generally fixed in position and centered, and their number is generally fixed. The recognition can therefore be simplified: the objects can be captured without predicting their orientation, using a sliding-window method, as follows. Assuming the third area image has image width Wh and contains m objects to be recognized, a sliding window of length Wh/m is set and slid clockwise over the third area image, as shown in fig. 10a; after each slide, the area under the window is the object area image of one object. That object area image is input into the pre-trained object recognition model, which returns the object's position information and object class; scanning and positioning thus happen simultaneously, giving a real-time positioning effect. The window is then moved clockwise to locate the next object, whose object area image is input into the trained model in turn, and so on until every object in the third area image has been scanned and its class and position determined. Finally, the object whose class equals the class obtained from character recognition is selected, together with its position information, as shown in fig. 10b: the marked box in fig. 10b, namely a rectangle, is the object consistent with the character-derived class.
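A sketch of the window scan, assuming the m candidate objects sit in a single row of equal-width cells and that `recognize` is a hypothetical wrapper around the trained object recognition model:

```python
import numpy as np

def scan_objects(third_region: np.ndarray, recognize, m: int):
    """Slide a window of length Wh/m over the enlarged image and classify each cell."""
    win = third_region.shape[1] // m          # window length Wh/m
    results = []
    for i in range(m):
        x0 = i * win
        patch = third_region[:, x0:x0 + win]  # object area image under the window
        obj_class = recognize(patch)          # hypothetical call into the trained model
        results.append((obj_class, (x0, 0)))  # class plus the window's top-left position
    return results
```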
It should be noted that the position information obtained at this point locates the object in the third area image, i.e., in the enlarged copy of the second area image, so the object's position in the second area image must still be determined; positions may be expressed as horizontal and vertical coordinates. The position in the third area image is therefore scaled back, as shown in fig. 11. For example, let the second area image have image width y and image height x, and the enlarged third area image have image width y' and image height x'. If the recognized position of the object in the third area image is (a', b') and its position in the second area image is (k, h), then the abscissa is k = x·b'/x' and the ordinate is h = y·a'/y'. This yields the object's position in the second area image, and in particular the position of the object matching the character-derived class; the object area image containing that object is the image to be selected.
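With the conventions of the paragraph above (second area width y and height x, third area width y' and height x', object at (a', b')), the restoration is a direct rescaling:

```python
def restore_position(a_p: float, b_p: float,
                     x: float, y: float, x_p: float, y_p: float):
    """Map (a', b') in the third area image back into the second area image,
    using exactly the formulas above: k = x*b'/x', h = y*a'/y'."""
    k = x * b_p / x_p   # abscissa in the second area image
    h = y * a_p / y_p   # ordinate in the second area image
    return k, h
```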
Optionally, the training process of the object recognition model is roughly as follows: first, an object training sample library containing a large number of object sample images is set up; the object recognition model is then trained with these images, extracting object features from each sample and computing with them. Training ends when the number of training iterations reaches a set count, or when the model converges; the trained object recognition model is then available. Thus, once the second area image is obtained, object features can be extracted from it and input into the trained object recognition model, which outputs the object class and position information of each object in the second area image.
It should be noted that the above object recognition model may be a second convolutional neural network model whose structure is identical to that of the first convolutional neural network model but whose training process differs. Further, the second convolutional neural network model in this embodiment may have, but is not limited to, 16 layers.
And S106, performing login test by using the position information.
In this step, after the position information of the object in the second area image matching the character-derived object class has been calculated, the click position can be computed. The center of the object area image in fig. 12 is taken as the click position, and its coordinates, (k + y/2, h + x/2), are determined from the object's position information (which can be understood as the top-left corner of the object area image; see fig. 12). From the coordinates of the click position, the target click position within the whole login page can be calculated, and the login test is then performed by simulating a click at that target position.
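A hedged sketch of the final simulated click with Selenium ActionChains; note that the offset origin of move_to_element_with_offset differs across Selenium versions (top-left corner in older releases, element centre in current ones), so the arithmetic below assumes the top-left convention:

```python
from selenium.webdriver.common.action_chains import ActionChains

def click_matched_object(driver, captcha_el, k, h, region_w, region_h):
    """Step S106 sketch: click the centre of the matched object area image,
    whose top-left corner (k, h) is given relative to the captcha element."""
    ActionChains(driver).move_to_element_with_offset(
        captcha_el, int(k + region_w / 2), int(h + region_h / 2)
    ).click().perform()
```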
It should be noted that the object referred to in any embodiment of the present application may be understood as an item other than a character.
By implementing the login test method provided by any embodiment of the application, after the verification code image in the login page is denoised, the characters and the objects in the target verification code image are recognized separately, yielding the position information of the object whose class matches the object class described by the recognized characters; a login test is then performed based on that position information. This realizes login test verification based on image verification codes and avoids the test blind spots caused by the workaround schemes of the prior art.
Based on the same inventive concept, the application also provides a login testing device corresponding to the login testing method. The implementation of the login testing device can refer to the above description of the login testing method, and is not discussed here.
Referring to fig. 13, fig. 13 is a login testing device according to an exemplary embodiment of the present application, including:
an image obtaining module 1301, configured to obtain an authentication code image in a login page;
a denoising module 1302, configured to perform denoising processing on the verification code image to obtain a target verification code image;
an image segmentation module 1303, configured to perform image segmentation processing on the target verification code image to obtain a first area image including characters and a second area image excluding the characters;
a character recognition module 1304, configured to recognize an object class of an object represented by characters in the first area image;
an object recognition module 1305, configured to perform object recognition on the second area image according to an object category of an object represented by the text, so as to recognize position information of the object matching the object category;
and a login testing module 1306, configured to perform a login test using the location information.
Optionally, the image segmentation module 1303 is specifically configured to grid the target verification code image at a preset length; determine the position information of the characters within the target verification code image, and the image height and the image width occupied by the characters; segment the first area image from the target verification code image according to the determined position information, image width, and image height; and determine the image other than the first area image in the target verification code image as the second area image.
Optionally, the character recognition module 1304 is specifically configured to perform gray processing on the first area image to obtain a gray image; carrying out segmentation processing on the characters in the gray level image to obtain a character image of each character; and performing character recognition on the character image of each character by using a pre-trained character recognition model to obtain the object class of the object represented by the characters in the first area image.
Optionally, the login testing apparatus provided in this embodiment further includes:
a projection processing module (not shown in the figure) for performing vertical projection on the gray image to obtain the character width of each character before the character recognition module 1304 performs segmentation processing on the characters in the gray image;
a character removing module (not shown in the figure) for removing characters with a character width smaller than a set width from the grayscale image to obtain a target area image, wherein the set width is half of the average width of the characters;
on this basis, the character recognition module 1304 is further configured to perform segmentation processing on the target area image to obtain a character image of each character in the target area image.
Optionally, the object identifying module 1305 is specifically configured to perform object identification on the second area image by using a pre-trained object identification model, so as to identify position information of each object in the second area image and an object class of the object; and determining the position information of the object matched with the object class of the object represented by the characters according to the object class of the object represented by the characters and the identified object class and position information of each object.
Optionally, the object identifying module 1305 is specifically configured to perform amplification processing on the second area image to obtain a third area image; carrying out object positioning processing on the third area image by adopting a sliding window; when an object is positioned, inputting an object area image where the object is located into a pre-trained object recognition model to obtain position information of the object in the third area image and an object type of the object; and processing the position information of the object in the third area image according to the image width and the image height of the third area image and the image width of the second area image to obtain the position information of the object in the second area image.
Based on the same inventive concept, the embodiment of the present application provides an electronic device, as shown in fig. 14, including a processor 1401 and a machine-readable storage medium 1402, where the machine-readable storage medium 1402 stores a computer program capable of being executed by the processor 1401, and the processor 1401 is caused by the computer program to execute the login testing method provided by the embodiment of the present application.
The computer-readable storage medium may include RAM (Random Access Memory) or DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory), and may also include NVM (Non-Volatile Memory), such as at least one disk memory. Alternatively, the computer-readable storage medium may be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In addition, the embodiment of the present application provides a machine-readable storage medium, which stores a computer program, and when the computer program is called and executed by a processor, the computer program causes the processor to execute the login testing method provided by the embodiment of the present application.
For the embodiments of the electronic device and the machine-readable storage medium, since the contents of the related methods are substantially similar to those of the foregoing embodiments of the methods, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the embodiments of the methods.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The implementation process of the functions and actions of each unit/module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the units/modules described as separate parts may or may not be physically separate, and the parts displayed as units/modules may or may not be physical units/modules, may be located in one place, or may be distributed on a plurality of network units/modules. Some or all of the units/modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (12)

1. A login testing method is characterized by comprising the following steps:
acquiring a verification code image in a login page;
denoising the verification code image to obtain a target verification code image;
performing image segmentation processing on the target verification code image to obtain a first area image comprising characters and a second area image except the characters;
identifying an object class of an object represented by characters in the first area image;
according to the object type of the object represented by the characters, carrying out object identification on the second area image so as to identify the position information of the object matched with the object type;
and performing login test by using the position information.
2. The method of claim 1, wherein performing image segmentation on the target verification code image to obtain a first region image including text and a second region image excluding the text comprises:
gridding the target verification code image according to a preset length;
determining the position information of the characters within the target verification code image, and the image height and the image width occupied by the characters;
segmenting the first area image from the target verification code image according to the determined position information, the image width and the image height;
and determining an image except the first area image in the target verification code image as a second area image.
3. The method of claim 1, wherein identifying an object class of an object characterized by text in the first region image comprises:
carrying out gray level processing on the first area image to obtain a gray level image;
carrying out segmentation processing on the characters in the gray level image to obtain a character image of each character;
and performing character recognition on the character image of each character by using a pre-trained character recognition model to obtain the object class of the object represented by the characters in the first area image.
4. The method of claim 3, further comprising, prior to the segmenting the text in the grayscale image:
projecting the gray level image in the vertical direction to obtain the character width of each character;
removing characters with character width smaller than a set width from the gray level image to obtain a target area image, wherein the set width is half of the average width of the characters;
then, the dividing process is performed on the characters in the grayscale image to obtain a character image of each character, including:
and carrying out segmentation processing on the target area image to obtain a character image of each character in the target area image.
5. The method of claim 1, wherein performing object recognition on the second region image according to an object class of an object characterized by the text to identify position information of the object matching the object class comprises:
carrying out object recognition on the second area image by using a pre-trained object recognition model so as to recognize the position information of each object in the second area image and the object type of the object;
and determining the position information of the object matched with the object class of the object represented by the characters according to the object class of the object represented by the characters and the identified object class and position information of each object.
6. The method of claim 5, wherein performing object recognition on the second region image by using a pre-trained object recognition model to identify the position information of each object in the second region image and the object class of the object comprises:
amplifying the second area image to obtain a third area image;
carrying out object positioning processing on the third area image by adopting a sliding window;
when an object is positioned, inputting an object area image where the object is located into a pre-trained object recognition model to obtain position information of the object in the third area image and an object type of the object;
and processing the position information of the object in the third area image according to the image width and the image height of the third area image and the image width of the second area image to obtain the position information of the object in the second area image.
7. A login testing device, comprising:
the image acquisition module is used for acquiring a verification code image in the login page;
the de-noising processing module is used for de-noising the verification code image to obtain a target verification code image;
the image segmentation module is used for carrying out image segmentation processing on the target verification code image to obtain a first area image comprising characters and a second area image except the characters;
the character recognition module is used for recognizing the object category of the object represented by the characters in the first area image;
the object identification module is used for carrying out object identification on the second area image according to the object category of the object represented by the characters so as to identify the position information of the object matched with the object category;
and the login testing module is used for performing login testing by using the position information.
8. The apparatus of claim 7,
the image segmentation module is specifically configured to grid the target verification code image at a preset length; determine the position information of the characters within the target verification code image, and the image height and the image width occupied by the characters; segment the first area image from the target verification code image according to the determined position information, image width, and image height; and determine the image other than the first area image in the target verification code image as the second area image.
9. The apparatus of claim 7,
the character recognition module is specifically used for carrying out gray level processing on the first area image to obtain a gray level image; carrying out segmentation processing on the characters in the gray level image to obtain a character image of each character; and performing character recognition on the character image of each character by using a pre-trained character recognition model to obtain the object class of the object represented by the characters in the first area image.
10. The apparatus of claim 9, further comprising:
the projection processing module is used for projecting the gray level image in the vertical direction to obtain the character width of each character before the character recognition module performs segmentation processing on the characters in the gray level image;
the character removing module is used for removing characters with character width smaller than a set width from the gray level image to obtain a target area image, and the set width is half of the average width of the characters;
the character recognition module is further configured to perform segmentation processing on the target area image to obtain a character image of each character in the target area image.
11. The apparatus of claim 7,
the object identification module is specifically configured to perform object identification on the second area image by using a pre-trained object identification model to identify position information of each object in the second area image and an object type of the object; and determining the position information of the object matched with the object class of the object represented by the characters according to the object class of the object represented by the characters and the identified object class and position information of each object.
12. The apparatus of claim 11,
the object identification module is specifically configured to perform amplification processing on the second area image to obtain a third area image; carrying out object positioning processing on the third area image by adopting a sliding window; when an object is positioned, inputting an object area image where the object is located into a pre-trained object recognition model to obtain position information of the object in the third area image and an object type of the object; and processing the position information of the object in the third area image according to the image width and the image height of the third area image and the image width of the second area image to obtain the position information of the object in the second area image.
CN202110439609.9A 2021-04-23 2021-04-23 Login test method and device Pending CN113282905A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110439609.9A CN113282905A (en) 2021-04-23 2021-04-23 Login test method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110439609.9A CN113282905A (en) 2021-04-23 2021-04-23 Login test method and device

Publications (1)

Publication Number Publication Date
CN113282905A (en) 2021-08-20

Family

ID=77277241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110439609.9A Pending CN113282905A (en) 2021-04-23 2021-04-23 Login test method and device

Country Status (1)

Country Link
CN (1) CN113282905A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113704128A (en) * 2021-09-03 2021-11-26 四川虹美智能科技有限公司 Automatic testing method and device for interface
CN114066402A (en) * 2021-11-09 2022-02-18 中国电力科学研究院有限公司 Method and system for realizing automatic flow based on character recognition
CN116301456A (en) * 2023-02-21 2023-06-23 广州市保伦电子有限公司 Windows client login test management method, device and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971045A (en) * 2013-01-25 2014-08-06 苏州精易会信息技术有限公司 Click type verification code implementation method
CN108182437A (en) * 2017-12-29 2018-06-19 北京金堤科技有限公司 One kind clicks method for recognizing verification code, device and user terminal
CN109697353A (en) * 2018-11-26 2019-04-30 武汉极意网络科技有限公司 A kind of verification method and device for clicking identifying code
CN111783062A (en) * 2020-05-28 2020-10-16 苏宁金融科技(南京)有限公司 Verification code identification method and device, computer equipment and storage medium
CN112070092A (en) * 2020-09-02 2020-12-11 北京明略昭辉科技有限公司 Verification code parameter acquisition method and device
CN112487394A (en) * 2020-11-30 2021-03-12 携程旅游网络技术(上海)有限公司 Method, system, device and medium for identifying graph reasoning verification code

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971045A (en) * 2013-01-25 2014-08-06 苏州精易会信息技术有限公司 Click type verification code implementation method
CN108182437A (en) * 2017-12-29 2018-06-19 北京金堤科技有限公司 One kind clicks method for recognizing verification code, device and user terminal
CN109697353A (en) * 2018-11-26 2019-04-30 武汉极意网络科技有限公司 A kind of verification method and device for clicking identifying code
CN111783062A (en) * 2020-05-28 2020-10-16 苏宁金融科技(南京)有限公司 Verification code identification method and device, computer equipment and storage medium
CN112070092A (en) * 2020-09-02 2020-12-11 北京明略昭辉科技有限公司 Verification code parameter acquisition method and device
CN112487394A (en) * 2020-11-30 2021-03-12 携程旅游网络技术(上海)有限公司 Method, system, device and medium for identifying graph reasoning verification code

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113704128A (en) * 2021-09-03 2021-11-26 四川虹美智能科技有限公司 Automatic testing method and device for interface
CN114066402A (en) * 2021-11-09 2022-02-18 中国电力科学研究院有限公司 Method and system for realizing automatic flow based on character recognition
CN114066402B (en) * 2021-11-09 2023-11-28 中国电力科学研究院有限公司 Automatic flow implementation method and system based on character recognition
CN116301456A (en) * 2023-02-21 2023-06-23 广州市保伦电子有限公司 Windows client login test management method, device and system
CN116301456B (en) * 2023-02-21 2024-06-11 广东保伦电子股份有限公司 Windows client login test management method, device and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210820