CN112381105B - Automated processing method, device, equipment and medium for slider verification - Google Patents

Automated processing method, device, equipment and medium for slider verification

Info

Publication number
CN112381105B
CN112381105B (application CN202011279370.5A)
Authority
CN
China
Prior art keywords
image
slider
characteristic
feature
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011279370.5A
Other languages
Chinese (zh)
Other versions
CN112381105A (en)
Inventor
张杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011279370.5A priority Critical patent/CN112381105B/en
Publication of CN112381105A publication Critical patent/CN112381105A/en
Application granted granted Critical
Publication of CN112381105B publication Critical patent/CN112381105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automated processing method for slider verification. Template matching is performed between the slider image and the background image to obtain a horizontal area image, which eliminates most of the interference regions in the background image, reduces the amount of data passed to subsequent algorithms, and improves algorithm performance. Feature points are then extracted after binarizing the slider image and the background image, which reduces interference from the background colors, makes feature-boundary details more distinct, and lowers the feature-matching error. Finally, feature points on the feature edges of the slider image and the background image are matched to position the slider image, and the slider image is dragged according to the computed sliding distance and the restored target sliding position to complete the verification operation. This effectively solves the prior-art problem that automated slider verification cannot accurately locate the target sliding position, and improves the accuracy of automated slider-verification processing.

Description

Automated processing method, device, equipment and medium for slider verification
Technical Field
The present invention relates to the field of information technologies, and in particular, to an automated processing method, apparatus, device, and medium for slider verification.
Background
Automated login verification is a common front-end automation task. In the prior art, when slider verification is automated, the current position of the slider image and the target sliding position are identified mainly by comparing differences between image pixels. This works well when the color difference between the slider and the background image is large, but because it only considers the displayed composite picture, it cannot accurately locate the target sliding position when the difference between the slider and the background image is small, and the automated login result is unsatisfactory.
Disclosure of Invention
The embodiments of the present invention provide an automated processing method, apparatus, device and medium for slider verification, to solve the prior-art problem that the target sliding position cannot be found accurately when slider verification is automated.
An automated processing method for slider verification, comprising:
acquiring a slider image on a verification page and a background image corresponding to the slider image;
performing image cropping according to the position information of the slider image in the background image to obtain a horizontal area image corresponding to the position information, wherein the horizontal area image comprises the slider image and a preset target sliding position in the background image;
performing feature extraction on the slider image to obtain a feature edge of the slider image, wherein the feature edge of the slider image is the edge with the largest number of feature points in the slider image, and acquiring position information of each feature point;
intercepting the area other than the slider image from the horizontal area image as a track image, performing feature extraction on the track image to obtain a feature edge of the track image, wherein the feature edge of the track image is the edge with the largest number of feature points in the track image, and acquiring position information of each feature point;
performing position-information matching and feature-point-count matching between the feature points on the feature edge of the slider image and the feature points on the feature edge of the track image;
when the matching meets a preset condition, restoring a target sliding position according to the feature edge of the track image and the shape and size information of the slider image;
calculating the sliding distance of the slider image according to the position-information matching result of the feature points on the feature edge of the track image; and
moving the slider image according to the sliding distance and the restored target sliding position to complete the slider verification operation.
Optionally, the performing image cropping according to the position information of the slider image in the background image, and obtaining the horizontal area image corresponding to the position information includes:
Performing similarity matching on the slider image and the background image to obtain shape and size information of the slider image and position information in the background image;
And performing image clipping on the background image according to the position information to obtain a horizontal area image corresponding to the position information.
Optionally, the performing feature extraction on the slider image to obtain a feature edge of the slider image, the feature edge of the slider image being the edge with the largest number of feature points in the slider image, and acquiring position information of each feature point comprises:
performing binarization processing on the slider image to obtain a grayscale image corresponding to the slider image;
extracting feature points from the grayscale image, and classifying the feature points according to their coordinate information;
selecting the classification with the most feature points, and taking that classification as the feature edge of the slider image; and
acquiring coordinate information of each feature point on the feature edge of the slider image relative to a preset origin.
Optionally, the performing feature extraction on the track image to obtain a feature edge of the track image, the feature edge of the track image being the edge with the largest number of feature points in the track image, and acquiring position information of each feature point comprises:
performing binarization processing on the track image to obtain a grayscale image corresponding to the track image;
extracting feature points from the grayscale image, and classifying the feature points according to their coordinate information;
selecting the classification with the most feature points, and taking that classification as the feature edge of the track image; and
acquiring coordinate information of each feature point on the feature edge of the track image relative to a preset origin.
Optionally, the matching of position information between the feature points on the feature edge of the slider image and the feature points on the feature edge of the track image comprises:
pairing the feature points on the feature edge of the slider image with the feature points on the feature edge of the track image in a preset manner to obtain a plurality of feature point pairs;
calculating a horizontal coordinate difference between the slider-image feature point and the track-image feature point in each feature point pair to obtain a set of horizontal coordinate differences;
calculating a vertical coordinate difference between the slider-image feature point and the track-image feature point in each feature point pair to obtain a set of vertical coordinate differences; and
judging whether each vertical coordinate difference in the set of vertical coordinate differences is smaller than or equal to a first pixel threshold, and whether the difference between any two horizontal coordinate differences in the set of horizontal coordinate differences is smaller than or equal to a second pixel threshold.
Optionally, when the matching meets the preset condition, the restoring of the target sliding position according to the feature edge of the track image and the shape and size information of the slider image comprises:
when the numbers of feature points on the two feature edges are the same or fall within a preset range of each other, every vertical coordinate difference in the set of vertical coordinate differences is smaller than or equal to the first pixel threshold, and the difference between any two horizontal coordinate differences in the set of horizontal coordinate differences is smaller than or equal to the second pixel threshold, restoring the target sliding position according to the feature edge of the track image and the shape and size information of the slider image.
Optionally, the pairing of the feature points on the feature edge of the slider image with the feature points on the feature edge of the track image in a preset manner to obtain a plurality of feature point pairs comprises:
pairing feature points on the feature edge of the slider image with feature points on the feature edge of the track image that lie on the same horizontal line, to obtain a plurality of feature point pairs; and/or
pairing feature points on the feature edge of the slider image with feature points on the feature edge of the track image according to a preset horizontal distance, to obtain a plurality of feature point pairs;
wherein each feature point pair comprises one slider-image feature point and one track-image feature point.
An automated processing apparatus for slider verification, comprising:
an acquisition module, configured to acquire the slider image on the verification page and its corresponding background image;
a cropping module, configured to perform image cropping according to the position information of the slider image in the background image to obtain a horizontal area image corresponding to the position information, wherein the horizontal area image comprises the slider image and a preset target sliding position in the background image;
a first feature extraction module, configured to perform feature extraction on the slider image to obtain a feature edge of the slider image, wherein the feature edge of the slider image is the edge with the largest number of feature points in the slider image, and to acquire position information of each feature point;
a second feature extraction module, configured to intercept the area other than the slider image from the horizontal area image as a track image, perform feature extraction on the track image to obtain a feature edge of the track image, wherein the feature edge of the track image is the edge with the largest number of feature points in the track image, and acquire position information of each feature point;
a matching module, configured to perform position-information matching and feature-point-count matching between the feature points on the feature edge of the slider image and the feature points on the feature edge of the track image;
a restoring module, configured to restore the target sliding position according to the feature edge of the track image and the shape and size information of the slider image when the matching meets the preset condition;
a distance calculation module, configured to calculate the sliding distance of the slider image according to the position-information matching result of the feature points on the feature edges; and
a sliding module, configured to move the slider image according to the sliding distance and the restored target sliding position to complete the slider verification operation.
A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the automated processing method of slider verification described above when executing the computer program.
A computer readable storage medium storing a computer program which when executed by a processor implements the automated processing method of slider verification described above.
According to the invention, template matching is performed between the slider image and the background image to obtain a horizontal area image, which eliminates most of the interference regions in the background image, reduces the amount of data passed to subsequent algorithms, and improves algorithm performance. Feature points are extracted after binarizing the slider image and the background image, which reduces interference from the background colors, makes feature-boundary details more distinct, and lowers the feature-matching error. Finally, feature points on the feature edges of the slider image and the background image are matched to position the slider image, and the slider image is dragged according to the computed sliding distance and the restored target sliding position to complete the verification operation. This effectively solves the prior-art problem that automated slider verification cannot accurately locate the target sliding position, and improves the accuracy of automated slider-verification processing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an automated processing method of slider verification in an embodiment of the present invention;
FIG. 2 is a flowchart of step S102 in an automated slider verification processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a slider image and a corresponding background image according to an embodiment of the present invention;
FIG. 4 is a flowchart of step S103 in an automated slider verification processing method according to an embodiment of the present invention;
FIG. 5 is a flowchart of step S104 in an automated slider verification processing method according to an embodiment of the present invention;
FIG. 6 is a flowchart of matching position information in an automated processing method for slider verification in accordance with an embodiment of the present invention;
FIG. 7 is a schematic block diagram of an automated processing unit for slider verification in accordance with an embodiment of the present invention;
FIG. 8 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The automated processing method of slider verification provided in this embodiment is described in detail below. As shown in FIG. 1, the method includes:
In step S101, a slider image on a verification page and its corresponding background image are acquired.
In existing slider verification, a verification page generally includes a slider image to be dragged by the user and a background image corresponding to that slider image. In this embodiment, after automated slider verification is started, the slider image on the verification page and its corresponding background image are acquired. As a preferred example of the present invention, the slider verification is jigsaw-style sliding verification, the slider image is a puzzle-piece verification code, and the background image contains a target sliding position matching the puzzle piece. In other embodiments, other styles of sliding verification are also possible.
In step S102, image cropping is performed according to the position information of the slider image in the background image to obtain a horizontal area image corresponding to the position information.
The horizontal area image comprises the slider image and a preset target sliding position in the background image. Because the background image provided on the verification page is typically large, it contains much pixel information that is irrelevant to the sliding track. In view of this, this embodiment first locates the position of the slider image in the background image, and then crops the image based on that position information, removing the useless information outside the sliding track, reducing the image-processing workload, and improving image-recognition efficiency. Optionally, as shown in FIG. 2, step S102 includes:
In step S201, similarity matching is performed on the slider image and the background image, so as to obtain shape and size information of the slider image and position information in the background image.
Here, similarity matching means comparing the similarity between the slider image and the background image. As one embodiment of the invention, the similarity (template-matching) functions in OpenCV may be used to match the slider image against the background image.
Because the background image contains the slider image, similarity matching can locate the slider image's outline within the background image, yielding the shape and size information of the slider image and its position information in the background image.
In step S202, image cropping is performed on the background image according to the position information, so as to obtain a horizontal area image corresponding to the position information.
Slider verification typically involves dragging the slider image from left to right. Therefore, in this embodiment, the region is extended horizontally to the right from the position information of the slider image, and the background image is cropped accordingly, producing a horizontal strip that serves as the horizontal area image corresponding to the position information.
For ease of understanding, FIG. 3 shows a slider image and its corresponding background image as provided in an embodiment of the present invention, where region A is the slider image, region B the background image, and region C the horizontal area image.
Because the horizontal area image retains only the slider image and the preset target sliding position in the background image, most of the interference regions in the background image are eliminated, the subsequent automated slider-verification steps are confined to a narrow horizontal strip, the data handled by subsequent algorithms is greatly reduced, and algorithm performance improves.
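As an illustration of steps S201–S202, the sketch below locates the slider in the background by brute-force template matching (a naive sum-of-squared-differences scan; OpenCV's `cv2.matchTemplate` does the same job efficiently) and then crops the horizontal band. The function name and the grayscale-array representation are assumptions for illustration, not code from the patent.

```python
import numpy as np

def locate_and_crop(slider, background):
    """Scan the slider template over the background and score each
    placement by sum of squared differences; the best-scoring position
    is taken as the slider's location, and the rows it spans are kept
    as the horizontal area image (region C in FIG. 3)."""
    sh, sw = slider.shape
    bh, bw = background.shape
    best_score, best_pos = None, None
    for y in range(bh - sh + 1):
        for x in range(bw - sw + 1):
            score = np.sum((background[y:y + sh, x:x + sw] - slider) ** 2)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (y, x)
    y, x = best_pos
    return best_pos, background[y:y + sh, :]   # position, horizontal band
```

In practice the double loop would be replaced by `cv2.matchTemplate` with `cv2.minMaxLoc`, which also reports the match position directly.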
In step S103, feature extraction is performed on the slider image to obtain a feature edge of the slider image, where the feature edge of the slider image is an edge with the largest number of feature points in the slider image, and position information of each feature point is obtained.
Unlike the prior art, the embodiment of the invention searches for the target sliding position from the background image based on the shape of the slider image, so as to avoid the influence of pixel differences in the slider image and the background image. Optionally, as shown in fig. 4, the step S103 includes:
In step S401, binarization processing is performed on the slider image, so as to obtain a gray scale image corresponding to the slider image.
The feature information in the slider image is typically gathered on the boundaries of the slider image. In view of this, the present embodiment converts the slider image into the grayscale image by binarization processing, so that the boundary details of the slider image are more obvious, and meanwhile, the interference of the background color on the slider image is greatly reduced, which is beneficial to extracting the feature points from the slider image.
In step S402, feature point extraction is performed on the grayscale image, and the feature points are classified according to their coordinate information.
After the grayscale image is obtained, feature points are extracted from it, yielding the feature points of the slider image and the coordinate information of each one. As noted above, the feature information in the slider image is usually concentrated on its boundary, so the extracted feature points mainly describe the boundary of the slider image. This embodiment further obtains the coordinate information of the feature points, i.e., the abscissa and ordinate of each feature point relative to a preset origin. Optionally, the preset origin may be any corner of the slider image, or any corner of the background image, which is not limited here.
The feature points are then classified according to their coordinate information: feature points whose abscissas and/or ordinates are the same or similar are placed in the same class. "Similar" means that the abscissa or ordinate falls within a preset first numerical range, which is determined according to the distance between the position information of the slider image and the preset origin. Since the slider image is usually a regular pattern, such as a square, a jigsaw piece, or a rectangle, this classification groups the feature points onto the corresponding boundaries.
In step S403, the classification with the most feature points is selected and taken as the feature edge of the slider image.
After the feature points are classified, the number of feature points in each classification is counted and compared, and the classification with the most feature points is selected. The more feature points a classification has, the more detailed information it can provide; this embodiment therefore takes it as the feature edge of the slider image.
In step S404, coordinate information of each feature point in the feature edge of the slider image with respect to a preset origin is acquired.
After the characteristic edge of the slider image is obtained, coordinate information of each characteristic point on the characteristic edge is further obtained and recorded.
Alternatively, as another preferred example of the present invention, if one feature edge is not sufficient for positioning, multiple feature edges may be introduced: the several classifications with the most feature points are selected as the first feature edge, second feature edge, and so on, of the slider image; of course, all feature edges of the slider image may also be selected.
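Steps S401–S404 can be sketched as follows. The fixed threshold, the 4-neighbour boundary test, and the exact-column grouping are simplifying assumptions (the patent does not fix a particular feature detector or grouping tolerance):

```python
import numpy as np

def feature_edge(gray, thresh=128):
    """Binarize a grayscale image, collect boundary feature points
    (foreground pixels with at least one background 4-neighbour),
    classify them by abscissa, and return the classification with the
    most points as the feature edge (steps S401-S404, simplified)."""
    binary = (np.asarray(gray) >= thresh).astype(np.uint8)
    h, w = binary.shape
    points = []
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            on_border = x in (0, w - 1) or y in (0, h - 1)
            if on_border or not (binary[y, x - 1] and binary[y, x + 1]
                                 and binary[y - 1, x] and binary[y + 1, x]):
                points.append((x, y))          # (abscissa, ordinate)
    classes = {}
    for x, y in points:
        classes.setdefault(x, []).append((x, y))  # same-abscissa class
    return max(classes.values(), key=len)
```

For a roughly rectangular puzzle piece this returns the points of one full vertical side; grouping abscissas within a tolerance range (the patent's "preset first numerical range") instead of by exact equality would make the classification robust to ragged edges.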
In step S104, the area other than the slider image is intercepted from the horizontal area image as a track image, feature extraction is performed on the track image to obtain a feature edge of the track image, wherein the feature edge of the track image is the edge with the largest number of feature points in the track image, and position information of each feature point is acquired.
Since the horizontal area image still contains the slider image, in order to further reduce the image-processing workload, this embodiment crops the horizontal area image again and removes the slider image from it to obtain a track image, such as region D illustrated in FIG. 3. Features are then extracted from the track image in the same way as from the slider image in step S103. Optionally, as shown in FIG. 5, the feature extraction on the track image in step S104 includes:
in step S501, binarization processing is performed on the track image, so as to obtain a gray scale image corresponding to the track image.
The feature information in the track image is generally gathered on the boundary of the target sliding position preset in the track image. In view of this, the present embodiment converts the track image into the gray image by binarization processing, so that the boundary details of the gray image at the sliding position of the target are more obvious, and meanwhile, the interference of the background color on the track image is greatly reduced, which is beneficial to extracting the feature points from the track image.
In step S502, feature points are extracted from the gray-scale image, and the feature points are classified according to the coordinate information of the feature points.
After the grayscale image is obtained, feature points are extracted from it, yielding the feature points of the track image and the coordinate information of each one. As noted above, the feature information in the track image is usually concentrated on the boundary of the preset target sliding position, so the extracted feature points mainly describe that boundary. This embodiment further obtains the coordinate information of the feature points, i.e., the abscissa and ordinate of each feature point relative to a preset origin. It should be understood that, to keep the coordinate information consistent, this preset origin is the same as the one used in step S103 above; it may be any corner of the track image or any corner of the background image, which is not limited here.
The feature points are then classified according to their coordinate information: feature points whose abscissas and/or ordinates are the same or similar are placed in the same class. "Similar" means that the abscissa or ordinate falls within a preset second numerical range, which is determined according to the distance between the preset target sliding position in the background image and the preset origin. Since the slider image is usually a regular pattern, such as a square, a jigsaw piece, or a rectangle, the preset target sliding position in the track image is also a regular pattern, and this classification groups the feature points onto the corresponding boundaries.
In step S503, the classification with the largest number of feature points is selected and used as a feature edge of the track image.
After the classification of the feature points is completed, the number of the feature points of each classification is counted, the number of the feature points of different classifications is compared, and the classification with the largest number of the feature points is selected. The more feature points, the more detailed information the classification can provide. The present embodiment therefore takes the classification with the most feature points as the feature edge of the track image.
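Steps S502 and S503 — classifying feature points by coordinate and taking the largest class as the feature edge — can be sketched as follows. This is an illustrative assumption of the grouping rule: points are grouped when their abscissas lie within a small tolerance of the group's first point (the patent only states "same or similar"):

```python
def feature_edge(points, tol=2):
    """Group feature points (x, y) by similar abscissa (within `tol`
    pixels of each group's first point), then return the group with
    the most points as the feature edge. The tolerance value is an
    illustrative assumption."""
    groups = []
    for p in sorted(points):
        for g in groups:
            if abs(g[0][0] - p[0]) <= tol:
                g.append(p)
                break
        else:
            groups.append([p])  # no existing group is close enough
    return max(groups, key=len)

pts = [(10, 1), (10, 2), (11, 3), (40, 1), (40, 3)]
print(feature_edge(pts))  # → [(10, 1), (10, 2), (11, 3)]
```

The class at x ≈ 10 has three points versus two at x ≈ 40, so it is kept as the feature edge.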
In step S504, coordinate information of each feature point in the feature side of the track image with respect to a preset origin is acquired.
After the characteristic edge of the track image is obtained, coordinate information of each characteristic point on the characteristic edge is further obtained and recorded.
Alternatively, as another preferred example of the present invention, if one feature edge is insufficient for positioning, multiple feature edges may be introduced: the several classifications with the largest numbers of feature points are selected as the first feature edge, the second feature edge, and so on, of the track image; of course, all characteristic edges of the preset target sliding position in the track image may also be selected.
In step S105, position information matching and feature point number matching are performed on the feature points on the feature sides of the slider image and the feature points on the feature sides of the track image.
Here, the present embodiment determines whether the characteristic edge of the track image is the target movement position of the characteristic edge of the slider image by matching the characteristic edge of the slider image and the characteristic edge of the track image. The matching includes, but is not limited to, matching the number of feature points and matching the position information of the feature points.
Matching the number of feature points comprises comparing the number of feature points corresponding to the feature edge of the slider image with the number of feature points corresponding to the feature edge of the track image, to judge whether the two numbers are the same or similar. In this embodiment, a similar number range is preset according to the allowable error; as long as both numbers fall within the similar number range, the number of feature points of the feature edge of the slider image and that of the feature edge of the track image are considered consistent.
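The count comparison reduces to a single tolerance check. The sketch below expresses the "similar number range" as a maximum allowed difference between the two counts; the tolerance value is an assumption, since the patent only says it is preset according to the allowable error:

```python
def counts_match(n_slider, n_track, tolerance=3):
    """Return True when the feature-point counts of the slider edge and
    the track edge are the same or similar. The `tolerance` that defines
    the 'similar number range' is an illustrative assumption."""
    return abs(n_slider - n_track) <= tolerance

print(counts_match(12, 14))  # True: within the similar number range
print(counts_match(12, 20))  # False: counts too far apart
```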
Optionally, as shown in fig. 6, performing the matching of the position information on the feature points on the feature edge of the slider image and the feature points on the feature edge of the track image in the step S105 includes:
in step S601, the feature points on the feature edges of the slider image and the feature points on the feature edges of the track image are paired according to a preset manner, so as to obtain a plurality of feature point pairs.
Wherein each of the characteristic point pairs includes a characteristic point of one slider image and a characteristic point of one track image. Optionally, the preset mode includes a first pairing mode, and feature points on the feature edges of the slider image and feature points on the feature edges of the track image, which fall on the same horizontal line, are paired; and the second pairing mode is also included, and the characteristic points on the characteristic edge of the slider image and the characteristic points on the characteristic edge of the track image are paired according to a preset horizontal distance to obtain characteristic point pairs.
In this embodiment, the pairing is performed according to the coordinate information of the feature points on the feature edge of the slider image and of the feature points on the feature edge of the track image, so as to obtain a plurality of feature point pairs. The first pairing mode pairs the feature points on the feature edge of the slider image with the feature points on the feature edge of the track image that fall on the same horizontal line. For example: if the ordinate of a feature point X on the feature edge of the slider image is the same as or similar to the ordinate of a feature point X' on the feature edge of the track image, and the difference of their abscissas falls within a preset difference range, the feature point X and the feature point X' are taken as a feature point pair; in this case, the feature edge of the slider image and the feature edge of the track image present a vertical structure, such as the edge A1A2 shown in fig. 3.
The second pairing mode pairs the feature points on the feature edge of the slider image with the feature points on the feature edge of the track image according to a preset horizontal distance. For example: if the ordinates of a feature point on the feature edge of the slider image and of a feature point on the feature edge of the track image are the same or similar, the two are paired according to the preset horizontal distance; that is, if the abscissa difference between a feature point X on the feature edge of the slider image and a feature point X' on the feature edge of the track image is equal or approximately equal to that horizontal distance, the feature point X and the feature point X' are taken as a feature point pair. In this case, the feature edge of the slider image and the feature edge of the track image present a horizontal structure, such as the edge A2A3 shown in fig. 3.
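Both pairing modes can be sketched in one function. The mode names, the exact-ordinate requirement, and the abscissa tolerance below are illustrative assumptions (the patent allows "same or similar" ordinates and an "approximately equal" horizontal distance):

```python
def pair_points(slider_pts, track_pts, mode="same_line",
                horizontal_distance=0, x_tol=2):
    """Pair feature points (x, y) of the slider's feature edge with
    feature points of the track's feature edge.
    mode 'same_line': the first pairing mode, points on the same
    horizontal line. Any other mode value: the second pairing mode,
    which additionally requires the abscissa gap to approximate a
    preset horizontal distance (within `x_tol` pixels)."""
    pairs = []
    for sp in slider_pts:
        for tp in track_pts:
            if sp[1] != tp[1]:          # must share the horizontal line
                continue
            if mode == "same_line":
                pairs.append((sp, tp))
            elif abs((tp[0] - sp[0]) - horizontal_distance) <= x_tol:
                pairs.append((sp, tp))
    return pairs

slider_edge = [(5, 0), (5, 1), (5, 2)]
track_edge = [(50, 0), (50, 1), (51, 2)]
print(pair_points(slider_edge, track_edge))  # three row-by-row pairs
```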
In step S602, a horizontal coordinate difference value between the feature point of the slider image and the feature point of the track image in each feature point pair is calculated, and a horizontal coordinate difference value set is obtained.
Here, in the present embodiment, the difference between the abscissa of the characteristic point of the slider image and the abscissa of the characteristic point of the track image in the characteristic point pair is calculated in units of the characteristic point pair, and the abscissa difference value between the two is obtained. Traversing all the characteristic point pairs to obtain a plurality of horizontal coordinate difference values to form the horizontal coordinate difference value set.
In step S603, a difference value of a vertical coordinate between the feature point of the slider image and the feature point of the track image in each pair of feature points is calculated, to obtain a set of difference values of a vertical coordinate.
Similar to step S602, in this embodiment, the difference between the ordinate of the feature point of the slider image and the ordinate of the feature point of the track image in the feature point pair is calculated in units of feature point pairs, and a difference value between the ordinate and the ordinate is obtained. Traversing all the characteristic point pairs to obtain a plurality of longitudinal coordinate difference values to form the longitudinal coordinate difference value set.
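Steps S602 and S603 together amount to traversing the feature point pairs and collecting the per-pair coordinate differences. A minimal sketch, assuming pairs in the `(slider_point, track_point)` order produced above:

```python
def coordinate_difference_sets(pairs):
    """For each feature point pair, compute the abscissa difference and
    the ordinate difference between the track point and the slider
    point, yielding the two difference value sets of steps S602-S603."""
    dx = [tp[0] - sp[0] for sp, tp in pairs]  # horizontal coordinate differences
    dy = [tp[1] - sp[1] for sp, tp in pairs]  # vertical coordinate differences
    return dx, dy

pairs = [((5, 0), (50, 0)), ((5, 1), (50, 1)), ((5, 2), (51, 2))]
print(coordinate_difference_sets(pairs))  # → ([45, 45, 46], [0, 0, 0])
```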
In step S604, it is determined whether each of the vertical coordinate difference values in the vertical coordinate difference value set is smaller than or equal to a first pixel threshold value and whether a difference value of any two horizontal coordinate difference values in the horizontal coordinate difference value set is smaller than or equal to a second pixel threshold value.
As previously described, the slide check is typically pulling the slider image horizontally to the right to a preset target slide position. If the characteristic edge of the track image is the target movement position of the characteristic edge of the slider image, the difference between the characteristic point on the characteristic edge of the slider image and the ordinate of the corresponding characteristic point on the characteristic edge of the track image should be very small. Therefore, the present embodiment sets a first pixel threshold according to an allowable error, and compares each of the sets of the vertical coordinate differences with the first pixel threshold.
Similarly, if the characteristic edge of the track image is the target movement position of the characteristic edge of the slider image, the difference between the characteristic point on the characteristic edge of the slider image and the abscissa of the corresponding characteristic point on the characteristic edge of the track image should be relatively stable or fluctuate within a small range. Therefore, the embodiment sets a second pixel threshold according to the allowable error, and compares the difference value of the two-by-two horizontal coordinate differences in the horizontal coordinate difference value set with the second pixel threshold.
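The two threshold checks of step S604 can be sketched as follows; the threshold values are illustrative assumptions, since the patent presets them according to the allowable error. Note that checking every pairwise difference of abscissa differences against the second threshold is equivalent to checking the spread (max minus min) of the set:

```python
def thresholds_satisfied(dx_set, dy_set, first_pixel_threshold=2,
                         second_pixel_threshold=3):
    """Step S604 sketch: every ordinate difference must stay within the
    first pixel threshold, and any two abscissa differences may differ
    by at most the second pixel threshold (i.e. the abscissa differences
    are stable)."""
    vertical_ok = all(abs(dy) <= first_pixel_threshold for dy in dy_set)
    horizontal_ok = (max(dx_set) - min(dx_set)) <= second_pixel_threshold
    return vertical_ok and horizontal_ok

print(thresholds_satisfied([45, 45, 46], [0, 0, 0]))  # True: stable match
print(thresholds_satisfied([45, 60, 46], [0, 0, 0]))  # False: abscissas fluctuate
```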
In step S106, when the matching satisfies the preset condition, the target sliding position is restored according to the characteristic edge of the track image, the shape and the size information of the slider image.
In this embodiment, the matching in step S106 satisfying a preset condition means that: the number of feature points corresponding to the feature edge of the slider image and the number of feature points corresponding to the feature edge of the track image are the same or both fall within the similar number range, each of the longitudinal coordinate differences in the longitudinal coordinate difference value set is smaller than or equal to the first pixel threshold, and the difference of any two transverse coordinate differences in the transverse coordinate difference value set is smaller than or equal to the second pixel threshold.
When the number of feature points corresponding to the feature edge of the slider image and the number of feature points corresponding to the feature edge of the track image are the same or fall within the similar number range, each longitudinal coordinate difference in the longitudinal coordinate difference value set is smaller than or equal to the first pixel threshold, and the difference of any two transverse coordinate differences in the transverse coordinate difference value set is smaller than or equal to the second pixel threshold, the numbers of feature points and the position information of the feature points on the two feature edges meet the preset condition. The feature edge of the track image is then matched with one edge of the preset target sliding position in the background image and is used as the matching boundary of the slider image in the background image, thereby completing the initial positioning of the target sliding position in the background image. It will be appreciated that the feature edge of the track image corresponds to the feature edge of the slider image, and that when the slider image is moved, its feature edge should overlap or cover the feature edge of the track image.
According to the shape and size information of the slider image, the embodiment further uses the characteristic edge of the track image as a boundary of the target sliding position to restore the target sliding position as a stop position in the dragging process of the slider image. Such as region E shown in fig. 3.
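Restoring the target sliding position from a matched feature edge and the slider's shape and size can be sketched as below. Treating the matched vertical edge as the left boundary of a rectangular stop region is an assumption for a rightward slide and a rectangular slider; jigsaw-shaped sliders would need the full outline:

```python
def restore_target_position(track_edge, slider_width, slider_height):
    """Sketch of step S106's restoration: take the matched (vertical)
    characteristic edge of the track image as the left boundary of the
    target sliding position and restore a rectangle with the slider's
    width and height. Returns (x1, y1, x2, y2)."""
    x = min(p[0] for p in track_edge)   # edge abscissa = left boundary
    y = min(p[1] for p in track_edge)   # topmost edge point = top boundary
    return (x, y, x + slider_width, y + slider_height)

edge = [(50, 10), (50, 11), (50, 12)]
print(restore_target_position(edge, 40, 40))  # → (50, 10, 90, 50)
```

The returned rectangle plays the role of region E in fig. 3: the stop position for the drag.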
Otherwise, if the number of feature points corresponding to the feature edge of the slider image and the number corresponding to the feature edge of the track image are neither the same nor within the similar number range, or any longitudinal coordinate difference in the longitudinal coordinate difference value set is larger than the first pixel threshold, or the difference of any two transverse coordinate differences in the transverse coordinate difference value set is larger than the second pixel threshold, the matching fails, and the feature edge of the track image cannot be matched with an edge of the preset target sliding position in the background image.
In step S107, the sliding distance of the slider image is calculated according to the result of matching the position information of the feature points on the feature sides of the track image.
As previously described, the slide check typically pulls the slider image horizontally to the right to a preset target sliding position. The transverse coordinate differences of the feature point pairs obtained in step S602 represent the sliding distances of individual feature points. This embodiment therefore averages all the transverse coordinate differences in the set obtained for the feature point pairs, and uses the average value as the sliding distance d of the slider image.
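The distance computation is a plain mean over the abscissa difference set:

```python
def sliding_distance(dx_set):
    """Step S107 sketch: the sliding distance d is the mean of all
    transverse coordinate differences in the set."""
    return sum(dx_set) / len(dx_set)

print(sliding_distance([45, 45, 46]))  # mean of the set, d ≈ 45.33
```

Averaging over all pairs damps out the pixel-level noise of any single feature point.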
In step S108, the slider image is moved according to the sliding distance and the restored target sliding position, so as to complete a slider verification operation.
Through the steps, the target sliding position and the sliding distance of the sliding block image in the background image are obtained, and in the embodiment, the dragging action of the mouse is simulated by using a random algorithm, and the sliding block image is dragged to the restored target sliding position according to the sliding distance, so that the automatic processing of sliding verification is completed.
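The "random algorithm" for simulating a human drag is not specified in the patent; the following is one hedged sketch of the idea, splitting the total sliding distance into randomly jittered increments so that the simulated mouse does not move at a perfectly uniform speed. The step count and jitter model are assumptions:

```python
import random

def simulate_drag_track(distance, steps=10, jitter=2.0, seed=None):
    """Split the total sliding distance into `steps` randomly jittered
    horizontal increments (a stand-in for the patent's unspecified
    random drag algorithm). The last increment is adjusted so the
    slider lands exactly on the target."""
    rng = random.Random(seed)
    base = distance / steps
    offsets = [base + rng.uniform(-jitter, jitter) for _ in range(steps - 1)]
    offsets.append(distance - sum(offsets))  # land exactly on the target
    return offsets

track = simulate_drag_track(45.0, seed=1)
print(round(sum(track), 6))  # → 45.0: increments always sum to the distance
```

Each offset would then be issued as a small mouse-move event before the final release at the restored target sliding position.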
According to the automated processing method for slider verification provided by this embodiment, the slider image and the background image are subjected to similarity matching to obtain the horizontal area image, so that most of the interference area in the background image is eliminated, the amount of data passed to the subsequent algorithm is reduced, and the algorithm performance is improved. Meanwhile, feature points are extracted after binarization of the slider image and the background image, which reduces the interference of the image background color, makes the feature boundary details more obvious, and reduces the error of feature matching. Finally, the feature points on the feature edges of the slider image and of the background image are matched to complete the drag positioning of the slider image, and the slider image is dragged according to the sliding distance and the restored target sliding position to complete the slider verification operation. This effectively solves the problem in the prior art that the target sliding position cannot be found accurately during the automated operation of slider verification, and improves the accuracy of the automated processing of slider verification.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In an embodiment, an automated processing apparatus for slider verification is provided, where the automated processing apparatus for slider verification corresponds to the automated processing method for slider verification in the foregoing embodiment one by one. As shown in fig. 7, the automated processing apparatus for slider verification includes an acquisition module 71, a clipping module 72, a first feature extraction module 73, a second feature extraction module 74, a matching module 75, a restoration module 76, a distance calculation module 77, and a sliding module 78. The functional modules are described in detail as follows:
an obtaining module 71, configured to obtain a slider image on a calibration page and a background image corresponding to the slider image;
A clipping module 72, configured to perform image clipping according to position information of the slider image in the background image, so as to obtain a horizontal area image corresponding to the position information, where the horizontal area image includes the slider image and a preset target sliding position in the background image;
a first feature extraction module 73, configured to perform feature extraction on the slider image to obtain a feature edge of the slider image, where the feature edge of the slider image is an edge with the largest number of feature points in the slider image, and obtain position information of each feature point;
A second feature extraction module 74, configured to intercept an area other than the slider image from the horizontal area image as a track image, and perform feature extraction on the track image to obtain a feature edge of the track image, where the feature edge of the track image is the edge containing the most feature points in the track image, and to obtain position information of each feature point;
A matching module 75, configured to perform position information matching and feature point number matching on feature points on the feature edge of the slider image and feature points on the feature edge of the track image;
a restoring module 76, configured to restore the target sliding position according to the characteristic edge of the track image, the shape and the size information of the slider image when the matching meets the preset condition;
A distance calculating module 77, configured to calculate a sliding distance of the slider image according to a result of matching the position information of the feature points on the feature sides of the track image;
And a sliding module 78, configured to move the slider image according to the sliding distance and the restored target sliding position, so as to complete a slider verification operation.
Optionally, the clipping module 72 includes:
the similarity matching unit is used for performing similarity matching on the slider image and the background image to obtain shape and size information of the slider image and position information in the background image;
And the clipping unit is used for clipping the background image according to the position information to obtain a horizontal area image corresponding to the position information.
Optionally, the first feature extraction module 73 includes:
The first binarization unit is used for binarizing the slider image to obtain a gray level image corresponding to the slider image;
the first feature extraction unit is used for extracting feature points of the gray level image and classifying the feature points according to the coordinate information of the feature points;
The first classification unit is used for selecting the classification with the most characteristic points, and taking the classification with the most characteristic points as the characteristic edge of the slider image;
the first acquisition unit is used for acquiring coordinate information of each characteristic point in the characteristic edge of the slider image relative to a preset origin.
Optionally, the second feature extraction module 74 includes:
The second binarization unit is used for carrying out binarization processing on the track image to obtain a gray level image corresponding to the track image;
The second feature extraction unit is used for extracting feature points of the gray level image and classifying the feature points according to the coordinate information of the feature points;
the second classification unit is used for selecting the classification with the most characteristic points, and taking the classification with the most characteristic points as the characteristic edges of the track image;
And a second acquisition unit, configured to acquire coordinate information of each feature point in the feature edge of the track image relative to a preset origin.
Optionally, the matching module 75 includes:
the pairing unit is used for pairing the characteristic points on the characteristic edges of the slider image and the characteristic points on the characteristic edges of the track image in a preset mode to obtain a plurality of characteristic point pairs;
the first computing unit is used for computing the horizontal coordinate difference value between the characteristic points of the slider image and the characteristic points of the track image in each characteristic point pair to obtain a horizontal coordinate difference value set;
the second calculation unit is used for calculating the longitudinal coordinate difference value between the characteristic points of the slider image and the characteristic points of the track image in each characteristic point pair to obtain a longitudinal coordinate difference value set;
And the judging unit is used for judging whether each of the longitudinal coordinate difference values in the longitudinal coordinate difference value set is smaller than or equal to a first pixel threshold value and whether the difference value of any two horizontal coordinate difference values in the horizontal coordinate difference value set is smaller than or equal to a second pixel threshold value.
Optionally, the restoring module 76 is specifically configured to:
restore a target sliding position according to the feature edge of the track image and the shape and size information of the slider image, when the number of feature points corresponding to the feature edge of the slider image and the number corresponding to the feature edge of the track image are the same or fall within the similar number range, each longitudinal coordinate difference in the longitudinal coordinate difference value set is smaller than or equal to the first pixel threshold, and the difference of any two transverse coordinate differences in the transverse coordinate difference value set is smaller than or equal to the second pixel threshold.
Optionally, the pairing unit includes:
the first pairing subunit is used for pairing the characteristic points on the characteristic edge of the slider image and the characteristic points on the characteristic edge of the track image which are on the same horizontal line to obtain a plurality of characteristic point pairs; and/or
The second pairing subunit is used for pairing the characteristic points on the characteristic edge of the slider image with the characteristic points on the characteristic edge of the track image according to a preset horizontal distance to obtain a plurality of characteristic point pairs;
wherein each of the characteristic point pairs includes a characteristic point of one slider image and a characteristic point of one track image.
For specific limitations of the automated processing apparatus for slider verification, reference may be made to the above limitations of the automated processing method for slider verification, which are not repeated here. The above modules may be implemented in whole or in part by software, hardware, or a combination thereof. They may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements an automated processing method of slider verification.
In one embodiment, a computer device is provided comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of when executing the computer program:
Acquiring a slider image on a check page and a background image corresponding to the slider image;
Performing image clipping according to the position information of the slider image in the background image to obtain a horizontal area image corresponding to the position information, wherein the horizontal area image comprises the slider image and a preset target sliding position in the background image;
extracting features of the slider image to obtain feature edges of the slider image, wherein the feature edges of the slider image are edges with the largest number of feature points in the slider image, and acquiring position information of each feature point;
Intercepting an area except the sliding block image from the horizontal area image as a track image, extracting features of the track image to obtain a feature edge of the track image, wherein the feature edge of the track image is the edge with the largest feature points in the track image, and acquiring position information of each feature point;
performing position information matching and feature point number matching on the feature points on the feature edges of the slider image and the feature points on the feature edges of the track image;
When the matching meets the preset condition, a target sliding position is restored according to the characteristic edge of the track image, the shape and the size information of the slider image;
Calculating the sliding distance of the sliding block image according to the position information matching result of the characteristic points on the characteristic edges of the track image;
and moving the slider image according to the sliding distance and the restored target sliding position to finish slider verification operation.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by instructing relevant hardware through a computer program stored on a non-volatile computer-readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (7)

1. An automated processing method for slider verification, comprising:
Acquiring a slider image on a check page and a background image corresponding to the slider image;
Performing image clipping according to the position information of the slider image in the background image to obtain a horizontal area image corresponding to the position information, wherein the horizontal area image comprises the slider image and a preset target sliding position in the background image;
extracting features of the slider image to obtain feature edges of the slider image, wherein the feature edges of the slider image are edges with the largest number of feature points in the slider image, and acquiring position information of each feature point;
Intercepting an area except the sliding block image from the horizontal area image as a track image, extracting features of the track image to obtain a feature edge of the track image, wherein the feature edge of the track image is the edge with the largest feature points in the track image, and acquiring position information of each feature point;
performing position information matching and feature point number matching on the feature points on the feature edges of the slider image and the feature points on the feature edges of the track image;
When the matching meets the preset condition, a target sliding position is restored according to the characteristic edge of the track image, the shape and the size information of the slider image;
Calculating the sliding distance of the sliding block image according to the position information matching result of the characteristic points on the characteristic edges of the track image;
Moving the slider image according to the sliding distance and the restored target sliding position to complete slider verification operation;
the matching of the position information of the characteristic points on the characteristic edge of the slider image and the characteristic points on the characteristic edge of the track image comprises the following steps:
pairing the characteristic points on the characteristic edges of the slider image and the characteristic points on the characteristic edges of the track image according to a preset mode to obtain a plurality of characteristic point pairs;
Calculating a horizontal coordinate difference value between the characteristic points of the slider image and the characteristic points of the track image in each characteristic point pair to obtain a horizontal coordinate difference value set;
calculating a longitudinal coordinate difference value between the characteristic points of the slider image and the characteristic points of the track image in each characteristic point pair to obtain a longitudinal coordinate difference value set;
Judging whether each of the vertical coordinate difference values in the vertical coordinate difference value set is smaller than or equal to a first pixel threshold value and whether the difference value of any two horizontal coordinate difference values in the horizontal coordinate difference value set is smaller than or equal to a second pixel threshold value;
wherein, when the matching meets the preset condition, the step of restoring the target sliding position according to the feature edge of the track image and the shape and size information of the slider image comprises:
when the number of feature points corresponding to the feature edge of the slider image and the number of feature points corresponding to the feature edge of the track image are the same or differ within a preset number range, each vertical coordinate difference in the vertical coordinate difference set is smaller than or equal to the first pixel threshold, and the difference between any two horizontal coordinate differences in the horizontal coordinate difference set is smaller than or equal to the second pixel threshold, restoring the target sliding position according to the feature edge of the track image and the shape and size information of the slider image;
wherein the step of pairing the feature points on the feature edge of the slider image with the feature points on the feature edge of the track image in a preset manner to obtain a plurality of feature point pairs comprises:
pairing feature points on the feature edge of the slider image with feature points on the feature edge of the track image that lie on the same horizontal line to obtain a plurality of feature point pairs; and/or
pairing feature points on the feature edge of the slider image with feature points on the feature edge of the track image according to a preset horizontal distance to obtain a plurality of feature point pairs;
wherein each feature point pair includes one feature point of the slider image and one feature point of the track image.
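The position-information matching recited in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the same-horizontal-line pairing rule, the concrete pixel thresholds, and the use of the mean horizontal difference as the sliding distance are all assumptions chosen for the sketch.

```python
# Illustrative sketch of claim 1's position matching, assuming feature
# points are (x, y) pixel tuples. Threshold values are arbitrary examples.

def match_feature_points(slider_pts, track_pts,
                         first_px_threshold=2, second_px_threshold=2):
    """Pair points on the same horizontal line, then apply the
    vertical-difference and horizontal-difference conditions."""
    # pair points lying on (approximately) the same horizontal line
    pairs = [(s, t) for s in slider_pts for t in track_pts
             if abs(s[1] - t[1]) <= first_px_threshold]
    if len(pairs) != len(slider_pts):           # feature point number check
        return False, None
    dx = [t[0] - s[0] for s, t in pairs]        # horizontal difference set
    dy = [abs(t[1] - s[1]) for s, t in pairs]   # vertical difference set
    ok = (all(d <= first_px_threshold for d in dy) and
          all(abs(a - b) <= second_px_threshold for a in dx for b in dx))
    # when matching succeeds, the mean horizontal difference serves as
    # the sliding distance (an assumption of this sketch)
    return ok, (sum(dx) / len(dx) if ok else None)
```

If every paired point is shifted by a consistent horizontal offset, the match succeeds and that offset is returned as the sliding distance.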
2. The automated slider verification processing method of claim 1, wherein performing image cropping according to position information of the slider image in the background image to obtain a horizontal area image corresponding to the position information comprises:
performing similarity matching between the slider image and the background image to obtain the shape and size information of the slider image and its position information in the background image;
cropping the background image according to the position information to obtain the horizontal area image corresponding to the position information.
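The similarity matching and cropping of claim 2 can be sketched with a brute-force sum-of-absolute-differences search; a production version would more likely use template matching such as OpenCV's `cv2.matchTemplate`. Images here are plain lists of lists of gray values, purely for illustration.

```python
# Hedged sketch of claim 2: locate the slider inside the background by
# exhaustive similarity search, then crop the full-width horizontal band
# covering the matched rows as the "horizontal area image".

def locate_and_crop(background, slider):
    bh, bw = len(background), len(background[0])
    sh, sw = len(slider), len(slider[0])
    best = (float("inf"), 0, 0)                 # (score, y, x)
    for y in range(bh - sh + 1):
        for x in range(bw - sw + 1):
            sad = sum(abs(background[y + i][x + j] - slider[i][j])
                      for i in range(sh) for j in range(sw))
            best = min(best, (sad, y, x))
    _, y, x = best
    band = [row[:] for row in background[y:y + sh]]  # full-width crop
    return (x, y), (sw, sh), band               # position, size, band
```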
3. The automated slider verification processing method of claim 1, wherein extracting features from the slider image to obtain the feature edge of the slider image, the feature edge of the slider image being the edge of the slider image containing the largest number of feature points, and obtaining the position information of each feature point comprises:
performing binarization processing on the slider image to obtain a gray-scale image corresponding to the slider image;
extracting feature points from the gray-scale image and classifying the feature points according to their coordinate information;
selecting the class with the most feature points and taking it as the feature edge of the slider image;
acquiring the coordinate information of each feature point on the feature edge of the slider image relative to a preset origin.
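The binarize-classify-select steps of claim 3 (and the identical steps of claim 4) can be sketched as below. The fixed threshold, the boundary-pixel definition of a "feature point", and classification by x coordinate are assumptions of this sketch; the claim does not prescribe them.

```python
# Sketch of claim 3, assuming a grayscale image as a list of lists:
# binarize at a fixed threshold, take foreground boundary pixels as
# feature points, classify them by x coordinate, and keep the class
# (vertical edge) containing the most points.

def feature_edge(gray, threshold=128):
    h, w = len(gray), len(gray[0])
    binary = [[1 if px >= threshold else 0 for px in row] for row in gray]
    points = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and (x == 0 or binary[y][x - 1] == 0
                                 or x == w - 1 or binary[y][x + 1] == 0):
                points.append((x, y))           # horizontal boundary pixel
    classes = {}                                # classify by x coordinate
    for x, y in points:
        classes.setdefault(x, []).append((x, y))
    # the class with the most feature points is the feature edge;
    # coordinates are already relative to the (0, 0) origin
    return max(classes.values(), key=len)
```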
4. The automated slider verification processing method of claim 1, wherein extracting features from the track image to obtain the feature edge of the track image, the feature edge of the track image being the edge of the track image containing the largest number of feature points, and obtaining the position information of each feature point comprises:
performing binarization processing on the track image to obtain a gray-scale image corresponding to the track image;
extracting feature points from the gray-scale image and classifying the feature points according to their coordinate information;
selecting the class with the most feature points and taking it as the feature edge of the track image;
acquiring the coordinate information of each feature point on the feature edge of the track image relative to a preset origin.
5. An automated slider verification processing apparatus, the apparatus comprising:
an acquisition module, configured to acquire a slider image on a verification page and its corresponding background image;
a cropping module, configured to perform image cropping according to position information of the slider image in the background image to obtain a horizontal area image corresponding to the position information, wherein the horizontal area image contains the slider image and a preset target sliding position in the background image;
a first feature extraction module, configured to perform feature extraction on the slider image to obtain the feature edge of the slider image, the feature edge of the slider image being the edge containing the largest number of feature points in the slider image, and to obtain position information of each feature point;
a second feature extraction module, configured to intercept the area other than the slider image from the horizontal area image as a track image, perform feature extraction on the track image to obtain the feature edge of the track image, the feature edge of the track image being the edge containing the largest number of feature points in the track image, and obtain position information of each feature point;
a matching module, configured to perform position information matching and feature point number matching on the feature points on the feature edge of the slider image and the feature points on the feature edge of the track image;
a restoring module, configured to restore the target sliding position according to the feature edge of the track image and the shape and size information of the slider image when the matching meets a preset condition;
a distance calculation module, configured to calculate a sliding distance of the slider image according to the position information matching result of the feature points on the feature edge of the track image;
a sliding module, configured to move the slider image according to the sliding distance and the restored target sliding position to complete the slider verification operation;
wherein the matching module comprises:
a pairing unit, configured to pair the feature points on the feature edge of the slider image with the feature points on the feature edge of the track image in a preset manner to obtain a plurality of feature point pairs;
a first calculation unit, configured to calculate a horizontal coordinate difference between the feature point of the slider image and the feature point of the track image in each feature point pair to obtain a horizontal coordinate difference set;
a second calculation unit, configured to calculate a vertical coordinate difference between the feature point of the slider image and the feature point of the track image in each feature point pair to obtain a vertical coordinate difference set;
a judging unit, configured to judge whether each vertical coordinate difference in the vertical coordinate difference set is smaller than or equal to a first pixel threshold, and whether the difference between any two horizontal coordinate differences in the horizontal coordinate difference set is smaller than or equal to a second pixel threshold;
wherein the restoring module is configured to restore the target sliding position according to the feature edge of the track image and the shape and size information of the slider image when the number of feature points corresponding to the feature edge of the slider image and the number of feature points corresponding to the feature edge of the track image are the same or differ within a preset number range, each vertical coordinate difference in the vertical coordinate difference set is smaller than or equal to the first pixel threshold, and the difference between any two horizontal coordinate differences in the horizontal coordinate difference set is smaller than or equal to the second pixel threshold;
wherein the pairing unit comprises:
a first pairing subunit, configured to pair feature points on the feature edge of the slider image with feature points on the feature edge of the track image that lie on the same horizontal line to obtain a plurality of feature point pairs; and/or
a second pairing subunit, configured to pair feature points on the feature edge of the slider image with feature points on the feature edge of the track image according to a preset horizontal distance to obtain a plurality of feature point pairs;
wherein each feature point pair includes one feature point of the slider image and one feature point of the track image.
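The sliding module's final step — moving the slider by the computed distance to the restored target position — can be sketched as below. Breaking the distance into small incremental offsets is an assumption of this sketch (the claims only require the move to be performed); such increments could then drive a browser-automation call like Selenium's `ActionChains.move_by_offset`.

```python
# Hedged sketch of the sliding module: split a pixel distance into small
# increments whose sum equals the distance, so the move can be replayed
# as a sequence of drag offsets rather than a single jump.

def build_move_steps(distance, step=10):
    """Return a list of pixel increments summing exactly to distance."""
    distance = int(distance)
    steps = [step] * (distance // step)
    remainder = distance % step
    if remainder:
        steps.append(remainder)
    return steps
```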
6. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the automated slider verification processing method according to any one of claims 1 to 4.
7. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the automated slider verification processing method according to any one of claims 1 to 4.
CN202011279370.5A 2020-11-16 2020-11-16 Automatic processing method, device, equipment and medium for slide block verification Active CN112381105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011279370.5A CN112381105B (en) 2020-11-16 2020-11-16 Automatic processing method, device, equipment and medium for slide block verification


Publications (2)

Publication Number Publication Date
CN112381105A CN112381105A (en) 2021-02-19
CN112381105B true CN112381105B (en) 2024-05-07

Family

ID=74585294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011279370.5A Active CN112381105B (en) 2020-11-16 2020-11-16 Automatic processing method, device, equipment and medium for slide block verification

Country Status (1)

Country Link
CN (1) CN112381105B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446900A (en) * 2018-09-21 2019-03-08 平安科技(深圳)有限公司 Certificate authenticity verification method, apparatus, computer equipment and storage medium
WO2019237520A1 (en) * 2018-06-11 2019-12-19 平安科技(深圳)有限公司 Image matching method and apparatus, computer device, and storage medium
CN111681280A (en) * 2020-06-03 2020-09-18 中国建设银行股份有限公司 Sliding verification code notch positioning method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6641123B2 (en) * 2015-08-28 2020-02-05 キヤノン株式会社 Position accuracy management slide, position accuracy management device and method


Also Published As

Publication number Publication date
CN112381105A (en) 2021-02-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant