CN111079730A - Method for determining area of sample image in interface image and electronic equipment - Google Patents
- Publication number: CN111079730A
- Application number: CN201911141461.XA
- Authority: CN (China)
- Prior art keywords: sample, image, graph, area, target
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
Abstract
The invention discloses a method and an electronic device for determining the area where a sample image is located in an interface image, which solve the problem of inaccurately determining that area. The area of the zoomed sample image in the target interface image is determined from the sample image, the width and height values of the source image containing the sample image, and the position of the sample image in the source image. Because scaling is computed from the original size of the sample image, the zoomed sample image can be matched efficiently in the target interface image, improving the accuracy of the determined area. A target area image is cropped from the target interface image and the zoomed sample image is searched for only within it, which reduces the amount of calculation and improves the efficiency of determining the area. In addition, the target area image is cropped with reference to the estimated width and height of the zoomed sample image and the position of the sample image, so that the target area image fully contains the zoomed sample image and never contains only an incomplete copy of it.
Description
Technical Field
The present invention relates to image recognition, and in particular, to a method and an electronic device for determining an area where a sample image is located in an interface image.
Background
In mobile application automated testing, some controls lack key control information, or their control information differs across devices, so the controls are difficult to locate through control information during testing and automation scripts have difficulty acting on them.
Because the picture a control displays in the interface is often unchanged, that picture can be used as a sample, and an image recognition method can identify the sample in the interface under test to determine the position of the control in the interface. However, image recognition methods often carry a large amount of calculation and low recognition efficiency.
How to efficiently and accurately determine the area of the sample image in the interface image is the technical problem to be solved by this application.
Disclosure of Invention
The embodiment of the application aims to provide a method and electronic equipment for determining an area where a sample diagram is located in an interface diagram, so as to solve the problem that the area where the sample diagram is located in the interface diagram is inaccurate.
In a first aspect, a method for determining a region where a sample diagram is located in an interface diagram is provided, which includes:
acquiring a first sample diagram, a width and height value of a source diagram containing the first sample diagram, a position of the first sample diagram in the source diagram and a target interface diagram containing a second sample diagram, wherein the second sample diagram is the first sample diagram after zooming;
determining an estimated width-height value of the second sample graph according to the width-value ratio of the target interface graph to the source graph, the height-value ratio of the target interface graph to the source graph and the width-height value of the first sample graph;
intercepting a target area graph in the target interface graph according to the position of the first sample graph in the source graph and the estimated width and height values of the second sample graph;
and identifying the second sample graph with the estimated width and height values in the target area graph so as to determine the area of the second sample graph in the target interface graph.
In a second aspect, an electronic device is provided, comprising:
the acquisition module is used for acquiring a first sample image, a width and height value of a source image containing the first sample image, the position of the first sample image in the source image and a target interface image containing a second sample image, wherein the second sample image is the first sample image after zooming;
the first determining module is used for determining the estimated width and height values of the second sample graph according to the width value ratio of the target interface graph to the source graph, the height value ratio of the target interface graph to the source graph and the width and height values of the first sample graph;
the intercepting module is used for intercepting a target area graph in the target interface graph according to the position of the first sample graph in the source graph and the estimated width and height values of the second sample graph;
and the second determining module is used for identifying the second sample graph with the estimated width and height values in the target area graph so as to determine the area of the second sample graph in the target interface graph.
In a third aspect, an electronic device is provided, the electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the method according to the first aspect.
In the embodiment of the application, the width and height values of a source image containing a sample image, the position of the sample image in the source image, and a target interface image containing the zoomed sample image are first obtained. The estimated width and height of the zoomed sample image are then calculated from the width and height ratios of the target interface image to the source image; a corresponding region is cropped from the target interface image based on the position of the sample image in the source image; and finally the zoomed sample image is identified within the cropped region to determine the area where it is located in the target interface image. When the width and height values of the target interface image differ from those of the source image, the picture displayed in the source image is scaled as a whole to the size of the target interface image, and the sample image within it is scaled by the same proportions; the size of the zoomed sample image can therefore be determined from the width and height values of the source image and the target interface image, which improves recognition accuracy. In addition, because this scheme crops the target area from the target interface image according to the position of the sample image in the source image and searches for the zoomed sample image only in that area, accuracy improves while the amount of calculation falls: the zoomed sample image need not be searched for in the entire target interface image, which improves recognition efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1a is a schematic flowchart of a method for determining a region where a sample diagram is located in an interface diagram according to an embodiment of the present disclosure;
FIG. 1b is a schematic diagram of a sample diagram position of a method for determining a region where the sample diagram is located in an interface diagram according to an embodiment of the present disclosure;
FIG. 1c is a schematic diagram of a target area map location of a method for determining an area where a sample map is located in an interface map according to an embodiment of the present disclosure;
FIG. 2a is a second schematic flowchart of a method for determining a region where a sample is located in an interface diagram according to an embodiment of the present disclosure;
FIG. 2b is a schematic diagram of an abbreviated processing of a method for determining an area where a sample diagram is located in an interface diagram according to an embodiment of the present specification;
FIG. 3 is a third schematic flowchart of a method for determining a region where a sample diagram is located in an interface diagram according to an embodiment of the present disclosure;
FIG. 4a is a fourth schematic flowchart illustrating a method for determining a region where a sample diagram is located in an interface diagram according to an embodiment of the present disclosure;
fig. 4b is a schematic flowchart of generating a similarity binary image according to the method for determining the area where the sample image is located in the interface diagram in the embodiment of the present specification;
FIG. 5 is a fifth flowchart illustrating a method for determining a region where a sample diagram is located in an interface diagram according to an embodiment of the present disclosure;
FIG. 6a is a sixth schematic flowchart illustrating a method for determining a region where a sample diagram is located in an interface diagram according to an embodiment of the present disclosure;
fig. 6b is a schematic view of a binarization processing flow of a method for determining a region where a sample image is located in an interface diagram in the embodiment of the present specification;
FIG. 6c is a schematic flowchart of a truncated target area diagram of a method for determining an area where a sample diagram is located in an interface diagram according to an embodiment of the present specification;
FIG. 7 is a seventh schematic flowchart illustrating a method for determining a region where a sample diagram is located in an interface diagram according to an embodiment of the present disclosure;
fig. 8 is one of schematic structural diagrams of an electronic apparatus according to an embodiment of the present specification;
fig. 9 is a second schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The reference numbers in the present application are only used for distinguishing the steps in the scheme and are not used for limiting the execution sequence of the steps, and the specific execution sequence is described in the specification.
In mobile application automated testing, control information can refer to id, class, text, xpath, and so on. Some controls under test lack key control information, or their control information is inconsistent across devices, so they cannot be accurately located through control information during testing.
For controls under test that lack key control information, the picture they display on the screen is often the same, so an image recognition method can identify that picture in the interface under test; the position of the control is then determined from the position of the identified picture, and the test operation can be executed at that position. When an image recognition method is used, however, the same interface under test is displayed at different sizes and resolutions on different devices. A control under test appears larger on a device with a larger screen and smaller on a device with a smaller screen. For example, if a control in an interface under test is displayed at 60 × 60 pixels on a device with a screen resolution of 720 × 1280, that 60 × 60 pixel image can serve as the sample image; when the same interface is displayed on a device with a resolution of 1080 × 1920, the control may be displayed at 90 × 90 pixels, 85 × 85 pixels, or some other size. That is, the sample image is displayed zoomed in the interface.
When identifying the control under test in the interface under test, template matching can be used, but template matching succeeds only when the width scaling ratio and the height scaling ratio of the sample image are consistent; if they differ, template matching has difficulty accurately identifying the control. Feature point matching can also be used, which identifies the features of the sample image; however, a control with a simple line style displays too few feature points on the screen for accurate identification. In addition, the amount of calculation for feature point matching is large, making efficient recognition difficult.
In order to solve the problems in the prior art, the present embodiment provides a method for determining an area where a sample diagram is located in an interface diagram, as shown in fig. 1a, including the following steps:
s11: acquiring a first sample diagram, a width and height value of a source diagram containing the first sample diagram, a position of the first sample diagram in the source diagram and a target interface diagram containing a second sample diagram, wherein the second sample diagram is the first sample diagram after zooming;
s12: determining an estimated width-height value of the second sample graph according to the width-value ratio of the target interface graph to the source graph, the height-value ratio of the target interface graph to the source graph and the width-height value of the first sample graph;
s13: intercepting a target area graph in the target interface graph according to the position of the first sample graph in the source graph and the estimated width and height values of the second sample graph;
s14: and identifying the second sample graph with the estimated width and height values in the target area graph so as to determine the area of the second sample graph in the target interface graph.
In automated testing, the interface under test on which a test script executes successfully can be recorded to obtain a test video, and that video can be consulted when running related tests on different electronic devices. In this embodiment, the first sample image may be the icon of a successfully tested control captured from the test video, or an image preset by a tester before the test is executed.
In step S11, the acquired first sample image may include information such as its pixel data and size. The source image containing the first sample image may be the interface under test captured from the test video of a successful test run. The width and height values of the source image and the position of the first sample image within it may be obtained from the interface under test, or through manual entry or other means. The target interface image containing the second sample image may be the interface currently under test, whose characteristics such as resolution and size may differ from those in the recorded test video.
Since the same interface under test has different widths and heights when displayed on screens with different resolutions, the second sample image is the zoomed first sample image and differs from it in width and height. In step S12, the estimated width and height of the second sample image are determined from the width ratio of the target interface image to the source image, the height ratio of the target interface image to the source image, and the width and height of the first sample image. Specifically, the width ratio and height ratio can be determined from the acquired target interface image and source image: if the resolution of the source image is 720 × 1280 and that of the target interface image is 1080 × 1920, the width ratio is 1080/720 = 1.5 and the height ratio is 1920/1280 = 1.5. The estimated width and height of the second sample image then follow from these ratios and the width and height of the first sample image. Assume the first sample image is 60 pixels wide and 60 pixels high: its estimated width is 60 × 1.5 = 90 pixels and its estimated height is 60 × 1.5 = 90 pixels.
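The arithmetic of step S12 can be sketched as follows. This is an illustrative sketch only; the function name and the (width, height) tuple convention are not taken from the patent.

```python
# Sketch of step S12: estimate the zoomed sample size from the per-axis
# resolution ratio between the target interface map and the source map.
# Names are illustrative, not from the patent.

def estimate_sample_size(src_wh, target_wh, sample_wh):
    """Scale the sample's width/height by the per-axis resolution ratio."""
    width_ratio = target_wh[0] / src_wh[0]
    height_ratio = target_wh[1] / src_wh[1]
    est_w = round(sample_wh[0] * width_ratio)
    est_h = round(sample_wh[1] * height_ratio)
    return est_w, est_h

# Example from the description: 720x1280 source, 1080x1920 target, 60x60 sample.
print(estimate_sample_size((720, 1280), (1080, 1920), (60, 60)))  # (90, 90)
```

Note that the two ratios are kept separate, so a target screen with a different aspect ratio yields a non-square estimate, matching the description's observation that width and height may scale differently.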
Subsequently, in step S13, referring to fig. 1b, a coordinate system is first established in the source map: the upper-left corner of the source map is taken as the origin O, with horizontal and vertical axes. The position of the first sample map in the source map is then determined in this coordinate system. Specifically, the position of the first sample map may refer to the position of any of its corner points, or to the position of its center point. In this embodiment it refers to the position of its upper-left corner in the source map, with coordinates (X1, Y1). A coordinate system corresponding to the source map is then established in the target interface map, and from the position (X1, Y1) of the first sample map in the source map, the position (X2, Y2) of the upper-left corner of the second sample map in the target interface map is determined. A target area map is then cropped from the target interface map based on this coordinate position and the estimated width and height of the second sample map determined above. For example, with the second sample map estimated at 90 × 90 pixels and its upper-left corner at (X2, Y2), the area of the second sample map in the target interface map can be preliminarily delineated, as shown by the shading in the target interface map in fig. 1b.
In practical applications, however, the size of the second sample map in the target interface map may vary in other ways, for example being scaled more in height than in width, so that the actually displayed size differs from the ideal size determined in the step above. To determine the area of the second sample map in the target interface map more reliably, the size of the target area map can be set by enlarging the preliminarily determined region of the second sample map by a certain margin. For example, if the preliminarily determined size of the second sample map is 90 × 90 pixels, it can be enlarged by a predetermined ratio in this step, so that the cropped target area map is, say, 100 × 100 pixels, as shown by the hatching in fig. 1c. In practice, even if the width and height of the second sample map vary to some extent, the target area map cropped in this step still contains it, so the complete second sample map can be identified in the target area map later.
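The position mapping and margin enlargement of step S13 can be sketched as below. The 10% margin is an assumed value (the patent says only "a predetermined scale"), and the clamping of the crop to the interface bounds is an added safeguard not spelled out in the text.

```python
# Sketch of step S13: map the sample's upper-left corner (X1, Y1) from
# source coordinates to target coordinates (X2, Y2), then crop a target
# area enlarged by a margin so the zoomed sample is fully contained.
# margin_ratio and all names are illustrative assumptions.

def crop_target_area(sample_pos, src_wh, target_wh, est_wh, margin_ratio=0.1):
    """Return (x, y, w, h) of the target area, clamped to the target bounds."""
    wr = target_wh[0] / src_wh[0]
    hr = target_wh[1] / src_wh[1]
    # Scale the upper-left corner (X1, Y1) into target coordinates (X2, Y2).
    x2, y2 = sample_pos[0] * wr, sample_pos[1] * hr
    # Enlarge the estimated box by the margin on every side.
    mw, mh = est_wh[0] * margin_ratio, est_wh[1] * margin_ratio
    x = max(0, int(x2 - mw))
    y = max(0, int(y2 - mh))
    w = min(target_wh[0] - x, int(est_wh[0] + 2 * mw))
    h = min(target_wh[1] - y, int(est_wh[1] + 2 * mh))
    return x, y, w, h

# 60x60 sample at (100, 200) in a 720x1280 source, 1080x1920 target,
# estimated zoomed size 90x90:
print(crop_target_area((100, 200), (720, 1280), (1080, 1920), (90, 90)))
# (141, 291, 108, 108)
```

The crop is larger than the estimated 90 × 90 box on every side, which is the property the description relies on: the target area map still contains the second sample map even when its height scales slightly differently from its width.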
Subsequently, in step S14, the second sample map is identified in the cropped target area map. Specifically, an image recognition method suited to the features of the second sample map can be selected; the area of the second sample map within the target area map is determined, and from it the area of the second sample map in the target interface map. Since the second sample map is the zoomed first sample map, the region determined in this step of the embodiment is the region of the zoomed first sample map in the target interface map. During automated testing, once the area of the second sample map in the target interface map is determined, the test step can be executed at that area to test the interface under test.
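The patent leaves the recognition method of step S14 open. The following is a minimal illustrative search using sum-of-absolute-differences (SAD) over grayscale arrays; in a real implementation, OpenCV's `cv2.matchTemplate` with a normalized score would be the usual choice.

```python
# Illustrative stand-in for the recognition step: exhaustive SAD
# template search over a small grayscale region. Not the patent's
# prescribed algorithm; the patent does not fix one.
import numpy as np

def find_template(region, template):
    """Return (x, y) of the best SAD match of template inside region."""
    rh, rw = region.shape
    th, tw = template.shape
    best, best_xy = None, (0, 0)
    for y in range(rh - th + 1):
        for x in range(rw - tw + 1):
            sad = np.abs(region[y:y+th, x:x+tw] - template).sum()
            if best is None or sad < best:
                best, best_xy = sad, (x, y)
    return best_xy

# Tiny synthetic check: plant a 2x2 patch at (3, 1) inside a 5x6 region.
region = np.zeros((5, 6), dtype=np.int32)
template = np.array([[9, 8], [7, 6]], dtype=np.int32)
region[1:3, 3:5] = template
print(find_template(region, template))  # (3, 1)
```

Because the search runs only over the cropped target area map rather than the whole interface, its cost is bounded by the area of the crop, which is the efficiency argument the embodiment makes.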
In the embodiment of the application, a source image containing a sample image and a target interface image containing a zoomed sample image are firstly obtained, then the width-height value of the zoomed sample image is calculated according to the width-height value ratio of the target interface image relative to the source image, a corresponding area is intercepted in the target interface image based on the position of the sample image in the source image, and finally the zoomed sample image is identified in the intercepted area so as to determine the area where the sample image is located in the target interface image. When the width and height values of the target interface graph and the source graph are different, the picture displayed in the source graph is wholly zoomed according to the width and height values of the target interface graph, wherein the sample graph in the source graph is also zoomed according to the corresponding proportion, so that the size of the zoomed sample graph can be determined according to the width and height values of the source graph and the target interface graph, and the identification accuracy is improved. In addition, according to the scheme, the target area is intercepted in the target interface image according to the position of the sample image in the source image, and the zoomed sample image is searched in the target area, so that the accuracy of the identification can be improved, the calculation amount can be reduced, the zoomed sample image does not need to be searched in the whole target interface image, and the efficiency of the image identification is improved.
Based on the solution provided by the foregoing embodiment, preferably, as shown in fig. 2a, the foregoing step S14 includes:
s21: abbreviating the target area graph and the second sample graph according to a preset abbreviating standard;
s22: identifying the second thumbnail after the target area graph is abbreviated so as to determine the area of the second thumbnail in the target area graph after the target area graph is abbreviated;
s23: and determining the area of the second sample image in the target interface image according to a preset abbreviating standard and the area of the second sample image in the target area image after abbreviating.
In step S21, the preset thumbnail standard may be set in advance by the tester according to actual requirements, or adjusted according to the actual conditions of the target area map and the second sample map. For example, referring to fig. 2b, the preset thumbnail standard may be 50%: the width is reduced to 50% and the height to 50%, so that after shrinking, the areas of the target area map and the second sample map are 25% of the originals. Identifying the shrunken second sample map in the shrunken target area map then further reduces the amount of calculation, shortens the image recognition step, and improves overall test efficiency.
In addition, because shrinking may reduce recognition accuracy in the subsequent steps, a thumbnail minimum may preferably be preset: the width and height of the shrunken target area map and of the shrunken second sample map must not fall below this minimum, so that the images retain enough features for recognition.
For example, with a thumbnail minimum of 30 pixels, assume the second sample map is 45 × 45 pixels before shrinking. Shrinking by the 50% preset thumbnail standard would leave its width and height below 30 pixels. To ensure that the shrunken second sample map retains enough features for recognition and to avoid losing accuracy in the image recognition process, in this embodiment the second sample map is reduced only to 30 × 30 pixels.
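The 50% standard and 30-pixel minimum from the examples above can be combined into one rule. This is a sketch under the assumption that, when the standard would undershoot the minimum, the image is scaled to exactly the minimum instead; the patent describes only the 45 × 45 to 30 × 30 example.

```python
# Sketch of the thumbnail rule of step S21 with the minimum-size guard:
# shrink by the preset standard, but never let a side drop below the
# thumbnail minimum. Constants and names are illustrative.

THUMB_STANDARD = 0.5   # reduce width and height to 50%
THUMB_MIN = 30         # never shrink a side below 30 pixels

def thumbnail_size(w, h, standard=THUMB_STANDARD, minimum=THUMB_MIN):
    """Return the shrunken (w, h), clamped so neither side drops below minimum."""
    tw, th = round(w * standard), round(h * standard)
    if tw < minimum or th < minimum:
        # Fall back to the smallest allowed scale that respects the minimum.
        scale = max(minimum / w, minimum / h)
        tw, th = round(w * scale), round(h * scale)
    return tw, th

print(thumbnail_size(90, 90))  # (45, 45): the 50% standard applies
print(thumbnail_size(45, 45))  # (30, 30): clamped to the minimum
```

The same rule would be applied to the target area map, so that both images are shrunk consistently before matching.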
Subsequently, in step S22, the shrunken second sample map is identified in the shrunken target area map. Because both have been reduced by a fixed proportion, they contain fewer pixels than before, which effectively reduces the amount of calculation in the recognition process. After the shrunken second sample map is identified, in step S23 the area found in the shrunken target area map is scaled back up according to the preset thumbnail standard to obtain the area occupied by the second sample map in the full-size target area map. The area of the second sample map in the target interface map can then be determined from the position of the target area map within the target interface map.
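The scale-back-and-offset of step S23 can be sketched as below. `area_origin` stands for the position at which the target area map was cropped from the target interface map; the rectangle convention is an illustrative assumption.

```python
# Sketch of step S23: map a match rectangle found in the shrunken target
# area map back to target-interface coordinates. Divide out the thumbnail
# standard, then offset by the crop origin. Names are illustrative.

def map_back(match_xywh, standard, area_origin):
    """Scale an (x, y, w, h) match up by 1/standard, then offset into the interface."""
    x, y, w, h = match_xywh
    inv = 1.0 / standard
    ox, oy = area_origin
    return (round(x * inv) + ox, round(y * inv) + oy,
            round(w * inv), round(h * inv))

# A 45x45 match at (5, 7) in a 50% thumbnail of an area cropped at (141, 291):
print(map_back((5, 7, 45, 45), 0.5, (141, 291)))  # (151, 305, 90, 90)
```

The two transforms compose in the order the embodiment describes: thumbnail coordinates to full-size target-area coordinates, then target-area coordinates to target-interface coordinates.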
According to the scheme provided by this embodiment, further shrinking the second sample map and the target area map by the preset thumbnail standard reduces the amount of calculation in the image recognition process while preserving good recognition accuracy, so that the position of the second sample map in the target interface map can be determined quickly and efficiently.
Based on the solution provided by the foregoing embodiment, as shown in fig. 3, the foregoing step S21 preferably includes:
s31: determining a width-value sequence for the scaled-down second sample image, wherein the nth width value in the sequence satisfies the following rule:
when n = 1, the width value is W1, the width of the scaled-down second sample image; when n ≥ 2, the width value Wn = Wn-1 + (-1)^n × a × (n - 1), where n is a positive integer and the preset scaling step a is a positive integer;
s32: determining the height-value sequence corresponding to the width-value sequence from the estimated width-to-height ratio of the second sample image;
s33: scaling the second sample image according to the width-value sequence and the corresponding height-value sequence.
In practice, the width and height of the second sample image may vary with actual display requirements. For example, the second sample image may nominally measure 90 × 90 pixels, but when displayed its height may vary more than its width, so the actual displayed size may be 90 × 92 pixels. If recognition looks only for a 90 × 90 pixel sample in the target area image, it is difficult to accurately recognize a sample actually displayed at 90 × 92 pixels.
To handle such variation in width and height, this embodiment generates a width-value sequence for the scaled-down second sample image from its width, and computes the corresponding height-value sequence. In step S31, the first item of the width-value sequence is the width of the second sample image after scaling by the preset scaling standard. For example, if the scaled-down second sample image is 40 pixels wide, the first item W1 of the sequence determined in this step is 40. The remaining items are then computed from the formula Wn = Wn-1 + (-1)^n × a × (n - 1). The preset scaling step a is a positive integer; in this embodiment it is 2 pixels. The scaling step trades recognition accuracy against computation: a larger step lowers accuracy but reduces computation, while a smaller step raises accuracy but increases computation. Computing the sequence from the formula above gives W2 = 42, W3 = 38, W4 = 44, W5 = 36, and so on.
In practice, the number of items in the width-value sequence may be preset to limit computation. For example, if the sequence is limited to 5 items, the rule above is applied up to W5, giving the sequence 40, 42, 38, 44, 36. A threshold may also be set on the sequence, for example requiring every item to lie between W1 - 5 and W1 + 5; since W6 = 46 exceeds this threshold, the sequence again contains five items: 40, 42, 38, 44, 36. The threshold may also be derived from the width of the scaled-down second sample image or of the scaled-down target area image, for example by requiring every item to be smaller than the width of the scaled-down target area image, and can be set according to actual requirements.
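The width-value sequence and its threshold can be sketched as follows, using the step a = 2 and the band W1 ± 5 from the example above:

```python
def width_sequence(w1, step=2, limit=5):
    """Width-value sequence Wn = W(n-1) + (-1)^n * step * (n-1),
    alternating above and below W1; generation stops when a value
    would leave the band [w1 - limit, w1 + limit]."""
    seq = [w1]
    n = 2
    while True:
        wn = seq[-1] + ((-1) ** n) * step * (n - 1)
        if not (w1 - limit <= wn <= w1 + limit):
            break
        seq.append(wn)
        n += 1
    return seq

print(width_sequence(40))  # -> [40, 42, 38, 44, 36]
```

With W1 = 40 this reproduces the five-item sequence of the embodiment; W6 = 46 falls outside the band and is discarded.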
After the width-value sequence of the scaled-down second sample image is determined, in step S32 the corresponding height-value sequence is computed from the estimated width-to-height ratio of the second sample image. Specifically, if the scaled-down second sample image measures 40 × 30 pixels, its estimated width-to-height ratio is 4:3. Computing the height for each width in the sequence 40, 42, 38, 44, 36 from this ratio gives 30, 42 × 3/4, 38 × 3/4, 44 × 3/4, 36 × 3/4. Since pixel counts are integers, each height is rounded, yielding the height-value sequence 30, 32, 29, 33, 27.
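The height computation above can be sketched as follows. Rounding half up (31.5 to 32, 28.5 to 29) reproduces the sequence in this embodiment; the exact rounding rule is our assumption, since the text only says the values are rounded.

```python
def height_sequence(widths, ratio_w=4, ratio_h=3):
    """Heights matching each width under the estimated ratio,
    rounded half up to whole pixels."""
    return [int(w * ratio_h / ratio_w + 0.5) for w in widths]

print(height_sequence([40, 42, 38, 44, 36]))  # -> [30, 32, 29, 33, 27]
```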
Once the width-value sequence and the corresponding height-value sequence are obtained, the second sample image can be scaled to each width and corresponding height in the sequences. With 5 items in each sequence, 5 versions of the second sample image with different widths and heights are obtained. In the subsequent step, these 5 versions can be matched in the scaled-down target area image in sequence order until the scaled-down second sample image is recognized in the scaled-down target area image.
With the scheme of this embodiment, the width and height of the second sample image are further adjusted to obtain multiple scaled-down versions, which raises the likelihood that the scaled-down second sample image is recognized in the scaled-down target area image. In an automated test, this further improves the test success rate.
Based on the solution provided by the foregoing embodiment, as shown in fig. 4a, the foregoing step S22 preferably includes:
s41: performing template matching between the scaled-down target area image and the scaled-down second sample image to obtain a two-dimensional similarity image, wherein each target pixel in the similarity image represents the similarity between the scaled-down second sample image and the overlapped portion of the scaled-down target area image when the center point of the scaled-down second sample image is located at that pixel;
s42: determining the region of the scaled-down second sample image within the scaled-down target area image from the pixel representing the highest similarity in the two-dimensional similarity image.
In step S41 of this embodiment, any template matching method may be used to recognize the scaled-down second sample image within the scaled-down target area image. Template matching is a pattern recognition technique that can identify in which region of the target area image the scaled-down sample image lies, and thus identify the control corresponding to the second sample image. Specifically, template matching may slide the scaled-down second sample image across the scaled-down target area image, traversing it and, at each position, computing the matching degree between the second sample image and the overlapped portion of the target area image.
In this embodiment, the number of pixels in the two-dimensional similarity image produced by template matching depends on the sizes of the scaled-down target area image and the scaled-down second sample image. For example, referring to fig. 4b, if the scaled-down second sample image is 40 × 30 pixels and the scaled-down target area image is 40 × 40 pixels, then during template matching the second sample image moves across the target area image in steps of 1 pixel, covering 10 different positions, and the resulting similarity image is 1 × 10 pixels. Each pixel represents the similarity between the scaled-down second sample image and the overlapped portion of the scaled-down target area image when the sample's center point is located at that pixel, and the similarity ranges from 0 to 1 inclusive. The similarity image can also be displayed as a grayscale image: a pixel with similarity 0 is shown as white, a pixel with similarity 1 as black, and values in between as shades of gray of differing depth.
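A minimal sketch of the template-matching sweep described above, on toy integer images: the template slides over the image and a similarity score in (0, 1] is recorded per position. The patent leaves the exact similarity measure open; here we use 1 / (1 + mean squared difference) as one illustrative choice.

```python
def match_template(image, template):
    """Slide `template` over `image` (lists of pixel rows) and return
    the two-dimensional similarity image, one score per position."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    sim = []
    for y in range(H - h + 1):
        row = []
        for x in range(W - w + 1):
            sq = sum((image[y + j][x + i] - template[j][i]) ** 2
                     for j in range(h) for i in range(w))
            row.append(1.0 / (1.0 + sq / (h * w)))
        sim.append(row)
    return sim

image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 9],
            [9, 9]]
sim = match_template(image, template)
best = max(max(row) for row in sim)
# The exact match at offset (1, 1) scores 1.0:
print(sim[1][1] == best, best)  # -> True 1.0
```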
After the two-dimensional similarity image is obtained, in step S42 the pixel with the highest similarity can first be selected, and the region of the scaled-down second sample image within the scaled-down target area image determined with that pixel as center. When the similarity image is displayed as a grayscale image, the pixel with the highest similarity can be determined from the displayed shade of each pixel, and the region of the scaled-down second sample image within the scaled-down target area image determined accordingly.
With the scheme of this embodiment, the scaled-down second sample image is matched within the scaled-down target area image by template matching: image recognition is performed based on the similarity between the scaled-down second sample image and the portion of the target area image it covers at each position, and the region where the scaled-down second sample image lies within the scaled-down target area image is determined. The scheme requires little computation, and template matching can accurately recognize the second sample image within the target area image.
Preferably, based on the width-value and height-value sequences determined in the embodiment above, multiple corresponding scaled-down second sample images can be obtained. When the width-value sequence is 40, 42, 38, 44, 36 and the height-value sequence is 30, 32, 29, 33, 27, the scaled-down second sample image sizes are 40 × 30, 42 × 32, 38 × 29, 44 × 33 and 36 × 27 pixels. In this embodiment, template matching may be performed with these sizes in turn: first the 40 × 30 pixel version is matched in the scaled-down target area image; if it is not matched, the 42 × 32 pixel version is tried, and so on until a version of some size is matched in the scaled-down target area image.
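The sequential multi-size matching loop can be sketched as follows. Here `resize` and `find_match` stand in for the scaling and template-matching steps described above and are hypothetical placeholders, not real library calls.

```python
def match_any_size(target, sample, sizes, resize, find_match):
    """Try each candidate (width, height) in sequence order until the
    resized sample is matched in the target; return the size and the
    matched region, or (None, None) if nothing matches."""
    for w, h in sizes:
        region = find_match(target, resize(sample, w, h))
        if region is not None:
            return (w, h), region
    return None, None

sizes = [(40, 30), (42, 32), (38, 29), (44, 33), (36, 27)]
# Simulate a target in which only the 38x29 version matches:
size, region = match_any_size(
    target=None, sample=None, sizes=sizes,
    resize=lambda s, w, h: (w, h),
    find_match=lambda t, tpl: (5, 7) if tpl == (38, 29) else None)
print(size, region)  # -> (38, 29) (5, 7)
```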
The scheme of this embodiment improves recognition accuracy and makes it possible to recognize second sample images whose width and height vary in practice. Trying several sizes during recognition raises the likelihood of a successful match, which in turn improves the success rate of automated testing.
Based on the solution provided by the foregoing embodiment, as shown in fig. 5, the foregoing step S42 preferably includes:
s51: determining a to-be-verified region image within the scaled-down target area image, centered on the pixel representing the highest similarity in the two-dimensional similarity image and sized to match the scaled-down second sample image;
s52: verifying the to-be-verified region image against the scaled-down second sample image with a similarity matching algorithm to obtain a verification similarity;
s53: when the verification similarity meets a preset similarity standard, determining that the area of the to-be-verified region image within the target area image is the area of the scaled-down second sample image within the scaled-down target area image.
To further improve recognition accuracy, this embodiment first crops the target area image around the pixel with the highest similarity, at the size of the scaled-down second sample image, takes the crop as the to-be-verified region image, and verifies it with a similarity matching algorithm. In step S51, the pixel with the highest similarity indicates that when the center of the scaled-down second sample image sits at that pixel, its similarity to the overlapped portion of the scaled-down target area image is highest. Cropping the target area image around that pixel at the size of the scaled-down second sample image therefore yields the region of the scaled-down target area image most similar to the scaled-down second sample image.
Subsequently, in step S52, the to-be-verified region image is verified with a similarity matching algorithm, which may, for example, compute similarity from histograms or use locality-sensitive hashing. In practice, similarity verification on the to-be-verified region yields the verification similarity. The region may also be verified several times with different similarity algorithms, with the verification similarity obtained from the combined results. The verification similarity may be expressed as a percentage or in another form.
After the verification similarity is obtained, it is compared against a preset similarity standard. For example, if the standard is "not less than 90%" and the verification similarity is below 90%, say 87%, it is judged not to meet the standard; if it is 90% or above, say 98%, it is judged to meet the standard. When the verification similarity meets the preset similarity standard, the area of the to-be-verified region image within the target area image is determined to be the area of the scaled-down second sample image within the scaled-down target area image.
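The crop-and-verify step can be sketched as follows, using a simple histogram-overlap similarity (one of the measures the text mentions; the exact measure and the 90% standard are the embodiment's example values):

```python
from collections import Counter

def histogram_similarity(a, b):
    """Fraction of pixels shared between the two images' histograms."""
    ha, hb = Counter(sum(a, [])), Counter(sum(b, []))
    overlap = sum(min(ha[v], hb[v]) for v in ha)
    return overlap / max(len(a) * len(a[0]), 1)

def verify(region, template, standard=0.90):
    """Accept the candidate region only if the verification similarity
    meets the preset similarity standard."""
    return histogram_similarity(region, template) >= standard

region = [[9, 9], [9, 8]]
template = [[9, 9], [9, 9]]
print(verify(region, template))  # 3 of 4 pixels match -> False
print(verify(region, region))    # identical -> True
```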
With the scheme of this embodiment, the to-be-verified region image is cropped from the target area image based on the highest-similarity pixel determined in the steps above and the size of the scaled-down second sample image; its similarity is then verified with a similarity matching algorithm, and when the verification similarity meets the preset similarity standard, the area of the scaled-down second sample image within the target area image is determined. This second check of the recognized region confirms how well it matches the second sample image and refines the recognition result. In addition, when several pixels in the similarity image tie for the highest similarity, multiple to-be-verified regions can be cropped around them at the size of the scaled-down second sample image; the verification similarity of each is then computed with the similarity matching algorithm, and the region of the scaled-down second sample image within the scaled-down target area image is determined from the to-be-verified region with the highest verification similarity.
Based on the solution provided by the foregoing embodiment, as shown in fig. 6a, after the step S52, the method further includes:
s61: when the verification similarity does not meet the preset similarity standard, binarizing the two-dimensional similarity image based on its highest-similarity pixel and a preset binarization standard to obtain at least one pixel that meets a preset similarity screening standard;
s62: cropping the portion of the scaled-down target area image containing the at least one pixel that meets the preset similarity screening standard;
s63: recognizing the scaled-down second sample image within the cropped target area image to determine the area of the second sample image in the target interface image.
If the verification similarity does not meet the preset similarity standard, the region matching the scaled-down second sample image has probably not been recognized in the scaled-down target area image by the scheme above. In this embodiment, the two-dimensional similarity image is then binarized relative to its highest-similarity pixel. The binarization standard may be a preset binarization value: pixels at or above that value are rendered in a first color and pixels below it in a second color, yielding a binary image. The binarization standard may also be the preset similarity standard, or a value derived from the similarity represented by a particular pixel of the similarity image, and can be set according to the actual situation.
For example, as shown in fig. 6b, suppose the similarity image is 1 × 10 pixels, the similarities represented by its pixels are as shown in the figure, the highest represented similarity is 0.9, and the preset binarization standard is the highest represented similarity minus 0.03. Then in step S61 the preset binarization value is computed first: in this embodiment, 0.9 - 0.03 = 0.87. During binarization, the similarity represented by each pixel is compared against 0.87; pixels whose represented similarity is 0.87 or above are rendered in a first color (such as white) and pixels below 0.87 in a second color (such as black), producing the binarized similarity image.
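The binarization around the maximum, with the "highest similarity minus 0.03" standard from the example above, can be sketched as follows. The specific similarity values are illustrative, not from the figure.

```python
def binarize(similarities, margin=0.03):
    """Mark pixels at or above (max similarity - margin) with 1
    (first color), and all others with 0 (second color)."""
    threshold = max(similarities) - margin
    return [1 if s >= threshold else 0 for s in similarities]

sims = [0.10, 0.40, 0.88, 0.90, 0.89, 0.30, 0.20, 0.86, 0.15, 0.05]
mask = binarize(sims)  # threshold = 0.90 - 0.03 = 0.87
print(mask)            # -> [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]
```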
In the binarized similarity image, a pixel of the first color (white) indicates that when the center point of the scaled-down second sample image is located at that pixel, its similarity to the covered portion of the scaled-down target area image is relatively high. The first-color (white) pixels are taken as the at least one pixel meeting the preset similarity screening standard, and a partial region of the scaled-down target area image is then cropped based on them, as shown in fig. 6c. The cropped region contains the portions of the target area image covered by the scaled-down second sample image when its center point sits at any of the first-color (white) pixels.
Recognition of the scaled-down second sample image then continues within the cropped scaled-down target area image. On one hand, this raises the likelihood of recognizing the second sample image, improving the test success rate during automated testing; on the other hand, it further reduces computation and shortens the image recognition time. In addition, when recognizing the scaled-down second sample image within the cropped target area image, its candidate sizes can be taken from the width-value sequence and corresponding height-value sequence determined by the scheme above, optimizing the image recognition process.
Based on the solution provided by the foregoing embodiment, as shown in fig. 7, the foregoing step S23 preferably includes:
s71: scaling the area occupied by the scaled-down second sample image within the scaled-down target area image back up according to the preset scaling standard, to obtain the area of the second sample image in the target interface image.
In this embodiment, the scaling standard may be 50%, and the area occupied by the scaled-down second sample image within the scaled-down target area image is restored based on that standard: the width and height of the determined area are each doubled, yielding an area four times the size of the area occupied by the scaled-down second sample image in the scaled-down target area image. The position of that restored area is fixed from the position of the scaled-down second sample image relative to the scaled-down target area image; specifically, it can be determined from parameters such as the center point, corner points, or border lines of the scaled-down second sample image relative to the scaled-down target area image.
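The restoration step can be sketched as follows: every coordinate and dimension of the matched region is divided by the scale factor. The (x, y, width, height) rectangle convention is our assumption.

```python
def restore_region(region, scale=0.5):
    """Map a region found at `scale` back to full-size coordinates."""
    x, y, w, h = region
    return (int(x / scale), int(y / scale),
            int(w / scale), int(h / scale))

# A 20x15 match at (5, 7) in the scaled-down target area image maps to
# a 40x30 region at (10, 14) in the full-size image:
print(restore_region((5, 7, 20, 15)))  # -> (10, 14, 40, 30)
```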
With the scheme of this embodiment, the area occupied by the scaled-down second sample image within the scaled-down target area image is restored based on the preset scaling standard, determining the area of the second sample image in the target interface image, which is the area occupied in the target interface image by the first sample image after zooming.
In order to solve the problems in the prior art, the present embodiment provides an electronic device 80, including:
an obtaining module 81, configured to obtain a first sample image, the width and height values of a source image containing the first sample image, the position of the first sample image in the source image, and a target interface image containing a second sample image, where the second sample image is the first sample image after zooming;
a first determining module 82, configured to determine estimated width and height values for the second sample image from the width ratio of the target interface image to the source image, the height ratio of the target interface image to the source image, and the width and height values of the first sample image;
a cropping module 83, configured to crop a target area image from the target interface image according to the position of the first sample image in the source image and the estimated width and height values of the second sample image;
and a second determining module 84, configured to recognize the second sample image with the estimated width and height values within the target area image, to determine the area of the second sample image in the target interface image.
With the electronic device of this embodiment, a source image containing a sample image and a target interface image containing the zoomed sample image are obtained; the estimated size of the zoomed sample image is computed from the ratios of the width and height values of the target interface image to those of the source image; a corresponding region is cropped from the target interface image based on the position of the sample image in the source image; and finally the zoomed sample image is recognized within the cropped region to determine the region where the sample image lies in the target interface image. When the width and height values of the target interface image and the source image differ, the picture displayed in the source image is zoomed as a whole to the dimensions of the target interface image, and the sample image within it is zoomed by the same proportion, so the size of the zoomed sample image can be determined from the width and height values of the source image and the target interface image, which improves recognition accuracy. In addition, because the scheme crops the target area from the target interface image according to the position of the sample image in the source image and searches for the zoomed sample image only within that area, it improves recognition accuracy while reducing computation, since the zoomed sample image need not be searched for across the whole target interface image, thereby improving the efficiency of image recognition.
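The size estimation performed by the first determining module can be sketched as follows; the round-to-nearest step is our assumption, since the text only speaks of estimated width and height values.

```python
def estimate_sample_size(sample_w, sample_h, src_w, src_h, dst_w, dst_h):
    """Estimate the zoomed sample's size from the width and height
    ratios between the target interface image and the source image."""
    return (int(sample_w * dst_w / src_w + 0.5),
            int(sample_h * dst_h / src_h + 0.5))

# A 90x90 sample in a 1080x1920 source shown on a 540x960 interface:
print(estimate_sample_size(90, 90, 1080, 1920, 540, 960))  # -> (45, 45)
```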
Based on the electronic device provided in the foregoing embodiment, preferably, the second determining module 84 is configured to:
scale the target area image and the second sample image down according to a preset scaling standard;
recognize the scaled-down second sample image within the scaled-down target area image, to determine the area of the scaled-down second sample image within the scaled-down target area image;
and determine the area of the second sample image in the target interface image from the preset scaling standard and the area of the scaled-down second sample image within the scaled-down target area image.
Based on the electronic device provided in the foregoing embodiment, preferably, the second determining module 84 is configured to:
determine a width-value sequence for the scaled-down second sample image, wherein the nth width value in the sequence satisfies the following rule:
when n = 1, the width value is W1, the width of the scaled-down second sample image; when n ≥ 2, the width value Wn = Wn-1 + (-1)^n × a × (n - 1), where n is a positive integer and the preset scaling step a is a positive integer;
determine the height-value sequence corresponding to the width-value sequence from the estimated width-to-height ratio of the second sample image;
and scale the second sample image according to the width-value sequence and the corresponding height-value sequence.
Based on the electronic device provided in the foregoing embodiment, preferably, the second determining module 84 is configured to:
perform template matching between the scaled-down target area image and the scaled-down second sample image to obtain a two-dimensional similarity image, wherein each target pixel in the similarity image represents the similarity between the scaled-down second sample image and the overlapped portion of the scaled-down target area image when the center point of the scaled-down second sample image is located at that pixel;
and determine the region of the scaled-down second sample image within the scaled-down target area image from the pixel representing the highest similarity in the two-dimensional similarity image.
Based on the electronic device provided in the foregoing embodiment, preferably, the second determining module 84 is configured to:
determine a to-be-verified region image within the scaled-down target area image, centered on the pixel representing the highest similarity in the two-dimensional similarity image and sized to match the scaled-down second sample image;
verify the to-be-verified region image against the scaled-down second sample image with a similarity matching algorithm to obtain a verification similarity;
and when the verification similarity meets a preset similarity standard, determine that the area of the to-be-verified region image within the target area image is the area of the scaled-down second sample image within the scaled-down target area image.
Preferably, as shown in fig. 9, the electronic device provided by the above embodiment further includes a verification module 85, configured to:
when the verification similarity does not meet the preset similarity standard, binarize the two-dimensional similarity image based on its highest-similarity pixel and a preset binarization standard to obtain at least one pixel that meets a preset similarity screening standard;
crop the portion of the scaled-down target area image containing the at least one pixel that meets the preset similarity screening standard;
and recognize the scaled-down second sample image within the cropped target area image to determine the area of the second sample image in the target interface image.
Based on the electronic device provided in the foregoing embodiment, preferably, the second determining module 84 is configured to:
scale the area occupied by the scaled-down second sample image within the scaled-down target area image back up according to the preset scaling standard, to obtain the area of the second sample image in the target interface image.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the above-mentioned method embodiment for determining a region where a sample image is located in an interface image, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned method embodiment for determining the area where the sample diagram is located in the interface diagram, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, these embodiments are illustrative rather than restrictive, and those skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A method for determining the area of a sample graph in an interface graph, characterized by comprising the following steps:
acquiring a first sample graph, the width and height values of a source graph containing the first sample graph, the position of the first sample graph in the source graph, and a target interface graph containing a second sample graph, wherein the second sample graph is the first sample graph after scaling;
determining estimated width and height values of the second sample graph according to the width ratio of the target interface graph to the source graph, the height ratio of the target interface graph to the source graph, and the width and height values of the first sample graph;
cropping a target area graph from the target interface graph according to the position of the first sample graph in the source graph and the estimated width and height values of the second sample graph;
and identifying the second sample graph with the estimated width and height values within the target area graph, so as to determine the area of the second sample graph in the target interface graph.
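The size estimation and cropping steps of claim 1 reduce to simple coordinate arithmetic. The sketch below is illustrative only: the `margin` padding around the expected position is an assumption (the claim says only that a target area graph is cropped), and all function and parameter names are hypothetical.

```python
def estimate_scaled_size(src_wh, iface_wh, sample_wh):
    """Estimate the second sample graph's size: scale the first sample
    graph's width and height by the interface-to-source ratios."""
    sw, sh = src_wh
    iw, ih = iface_wh
    w, h = sample_wh
    return (round(w * iw / sw), round(h * ih / sh))

def crop_box(sample_pos, src_wh, iface_wh, est_wh, margin=0.5):
    """Map the sample's source position into interface coordinates and pad
    it by `margin` of the estimated size, clamped to the interface bounds.
    The margin value is an assumed search allowance, not from the patent."""
    x, y = sample_pos
    sw, sh = src_wh
    iw, ih = iface_wh
    ew, eh = est_wh
    cx = x * iw / sw          # expected top-left x in the interface graph
    cy = y * ih / sh          # expected top-left y in the interface graph
    pad_w, pad_h = ew * margin, eh * margin
    left = max(0, int(cx - pad_w))
    top = max(0, int(cy - pad_h))
    right = min(iw, int(cx + ew + pad_w))
    bottom = min(ih, int(cy + eh + pad_h))
    return (left, top, right, bottom)
```

For a 1000 x 800 source shown in a 500 x 400 interface, a 100 x 80 sample at (200, 160) yields an estimated 50 x 40 sample and a padded crop box around (100, 80).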
2. The method of claim 1, wherein identifying the second sample graph with the estimated width and height values within the target area graph to determine the area of the second sample graph in the target interface graph comprises:
scaling down the target area graph and the second sample graph according to a preset scaling-down standard;
identifying the scaled-down second sample graph within the scaled-down target area graph, so as to determine the area of the scaled-down second sample graph in the scaled-down target area graph;
and determining the area of the second sample graph in the target interface graph according to the preset scaling-down standard and the area of the scaled-down second sample graph in the scaled-down target area graph.
3. The method of claim 2, wherein scaling down the target area graph and the second sample graph according to the preset scaling-down standard comprises:
determining a width value sequence for the scaled-down second sample graph, wherein the nth width value in the sequence satisfies the following rule:
when n = 1, the width value of the scaled-down second sample graph is W1; when n ≥ 2, the width value W(n) = W(n-1) + (-1)^n × a × (n-1), where n is a positive integer and the preset scaling step a is a positive integer;
determining a height value sequence corresponding to the width value sequence according to the estimated aspect ratio of the second sample graph and the width value sequence;
and scaling down the second sample graph according to the width value sequence and the corresponding height value sequence.
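The recurrence in claim 3 generates widths that oscillate around the starting value: W1, W1+a, W1-a, W1+2a, W1-2a, ..., i.e. it probes candidate sizes on both sides of the estimate. A minimal sketch, assuming W1 equals the estimated width (the claim's definition of W1 is elided in the translation) and illustrative function names:

```python
def width_sequence(w1, a, count):
    """Widths per claim 3: W(1) = w1, W(n) = W(n-1) + (-1)**n * a * (n-1).
    The result alternates above and below w1 in growing steps of a."""
    widths = [w1]
    for n in range(2, count + 1):
        widths.append(widths[-1] + (-1) ** n * a * (n - 1))
    return widths

def size_sequence(est_w, est_h, a, count):
    """Pair each width with a height that preserves the estimated aspect
    ratio, giving claim 3's corresponding height value sequence."""
    ratio = est_h / est_w
    return [(w, round(w * ratio)) for w in width_sequence(est_w, a, count)]
```

With W1 = 100 and step a = 2, the first five widths are 100, 102, 98, 104, 96.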
4. The method of claim 3, wherein identifying the scaled-down second sample graph within the scaled-down target area graph to determine the area of the scaled-down second sample graph in the scaled-down target area graph comprises:
performing template matching on the scaled-down target area graph with the scaled-down second sample graph to obtain a similarity two-dimensional image, wherein any target pixel in the similarity two-dimensional image represents the similarity between the scaled-down second sample graph and the overlapped partial area of the scaled-down target area graph when the center point of the scaled-down second sample graph is located at that target pixel;
and determining the area of the scaled-down second sample graph in the scaled-down target area graph according to the pixel point representing the highest similarity in the similarity two-dimensional image.
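Claim 4's template matching can be sketched with a brute-force sliding window over grayscale pixel grids. The similarity measure below (1 minus normalized mean absolute difference) is an illustrative stand-in — the patent does not name a specific score — and the map is indexed by the window's top-left corner rather than its center for simplicity; in practice a library routine such as OpenCV's `matchTemplate` would replace this loop.

```python
def match_template(image, tmpl):
    """Slide the scaled-down sample over the scaled-down target area graph
    and build the similarity two-dimensional image (lists of lists of
    grayscale values in 0..255). Higher values mean a closer match."""
    ih, iw = len(image), len(image[0])
    th, tw = len(tmpl), len(tmpl[0])
    sim = []
    for y in range(ih - th + 1):
        row = []
        for x in range(iw - tw + 1):
            diff = sum(abs(image[y + dy][x + dx] - tmpl[dy][dx])
                       for dy in range(th) for dx in range(tw))
            row.append(1.0 - diff / (255.0 * th * tw))
        sim.append(row)
    return sim

def best_region(sim, tmpl_w, tmpl_h):
    """Second step of claim 4: the pixel with the highest similarity fixes
    the sample's region (left, top, right, bottom) in the target area graph."""
    _, x, y = max((v, x, y)
                  for y, row in enumerate(sim) for x, v in enumerate(row))
    return (x, y, x + tmpl_w, y + tmpl_h)
```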
5. The method of claim 4, wherein determining the area of the scaled-down second sample graph in the scaled-down target area graph according to the pixel point representing the highest similarity in the similarity two-dimensional image comprises:
determining an area graph to be verified in the scaled-down target area graph, taking the pixel point representing the highest similarity in the similarity two-dimensional image as the center and the size of the scaled-down second sample graph as the target size;
verifying the area graph to be verified against the scaled-down second sample graph through a similarity matching algorithm to obtain a verification similarity;
and when the verification similarity meets a preset similarity standard, determining the area of the area graph to be verified in the scaled-down target area graph as the area of the scaled-down second sample graph in the scaled-down target area graph.
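The verification step of claim 5 re-scores a single candidate crop against the sample and applies the preset standard. A minimal sketch, reusing the same illustrative mean-absolute-difference similarity; the 0.9 threshold and all names are assumptions, not values from the patent:

```python
def verify(candidate, tmpl, threshold=0.9):
    """Claim 5 verification: score the candidate area graph (same size as
    the scaled-down sample, both grayscale grids) and accept it only if the
    verification similarity clears the preset similarity standard."""
    th, tw = len(tmpl), len(tmpl[0])
    diff = sum(abs(candidate[y][x] - tmpl[y][x])
               for y in range(th) for x in range(tw))
    similarity = 1.0 - diff / (255.0 * th * tw)
    return similarity, similarity >= threshold
```

An exact match returns a similarity of 1.0 and passes; a completely different crop returns 0.0 and fails, triggering the fallback of claim 6.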
6. The method of claim 5, wherein after verifying the area graph to be verified against the scaled-down second sample graph through the similarity matching algorithm to obtain the verification similarity, the method further comprises:
when the verification similarity does not meet the preset similarity standard, performing binarization processing on the similarity two-dimensional image based on the pixel point representing the highest similarity in the similarity two-dimensional image and a preset binarization standard, to obtain at least one pixel point meeting a preset similarity screening standard;
cropping, from the scaled-down target area graph, a region containing the at least one pixel point meeting the preset similarity screening standard;
and identifying the scaled-down second sample graph within the cropped target area graph, so as to determine the area of the second sample graph in the target interface graph.
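Claim 6's fallback binarizes the similarity map relative to its peak and narrows the search to the surviving pixels. In this sketch the 0.95 factor stands in for the unspecified preset binarization standard, and the bounding-box crop is one plausible reading of "containing at least one pixel point"; names are illustrative.

```python
def candidate_pixels(sim, ratio=0.95):
    """Binarize the similarity two-dimensional image against a threshold
    tied to its maximum and keep the pixels that pass the screening."""
    peak = max(v for row in sim for v in row)
    thresh = peak * ratio
    return [(x, y) for y, row in enumerate(sim)
            for x, v in enumerate(row) if v >= thresh]

def bounding_crop(points, tmpl_w, tmpl_h):
    """Crop box over the scaled-down target area graph covering every
    candidate pixel plus one template extent, for re-identification."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) + tmpl_w, max(ys) + tmpl_h)
```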
7. The method according to any one of claims 2 to 6, wherein determining the area of the second sample graph in the target interface graph according to the preset scaling-down standard and the area of the scaled-down second sample graph in the scaled-down target area graph comprises:
restoring the area of the scaled-down second sample graph in the scaled-down target area graph according to the preset scaling-down standard, so as to obtain the area of the second sample graph in the target interface graph.
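The restoration of claim 7 is the inverse of the two coordinate transforms applied earlier: undo the preset scale-down, then add back the crop's offset inside the target interface graph. A minimal sketch under those assumptions; `scale` and `crop_origin` are illustrative parameter names.

```python
def restore_region(region, scale, crop_origin):
    """Map a (left, top, right, bottom) region found in the scaled-down
    target area graph back to target interface graph coordinates.
    `scale` is the scale-down factor of the preset standard (e.g. 0.5);
    `crop_origin` is the crop's top-left in interface coordinates."""
    left, top, right, bottom = region
    ox, oy = crop_origin
    return (round(left / scale) + ox, round(top / scale) + oy,
            round(right / scale) + ox, round(bottom / scale) + oy)
```

For example, a region (10, 20, 30, 40) found at half scale inside a crop whose top-left sits at (100, 50) restores to (120, 90, 160, 130).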
8. An electronic device, comprising:
an acquisition module, configured to acquire a first sample graph, the width and height values of a source graph containing the first sample graph, the position of the first sample graph in the source graph, and a target interface graph containing a second sample graph, wherein the second sample graph is the first sample graph after scaling;
a first determining module, configured to determine estimated width and height values of the second sample graph according to the width ratio of the target interface graph to the source graph, the height ratio of the target interface graph to the source graph, and the width and height values of the first sample graph;
a cropping module, configured to crop a target area graph from the target interface graph according to the position of the first sample graph in the source graph and the estimated width and height values of the second sample graph;
and a second determining module, configured to identify the second sample graph with the estimated width and height values within the target area graph, so as to determine the area of the second sample graph in the target interface graph.
9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911141461.XA CN111079730B (en) | 2019-11-20 | 2019-11-20 | Method for determining area of sample graph in interface graph and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111079730A true CN111079730A (en) | 2020-04-28 |
CN111079730B CN111079730B (en) | 2023-12-22 |
Family
ID=70311334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911141461.XA Active CN111079730B (en) | 2019-11-20 | 2019-11-20 | Method for determining area of sample graph in interface graph and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111079730B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103577809A (en) * | 2013-11-12 | 2014-02-12 | 北京联合大学 | Ground traffic sign real-time detection method based on intelligent driving |
CN105513038A (en) * | 2014-10-20 | 2016-04-20 | 网易(杭州)网络有限公司 | Image matching method and mobile phone application test platform |
CN106228194A (en) * | 2016-08-05 | 2016-12-14 | 腾讯科技(深圳)有限公司 | Image lookup method and device |
CN106920252A (en) * | 2016-06-24 | 2017-07-04 | 阿里巴巴集团控股有限公司 | A kind of image processing method, device and electronic equipment |
CN107145889A (en) * | 2017-04-14 | 2017-09-08 | 中国人民解放军国防科学技术大学 | Target identification method based on double CNN networks with RoI ponds |
CN107274442A (en) * | 2017-07-04 | 2017-10-20 | 北京云测信息技术有限公司 | A kind of image-recognizing method and device |
CN107403443A (en) * | 2017-07-28 | 2017-11-28 | 中南大学 | A kind of more rope multi-lay windings row's rope form state online test method and device based on machine vision |
CN107562877A (en) * | 2017-09-01 | 2018-01-09 | 北京搜狗科技发展有限公司 | Display methods, device and the device shown for view data of view data |
CN108182457A (en) * | 2018-01-30 | 2018-06-19 | 百度在线网络技术(北京)有限公司 | For generating the method and apparatus of information |
CN108985233A (en) * | 2018-07-19 | 2018-12-11 | 常州智行科技有限公司 | One kind being based on the relevant high-precision wireless vehicle tracking of digital picture |
CN109543701A (en) * | 2018-11-30 | 2019-03-29 | 长沙理工大学 | Vision significance method for detecting area and device |
CN109684225A (en) * | 2018-12-29 | 2019-04-26 | 广州云测信息技术有限公司 | A kind of method for testing software and device |
CN109753435A (en) * | 2018-12-29 | 2019-05-14 | 广州云测信息技术有限公司 | A kind of method for testing software and device |
CN109858504A (en) * | 2017-11-30 | 2019-06-07 | 阿里巴巴集团控股有限公司 | A kind of image-recognizing method, device, system and calculate equipment |
CN109919164A (en) * | 2019-02-22 | 2019-06-21 | 腾讯科技(深圳)有限公司 | The recognition methods of user interface object and device |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111870950A (en) * | 2020-08-10 | 2020-11-03 | 网易(杭州)网络有限公司 | Display control method and device of game control and electronic equipment |
CN111870950B (en) * | 2020-08-10 | 2023-10-31 | 网易(杭州)网络有限公司 | Game control display control method and device and electronic equipment |
CN112162930A (en) * | 2020-10-21 | 2021-01-01 | 腾讯科技(深圳)有限公司 | Control identification method, related device, equipment and storage medium |
CN112434641A (en) * | 2020-12-10 | 2021-03-02 | 成都市精卫鸟科技有限责任公司 | Test question image processing method, device, equipment and medium |
CN112733862A (en) * | 2021-01-05 | 2021-04-30 | 卓望数码技术(深圳)有限公司 | Terminal image automatic matching method, system and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111079730B (en) | 2023-12-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111079730B (en) | Method for determining area of sample graph in interface graph and electronic equipment | |
CN110717489B (en) | Method, device and storage medium for identifying text region of OSD (on Screen display) | |
JP3964327B2 (en) | Method and apparatus for determining a region of interest in an image and image transmission method and apparatus | |
US7653238B2 (en) | Image filtering based on comparison of pixel groups | |
CN111309618B (en) | Page element positioning method, page testing method and related devices | |
CN107480666B (en) | Image capturing device, method and device for extracting scanning target of image capturing device, and storage medium | |
WO2014160433A2 (en) | Systems and methods for classifying objects in digital images captured using mobile devices | |
CN111899270A (en) | Card frame detection method, device and equipment and readable storage medium | |
CN110414649B (en) | DM code positioning method, device, terminal and storage medium | |
CN111539238B (en) | Two-dimensional code image restoration method and device, computer equipment and storage medium | |
CN113469921A (en) | Image defect repairing method, system, computer device and storage medium | |
CN113222921A (en) | Image processing method and system | |
CN114170227A (en) | Product surface defect detection method, device, equipment and storage medium | |
CN108960247A (en) | Image significance detection method, device and electronic equipment | |
CN108647570B (en) | Zebra crossing detection method and device and computer readable storage medium | |
CN112232390A (en) | Method and system for identifying high-pixel large image | |
CN114255493A (en) | Image detection method, face detection device, face detection equipment and storage medium | |
CN114170229B (en) | Method, device and equipment for registering defect images of printed circuit board and storage medium | |
CN116631003A (en) | Equipment identification method and device based on P & ID drawing, storage medium and electronic equipment | |
CN110874814A (en) | Image processing method, image processing device and terminal equipment | |
CN111753573B (en) | Two-dimensional code image recognition method and device, electronic equipment and readable storage medium | |
CN108345893B (en) | Straight line detection method and device, computer storage medium and terminal | |
CN112529923A (en) | Control identification method and device | |
WO2014178241A1 (en) | Image processing device, image processing method, and image processing program | |
CN114677443B (en) | Optical positioning method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240417
Address after: 100016 building 4, Dongfang Science Park, 52 Jiuxianqiao Road, Chaoyang District, Beijing
Patentee after: BEIJING TESTIN INFORMATION TECHNOLOGY Co.,Ltd.
Country or region after: China
Address before: 102425 building 31, 69 Yanfu Road, Fangshan District, Beijing
Patentee before: Beijing Yunju Intelligent Technology Co.,Ltd.
Country or region before: China
TR01 | Transfer of patent right |