WO2016062159A1 - Image matching method and platform for testing of mobile phone applications - Google Patents


Info

Publication number
WO2016062159A1
WO2016062159A1 (PCT/CN2015/087745)
Authority
WIPO (PCT)
Prior art keywords
image
matching
template
feature
template image
Application number
PCT/CN2015/087745
Other languages
French (fr)
Chinese (zh)
Inventor
孙圣翔 (Sun Shengxiang)
刘欣 (Liu Xin)
熊博 (Xiong Bo)
Original Assignee
网易(杭州)网络有限公司 (NetEase (Hangzhou) Network Co., Ltd.)
Application filed by 网易(杭州)网络有限公司 (NetEase (Hangzhou) Network Co., Ltd.)
Publication of WO2016062159A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/36: Preventing errors by testing or debugging software
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • The invention relates to the technical field of mobile phone testing, and in particular to an image matching method and a mobile phone application testing platform.
  • Image matching refers to identifying corresponding (same-named) points between two or more images by means of a matching algorithm. For example, in two-dimensional image matching, the correlation coefficients between same-sized windows in the target area and the search area are compared, and the center of the window in the search area with the maximum correlation coefficient is taken as the corresponding point.
  • The first problem to be solved is to accurately identify the same or a similar image in a test image (such as an image captured from a mobile game) given a template image, that is, the image matching problem.
  • After the image position is identified, it is transmitted to the various terminals for automatic testing. For example, according to the identified image position, a simulated click is sent to the mobile phone to simulate an operation in the mobile game.
  • The prior art mainly provides two relatively common image matching methods: the gray-based image matching method and the feature-based image matching method.
  • The gray-based image matching method regards the image as a two-dimensional signal and uses statistical correlation methods (such as the correlation function, covariance function, or sum of squared differences) to find the correlation match between signals.
  • The most classic gray matching method is normalized gray matching.
  • Its basic principle is to compare, pixel by pixel, the gray matrix of a real-time image window of a given size with every possible window grayscale array of the reference image.
  • The gray-based image matching method has the advantage of simple and direct calculation, but it also has obvious defects: it has neither rotation invariance nor scale invariance, and it requires the template image and the target image to have the same size and orientation.
  • The feature-based image matching method extracts features (points, lines, regions, etc.) of two or more images, parameterizes the features, and then performs matching using the described parameters. First, the image is preprocessed to extract its high-level features, and then the matching correspondence between the two images is established.
  • The commonly used feature primitives are point features, edge features, and region features.
  • The feature-based image matching method can overcome the shortcomings of matching on image gray information alone. Moreover, the feature point extraction process can reduce the influence of noise and adapts well to gray level changes, image deformation, and occlusion. However, it also has shortcomings: 1) real-time performance is poor, as the calculation of feature points is time-consuming; 2) for some images, few feature points may be extracted; 3) feature points on smooth edges cannot be extracted accurately.
  • The efficiency of image matching has a direct impact on the test effect of the mobile phone application (APP).
  • In many mobile phone applications, the displayed images of certain buttons do not change. Therefore, by correctly and quickly recognizing these button images with image matching technology, simulated clicks on these buttons can be completed and the corresponding game operations finished automatically.
  • The technical problem to be solved by the present invention is to provide an image matching method and an image-matching-based mobile phone application testing method that improve the accuracy and flexibility of image matching and reduce the complexity of the matching algorithm, thereby improving the efficiency of image matching and of image-matching-based mobile application testing.
  • An embodiment of the present invention provides a method for image matching, including:
  • The positioning coordinates of the best matching image are calculated.
  • The template image is globally matched in the source image, and the template image is controlled to slide in the source image to find a best matching area, specifically:
  • If the height of the template image is greater than the height of the source image, or the width of the template image is greater than the width of the source image, it is determined that there is no matching area in the source image;
  • Otherwise, the coordinate position corresponding to the maximum coefficient value is (m, n), the height of the template image is h1, and the width is w1;
  • The position of the best matching area is: a rectangular area on the source image whose upper left corner is at the coordinate position (m, n), with height h1 and width w1.
  • Calculating the feature points and feature vectors of the template image and the source image specifically includes:
  • When the image to be detected is the template image, the feature points are SIFT feature points of the template image and the feature vectors are SIFT feature vectors of the template image; when the image to be detected is the source image, the feature points are SIFT feature points of the source image and the feature vectors are SIFT feature vectors of the source image.
  • Calculating the visual similarity between the best matching area and the template image according to the feature points and feature vectors is specifically:
  • If the number of SIFT feature points of the template image is zero, or the number of SIFT feature points of the best matching area is zero, it is determined that the visual similarity between the best matching area and the template image is zero;
  • Otherwise, the number of feature matching point pairs between the template image and the best matching area is calculated, and the quotient of the number of feature matching point pairs divided by the number of SIFT feature points of the template image is taken as the visual similarity.
  • Obtaining the feature matching point pairs between the template image and the source image is specifically:
  • Matched feature points of the template image and the source image are taken as feature matching point pairs, and the number of feature matching point pairs is accumulated.
  • Calculating the positioning coordinates of the best matching image according to the feature matching point pairs includes:
  • The center point coordinates of the best matching area are calculated, and the center point coordinates are used as the positioning coordinates of the best matching image.
  • The coordinates of N matching point pairs are randomly selected, and a mapping is performed between the template image and the source image to obtain a first equation;
  • The mapping coefficients are formed into a coefficient matrix H, obtaining a second equation;
  • The coefficient matrix H is updated through the first equation and the second equation until the coefficient matrix H no longer changes, and the unchanged coefficient matrix H is used as the homography matrix;
  • The coordinates (x', y') of the N matching points of the template image in the best matching region are calculated one by one through a third equation;
  • The center point of the coordinates of the N matching points is used as the positioning coordinates of the best matching image.
  • The positioning coordinates of the best matching image are calculated, specifically:
  • Performing SIFT strong matching on the template image includes: acquiring, according to the feature matching point pairs, the coordinates of the SIFT feature points on the template image and the coordinates of the one-to-one matched SIFT feature points on the source image;
  • The coordinates of the SIFT feature points on the source image are averaged, and the obtained mean coordinate values are used as the positioning coordinates of the best matching image.
  • If the local visual similarity is higher than the third threshold, it is determined that the matching is successful, and the positioning coordinates of the best matching image are calculated according to the coordinates obtained by the partial template matching;
  • The scale list includes a plurality of scale factors;
  • The area corresponding to the largest matching value in the best matching set is taken as the best matching image, and the central coordinate value of the best matching image is calculated as the positioning coordinates of the best matching image.
  • An embodiment of the present invention further provides a mobile phone application testing platform, where the mobile phone application testing platform includes a test script of the mobile phone application to be tested and the image resources required for testing, and further includes:
  • a test resource downloading unit configured to download the test script of the mobile phone application to be tested and the image resources to the tested mobile phone;
  • a screenshot unit configured to take a screenshot of the mobile phone application to be tested displayed on the screen of the tested mobile phone and upload the test image;
  • an image matching unit configured to perform image matching, using the corresponding image resource as the template image, by the image matching method according to any one of the above items, and find the positioning coordinates of the best matching image in the test image; and,
  • a test unit configured to start testing the test code associated with the test image according to the positioning coordinates of the best matching image found by the image matching unit, and feed back the positioning coordinates and the test result data to the tested mobile phone.
  • The mobile phone application test platform is provided with a plurality of universal interfaces, and a corresponding driver layer is disposed on the mobile phone application test platform for each universal interface.
  • The mobile phone application to be tested is a mobile game application, and the mobile application test platform is a mobile game test platform.
  • The mobile phone application testing platform further includes a testing center; the testing unit is further configured to transmit the test result data to the testing center.
  • The test result data includes the model information of the tested mobile phone, screenshots generated during the testing process, CPU information, memory information, power consumption information, and network card traffic information.
  • The image matching method provided by the embodiments of the invention first performs template matching of the template image in the source image using a template matching method, then preferably uses the SIFT (Scale-Invariant Feature Transform) feature matching algorithm to determine the similarity between the template image and the best matching area, and finally calculates the positioning coordinates of the best matching image according to the feature matching point pairs. Combining the grayscale-based template matching method with the SIFT feature matching method allows the two to complement each other's strengths: the approach retains the simple, direct calculation of the grayscale-based matching method while gaining the rotation invariance and scale invariance of the feature-based image matching method, thereby improving the accuracy and flexibility of image matching.
  • When the image matching method provided by the invention is applied to mobile phone application testing, the target image can be identified quickly and accurately, thereby improving the efficiency of mobile phone application testing.
  • FIG. 1 is a flow chart showing the steps of one embodiment of a method of image matching provided by the present invention.
  • FIG. 2 is a schematic diagram of global template matching in a source image provided by the present invention.
  • FIG. 3 is a flow chart showing the steps of calculating a feature point and a feature vector of a template image and a source image provided by the present invention.
  • FIG. 4 is a schematic structural diagram of an embodiment of a mobile phone application test platform provided by the present invention.
  • FIG. 5 is a schematic structural diagram of a mobile phone application test platform provided by the present invention for mobile phone application testing.
  • Referring to FIG. 1, a flow chart of the steps of an embodiment of a method for image matching provided by the present invention is shown.
  • the method for image matching includes the following steps:
  • Step S101 Perform global template matching on the template image T in the source image S, and control the template image T to slide in the source image S to find a best matching area.
  • Referring to FIG. 2, a schematic diagram of the global template matching in the source image provided by the present invention is shown.
  • the source image S includes images of a plurality of controls or buttons, which are image 1 to image 6, respectively.
  • The template image T is controlled to slide from the upper left corner of the source image S to find the target image 4 in the source image S; each time the matching window (of the size of the template image T) slides by one step, the similarity between the template image T and the image region covered by the window is calculated.
  • Step S101 is specifically:
  • If the height of the template image T is greater than the height of the source image S, or the width of the template image T is greater than the width of the source image S, it is determined that there is no matching area in the source image S;
  • Otherwise, the template image T is slid in the source image S by unit length, and the standard correlation coefficient of the template image T and the source image S is calculated position by position to obtain a standard correlation coefficient matrix A;
  • The coordinate position corresponding to the maximum coefficient value is (m, n), the height of the template image is h1, and the width is w1;
  • The position of the best matching region is: a rectangular area on the source image whose upper left corner is at the coordinate position (m, n), with height h1 and width w1.
  • An internal function template_match() can be designed to implement the above-mentioned step S101; the pseudo-code implementation process is as follows:
  • However, the best matching area obtained by this search step is not necessarily a valid match (that is, the best matching area may not be the target image 4), and further processing and analysis of the source image S are required.
  • Step S102 Calculate feature points and feature vectors of the template image T and the source image S.
  • The feature points and feature vectors of the template image T and the source image S are preferably calculated using the SIFT (Scale-Invariant Feature Transform) feature matching algorithm.
  • The SIFT feature matching algorithm is a computer vision algorithm used to detect and describe local features in images. It obtains features mainly by finding interest points (or corner points) and descriptors of their associated scales and orientations: extreme points are found in scale space, their position, scale, and rotation invariants are extracted, and feature point matching between the two images is then performed.
  • The essence of the SIFT algorithm is to find feature points in different scale spaces and calculate the directions of the feature points.
  • The feature points found are prominent points that do not change due to factors such as illumination, affine transformation, and noise, for example corner points, edge points, bright spots in dark areas, and dark spots in bright areas. SIFT features are therefore invariant to rotation, scale changes, and brightness changes, and maintain a certain degree of stability under viewpoint change, affine transformation, and noise.
  • After the feature points and feature vectors of the template image T and the source image S are obtained by the above steps, their visual similarity can be further compared in step S103.
  • Step S103 Calculate the visual similarity between the best matching area and the template image T according to the feature points and feature vectors, and determine whether the visual similarity is zero. If the visual similarity is zero, step S104 is performed; if the visual similarity is not zero, step S105 is performed.
  • Step S104 It is determined that the best matching area does not match the template image T.
  • Step S105 Obtain a feature matching point pair of the template image T and the source image S, and perform step S106.
  • Step S106 Calculate the positioning coordinates of the best matching image according to the feature matching point pair.
  • Referring to FIG. 3, a flow chart of the steps for calculating the feature points and feature vectors of the template image and the source image provided by the present invention is shown.
  • The above step S102 can be specifically implemented by the following steps:
  • Step S201 Scale-space extreme value detection. Image positions at all scales are searched on the image to be detected, and extreme points that are invariant to scale and rotation (also known as potential interest points for scale and rotation) are detected with a difference-of-Gaussian function.
  • Step S202 Feature point localization. According to the degree of stability of the extreme points, the position and scale of the feature points are determined by fitting a model.
  • Step S203 Feature point orientation assignment. One or more orientations are assigned to each feature point position based on the local gradient directions of the image.
  • Step S204 Feature point description. Within the neighborhood around each feature point, the local image gradients are measured at the selected scale and transformed into a feature vector that is robust to local shape deformation and illumination variation.
  • When the image to be detected is the template image T, the feature points are the SIFT feature points of the template image T and the feature vectors are the SIFT feature vectors of the template image T; when the image to be detected is the source image S, the feature points are the SIFT feature points of the source image S and the feature vectors are the SIFT feature vectors of the source image S.
  • The above step S103 can be implemented by the following steps, specifically:
  • Step S301 Calculate the number len(keypoint1) of SIFT feature points of the template image T and the number len(keypoint2) of SIFT feature points of the best matching area. Whether the visual similarity between the best matching region and the template image T is zero is determined from these two values.
  • If the number of SIFT feature points of the template image T is zero, or the number of SIFT feature points of the best matching area is zero, step S302 is performed; if both are non-zero, step S303 is performed.
  • Step S302 It is determined that the visual similarity between the best matching area and the template image T is zero.
  • The visual similarity obtained in the above step S103 is a "global visual similarity", obtained by globally template-matching the template image T over the entire source image S in step S101. Its purpose is to coarsely filter the source images: source images (test pictures) that cannot contain a matching area are eliminated, improving the efficiency of the image matching process.
  • In step S105, the feature matching point pairs of the template image T and the source image S are obtained, which specifically includes:
  • Preferably, the first threshold TH1 is 0.75;
  • If the minimum Euclidean distance min_E between a SIFT feature vector of the template image T and the SIFT feature vectors of the best matching region is smaller than the product of the second smallest Euclidean distance nextmin_E and the first threshold TH1, the pair is taken as a feature matching point pair;
  • The calculated SIFT feature point descriptor is its corresponding feature vector.
  • The method cv2.SIFT.detectAndCompute() calculates the SIFT feature points of the template image T and the source image S together with their SIFT feature point descriptors (i.e., feature vectors):
  • cv2.FlannBasedMatcher() is then used to perform feature point matching, and SIFT feature matching point pairs are selected by requiring the nearest-neighbor distance divided by the next-nearest-neighbor distance to be below a certain threshold (i.e., the first threshold TH1).
  • Here, distance refers to the Euclidean distance between one SIFT feature vector in the template image T and one SIFT feature vector in the source image S:
  • This embodiment provides a more detailed implementation for finding the best matching area.
  • A minimum match count may be set against which the number of feature matching point pairs Good_Match is measured; different calculation strategies are selected by comparing the number of feature matching point pairs Good_Match with the minimum match count.
  • Calculating the positioning coordinates of the best matching image according to the feature matching point pairs includes using a homography function.
  • The so-called homography function finds the homography matrix corresponding to the feature matching point pairs.
  • A plurality of coordinate points of the best matching region of the template image T on the source image S are then calculated using a perspective transformation function applied to a vector array; the center point coordinates of the best matching region are computed, and the center point coordinates are used as the positioning coordinates of the best matching image.
  • Preferably, the minimum match count MIN_MATCH_COUNT is 5. If the number of feature matching point pairs Good_Match is higher than 5, the matching region is found through the homography: the cv2.findHomography() function uses the matched key points to find the corresponding homography matrix, the cv2.perspectiveTransform() function then maps the point group to obtain the four coordinate points of the area on the source image S matched by the template image T, and the coordinates of the center point of the matching area are calculated from the obtained coordinate points to implement the positioning function. Conversely, if the number of feature matching point pairs Good_Match is lower than 5, further judgment is needed.
  • Calculating the plurality of coordinate points of the best matching region of the template image T on the source image S according to the homography matrix, using a perspective transformation function applied to a vector array, includes the following steps:
  • Step S401 Acquire, according to the feature matching point pairs, the coordinates of the SIFT feature points on the template image T and the coordinates of the one-to-one matched SIFT feature points on the source image S.
  • Step S402 Randomly select the coordinates of N matching point pairs, and perform a mapping between the template image T and the source image S to obtain the first equation (1);
  • The mapping coefficients are formed into a coefficient matrix H, obtaining the second equation (2);
  • where [x i ', y i '] is the coordinate of the SIFT feature point on the source image S,
  • [x i , y i ] is the coordinate of the SIFT feature point on the template image T, and
  • H is the coefficient matrix mapping the SIFT feature points on the template image T to the SIFT feature points on the source image S, where h 11 to h 33 are the elements of the coefficient matrix H.
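Equations (1) to (3) were rendered as images in the original publication and do not survive in this text. Under the symbol definitions above, the standard planar homography they describe would take the following form (a reconstruction, not the patent's own typesetting):

```latex
% Eq. (1): mapping of a template point onto the source image (homogeneous form)
s \begin{bmatrix} x_i' \\ y_i' \\ 1 \end{bmatrix}
  = H \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}

% Eq. (2): the mapping coefficients collected into the coefficient matrix H
H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\
                    h_{21} & h_{22} & h_{23} \\
                    h_{31} & h_{32} & h_{33} \end{bmatrix}

% Eq. (3): the inhomogeneous coordinates of a mapped point
x' = \frac{h_{11} x + h_{12} y + h_{13}}{h_{31} x + h_{32} y + h_{33}},
\qquad
y' = \frac{h_{21} x + h_{22} y + h_{23}}{h_{31} x + h_{32} y + h_{33}}
```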
  • Step S403 Calculate, using the coefficient matrix H, the real-time coordinates of the SIFT feature points of the template image T mapped onto the source image S.
  • Step S404 When the distance between the coordinates of a SIFT feature point on the source image S and its real-time coordinates is less than the second threshold TH2, the coefficient matrix H is updated through the first equation (1) and the second equation (2) until the coefficient matrix H no longer changes, and the unchanged coefficient matrix H is used as the homography matrix.
  • Step S405 Calculate, according to the homography matrix and the first equation (1), the coordinates (x', y') of the N matching points of the template image T in the best matching region, one by one, through the following third equation (3):
  • Step S406 The center point of the coordinates of the N matching points is used as the positioning coordinates of the best matching image.
  • Alternatively, the step S106 is specifically: performing SIFT strong matching on the template image T, including the following steps:
  • Step 61 Acquire, according to the feature matching point pairs, the coordinates of the SIFT feature points on the template image T and the coordinates of the one-to-one matched SIFT feature points on the source image S;
  • Step 62 Average the coordinates of the SIFT feature points on the source image S, and use the obtained mean coordinate values as the positioning coordinates of the best matching image.
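Steps 61 and 62 reduce to a coordinate average; a minimal sketch, where dst_pts is assumed to be an (N, 2) array of the matched coordinates on the source image S:

```python
import numpy as np

def strong_match_position(dst_pts):
    """SIFT strong matching (steps 61-62): the positioning coordinate is
    the mean of the matched SIFT feature point coordinates on the source
    image S."""
    dst_pts = np.asarray(dst_pts, dtype=np.float64)
    return tuple(dst_pts.mean(axis=0))  # mean coordinate value (x, y)
```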
  • Preferably, the specified ratio coefficient ratio_num is smaller than the minimum match count MIN_MATCH_COUNT.
  • The purpose of performing strong matching in step S106 is to avoid missing image pairs that can in fact be matched.
  • Some template images T yield only a few SIFT feature points even though they do match the source image S; the traditional SIFT algorithm cannot match such template images T, with only a few feature points, to the matching area.
  • By performing strong matching, this defect of the traditional SIFT feature extraction method can be overcome and the image matching capability improved.
  • Alternatively, the step S106 includes: selecting a neighboring area of the feature points in the best matching area to perform partial template matching with the template image T, which may be implemented by the following steps:
  • Step S601 Calculate the local visual similarity between the neighboring region of the feature points and the template image T. If the local visual similarity is higher than the third threshold TH3, step S602 is performed; if the local visual similarity is below the third threshold TH3, step S603 is performed.
  • Step S602 It is determined that the matching is successful, and the positioning coordinates of the best matching image are calculated according to the coordinates obtained by the partial template matching;
  • Step S603 Perform global multi-scale template matching on the template image T and the source image S.
  • The local visual similarity in step S601 is obtained by performing partial template matching between the neighboring regions of the feature points in the best matching region and the template image T.
  • The feature_similarity() function described above may be used for this; alternatively, the similarity of color histograms may be used for the calculation.
  • Specifically, the color histogram H1(i) of the template image T and the color histogram H2(i) of the neighboring region in the source image S may be calculated separately, and the local visual similarity is then solved with the fourth equation (4):
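The fourth equation (4) is not reproduced in this text; the correlation form used by OpenCV's cv2.compareHist(..., cv2.HISTCMP_CORREL) is a plausible reading and is implemented directly in NumPy below (an assumption, not the patent's own formula):

```python
import numpy as np

def histogram_similarity(H1, H2):
    """Correlation between two colour histograms H1(i), H2(i), in [-1, 1];
    1.0 means identically shaped histograms."""
    H1 = np.asarray(H1, dtype=float) - np.mean(H1)  # mean-centre each histogram
    H2 = np.asarray(H2, dtype=float) - np.mean(H2)
    denom = np.sqrt(np.sum(H1 ** 2) * np.sum(H2 ** 2))
    return float(np.sum(H1 * H2) / denom) if denom else 0.0
```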
  • The neighboring area of a feature point in step S601 may be selected as a rectangular area, centered on the coordinates of the feature point, whose height and width are respectively twice the height and width of the template image T. Template matching of the template image T is performed on this rectangular area to select the best matching area; if its visual similarity with the template image T is higher than a certain threshold (TH3), the matching is considered successful, otherwise step S603 is performed.
  • Alternatively, the step S603 includes:
  • Step S6031 Establish a scale list; the scale list includes a plurality of scale coefficients;
  • Step S6032 Scale the template image T according to the scale factors in the scale list;
  • Step S6033 Perform global template matching with the scaled template image T, and record the matching value and the matching area obtained by each matching to form a best matching set;
  • Step S6034 After the global template matching at all scales has been calculated, take the area corresponding to the maximum matching value in the best matching set as the best matching image, and calculate the central coordinate value of the best matching image as the positioning coordinates of the best matching image.
  • In these steps, the multi-scale matching similarity between the template image T and the source image S is calculated: multi-scale scaling is applied to the template image T, which to some extent solves the problem that template matching is sensitive to scale changes. If the matching value of the scaled template image and the source image S is lower than a certain threshold, the matching is considered to have failed; otherwise, the visual similarity between the best matching region and the template image T is calculated.
  • Steps S6031 to S6034 adopt a multi-scale template matching method, which acts as a fine filter to eliminate the interference of source images S that are prone to mismatches, and is therefore richer than the template matching process of step S101 described above.
  • The image matching method provided by the embodiments of the invention can be implemented in the Python language, which is efficient and legible and enables rapid application development of the image matching method.
  • The image matching method provided by the embodiments of the invention uses a template matching method to perform global template matching of the template image in the source image, uses the SIFT feature matching algorithm to determine the similarity between the template image and the best matching region, and finally calculates the positioning coordinates of the best matching image according to the feature matching point pairs. Different matching processes are adopted according to the number of matched feature point pairs, which reduces the complexity of the algorithm and improves the accuracy of image matching.
  • The embodiments of the present invention thus combine the grayscale-based template matching method with the SIFT feature matching method, uniting the simple and direct calculation of the grayscale-based image matching method with the rotation invariance and scale invariance of the feature-based image matching method, thereby improving the accuracy and flexibility of image matching.
  • the invention also provides a mobile phone application testing platform for applying the above image matching method to testing a mobile phone application (Application, APP).
  • FIG. 4 is a schematic structural diagram of an embodiment of a mobile phone application testing platform provided by the present invention.
  • the mobile phone application testing platform implements an automatic testing function for mobile phone applications (APPs) based on image matching, for which the first problem to solve is image matching. After the correct image position is identified, it is transmitted to the mobile phone under test to perform a simulated click, realizing simulated operation of the mobile application (such as a mobile game).
  • the mobile application test platform includes:
  • the test resource downloading unit 401 is configured to download a test script of the mobile phone application to be tested and the image resource to the tested mobile phone.
  • the screenshot unit 402 is configured to take a screenshot and upload the test image of the mobile phone application to be tested displayed on the screen of the tested mobile phone;
  • the image matching unit 403 is configured to perform image matching, using the test image as the template image, against the corresponding image resource by the image matching method described above, to find the positioning coordinates of the best matching image of the test image; and,
  • the testing unit 404 is configured to start testing the test code associated with the test image according to the positioning coordinates of the best matching image found by the image matching unit 403, and to feed back the positioning coordinates and test result data to the mobile phone under test.
  • the mobile phone application test platform further includes a storage unit 405 that stores the test scripts of the mobile phone applications to be tested and the image resources required for testing; the test resource downloading unit 401 downloads the test script and image resources corresponding to the application to be tested from the storage unit 405. The test image is used as the template image for image matching on the mobile phone application test platform, the positioning coordinates of the best matching image of the image to be tested are found, and the response state of the mobile phone under test is determined; given a template image, the same or similar images are accurately identified from test pictures (such as pictures captured from a mobile game) using the image matching method described above. Further, the mobile application test platform is provided with a plurality of universal interfaces 406, and a corresponding driver layer is disposed on the mobile application test platform for the universal interfaces 406.
  • the mobile phone application test platform can be installed in the server 502, and the server 502 can communicate with the mobile phone 501 to be tested through various communication interfaces.
  • the mobile phone application testing platform is provided with a plurality of general-purpose interfaces, and a corresponding driving layer is disposed on the mobile phone application testing platform for the universal interface; and the mobile phone 501 and the mobile phone application testing platform pass through The general interface and the driver layer perform data transmission.
  • the mobile phone 501 under test may run any of various operating systems, such as iOS or Android.
  • the operating system of the server 502 implements the corresponding driver layer.
  • for the iOS system, the communication interface can be implemented using the open-source Appium tool; for the Android system, the ADB (Android Debug Bridge) tool provided by Google can be used; for the Windows system, the underlying API (Application Programming Interface) can be used directly.
  • the image resources needed in the test code and the test script are prepared; and the screenshot on the mobile phone 501 to be tested is transmitted to the server 502 through the driver layer.
  • the position of the target image (such as the sun icon in FIG. 5) is recognized on the server 502 by positioning the image with any of the image matching methods described above; the abscissa x value and the ordinate y value constitute the target image position (x, y). The (x, y) coordinates are then transmitted to the mobile phone 501 under test through the communication interface, and the simulated click in the mobile application APP (such as a mobile game) is completed.
  • the mobile phone application testing method further includes returning the test result data to the test center; the test result data includes the model information of the mobile phone under test, the screenshots generated during the test process, CPU information, memory information, power consumption information, and network card traffic information.
  • the mobile phone application APP to be tested is a mobile game application; and the mobile application test platform is a mobile game test platform.
  • Applying the improved image matching method and mobile phone application testing method to the field of mobile game testing can effectively improve the efficiency of existing mobile game testing, lower the threshold of mobile game testing, improve its convenience, and realize remote testing of mobile games.
  • the mobile phone application testing platform takes advantage of the improved image matching method, eliminates the need to repeatedly write test code for mobile phones with different resolutions, realizes automatic testing of smartphone applications, reduces the cost of manually testing mobile phone applications, and improves test efficiency and test accuracy.
  • the test code on the mobile app test platform can support programs running on multiple smartphone operating systems at the same time, improving compatibility.
  • once the mobile application test platform is integrated into a mobile application, it can help test APPs on the phone anytime and anywhere, which is especially useful for ordinary users, and broadens the application range of mobile application testing.
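For the Android case, the simulated click described above—transmitting the matched (x, y) coordinates to the phone under test—can be sketched with the ADB tool mentioned in the description. The helper below only builds the `adb shell input tap` command line; the function name and the `serial` parameter are illustrative assumptions, not the platform's actual interface.

```python
def tap_command(x, y, serial=None):
    # Build an 'adb shell input tap' command for the matched coordinates.
    # 'serial' selects a specific device when several are attached
    # (hypothetical parameter for illustration).
    cmd = ["adb"]
    if serial is not None:
        cmd += ["-s", serial]
    cmd += ["shell", "input", "tap", str(int(x)), str(int(y))]
    return cmd
```

On a machine with adb installed and a device attached, the click could then be issued with `import subprocess; subprocess.run(tap_command(540, 960))`.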


Abstract

Disclosed is an image matching method, comprising: performing global template matching of a template image in a source image, controlling the template image to slide in the source image so as to find the best matching region; calculating feature points and feature vectors of the template image and the source image; calculating, according to the feature points and the feature vectors, a visual similarity between the best matching region and the template image; if the visual similarity is zero, determining that the best matching region does not match the template image; if the visual similarity is not zero, obtaining feature matching point pairs of the template image and the source image, and calculating the positioning coordinates of the best matching image according to the feature matching point pairs. Also disclosed is a platform for testing mobile phone applications on the basis of image matching. The present invention reduces the complexity of the matching algorithm, thereby improving the efficiency of image matching and of image-matching-based mobile phone application testing.

Description

图像匹配方法及手机应用测试平台Image matching method and mobile application test platform 技术领域Technical field
本发明涉及手机测试技术领域，尤其涉及一种图像匹配方法手机应用测试平台。The invention relates to the technical field of mobile phone testing, and in particular to an image matching method and a mobile phone application testing platform.
背景技术Background technique
图像匹配是指通过一定的匹配算法在两幅或多幅图像之间识别同名点，如二维图像匹配中通过比较目标区和搜索区中相同大小的窗口的相关系数，取搜索区中相关系数最大所对应的窗口中心点作为同名点。Image matching refers to identifying corresponding (same-named) points between two or more images by a matching algorithm. For example, in two-dimensional image matching, the correlation coefficients of equally sized windows in the target area and the search area are compared, and the window center point corresponding to the maximum correlation coefficient in the search area is taken as the same-named point.
为了实现基于图像的自动测试功能，首要解决的就是在给定图像的情况下，如何从测试图像（如从手机游戏中截取的图像）中精确得识别出相同或者相似的图像，即图像匹配问题。通过识别出来的图像位置之后，再传送至各种终端上进行自动测试。如根据识别出的图像位置，传送至手机上实现模拟点击，以实现手机游戏的模拟操作。In order to realize an image-based automatic test function, the first problem to solve is, given an image, how to accurately identify the same or a similar image in a test image (such as an image captured from a mobile game), that is, the image matching problem. After the image position is identified, it is transmitted to various terminals for automatic testing. For example, the identified image position is transmitted to a mobile phone to perform a simulated click, so as to realize simulated operation of a mobile game.
现有技术主要提供了两种较为通用的图像匹配方法:基于灰度的图像匹配方法和基于特征的图像匹配方法。The prior art mainly provides two relatively common image matching methods: gray-based image matching method and feature-based image matching method.
其中，基于灰度的图像匹配方法将图像看成是二维信号，采用统计相关的方法（如相关函数、协方差函数、差平方和等）寻找信号间的相关匹配。最经典的灰度匹配法是归一化的灰度匹配，其基本原理是逐像素地将一个大小一定的实时图像窗口的灰度矩阵，与参考图像的所有可能的窗口灰度阵列，按某种相似性度量方法进行搜索比较的匹配方法。基于灰度的图像匹配方法具有计算简单、直接的优点，但是其也具有明显的缺陷，即其不具有旋转不变性和不具有尺度不变性，要求模板图像与目标图像的尺寸大小以及方向均相同。Among them, the gray-based image matching method regards the image as a two-dimensional signal and uses statistical correlation methods (such as the correlation function, covariance function, or sum of squared differences) to find the correlation between signals. The most classic gray matching method is normalized gray matching, whose basic principle is to compare, pixel by pixel, the gray matrix of a real-time image window of a certain size with all possible window gray arrays of the reference image, searching according to a similarity measure. The gray-based image matching method has the advantage of simple, direct computation, but it also has obvious defects: it has neither rotation invariance nor scale invariance, and requires the template image and the target image to have the same size and orientation.
而基于特征的图像匹配方法是指通过分别提取两个或多个图像的特征（点、线、面等特征），对特征进行参数描述，然后运用所描述的参数来进行匹配的一种算法。首先对图像进行预处理来提取其高层次的特征，然后建立两幅图像之间特征的匹配对应关系，通常使用的特征基元为点特征、边缘特征和区域特征。基于特征的图像匹配方法可以克服利用图像灰度信息进行匹配的缺点，而且，特征点的提取过程可以减少噪声的影响，对灰度变化，图像形变以及遮挡等都有较好的适应能力。但是其也存在一些缺点：1）实时性不高，计算特征点比较耗时；2）对于有些图像，可能提取的特征点很少；3）对边缘光滑的目标无法准确提取特征点。The feature-based image matching method is an algorithm that extracts features (points, lines, regions, etc.) of two or more images, describes the features with parameters, and then matches using the described parameters. The image is first preprocessed to extract its high-level features, and then the matching correspondence between the features of the two images is established; the commonly used feature primitives are point features, edge features, and region features. The feature-based image matching method can overcome the shortcomings of matching on image gray information; moreover, the feature point extraction process can reduce the influence of noise and adapts well to gray-level changes, image deformation, and occlusion. However, it also has some shortcomings: 1) real-time performance is limited, as computing feature points is time-consuming; 2) for some images only a few feature points can be extracted; 3) feature points cannot be accurately extracted from targets with smooth edges.
而图像匹配的效率对手机应用（Application，简称APP）的测试效果产生直接的影响。尤其是在测试手机游戏测试过程中，如手机游戏的开始或进攻按钮图像在不同的分辨率的手机上位置会有变化，但是其图像的显示图像是不会变的。因此正确、快速地利用图像匹配技术识别到这些按钮图像，便可以完成游戏中对这些按钮的模拟点击，相应的游戏操作也可以自动完成。The efficiency of image matching has a direct impact on the testing of mobile phone applications (APPs). Especially during mobile game testing, the position of, for example, a start or attack button image varies across mobile phones with different resolutions, but the displayed image itself does not change. Therefore, by correctly and quickly recognizing these button images with image matching technology, simulated clicks on these buttons can be completed and the corresponding game operations performed automatically.
发明内容Summary of the invention
本发明所要解决的技术问题是，提供一种图像匹配的方法以及基于图像匹配的手机应用测试方法，提高图像匹配的准确度与灵活性，降低匹配算法的复杂度，从而提高图像匹配以及基于图像匹配的手机应用测试的效率。The technical problem to be solved by the present invention is to provide an image matching method and an image-matching-based mobile phone application testing method, which improve the accuracy and flexibility of image matching and reduce the complexity of the matching algorithm, thereby improving the efficiency of image matching and of image-matching-based mobile phone application testing.
为解决以上技术问题,一方面,本发明实施例提供一种图像匹配的方法,包括:To solve the above technical problem, in one aspect, an embodiment of the present invention provides a method for image matching, including:
将模板图像在源图像中进行全局模板匹配,控制所述模板图像在所述源图像中滑动查找出最佳匹配区域;Performing global template matching on the template image in the source image, and controlling the template image to slide in the source image to find a best matching area;
计算出所述模板图像与所述源图像的特征点及特征向量;Calculating a feature point and a feature vector of the template image and the source image;
根据所述特征点及特征向量,计算出所述最佳匹配区域与所述模板图像的视觉相似度;Calculating a visual similarity between the best matching area and the template image according to the feature point and the feature vector;
若所述视觉相似度为零,则判定所述最佳匹配区域与所述模板图像不匹配;If the visual similarity is zero, determining that the best matching area does not match the template image;
若所述视觉相似度不为零,则获得所述模板图像与所述源图像的特征匹配点对; If the visual similarity is not zero, obtaining a feature matching point pair between the template image and the source image;
根据所述特征匹配点对,计算出最佳匹配图像的定位坐标。According to the feature matching point pair, the positioning coordinates of the best matching image are calculated.
进一步地,所述将模板图像在源图像中进行全局模板匹配,控制所述模板图像在所述源图像中滑动查找出最佳匹配区域,具体为:Further, the template image is globally matched in the source image, and the template image is controlled to slide in the source image to find a best matching area, specifically:
分别获取所述模板图像与所述源图像的高度和宽度;Obtaining a height and a width of the template image and the source image respectively;
若所述模板图像的高度大于所述源图像的高度,或者,所述模板图像的宽度大于所述源图像的宽度,则判定所述源图像中不存在匹配区域;If the height of the template image is greater than the height of the source image, or the width of the template image is greater than the width of the source image, determining that there is no matching area in the source image;
若所述模板图像的高度小于或等于所述源图像的高度,并且,所述模板图像的宽度小于或等于所述源图像的宽度,则:If the height of the template image is less than or equal to the height of the source image, and the width of the template image is less than or equal to the width of the source image, then:
将所述模板图像在所述源图像中以单位长度进行滑动,逐一计算出所述模板图像与所述源图像的标准相关系数,获得标准相关系数矩阵;And sliding the template image in the source image by a unit length, calculating a standard correlation coefficient between the template image and the source image one by one, and obtaining a standard correlation coefficient matrix;
查找出所述标准相关系数矩阵中的最大系数值,以及所述最大系数值所对应的坐标位置;Finding a maximum coefficient value in the standard correlation coefficient matrix, and a coordinate position corresponding to the maximum coefficient value;
根据所述最大系数值所对应的坐标位置以及所述模板图像的高度与宽度,确定所述最佳匹配区域的位置。And determining a position of the best matching area according to a coordinate position corresponding to the maximum coefficient value and a height and a width of the template image.
优选地,所述最大系数值所对应的坐标位置为(m,n),所述模板图像的高度为h1,宽度为w1;Preferably, the coordinate position corresponding to the maximum coefficient value is (m, n), the height of the template image is h1, and the width is w1;
则所述最佳匹配区域的位置为：在所述源图像上的、以坐标位置(m,n)为左上角，长为h1，高为w1的矩形区域。Then the position of the best matching area is: the rectangular area on the source image whose upper left corner is the coordinate position (m, n), with height h1 and width w1.
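Steps a to c1 above can be sketched in pure Python (the description notes that the method can be implemented in Python). Gray images are represented as plain 2-D lists, and the normalized correlation coefficient below is one common choice of "standard correlation coefficient"; both are illustrative assumptions rather than the claimed implementation.

```python
def ncc(patch, tmpl):
    # Normalized correlation coefficient between two equally sized gray blocks.
    vals_p = [v for row in patch for v in row]
    vals_t = [v for row in tmpl for v in row]
    mp = sum(vals_p) / len(vals_p)
    mt = sum(vals_t) / len(vals_t)
    num = sum((p - mp) * (t - mt) for p, t in zip(vals_p, vals_t))
    den = (sum((p - mp) ** 2 for p in vals_p)
           * sum((t - mt) ** 2 for t in vals_t)) ** 0.5
    return num / den if den else 0.0

def match_template(src, tmpl):
    # Slide tmpl over src by unit length, filling the standard correlation
    # coefficient matrix implicitly; return the maximum coefficient value
    # and its top-left position (m, n).
    h2, w2 = len(src), len(src[0])
    h1, w1 = len(tmpl), len(tmpl[0])
    if h1 > h2 or w1 > w2:
        return None  # no matching region exists in the source image
    best = (-2.0, (0, 0))
    for m in range(h2 - h1 + 1):
        for n in range(w2 - w1 + 1):
            patch = [row[n:n + w1] for row in src[m:m + h1]]
            score = ncc(patch, tmpl)
            if score > best[0]:
                best = (score, (m, n))
    return best

src = [[0, 0, 0, 0],
       [0, 1, 2, 0],
       [0, 3, 4, 0],
       [0, 0, 0, 0]]
tmpl = [[1, 2],
        [3, 4]]
score, (m, n) = match_template(src, tmpl)
```

Here `score` is the maximum value of the standard correlation coefficient matrix and `(m, n)` its position, so the best matching region is the h1 × w1 rectangle whose upper left corner is (m, n).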
进一步地,所述计算出所述模板图像与所述源图像的特征点及特征向量,具体包括:Further, the calculating the feature points and feature vectors of the template image and the source image, specifically including:
在待检测图像上搜索所有尺度的图像位置,通过高斯微分函数检测出对于尺度和旋转不变的极值点;Searching for image positions of all scales on the image to be detected, and detecting extreme points that are invariant to scale and rotation by a Gaussian differential function;
依据所述极值点的稳定程度,通过建立一个拟合模型来确定特征点的位置和尺度;Determining the position and scale of the feature points by establishing a fitting model according to the degree of stability of the extreme points;
基于图像局部的梯度方向,为每个特征点的位置分配一个或多个方向;Assigning one or more directions to the position of each feature point based on the gradient direction of the image local;
在每个特征点周围的邻域内,在选定的尺度上测量图像局部的梯度,将所述梯度变换为表示局部形状变形和光照变化的特征向量; Measuring a local gradient of the image on a selected scale within a neighborhood around each feature point, transforming the gradient into a feature vector representing local shape deformation and illumination variation;
当所述待检测图像为所述模板图像时,所述特征点为所述模板图像的SIFT特征点;所述特征向量为所述模板图像的SIFT特征向量;当所述待检测图像为所述源图像时,所述特征点为所述源图像的SIFT特征点;所述特征向量为所述源图像的SIFT特征向量。When the image to be detected is the template image, the feature point is a SIFT feature point of the template image; the feature vector is a SIFT feature vector of the template image; and when the image to be detected is the In the case of the source image, the feature point is a SIFT feature point of the source image; the feature vector is a SIFT feature vector of the source image.
进一步地,根据所述特征点及特征向量,计算出所述最佳匹配区域与所述模板图像的视觉相似度,具体为:Further, calculating a visual similarity between the best matching area and the template image according to the feature point and the feature vector, specifically:
计算出所述模板图像的SIFT特征点的长度和所述最佳匹配区域的SIFT特征点的长度;Calculating a length of a SIFT feature point of the template image and a length of a SIFT feature point of the best matching area;
若所述模板图像的SIFT特征点的长度为零,或者,所述最佳匹配区域的SIFT特征点的长度为零,则确定所述最佳匹配区域与所述模板图像的视觉相似度为零;If the length of the SIFT feature point of the template image is zero, or the length of the SIFT feature point of the best matching area is zero, determining that the visual similarity between the best matching area and the template image is zero ;
若所述模板图像的SIFT特征点的长度不为零，并且，所述最佳匹配区域的SIFT特征点的长度不为零，则，计算出所述模板图像与所述最佳匹配区域的特征匹配点对的数目；将所述特征匹配点对的数目除以所述模板图像的SIFT特征点的长度的商作为所述视觉相似度。If the length of the SIFT feature points of the template image is not zero, and the length of the SIFT feature points of the best matching area is not zero, then the number of feature matching point pairs between the template image and the best matching area is calculated; the quotient of the number of feature matching point pairs divided by the length of the SIFT feature points of the template image is taken as the visual similarity.
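The similarity rule above reduces to a small function; a minimal sketch, assuming the feature-point "lengths" are the counts of SIFT feature points (the function and parameter names are illustrative):

```python
def visual_similarity(n_matched_pairs, n_template_points, n_region_points):
    # Zero similarity when either image yields no SIFT feature points;
    # otherwise the fraction of template feature points that found a match.
    if n_template_points == 0 or n_region_points == 0:
        return 0.0
    return n_matched_pairs / n_template_points
```

For example, 6 matched pairs against 12 template feature points gives a similarity of 0.5, while a zero similarity means the best matching region is judged not to match the template.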
优选地,若所述视觉相似度不为零,则获得所述模板图像与所述源图像的特征匹配点对,具体包括:Preferably, if the visual similarity is not zero, obtaining a feature matching point pair between the template image and the source image, specifically:
计算出所述模板图像的SIFT特征向量与所述最佳匹配区域的SIFT特征向量的最小欧氏距离和次小欧氏距离;Calculating a minimum Euclidean distance and a sub-Euclidean distance of the SIFT feature vector of the template image and the SIFT feature vector of the best matching region;
在所述最小欧氏距离除以所述次小欧氏距离的商小于第一阈值时，将所述模板图像与所述源图像的特征点作为所述特征匹配点对，并对所述特征匹配点对的数目进行叠加。When the quotient of the minimum Euclidean distance divided by the second-smallest Euclidean distance is less than the first threshold, the feature points of the template image and the source image are taken as a feature matching point pair, and the count of feature matching point pairs is incremented.
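This nearest/second-nearest Euclidean-distance criterion is commonly known as the ratio test. A minimal sketch over plain descriptor tuples follows; the value 0.75 for the first threshold and the function name are assumptions for illustration (the description only requires "a first threshold"):

```python
import math

def ratio_test_matches(tmpl_desc, region_desc, first_threshold=0.75):
    # For each template descriptor, find the nearest and second-nearest
    # region descriptors by Euclidean distance; keep the pair when
    # d_min / d_second is below the threshold.
    pairs = []
    for i, d in enumerate(tmpl_desc):
        dists = sorted((math.dist(d, r), j) for j, r in enumerate(region_desc))
        if len(dists) >= 2 and dists[1][0] > 0:
            if dists[0][0] / dists[1][0] < first_threshold:
                pairs.append((i, dists[0][1]))
    return pairs
```

A small distance ratio means the nearest region descriptor is much closer than any alternative, so the pair is kept as a feature matching point pair; ambiguous matches, where the two nearest distances are similar, are discarded.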
进一步地,当所述特征匹配点对的数目高于最小匹配数目时,所述根据所述特征匹配点对,计算出最佳匹配图像的定位坐标,包括:Further, when the number of the feature matching point pairs is higher than the minimum matching number, the calculating the positioning coordinates of the best matching image according to the feature matching point pair includes:
利用单映射函数查找出与所述特征匹配点对相对应的单映射矩阵;Using a single mapping function to find a single mapping matrix corresponding to the pair of feature matching points;
根据所述单映射矩阵,利用向量数组的透视变换函数计算出所述模板图像在所述源图像上的最佳匹配区域的多个坐标点; Calculating, according to the single mapping matrix, a plurality of coordinate points of the best matching region of the template image on the source image by using a perspective transformation function of the vector array;
计算出最佳匹配区域的中心点坐标,将所述中心点坐标作为所述最佳匹配图像的定位坐标。The center point coordinates of the best matching area are calculated, and the center point coordinates are used as the positioning coordinates of the best matching image.
在一种可实现的方式中,所述根据所述单映射矩阵,利用向量数组的透视变换函数计算出所述模板图像在所述源图像上的最佳匹配区域的多个坐标点,具体包括:In an implementation manner, the calculating, according to the single mapping matrix, a plurality of coordinate points of the best matching area of the template image on the source image by using a perspective transformation function of the vector array, specifically including :
根据所述特征匹配点对,获取所述模板图像上的SIFT特征点的坐标及其一一匹配的、在所述源图像上的SIFT特征点的坐标;Obtaining, according to the feature matching point pair, coordinates of the SIFT feature points on the template image and coordinates of the SIFT feature points on the source image that are matched one by one;
随机筛选出N对匹配点对的坐标，在所述模板图像和所述源图像之间进行映射，获得第一方程：The coordinates of N pairs of matching points are randomly selected, and a mapping between the template image and the source image is established, obtaining the first equation:
$$\begin{pmatrix} x'_i \\ y'_i \\ 1 \end{pmatrix} \sim H \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix}, \quad i = 1, \ldots, N$$
并且获得对应的映射系数,将所述映射系数组建为系数矩阵H,获得第二方程:And obtaining corresponding mapping coefficients, forming the mapping coefficients into a coefficient matrix H, and obtaining a second equation:
$$H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}$$
其中，N≥4；[x’i,y’i]是所述源图像上的SIFT特征点的坐标；[xi,yi]是所述模板图像上的SIFT特征点的坐标；H是从所述模板图像上的SIFT特征点映射到所述源图像上的SIFT特征点的系数矩阵；Wherein N ≥ 4; [x'_i, y'_i] are the coordinates of the SIFT feature points on the source image; [x_i, y_i] are the coordinates of the SIFT feature points on the template image; and H is the coefficient matrix mapping the SIFT feature points on the template image to the SIFT feature points on the source image;
利用所述系数矩阵计算出所述模板图像上的SIFT特征点映射到所述源图像上的实时坐标;Calculating, by using the coefficient matrix, the SIFT feature points on the template image to real-time coordinates on the source image;
在所述源图像上的SIFT特征点的坐标与所述实时坐标的之间的距离小于第二阈值时,利用第一方程和第二方程对所述系数矩阵H进行更新,直到所述系数矩阵H不再变化,并将不再变化的系数矩阵H作为所述单映射矩阵;When the distance between the coordinates of the SIFT feature point on the source image and the real-time coordinate is less than the second threshold, the coefficient matrix H is updated by the first equation and the second equation until the coefficient matrix H does not change, and the coefficient matrix H that does not change is used as the single mapping matrix;
根据所述单映射矩阵和第一方程,通过以下第三方程逐一计算出所述模板图像在所述最佳匹配区域的N个匹配点的坐标(x’,y’): According to the single mapping matrix and the first equation, the coordinates (x', y') of the N matching points of the template image in the best matching region are calculated one by one by the following third equation:
$$x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \qquad y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}}$$
将所述N个匹配点的坐标的中心点坐标作为所述最佳匹配图像的定位坐标。The center point coordinates of the coordinates of the N matching points are used as the positioning coordinates of the best matching image.
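Once the single mapping matrix H has converged, the third equation maps template points into the source image, and the positioning coordinate is the centre of the mapped points. A sketch assuming H is a plain 3 × 3 nested list (the helper names are illustrative):

```python
def map_point(H, x, y):
    # Third equation: projective mapping of a template point (x, y)
    # into source-image coordinates (x', y').
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def locate_center(H, w1, h1):
    # Map the four template corners and return the centre of the mapped
    # quadrilateral as the positioning coordinate of the best matching image.
    corners = [(0, 0), (w1, 0), (w1, h1), (0, h1)]
    mapped = [map_point(H, x, y) for x, y in corners]
    return (sum(x for x, _ in mapped) / 4.0,
            sum(y for _, y in mapped) / 4.0)

# A pure-translation homography simply shifts the template:
H = [[1, 0, 5],
     [0, 1, 7],
     [0, 0, 1]]
```

For this H, `locate_center(H, 10, 20)` yields (10.0, 17.0): the template centre (5, 10) translated by (5, 7).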
进一步地,当所述特征匹配点对的数目低于所述最小匹配数目,并且大于指定倍率系数时,其中,所述指定倍率系数小于所述最小匹配数目;Further, when the number of the feature matching point pairs is lower than the minimum matching number and greater than the specified magnification coefficient, wherein the specified magnification coefficient is smaller than the minimum matching number;
则所述根据所述特征匹配点,计算出最佳匹配图像的定位坐标,具体为:Then, according to the feature matching point, the positioning coordinates of the best matching image are calculated, specifically:
对所述模板图像进行SIFT强匹配,包括:根据所述特征匹配点对,获取所述模板图像上的SIFT特征点的坐标及其一一匹配的、在所述源图像上的SIFT特征点的坐标;Performing SIFT strong matching on the template image includes: acquiring coordinates of SIFT feature points on the template image according to the feature matching point pairs, and one-to-one matching SIFT feature points on the source image coordinate;
对所述源图像上的SIFT特征点的坐标求均值处理,并将获得的均值坐标值作为所述最佳匹配图像的定位坐标。The coordinates of the SIFT feature points on the source image are averaged, and the obtained mean coordinate values are used as the positioning coordinates of the best matching image.
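The SIFT strong-matching branch therefore reduces to averaging the source-image coordinates of the matched feature points; a minimal sketch (the function name is illustrative):

```python
def strong_match_center(matched_src_points):
    # Average the source-image coordinates of the matched SIFT feature
    # points; the mean coordinate is taken as the positioning coordinate.
    n = len(matched_src_points)
    return (sum(x for x, _ in matched_src_points) / n,
            sum(y for _, y in matched_src_points) / n)
```

For example, matched points (0, 0), (4, 2), and (2, 4) give the positioning coordinate (2.0, 2.0).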
进一步地,当所述特征匹配点对的数目小于指定倍率系数时,其中,所述指定倍率系数小于所述最小匹配数目;Further, when the number of the feature matching point pairs is smaller than the specified rate coefficient, wherein the specified magnification coefficient is smaller than the minimum matching number;
则选定所述最佳匹配区域中的特征点的邻近区域与所述模板图像进行局部模板匹配,包括:And then selecting a neighboring area of the feature point in the best matching area to perform partial template matching with the template image, including:
计算出所述特征点的邻近区域与所述模板图像的局部视觉相似度;Calculating a local visual similarity between the neighboring region of the feature point and the template image;
若所述局部视觉相似度高于第三阈值,则判定匹配成功,根据局部模板匹配获得的坐标计算出所述最佳匹配图像的定位坐标;If the local visual similarity is higher than the third threshold, determining that the matching is successful, and calculating the positioning coordinates of the best matching image according to the coordinates obtained by the partial template matching;
若所述局部视觉相似度低于所述第三阈值,则对所述模板图像与所述源图像进行全局多尺度模板匹配。If the local visual similarity is lower than the third threshold, global multi-scale template matching is performed on the template image and the source image.
进一步地,若所述局部视觉相似度低于所述第三阈值,则对所述模板图像与所述源图像进行全局多尺度模板匹配,具体包括:Further, if the local visual similarity is lower than the third threshold, performing global multi-scale template matching on the template image and the source image, specifically:
建立尺度列表;所述尺度列表包括多个尺度系数;Establishing a list of scales; the list of scales includes a plurality of scale factors;
根据所述尺度列表中的尺度系数,对所述模板图像进行放缩;And scaling the template image according to the scale factor in the scale list;
对进行放缩后的模板图像在所述源图像中进行全局模板匹配,记录每一次匹配获得的匹配值和匹配区域,形成最佳匹配集合; Performing global template matching on the template image after scaling, recording matching values and matching regions obtained by each matching, and forming a best matching set;
计算完所有尺度的全局模板匹配后，将所述最佳匹配集合中的最大匹配值所对应的区域作为最佳匹配图像，并计算出所述最佳匹配图像的中心坐标值作为所述最佳匹配图像的定位坐标。After the global template matching of all scales has been computed, the area corresponding to the maximum matching value in the best matching set is taken as the best matching image, and the center coordinates of the best matching image are calculated as the positioning coordinates of the best matching image.
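The global multi-scale loop above can be sketched as follows. The scale list, the nearest-neighbour rescaling, the failure threshold of 0.8, and the use of the normalized correlation coefficient as the matching value are all illustrative assumptions:

```python
def ncc_best(src, tmpl):
    # Best normalized-correlation score of tmpl over src, with its position.
    h2, w2, h1, w1 = len(src), len(src[0]), len(tmpl), len(tmpl[0])
    flat_t = [v for row in tmpl for v in row]
    mt = sum(flat_t) / len(flat_t)
    st = sum((v - mt) ** 2 for v in flat_t) ** 0.5
    best = (-2.0, (0, 0))
    for m in range(h2 - h1 + 1):
        for n in range(w2 - w1 + 1):
            patch = [v for row in src[m:m + h1] for v in row[n:n + w1]]
            mp = sum(patch) / len(patch)
            sp = sum((v - mp) ** 2 for v in patch) ** 0.5
            num = sum((p - mp) * (t - mt) for p, t in zip(patch, flat_t))
            score = num / (sp * st) if sp and st else 0.0
            if score > best[0]:
                best = (score, (m, n))
    return best

def rescale(img, s):
    # Nearest-neighbour resize of a 2-D gray image by scale factor s.
    h, w = len(img), len(img[0])
    nh, nw = max(1, round(h * s)), max(1, round(w * s))
    return [[img[min(h - 1, int(r / s))][min(w - 1, int(c / s))]
             for c in range(nw)] for r in range(nh)]

def multiscale_match(src, tmpl, scales=(0.5, 0.75, 1.0, 1.5, 2.0),
                     threshold=0.8):
    # Scale the template by each factor, run global template matching,
    # and keep the best (score, position, scale); fail below the threshold.
    best = None
    for s in scales:
        t = rescale(tmpl, s)
        if len(t) > len(src) or len(t[0]) > len(src[0]):
            continue  # scaled template no longer fits in the source image
        score, pos = ncc_best(src, t)
        if best is None or score > best[0]:
            best = (score, pos, s)
    return best if best is not None and best[0] >= threshold else None

src = [[0, 0, 0, 0], [0, 1, 2, 0], [0, 3, 4, 0], [0, 0, 0, 0]]
tmpl = [[1, 2], [3, 4]]
result = multiscale_match(src, tmpl)  # (matching value, (row, col), scale) or None
```

Returning `None` when every scaled template scores below the threshold corresponds to the matching-failure branch described above.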
另一方面,本发明实施例还提供了一种手机应用测试平台,所述手机应用测试平台上包括待测试手机应用的测试脚本及测试所需图像资源,还包括:On the other hand, the embodiment of the present invention further provides a mobile phone application testing platform, where the mobile phone application testing platform includes a test script of the mobile phone application to be tested and an image resource required for testing, and further includes:
测试资源下载单元,用于下载待测试手机应用的测试脚本及所述图像资源至被测试手机中;a test resource downloading unit, configured to download a test script of the mobile phone application to be tested and the image resource to the tested mobile phone;
截图单元,用于对被测试手机屏幕上显示的待测试手机应用的测试图像进行截图和上传;a screenshot unit for taking a screenshot and uploading a test image of the mobile phone application to be tested displayed on the screen of the tested mobile phone;
图像匹配单元，用于采用以上任一项所述的图像匹配的方法，将所述测试图像作为模板图像在相应的图像资源上进行图像匹配，查找出所述测试图像的最佳匹配图像的定位坐标；以及，An image matching unit, configured to perform image matching, using the test image as the template image, against the corresponding image resource by the image matching method of any of the above, to find the positioning coordinates of the best matching image of the test image; and,
测试单元,用于根据所述图像匹配单元查找的最佳匹配图像的定位坐标,启动对所述测试图像所关联的测试代码的测试,将所述定位坐标和测试结果数据反馈至被测试手机。And a test unit, configured to start testing the test code associated with the test image according to the positioning coordinates of the best matching image searched by the image matching unit, and feed back the positioning coordinate and the test result data to the tested mobile phone.
进一步地,所述手机应用测试平台上设有多种通用接口,并针对所述通用接口在所述手机应用测试平台上设有相应的驱动层。Further, the mobile phone application test platform is provided with a plurality of common interfaces, and a corresponding driver layer is disposed on the mobile phone application test platform for the universal interface.
优选地,所述待测试手机应用为手机游戏应用;则所述手机应用测试平台为手机游戏测试平台。Preferably, the mobile phone application to be tested is a mobile game application; and the mobile application test platform is a mobile game test platform.
进一步地，所述手机应用测试平台还包括测试中心；所述测试单元还用于将测试结果数据传输至所述测试中心；Further, the mobile phone application testing platform further includes a test center; the test unit is further configured to transmit the test result data to the test center;
所述测试结果数据包括待测试的手机型号信息、测试过程所产生的截图、CPU信息、内存信息、耗电信息和网卡流量信息。The test result data includes the model information of the mobile phone to be tested, the screenshot generated by the testing process, the CPU information, the memory information, the power consumption information, and the network card traffic information.
本发明实施例提供的图像匹配的方法，首先利用模板匹配的方法将模板图像在源图像中进行全局模板匹配，优选利用SIFT（Scale-Invariant Feature Transform，尺度不变特征转换）特征匹配算法判断模板图像与最佳匹配区域的相似度，最终根据所述特征匹配点对，计算出最佳匹配图像的定位坐标，可以将基于灰度的模板匹配方法和基于SIFT特征匹配的方法相结合，扬长避短，兼有基于灰度的图像匹配方法的计算简单、直接与基于特征的图像匹配方法的旋转不变性和尺度不变性的优点，因此提高了图像匹配的准确度和灵活性。将本发明提供的图像匹配方法应用在手机应用测试时，可以快速准确地识别出目标图像，从而提高手机应用测试的效率。In the image matching method provided by the embodiments of the present invention, template matching is first used to perform global template matching of the template image in the source image; a SIFT (Scale-Invariant Feature Transform) feature matching algorithm is preferably used to determine the similarity between the template image and the best matching region; and finally the positioning coordinates of the best matching image are calculated according to the feature matching point pairs. The gray-based template matching method and the SIFT-feature-based matching method are thus combined so as to exploit their respective strengths, uniting the simple, direct computation of gray-based image matching with the rotation invariance and scale invariance of feature-based image matching, thereby improving the accuracy and flexibility of image matching. When the image matching method provided by the present invention is applied to mobile phone application testing, the target image can be recognized quickly and accurately, thereby improving the efficiency of mobile phone application testing.
附图说明DRAWINGS
图1是本发明提供的图像匹配的方法的一个实施例的步骤流程图。1 is a flow chart showing the steps of one embodiment of a method of image matching provided by the present invention.
图2是本发明提供的将模板图像在源图像中进行全局模板匹配的示意图。2 is a schematic diagram of global template matching in a source image provided by the present invention.
图3是本发明提供的计算模板图像与源图像的特征点及特征向量一种可实现方式的步骤流程图。FIG. 3 is a flow chart showing the steps of calculating a feature point and a feature vector of a template image and a source image provided by the present invention.
图4是本发明提供的手机应用测试平台的一个实施例的步骤流程图。4 is a flow chart showing the steps of an embodiment of a mobile phone application test platform provided by the present invention.
图5是本发明提供的手机应用测试平台进行手机应用测试的一种架构示意图。FIG. 5 is a schematic structural diagram of a mobile phone application test platform provided by the present invention for mobile phone application testing.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.
Referring to FIG. 1, a flow chart of the steps of one embodiment of the image matching method provided by the present invention is shown.
In this embodiment, the image matching method includes the following steps:
Step S101: Perform global template matching of a template image T within a source image S, sliding the template image T across the source image S to find the best matching region.
FIG. 2 is a schematic diagram of global template matching of a template image within a source image, as provided by the present invention. The source image S contains images of a plurality of controls or buttons, labeled image 1 to image 6. The template image T is slid starting from the upper-left corner of the source image S in order to find the target image 4 within S; each time the matching window (the size of the template image T) is moved, the similarity between the template image T and the image region covered by that window is computed.
In one implementation, step S101 is specifically:
a. Obtain the height and width of the template image T and of the source image S, respectively.
b. If the height of the template image T is greater than the height of the source image S, or the width of the template image T is greater than the width of the source image S, determine that no matching region exists in the source image S.
c. If the height of the template image T is less than or equal to the height of the source image S, and the width of the template image T is less than or equal to the width of the source image S, then:
c1. Slide the template image T across the source image S in unit-length steps, computing the standard correlation coefficient between the template image T and the source image S at each position, to obtain a standard correlation coefficient matrix A.
c2. Find the maximum coefficient value in the standard correlation coefficient matrix A, together with the coordinate position corresponding to that maximum.
c3. Determine the position of the best matching region from the coordinate position of the maximum coefficient value and the height h1 and width w1 of the template image T.
In one implementation, the coordinate position corresponding to the maximum coefficient value is (m, n), and the template image has height h1 and width w1; the best matching region is then the rectangular region on the source image whose upper-left corner is at (m, n), with height h1 and width w1. Concretely, an internal function template_match() can be designed to implement step S101; its pseudocode is as follows:
(pseudocode figure not reproduced in this text)
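Since the pseudocode figure is not legible in this text, the following is an illustrative pure-Python sketch (not the patent's actual code) of the template_match() step described in a through c3, using the zero-mean normalized correlation coefficient as the "standard correlation coefficient":

```python
# Illustrative sketch of template_match(): slide the template over the source
# image and score each window with the normalized correlation coefficient.
from math import sqrt

def ncc(window, template):
    """Zero-mean normalized correlation coefficient between two equal-size patches."""
    n = len(window)
    mw = sum(window) / n
    mt = sum(template) / n
    num = sum((w - mw) * (t - mt) for w, t in zip(window, template))
    den = sqrt(sum((w - mw) ** 2 for w in window) *
               sum((t - mt) ** 2 for t in template))
    return num / den if den else 0.0

def template_match(source, template):
    """Return ((row, col), score) of the best match, or None if T exceeds S (step b)."""
    hs, ws = len(source), len(source[0])
    ht, wt = len(template), len(template[0])
    if ht > hs or wt > ws:
        return None                        # step b: no matching region can exist
    flat_t = [v for row in template for v in row]
    best = None
    for m in range(hs - ht + 1):           # step c1: slide by unit length
        for n in range(ws - wt + 1):
            flat_w = [source[m + i][n + j] for i in range(ht) for j in range(wt)]
            score = ncc(flat_w, flat_t)
            if best is None or score > best[1]:
                best = ((m, n), score)     # step c2: keep the maximum coefficient
    return best
```

In practice the same computation is performed far more efficiently by OpenCV's cv2.matchTemplate() with the cv2.TM_CCOEFF_NORMED method, followed by cv2.minMaxLoc() to locate the maximum coefficient.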
During template matching, the size and/or orientation of the template image T may not agree with that of the target image 4 in the source image S; the best matching region found in this step is therefore not necessarily a valid match (that is, the best matching region may not be the target image 4), and the source image S requires further processing and analysis.
Step S102: Compute the feature points and feature vectors of the template image T and the source image S. In this embodiment, a SIFT (Scale-Invariant Feature Transform) feature matching algorithm is preferably used to compute the feature points and feature vectors of the template image T and the source image S. The SIFT feature matching algorithm is a computer vision algorithm for detecting and describing local features in images. It works by finding the feature points (interest points or corner points) in each of the two images, together with descriptors of their scale and orientation, searching for extrema in scale space and extracting their position, scale, and rotation invariants, after which the feature points of the two images are matched. The essence of the SIFT algorithm is to find feature points across different scale spaces and to compute their orientations. The feature points it finds are highly salient points that do not change under factors such as illumination, affine transformation, and noise, for example corner points, edge points, bright spots in dark regions, and dark spots in bright regions. SIFT features are therefore invariant to rotation, scaling, and brightness changes, and remain stable to a certain degree under viewpoint changes, affine transformation, and noise.
After the feature points and feature vectors of the template image T and the source image S have been computed in the above steps, the visual similarity of the two can be compared in step S103.
Step S103: Compute the visual similarity between the best matching region and the template image T from the feature points and feature vectors, and judge whether the visual similarity is zero. If the visual similarity is zero, perform step S104; if it is non-zero, perform step S105.
Step S104: Determine that the best matching region does not match the template image T.
Step S105: Obtain the feature matching point pairs of the template image T and the source image S, and perform step S106.
Step S106: Compute the locating coordinates of the best matching image from the feature matching point pairs.
Referring to FIG. 3, a flow chart of the steps of one implementation of computing the feature points and feature vectors of the template image and the source image is shown.
In a specific implementation, step S102 can be realized through the following steps:
Step S201: Scale-space extremum detection. Image positions at all scales are searched in the image under detection, and extremum points invariant to scale and rotation (also called potential scale- and rotation-invariant interest points) are detected using a difference-of-Gaussian function.
Step S202: Feature point localization. According to the stability of the extremum points, the position and scale of each feature point are determined by fitting a model.
Step S203: Feature point orientation assignment. Based on the local gradient directions of the image, one or more orientations are assigned to the position of each feature point.
Step S204: Feature point description. Within a neighborhood around each feature point, the local image gradients are measured at the selected scale and transformed into a feature vector representing local shape deformation and illumination change.
In steps S201 through S204, when the image under detection is the template image T, the feature points are the SIFT feature points of the template image T and the feature vectors are its SIFT feature vectors; when the image under detection is the source image S, the feature points are the SIFT feature points of the source image S and the feature vectors are its SIFT feature vectors.
Further, in one implementation, step S103 can be realized through the following steps:
Step S301: Compute the length len(keypoint1) of the SIFT feature points of the template image T and the length len(keypoint2) of the SIFT feature points of the best matching region. From these lengths, judge whether the visual similarity between the best matching region and the template image T is zero.
If the length of the SIFT feature points of the template image T is zero, or the length of the SIFT feature points of the best matching region is zero, perform step S302; if both lengths are non-zero, perform step S303.
Step S302: Determine that the visual similarity between the best matching region and the template image T is zero.
Step S303: Compute the number Good_Match of feature matching point pairs between the template image T and the best matching region, and take the quotient of Good_Match divided by the length len(keypoint1) of the SIFT feature points of the template image T as the visual similarity, i.e., visual similarity = Good_Match / len(keypoint1).
In this embodiment, the visual similarity computed in step S103 is the "global visual similarity" obtained by globally template-matching the template image T against the entire source image S in step S101. Its purpose is to coarsely filter the source images, excluding those source images (test screenshots) that cannot possibly contain a matching region, and thereby improve the running efficiency of the image matching process.
In a specific implementation, if the visual similarity is non-zero, the process of obtaining the feature matching point pairs of the template image T and the source image S in step S105 specifically includes:
Compute the minimum Euclidean distance min_E and the second-smallest Euclidean distance nextmin_E between each SIFT feature vector of the template image T and the SIFT feature vectors of the best matching region. When the quotient min_E / nextmin_E is smaller than a first threshold, take the corresponding feature points of the template image T and the source image S as a feature matching point pair, and increment the count Good_Match of feature matching point pairs. For example, assume the first threshold TH1 is 0.75; then whenever the minimum Euclidean distance min_E between a SIFT feature vector of the template image T and the SIFT feature vectors of the best matching region is smaller than the product of the second-smallest Euclidean distance nextmin_E and the first threshold TH1, i.e., min_E < 0.75 * nextmin_E, the count of feature matching point pairs is incremented: Good_Match = Good_Match + 1.
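This nearest/second-nearest ratio test and the visual-similarity quotient of step S303 can be sketched in a few lines of pure Python, assuming (for illustration only) that SIFT descriptors are plain lists of floats; the patent's own code instead uses cv2.FlannBasedMatcher:

```python
# Sketch of the ratio test (min_E < TH1 * nextmin_E) and the similarity quotient.
from math import dist  # Python 3.8+: Euclidean distance between two points

TH1 = 0.75  # the first threshold from the text

def count_good_matches(template_descs, source_descs, th1=TH1):
    """Count descriptor pairs whose nearest distance beats th1 * second-nearest."""
    good_match = 0
    for d1 in template_descs:
        dists = sorted(dist(d1, d2) for d2 in source_descs)
        if len(dists) >= 2 and dists[0] < th1 * dists[1]:  # min_E < TH1 * nextmin_E
            good_match += 1
    return good_match

def visual_similarity(template_keypoints, good_match):
    """Step S303: similarity = Good_Match / len(keypoint1); zero keypoints give 0."""
    return good_match / len(template_keypoints) if template_keypoints else 0.0
```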
Steps S103 through S105 above can be implemented by constructing a function feature_similarity(); its pseudocode is as follows:
(pseudocode figure not reproduced in this text)
In a specific implementation, each computed SIFT feature point descriptor is the corresponding feature vector. The function cv2.SIFT.detectAndCompute() computes the SIFT feature points of the template image T and the source image S together with their SIFT feature point descriptors (i.e., feature vectors):
(code figure not reproduced in this text)
Next, cv2.FlannBasedMatcher() is used to match the feature points, after which the SIFT feature matching point pairs are computed according to the criterion that the nearest-neighbor distance divided by the second-nearest-neighbor distance is below a certain threshold (the first threshold TH1). Here, "distance" refers to the Euclidean distance between a SIFT feature vector of the template image T and a SIFT feature vector of the source image S:
(code figure not reproduced in this text)
The retained SIFT feature matching point pairs are recorded as Good_Match.
After the number Good_Match of feature matching point pairs between the template image T and the source image S has been obtained, different strategies are selected and executed, depending on the magnitude of Good_Match, to locate the best matching image.
Further, for the case where the visual similarity between the template image T and the best matching region obtained after global template matching is non-zero, this embodiment provides a more detailed implementation for finding the best matching region.
In a specific implementation, a minimum match count (MIN_MATCH_COUNT) can be set to delimit the magnitude of the number Good_Match of feature matching point pairs. By comparing Good_Match with the minimum match count, a different computation strategy is selected.
On the one hand, when the number Good_Match of feature matching point pairs is higher than the minimum match count (MIN_MATCH_COUNT), computing the locating coordinates of the best matching image from the feature matching point pairs includes: using a homography function to find the homography matrix corresponding to the feature matching point pairs; then, from this homography matrix, using a perspective transformation function on the vector array to compute a plurality of coordinate points of the best matching region of the template image T on the source image S; and computing the center point coordinates of the best matching region, which are taken as the locating coordinates of the best matching image.
Concretely, assume the minimum match count MIN_MATCH_COUNT is 5. If the number Good_Match of feature matching point pairs is higher than 5, the matching region is found via the homography mapping: the cv2.findHomography() function is used with the matched keypoints to find the corresponding homography matrix, and the cv2.perspectiveTransform() function is then used to map the point set, yielding the four coordinate points of the region on the source image S onto which the template image T maps. The center point coordinates of the matching region are then computed from these coordinate points, realizing the locating function. Conversely, if Good_Match is below 5, further judgment is needed.
In this embodiment, in one implementation, computing the plurality of coordinate points of the best matching region of the template image T on the source image S from the homography matrix, using the perspective transformation function on the vector array, specifically includes the following steps:
Step S401: From the feature matching point pairs, obtain the coordinates of the SIFT feature points on the template image T and the coordinates of their one-to-one matched SIFT feature points on the source image S.
Step S402: Randomly select the coordinates of N matching point pairs and map between the template image T and the source image S, obtaining the first equation:
    [x'_i, y'_i, 1]^T = H · [x_i, y_i, 1]^T        (1)
and obtain the corresponding mapping coefficients, which are assembled into a coefficient matrix H, obtaining the second equation:
        [h11  h12  h13]
    H = [h21  h22  h23]        (2)
        [h31  h32  h33]
where N ≥ 4; [x'_i, y'_i] are the coordinates of a SIFT feature point on the source image S; [x_i, y_i] are the coordinates of the corresponding SIFT feature point on the template image T; and H is the coefficient matrix that maps SIFT feature points on the template image T to SIFT feature points on the source image S, with h11 through h33 being the elements of H.
Step S403: Use the coefficient matrix H to compute the real-time coordinates onto which the SIFT feature points of the template image T are mapped on the source image S.
Step S404: While the distance between the coordinates of a SIFT feature point on the source image S and the corresponding real-time coordinates is smaller than a second threshold TH2, update the coefficient matrix H using the first equation (1) and the second equation (2), until H no longer changes; the unchanging coefficient matrix H is taken as the homography matrix.
Step S405: From the homography matrix and the first equation (1), compute one by one, via the following third equation (3), the coordinates (x', y') of the N matching points of the template image T in the best matching region:
    x' = (h11·x + h12·y + h13) / (h31·x + h32·y + h33)
    y' = (h21·x + h22·y + h23) / (h31·x + h32·y + h33)        (3)
Step S406: Take the center point coordinates of the coordinates of the N matching points as the locating coordinates of the best matching image.
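Steps S405 and S406 amount to projecting each template point through the 3x3 coefficient matrix and averaging the results. A minimal sketch, assuming H has already been estimated (the H in the test is a hypothetical pure translation chosen only to make the expected output obvious):

```python
# Sketch of equation (3) plus steps S405-S406: project template points through a
# 3x3 coefficient matrix H, then take the center of the mapped points.

def project(H, x, y):
    """Apply equation (3): projective mapping of (x, y) through H."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    xp = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    yp = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return xp, yp

def locate(H, points):
    """Map all template points and return the center of the mapped set (step S406)."""
    mapped = [project(H, x, y) for x, y in points]
    cx = sum(p[0] for p in mapped) / len(mapped)
    cy = sum(p[1] for p in mapped) / len(mapped)
    return cx, cy
```

In the OpenCV pipeline the same projection is performed by cv2.perspectiveTransform() on the homography returned by cv2.findHomography().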
On the other hand, when the number Good_Match of feature matching point pairs is lower than the minimum match count MIN_MATCH_COUNT but greater than a specified ratio coefficient ratio_num (for example, ratio_num may preferably be 0.1 times the number of SIFT feature points of the template image T), step S106 is specifically to perform strong SIFT matching on the template image T, comprising the following steps:
Step 61: From the feature matching point pairs, obtain the coordinates of the SIFT feature points on the template image T and the coordinates of their one-to-one matched SIFT feature points on the source image S.
Step 62: Average the coordinates of the matched SIFT feature points on the source image S, and take the resulting mean coordinate values as the locating coordinates of the best matching image. Here, the specified ratio coefficient ratio_num is smaller than the minimum match count MIN_MATCH_COUNT.
The purpose of performing strong matching in step S106 is to avoid missing image pairs that can in fact be matched. In practice, only a handful of SIFT feature points can be extracted from some template images T, even though these template images do match the source image S; the traditional SIFT algorithm cannot find a matching region for such template images with only a few feature points. By performing strong matching, the embodiment of the present invention overcomes this defect of traditional SIFT feature extraction and improves the image matching capability.
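Steps 61 and 62 reduce, in effect, to averaging the matched source-image coordinates; a minimal sketch:

```python
# Sketch of the "strong match" fallback of steps 61-62: when Good_Match is low
# but non-trivial, the mean of the matched SIFT point coordinates on the source
# image serves as the locating coordinate of the best matching image.

def strong_match_center(matched_source_points):
    """Average the matched source-image coordinates (step 62)."""
    n = len(matched_source_points)
    x = sum(p[0] for p in matched_source_points) / n
    y = sum(p[1] for p in matched_source_points) / n
    return x, y
```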
Further, when the number Good_Match of feature matching point pairs is smaller than the specified ratio coefficient ratio_num, or Good_Match is less than zero, i.e., very few feature matching point pairs can be extracted (the specified ratio coefficient being smaller than the minimum match count), step S106 includes: selecting the neighboring region of the feature points within the best matching region for local template matching against the template image T, which can be realized through the following steps:
Step S601: Compute the local visual similarity between the neighboring region of the feature points and the template image T. If the local visual similarity is higher than a third threshold TH3, perform step S602; if it is lower than the third threshold TH3, perform step S603.
Step S602: Determine that the match is successful, and compute the locating coordinates of the best matching image from the coordinates obtained by local template matching.
Step S603: Perform global multi-scale template matching between the template image T and the source image S.
The local visual similarity in step S601 is obtained by local template matching between the template image T and the neighboring region of the feature points in the best matching region.
In a specific implementation, the local visual similarity can be computed with the feature_similarity() function described above, or from the similarity of color histograms. Specifically, the color histogram H1(i) of the template image T and the color histogram H2(i) of the neighboring region in the source image S can each be computed, and the local visual similarity then solved with the fourth equation (4):
    Sim(H1, H2) = Σ_{i=1}^{N} min(H1(i), H2(i)) / Σ_{i=1}^{N} H1(i)        (4)
The neighboring region of a feature point in step S601 can be chosen as the rectangular region centered on the coordinates of the feature point, whose height and width are twice the height and width of the template image T, respectively. The template image T is matched over this rectangular region and the best matching area is selected; if its visual similarity to the template image T is higher than a certain threshold (TH3), the match is considered successful, otherwise step S603 is performed.
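The exact form of equation (4) is not legible in this text; one common color-histogram similarity that fits its role here is normalized histogram intersection, sketched below for illustration (H1 for the template, H2 for the neighboring region, represented as plain lists of bin counts):

```python
# Sketch of a color-histogram local similarity (step S601), using normalized
# histogram intersection: 1.0 for identical histograms, 0.0 for disjoint ones.

def histogram_similarity(h1, h2):
    """Return sum(min(H1(i), H2(i))) / sum(H1(i)), guarding against an empty H1."""
    overlap = sum(min(a, b) for a, b in zip(h1, h2))
    total = sum(h1)
    return overlap / total if total else 0.0
```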
In one implementation, if the local visual similarity is lower than the third threshold TH3, step S603 specifically includes:
Step S6031: Establish a scale list comprising a plurality of scale coefficients.
Step S6032: Rescale the template image T according to each scale coefficient in the scale list.
Step S6033: Perform global template matching of the rescaled template image T within the source image S, recording the matching value and matching region obtained at each match to form a best match set.
Step S6034: After the global template matching at all scales has been computed, take the region corresponding to the maximum matching value in the best match set as the best matching image, and compute the center coordinate value of the best matching image as its locating coordinates.
The above process can be implemented with a function multi_scale_match(), represented in pseudocode as follows:
(pseudocode figure not reproduced in this text)
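As the multi_scale_match() pseudocode figure is not reproduced, steps S6031 through S6034 can be sketched in pure Python as follows; match_fn stands in for the global template matcher of step S101, and the default scale list is illustrative only:

```python
# Sketch of multi_scale_match(): rescale the template over a list of scale
# factors, run global template matching at each scale, and keep the best result.

def rescale(template, factor):
    """Nearest-neighbor rescale of a 2-D grayscale template by `factor`."""
    h = max(1, round(len(template) * factor))
    w = max(1, round(len(template[0]) * factor))
    return [[template[int(i / factor)][int(j / factor)]
             for j in range(w)] for i in range(h)]

def multi_scale_match(source, template, match_fn,
                      scales=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Return the best (position, score, scale) over every scale in the list."""
    candidates = []                                  # the "best match set" (S6033)
    for s in scales:                                 # step S6032: rescale template
        result = match_fn(source, rescale(template, s))
        if result is not None:
            candidates.append((result[0], result[1], s))
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[1])       # step S6034: max matching value
```

Any window-sliding matcher returning ((row, col), score), such as a normalized-correlation template matcher, can be passed as match_fn.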
This step mainly computes the multi-scale matching similarity between the template T and the source image S. Multi-scale rescaling of the template T alleviates, to a certain extent, the sensitivity of template matching to scale changes: if the matching value (between the rescaled template image and the source image S) is below a certain threshold, the match is considered failed; otherwise the visual similarity between the best matching region and the template T is computed.
Steps S6031 through S6034 constitute a multi-scale template matching method whose role is fine filtering: it excludes the interference of source images S that are prone to being matched incorrectly, and is therefore richer than the template matching process of step S101 described earlier.
The image matching method provided by the embodiments of the present invention can be implemented in the Python language, which is efficient and readable and enables rapid application development of the image matching method.
The image matching method provided by the embodiments of the present invention performs global template matching of a template image within a source image, uses a SIFT feature matching algorithm to judge the similarity between the template image and the best matching region, and finally computes the locating coordinates of the best matching image from the feature matching point pairs. Different matching procedures are adopted according to the number of matched feature point pairs, reducing algorithmic complexity while improving matching accuracy. The embodiments of the present invention combine grayscale-based template matching with SIFT-based feature matching, drawing on the strengths of each: the computational simplicity and directness of grayscale image matching, and the rotation and scale invariance of feature-based image matching. The accuracy and flexibility of image matching are thereby improved.
本发明还提供了一种将以上图像匹配的方法应用在对手机应用(Application,APP)进行测试的手机应用测试平台。The invention also provides a mobile phone application testing platform for applying the above image matching method to testing a mobile phone application (Application, APP).
如图4所示,是本发明提供的手机应用测试平台的一个实施例的结构示意图。FIG. 4 is a schematic structural diagram of an embodiment of a mobile phone application testing platform provided by the present invention.
本实施例提供的手机应用测试平台可以实现基于图像匹配的手机应用APP的自动测试功能,首要解决的就是图像匹配问题,在识别正确的图像位置之后, 传送到待测试手机中实现模拟点击,实现手机应用(如手机游戏)的模拟操作。The mobile phone application testing platform provided by the embodiment can implement the automatic testing function of the mobile phone application APP based on image matching, and the first problem is the image matching problem. After identifying the correct image position, It is transmitted to the mobile phone to be tested to realize analog click, and realizes the simulation operation of the mobile application (such as mobile game).
所述手机应用测试平台包括:The mobile application test platform includes:
测试资源下载单元401,用于下载待测试手机应用的测试脚本及所述图像资源至被测试手机中。The test resource downloading unit 401 is configured to download a test script of the mobile phone application to be tested and the image resource to the tested mobile phone.
截图单元402,用于对被测试手机屏幕上显示的待测试手机应用的测试图像进行截图和上传;The screenshot unit 402 is configured to take a screenshot and upload the test image of the mobile phone application to be tested displayed on the screen of the tested mobile phone;
图像匹配单元403,用于采用以上任一项所述的图像匹配的方法,将所述测试图像作为模板图像在相应的图像资源上进行图像匹配,查找出所述测试图像的最佳匹配图像的定位坐标;以及,The image matching unit 403 is configured to perform image matching on the corresponding image resource by using the image matching method as the template image to find the best matching image of the test image. Positioning coordinates; and,
测试单元404,用于根据所述图像匹配单元403查找的最佳匹配图像的定位坐标,启动对所述测试图像所关联的测试代码的测试,将所述定位坐标和测试结果数据反馈至被测试手机。The testing unit 404 is configured to start testing the test code associated with the test image according to the positioning coordinates of the best matching image searched by the image matching unit 403, and feed back the positioning coordinate and the test result data to the tested Mobile phone.
具体实施时，所述手机应用测试平台上还设有存储有待测试手机应用的测试脚本及测试所需图像资源的存储单元405；测试资源下载单元401从存储单元405下载与待测试应用相对应的测试脚本和图像资源。将所述待测试图像作为模板图像在所述手机应用测试平台上进行图像匹配，查找出所述待测试图像的最佳匹配图像的定位坐标，判别所述被测试手机的响应状态；在给定模板图像的情况下，利用前文所述的图像匹配方法，从测试图片（如从手机游戏中截取的图片）中精确地识别出相同或者相似的图像。进一步地，所述手机应用测试平台上设有多种通用接口406，并针对所述通用接口406在所述手机应用测试平台上设有相应的驱动层。In a specific implementation, the mobile phone application test platform further includes a storage unit 405 that stores the test scripts of the mobile phone application to be tested and the image resources required for testing; the test resource download unit 401 downloads the test script and image resources corresponding to the application to be tested from the storage unit 405. The image to be tested is used as the template image for image matching on the mobile phone application test platform, the positioning coordinates of the best matching image of the image to be tested are found, and the response state of the tested mobile phone is determined. Given a template image, the image matching method described above accurately identifies the same or similar images in a test picture (such as a screenshot taken from a mobile game). Further, the mobile phone application test platform is provided with a plurality of universal interfaces 406, and a corresponding driver layer is provided on the platform for each universal interface 406.
在一种可实现的方式中,如图5所示,所述手机应用测试平台可以安装在服务器502中,服务器502可以通过多种通信接口与待测试手机501通信连接。In an implementable manner, as shown in FIG. 5, the mobile phone application test platform can be installed in the server 502, and the server 502 can communicate with the mobile phone 501 to be tested through various communication interfaces.
所述手机应用测试平台上设有多种通用接口，并针对所述通用接口在所述手机应用测试平台上设有相应的驱动层；所述手机501与所述手机应用测试平台之间通过所述通用接口和所述驱动层进行数据传输。具体地，针对服务器502的各种接口，分别在待测试手机501的操作系统（iOS系统或Android系统等）以及服务器502操作系统（Windows平台）实现相应的驱动层。其中，针对iOS系统可以采用开源的Appium工具实现所述通信接口；对于Android系统，可以采用Google提供的ADB（Android Debug Bridge，安卓调试桥）工具实现所述通信接口；对于Windows系统，可以直接使用系统底层的API（Application Programming Interface，应用程序编程接口）进行通信。The mobile phone application test platform is provided with a plurality of universal interfaces, and a corresponding driver layer is provided on the platform for each universal interface; data is transmitted between the mobile phone 501 and the mobile phone application test platform through the universal interfaces and the driver layer. Specifically, for the various interfaces of the server 502, corresponding driver layers are implemented in the operating system of the mobile phone 501 to be tested (iOS, Android, etc.) and in the operating system of the server 502 (the Windows platform). For iOS, the communication interface can be implemented with the open-source Appium tool; for Android, with the ADB (Android Debug Bridge) tool provided by Google; for Windows, the system's underlying API (Application Programming Interface) can be used directly for communication.
通过该手机应用测试平台对手机应用APP进行自动化测试时：首先，准备好测试代码和测试脚本中需要用到的图像资源，并通过驱动层将待测试手机501上的屏幕截图传递到服务器502上；第二，在服务器502上，采用前文所述的任一项图像匹配方法，识别出图像（如图5中的太阳图标）的位置，即查找出目标图像所在位置的横坐标x值和纵坐标y值，构成目标图像位置(x,y)；然后，通过所述通信接口将(x,y)坐标传递到待测试手机501上，完成手机应用APP（如手机游戏）中的模拟点击操作。具体实施时，所述手机应用测试方法还包括：将测试结果数据回传至测试中心；所述测试结果数据包括待测试的手机型号信息、测试过程所产生的截图、CPU信息、内存信息、耗电信息和网卡流量信息。When the mobile application APP is automatically tested through the mobile phone application test platform: first, the test code and the image resources used in the test script are prepared, and the screenshot on the mobile phone 501 to be tested is transmitted to the server 502 through the driver layer; second, on the server 502, any of the image matching methods described above is used to recognize the position of the image (such as the sun icon in FIG. 5), that is, to locate the target image and obtain its abscissa x and ordinate y, which form the target image position (x, y); then, the (x, y) coordinates are transmitted to the mobile phone 501 to be tested through the communication interface, completing the simulated click operation in the mobile application APP (such as a mobile game). In a specific implementation, the mobile phone application testing method further includes: returning the test result data to the test center; the test result data includes the model information of the mobile phone to be tested, the screenshots generated during testing, CPU information, memory information, power consumption information, and network-card traffic information.
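The click-replay step described above can be sketched for Android, using the `adb shell input tap X Y` command of the ADB tool named in the text. This is an illustrative sketch, not the patent's implementation: the helper name `build_tap_command` and the serial value are our own, and actually sending the tap requires ADB and a connected device.

```python
def build_tap_command(x, y, serial=None):
    """Build the ADB invocation that injects a tap at (x, y) on the device."""
    cmd = ["adb"]
    if serial:                       # target one device when several are attached
        cmd += ["-s", serial]
    cmd += ["shell", "input", "tap", str(x), str(y)]
    return cmd

print(" ".join(build_tap_command(320, 480)))  # adb shell input tap 320 480
# To actually send it:  subprocess.run(build_tap_command(320, 480), check=True)
```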
在优选的实施方式中，所述待测试手机应用APP为手机游戏应用；则所述手机应用测试平台为手机游戏测试平台。将改进后的图像匹配方法和手机应用测试方法应用在手机游戏测试领域，可以有效提高现有的手机游戏测试效率，降低手机游戏测试的门槛，提高手机游戏测试的便捷性，实现对手机游戏的远程测试。In a preferred embodiment, the mobile phone application APP to be tested is a mobile game application, and the mobile phone application test platform is accordingly a mobile game test platform. Applying the improved image matching method and mobile phone application testing method to the field of mobile game testing can effectively improve the efficiency of existing mobile game testing, lower the threshold for mobile game testing, improve its convenience, and enable remote testing of mobile games.
本实施例提供的手机应用测试平台，利用改进后的图像匹配方法的优点，减少了不同分辨率手机需要重复编写测试代码的缺陷，实现对智能手机应用的自动化测试，减少了人工测试手机应用的成本，并提高了测试效率和测试准确度。手机应用测试平台上的测试代码可以同时支持多种智能手机操作系统上运行的程序，提高了兼容性。在将手机应用测试平台集成在手机应用上时，更可以有助于对手机中的应用APP进行随时随地的测试，尤其适用于普通用户，提高了手机应用测试的适用范围。 The mobile phone application test platform provided by this embodiment exploits the advantages of the improved image matching method: it removes the need to rewrite test code for phones with different resolutions, realizes automated testing of smartphone applications, reduces the cost of manually testing mobile applications, and improves test efficiency and accuracy. The test code on the platform can support programs running on multiple smartphone operating systems at the same time, improving compatibility. When the test platform is integrated into a mobile application, it further enables testing of the APPs on the phone anytime and anywhere, which is especially suitable for ordinary users and broadens the applicability of mobile application testing.
以上所述是本发明的优选实施方式，应当指出，对于本技术领域的普通技术人员来说，在不脱离本发明原理的前提下，还可以做出若干改进和润饰，这些改进和润饰也视为本发明的保护范围。 The above are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements are also regarded as falling within the protection scope of the present invention.

Claims (16)

  1. 一种图像匹配的方法,其特征在于,所述方法包括:A method for image matching, characterized in that the method comprises:
    将模板图像在源图像中进行全局模板匹配,控制所述模板图像在所述源图像中滑动查找出最佳匹配区域;Performing global template matching on the template image in the source image, and controlling the template image to slide in the source image to find a best matching area;
    计算出所述模板图像与所述源图像的特征点及特征向量;Calculating a feature point and a feature vector of the template image and the source image;
    根据所述特征点及特征向量,计算出所述最佳匹配区域与所述模板图像的视觉相似度;Calculating a visual similarity between the best matching area and the template image according to the feature point and the feature vector;
    若所述视觉相似度为零,则判定所述最佳匹配区域与所述模板图像不匹配;If the visual similarity is zero, determining that the best matching area does not match the template image;
    若所述视觉相似度不为零,则获得所述模板图像与所述源图像的特征匹配点对;If the visual similarity is not zero, obtaining a feature matching point pair between the template image and the source image;
    根据所述特征匹配点对,计算出最佳匹配图像的定位坐标。According to the feature matching point pair, the positioning coordinates of the best matching image are calculated.
  2. 如权利要求1所述的图像匹配的方法，其特征在于，所述将模板图像在源图像中进行全局模板匹配，控制所述模板图像在所述源图像中滑动查找出最佳匹配区域，具体为：The image matching method according to claim 1, wherein performing global template matching of the template image in the source image and controlling the template image to slide in the source image to find a best matching area specifically comprises:
    分别获取所述模板图像与所述源图像的高度和宽度;Obtaining a height and a width of the template image and the source image respectively;
    若所述模板图像的高度大于所述源图像的高度,或者,所述模板图像的宽度大于所述源图像的宽度,则判定所述源图像中不存在匹配区域;If the height of the template image is greater than the height of the source image, or the width of the template image is greater than the width of the source image, determining that there is no matching area in the source image;
    若所述模板图像的高度小于或等于所述源图像的高度,并且,所述模板图像的宽度小于或等于所述源图像的宽度,则:If the height of the template image is less than or equal to the height of the source image, and the width of the template image is less than or equal to the width of the source image, then:
    将所述模板图像在所述源图像中以单位长度进行滑动,逐一计算出所述模板图像与所述源图像的标准相关系数,获得标准相关系数矩阵;And sliding the template image in the source image by a unit length, calculating a standard correlation coefficient between the template image and the source image one by one, and obtaining a standard correlation coefficient matrix;
    查找出所述标准相关系数矩阵中的最大系数值,以及所述最大系数值所对应的坐标位置;Finding a maximum coefficient value in the standard correlation coefficient matrix, and a coordinate position corresponding to the maximum coefficient value;
    根据所述最大系数值所对应的坐标位置以及所述模板图像的高度与宽度,确定所述最佳匹配区域的位置。 And determining a position of the best matching area according to a coordinate position corresponding to the maximum coefficient value and a height and a width of the template image.
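The sliding search in claim 2 can be sketched with a plain normalized-correlation implementation in numpy; function names and the toy data are our own illustration, not the patent's code, and a production system would use an optimized library routine instead of this double loop.

```python
import numpy as np

def match_template_ccoeff_normed(src, tmpl):
    """Slide tmpl over src in unit steps and return the matrix of
    normalized correlation coefficients (the 'standard correlation
    coefficient matrix' of claim 2)."""
    H, W = src.shape
    h1, w1 = tmpl.shape
    if h1 > H or w1 > W:
        return None                         # no matching region can exist
    t = tmpl - tmpl.mean()
    out = np.empty((H - h1 + 1, W - w1 + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            patch = src[m:m + h1, n:n + w1]
            p = patch - patch.mean()
            denom = np.sqrt((t * t).sum() * (p * p).sum())
            out[m, n] = (t * p).sum() / denom if denom > 0 else 0.0
    return out

# Toy example: a distinctive 2x2 patch embedded at (1, 2) in a 5x6 image.
src = np.zeros((5, 6))
src[1:3, 2:4] = np.array([[1., 2.], [3., 4.]])
tmpl = np.array([[1., 2.], [3., 4.]])
coeffs = match_template_ccoeff_normed(src, tmpl)
m, n = np.unravel_index(np.argmax(coeffs), coeffs.shape)
print(int(m), int(n))  # 1 2
```

The best matching region is then the h1-by-w1 rectangle whose upper-left corner is the maximum-coefficient position (m, n), as claim 3 states.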
  3. 如权利要求2所述的图像匹配的方法,其特征在于,所述最大系数值所对应的坐标位置为(m,n),所述模板图像的高度为h1,宽度为w1;The image matching method according to claim 2, wherein the coordinate position corresponding to the maximum coefficient value is (m, n), the height of the template image is h1, and the width is w1;
    则所述最佳匹配区域的位置为：在所述源图像上的、以坐标位置(m,n)为左上角，高为h1，宽为w1的矩形区域。Then the position of the best matching area is the rectangular region on the source image with the coordinate position (m, n) as its upper-left corner, with height h1 and width w1.
  4. 如权利要求2所述的图像匹配的方法,其特征在于,所述计算出所述模板图像与所述源图像的特征点及特征向量,具体包括:The method of image matching according to claim 2, wherein the calculating the feature points and the feature vectors of the template image and the source image comprises:
    在待检测图像上搜索所有尺度的图像位置,通过高斯微分函数检测出对于尺度和旋转不变的极值点;Searching for image positions of all scales on the image to be detected, and detecting extreme points that are invariant to scale and rotation by a Gaussian differential function;
    依据所述极值点的稳定程度,通过建立一个拟合模型来确定特征点的位置和尺度;Determining the position and scale of the feature points by establishing a fitting model according to the degree of stability of the extreme points;
    基于图像局部的梯度方向,为每个特征点的位置分配一个或多个方向;Assigning one or more directions to the position of each feature point based on the gradient direction of the image local;
    在每个特征点周围的邻域内,在选定的尺度上测量图像局部的梯度,将所述梯度变换为表示局部形状变形和光照变化的特征向量;Measuring a local gradient of the image on a selected scale within a neighborhood around each feature point, transforming the gradient into a feature vector representing local shape deformation and illumination variation;
    当所述待检测图像为所述模板图像时，所述特征点为所述模板图像的SIFT特征点，所述特征向量为所述模板图像的SIFT特征向量；当所述待检测图像为所述源图像时，所述特征点为所述源图像的SIFT特征点，所述特征向量为所述源图像的SIFT特征向量。When the image to be detected is the template image, the feature points are the SIFT feature points of the template image and the feature vectors are the SIFT feature vectors of the template image; when the image to be detected is the source image, the feature points are the SIFT feature points of the source image and the feature vectors are the SIFT feature vectors of the source image.
  5. 如权利要求4所述的图像匹配的方法,其特征在于,根据所述特征点及特征向量,计算出所述最佳匹配区域与所述模板图像的视觉相似度,具体为:The image matching method according to claim 4, wherein the visual similarity between the best matching area and the template image is calculated according to the feature point and the feature vector, specifically:
    计算出所述模板图像的SIFT特征点的长度和所述最佳匹配区域的SIFT特征点的长度;Calculating a length of a SIFT feature point of the template image and a length of a SIFT feature point of the best matching area;
    若所述模板图像的SIFT特征点的长度为零，或者，所述最佳匹配区域的SIFT特征点的长度为零，则确定所述最佳匹配区域与所述模板图像的视觉相似度为零；If the length of the SIFT feature points of the template image is zero, or the length of the SIFT feature points of the best matching area is zero, it is determined that the visual similarity between the best matching area and the template image is zero;
    若所述模板图像的SIFT特征点的长度不为零，并且，所述最佳匹配区域的SIFT特征点的长度不为零，则，计算出所述模板图像与所述最佳匹配区域的特征匹配点对的数目；将所述特征匹配点对的数目除以所述模板图像的SIFT特征点的长度的商作为所述视觉相似度。If the length of the SIFT feature points of the template image is not zero and the length of the SIFT feature points of the best matching area is not zero, the number of feature matching point pairs between the template image and the best matching area is calculated, and the quotient of the number of feature matching point pairs divided by the length of the SIFT feature points of the template image is taken as the visual similarity.
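The similarity rule of claim 5 can be sketched in a few lines; the function and argument names are our own illustration. Similarity is zero when either image yields no SIFT feature points, otherwise it is the number of matching pairs divided by the number of template feature points.

```python
def visual_similarity(n_template_points, n_region_points, n_matched_pairs):
    """Visual similarity between the best matching region and the template."""
    if n_template_points == 0 or n_region_points == 0:
        return 0.0                  # no features on either side -> no match
    return n_matched_pairs / n_template_points

print(visual_similarity(40, 35, 10))  # 0.25
print(visual_similarity(0, 35, 0))    # 0.0 -> judged not to match
```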
  6. 如权利要求5所述的图像匹配的方法,其特征在于,若所述视觉相似度不为零,则获得所述模板图像与所述源图像的特征匹配点对,具体包括:The image matching method according to claim 5, wherein if the visual similarity is not zero, obtaining a feature matching point pair between the template image and the source image comprises:
    计算出所述模板图像的SIFT特征向量与所述最佳匹配区域的SIFT特征向量的最小欧氏距离和次小欧氏距离;Calculating a minimum Euclidean distance and a sub-Euclidean distance of the SIFT feature vector of the template image and the SIFT feature vector of the best matching region;
    在所述最小欧氏距离除以所述次小欧氏距离的商小于第一阈值时，将所述模板图像与所述源图像的特征点作为所述特征匹配点对，并对所述特征匹配点对的数目进行叠加。When the quotient of the minimum Euclidean distance divided by the second-smallest Euclidean distance is less than the first threshold, the feature points of the template image and the source image are taken as a feature matching point pair, and the count of feature matching point pairs is accumulated.
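The ratio test of claim 6 can be sketched as follows. The descriptors here are toy 2-D vectors (real SIFT feature vectors are 128-dimensional), and all names and the 0.75 threshold value are our own illustrative choices.

```python
import numpy as np

def ratio_test_matches(desc_t, desc_s, thresh=0.75):
    """For each template descriptor, find the nearest and second-nearest
    source descriptors by Euclidean distance; keep the pair when
    nearest / second_nearest < thresh (the 'first threshold')."""
    pairs = []
    for i, d in enumerate(desc_t):
        dists = np.linalg.norm(desc_s - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if second > 0 and nearest / second < thresh:
            pairs.append((i, int(order[0])))   # accumulate matching point pairs
    return pairs

desc_t = np.array([[0., 0.], [5., 5.]])                  # template descriptors
desc_s = np.array([[0.1, 0.], [3., 3.], [5., 5.1]])      # source descriptors
print(ratio_test_matches(desc_t, desc_s))  # [(0, 0), (1, 2)]
```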
  7. 如权利要求6所述的图像匹配的方法,其特征在于,当所述特征匹配点对的数目高于最小匹配数目时,所述根据所述特征匹配点对,计算出最佳匹配图像的定位坐标,包括:The image matching method according to claim 6, wherein when the number of the feature matching point pairs is higher than the minimum matching number, the positioning of the best matching image is calculated according to the feature matching point pair Coordinates, including:
    利用单映射函数查找出与所述特征匹配点对相对应的单映射矩阵;Using a single mapping function to find a single mapping matrix corresponding to the pair of feature matching points;
    根据所述单映射矩阵,利用向量数组的透视变换函数计算出所述模板图像在所述源图像上的最佳匹配区域的多个坐标点;Calculating, according to the single mapping matrix, a plurality of coordinate points of the best matching region of the template image on the source image by using a perspective transformation function of the vector array;
    计算出最佳匹配区域的中心点坐标,将所述中心点坐标作为所述最佳匹配图像的定位坐标。The center point coordinates of the best matching area are calculated, and the center point coordinates are used as the positioning coordinates of the best matching image.
  8. 如权利要求7所述的图像匹配的方法,其特征在于,所述根据所述单映射矩阵,利用向量数组的透视变换函数计算出所述模板图像在所述源图像上的最佳匹配区域的多个坐标点,具体包括:The method for image matching according to claim 7, wherein said calculating, based on said single mapping matrix, a best matching region of said template image on said source image by using a perspective transformation function of a vector array Multiple coordinate points, including:
    根据所述特征匹配点对,获取所述模板图像上的SIFT特征点的坐标及其一一匹配的、在所述源图像上的SIFT特征点的坐标; Obtaining, according to the feature matching point pair, coordinates of the SIFT feature points on the template image and coordinates of the SIFT feature points on the source image that are matched one by one;
    随机筛选出N对匹配点对的坐标，在所述模板图像和所述源图像之间进行映射，获得第一方程：Randomly selecting the coordinates of N matching point pairs and mapping between the template image and the source image to obtain the first equation:
    $$\begin{bmatrix} x'_i \\ y'_i \\ 1 \end{bmatrix} \sim H \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}, \quad i = 1, \ldots, N$$
    并且获得对应的映射系数,将所述映射系数组建为系数矩阵H,获得第二方程:And obtaining corresponding mapping coefficients, forming the mapping coefficients into a coefficient matrix H, and obtaining a second equation:
    $$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$
    其中,N≥4;[x’i,y’i]是所述源图像上的SIFT特征点的坐标;[xi,yi]是所述模板图像上的SIFT特征点的坐标;H是从所述模板图像上的SIFT特征点映射到所述源图像上的SIFT特征点的系数矩阵;Wherein N≥4; [x' i , y' i ] is the coordinate of the SIFT feature point on the source image; [x i , y i ] is the coordinate of the SIFT feature point on the template image; H is Mapping from a SIFT feature point on the template image to a coefficient matrix of SIFT feature points on the source image;
    利用所述系数矩阵计算出所述模板图像上的SIFT特征点映射到所述源图像上的实时坐标;Calculating, by using the coefficient matrix, the SIFT feature points on the template image to real-time coordinates on the source image;
    在所述源图像上的SIFT特征点的坐标与所述实时坐标之间的距离小于第二阈值时，利用第一方程和第二方程对所述系数矩阵H进行更新，直到所述系数矩阵H不再变化，并将不再变化的系数矩阵H作为所述单映射矩阵；When the distance between the coordinates of the SIFT feature points on the source image and the real-time coordinates is less than the second threshold, the coefficient matrix H is updated using the first equation and the second equation until the coefficient matrix H no longer changes, and the coefficient matrix H that no longer changes is taken as the single mapping matrix;
    根据所述单映射矩阵和第一方程,通过以下第三方程逐一计算出所述模板图像在所述最佳匹配区域的N个匹配点的坐标(x’,y’):According to the single mapping matrix and the first equation, the coordinates (x', y') of the N matching points of the template image in the best matching region are calculated one by one by the following third equation:
    $$x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \qquad y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}}$$
    将所述N个匹配点的坐标的中心点坐标作为所述最佳匹配图像的定位坐标。The center point coordinates of the coordinates of the N matching points are used as the positioning coordinates of the best matching image.
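The point-mapping and center-averaging steps of claim 8 can be sketched once a single mapping (homography) matrix H is known. This is an illustration under our own naming: H here is a toy translation-only matrix, so the example only shows how template points are pushed through the third equation and averaged.

```python
import numpy as np

def map_points(H, pts):
    """Apply the 3x3 single mapping matrix H to (x, y) points: multiply in
    homogeneous coordinates, then divide by h31*x + h32*y + h33."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

H = np.array([[1., 0., 10.],      # a pure translation by (+10, +5)
              [0., 1., 5.],
              [0., 0., 1.]])
template_pts = np.array([[0., 0.], [4., 0.], [4., 4.], [0., 4.]])
matched = map_points(H, template_pts)
center = matched.mean(axis=0)     # center of the N matched points
print(center)  # [12.  7.]
```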
  9. 如权利要求6所述的图像匹配的方法，其特征在于，当所述特征匹配点对的数目低于所述最小匹配数目，并且大于指定倍率系数时，其中，所述指定倍率系数小于所述最小匹配数目；The image matching method according to claim 6, wherein when the number of the feature matching point pairs is lower than the minimum matching number and greater than a specified magnification coefficient, wherein the specified magnification coefficient is smaller than the minimum matching number;
    则所述根据所述特征匹配点,计算出最佳匹配图像的定位坐标,具体为:Then, according to the feature matching point, the positioning coordinates of the best matching image are calculated, specifically:
    对所述模板图像进行SIFT强匹配，包括：根据所述特征匹配点对，获取所述模板图像上的SIFT特征点的坐标及其一一匹配的、在所述源图像上的SIFT特征点的坐标；Performing SIFT strong matching on the template image includes: acquiring, according to the feature matching point pairs, the coordinates of the SIFT feature points on the template image and the coordinates of their one-to-one matched SIFT feature points on the source image;
    对所述源图像上的SIFT特征点的坐标求均值处理,并将获得的均值坐标值作为所述最佳匹配图像的定位坐标。The coordinates of the SIFT feature points on the source image are averaged, and the obtained mean coordinate values are used as the positioning coordinates of the best matching image.
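Claim 9's fallback can be sketched in a couple of lines (toy coordinates and variable names are our own): when too few pairs survive for a homography, the positioning coordinate is simply the mean of the matched SIFT point coordinates on the source image.

```python
import numpy as np

matched_src_pts = np.array([[100., 40.],   # matched SIFT points on the source image
                            [104., 44.],
                            [ 96., 42.]])
location = matched_src_pts.mean(axis=0)    # average x and average y
print(location)  # [100.  42.]
```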
  10. 如权利要求6所述的图像匹配的方法,其特征在于,当所述特征匹配点对的数目小于指定倍率系数时,其中,所述指定倍率系数小于所述最小匹配数目;The image matching method according to claim 6, wherein when the number of the feature matching point pairs is smaller than the specified magnification coefficient, wherein the specified magnification coefficient is smaller than the minimum matching number;
    则所述根据所述特征匹配点,计算出最佳匹配图像的定位坐标,包括:选定所述最佳匹配区域中的特征点的邻近区域与所述模板图像进行局部模板匹配。And calculating, according to the feature matching point, the positioning coordinates of the best matching image, including: selecting a neighboring region of the feature point in the best matching region to perform partial template matching with the template image.
  11. 如权利要求10所述的图像匹配的方法,其特征在于,选定所述最佳匹配区域中的特征点的邻近区域与所述模板图像进行局部模板匹配,包括:The image matching method according to claim 10, wherein selecting a neighboring region of the feature point in the best matching region to perform partial template matching with the template image comprises:
    计算出所述特征点的邻近区域与所述模板图像的局部视觉相似度;Calculating a local visual similarity between the neighboring region of the feature point and the template image;
    若所述局部视觉相似度高于第三阈值,则判定匹配成功,根据局部模板匹配获得的坐标计算出所述最佳匹配图像的定位坐标;If the local visual similarity is higher than the third threshold, determining that the matching is successful, and calculating the positioning coordinates of the best matching image according to the coordinates obtained by the partial template matching;
    若所述局部视觉相似度低于所述第三阈值,则对所述模板图像与所述源图像进行全局多尺度模板匹配。If the local visual similarity is lower than the third threshold, global multi-scale template matching is performed on the template image and the source image.
  12. 如权利要求11所述的图像匹配的方法,其特征在于,若所述局部视觉相似度低于所述第三阈值,则对所述模板图像与所述源图像进行全局多尺度模板匹配,具体包括:The image matching method according to claim 11, wherein if the local visual similarity is lower than the third threshold, global multi-scale template matching is performed on the template image and the source image, specifically include:
    建立尺度列表;所述尺度列表包括多个尺度系数;Establishing a list of scales; the list of scales includes a plurality of scale factors;
    根据所述尺度列表中的尺度系数,对所述模板图像进行放缩; And scaling the template image according to the scale factor in the scale list;
    对进行放缩后的模板图像在所述源图像中进行全局模板匹配，记录每一次匹配获得的匹配值和匹配区域，形成最佳匹配集合；Performing global template matching in the source image with each scaled template image, recording the matching value and matching area obtained from each match, and forming a best matching set;
    计算完所有尺度的全局模板匹配后，将所述最佳匹配集合中的最大匹配值所对应的区域作为最佳匹配图像，并计算出所述最佳匹配图像的中心坐标值作为所述最佳匹配图像的定位坐标。After global template matching has been computed for all scales, the area corresponding to the largest matching value in the best matching set is taken as the best matching image, and the center coordinate value of the best matching image is calculated as the positioning coordinates of the best matching image.
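The multi-scale scheme of claims 11-12 can be sketched as below. This is an illustration under our own naming and two simplifications: a sum-of-absolute-differences score stands in for the standard correlation coefficient, and nearest-neighbour rescaling stands in for a proper image resize.

```python
import numpy as np

def best_sad_match(src, tmpl):
    """Best placement of tmpl in src under a negated sum-of-absolute-
    differences score (higher is better). Returns (score, (m, n))."""
    h1, w1 = tmpl.shape
    best = (-np.inf, (0, 0))
    for m in range(src.shape[0] - h1 + 1):
        for n in range(src.shape[1] - w1 + 1):
            score = -np.abs(src[m:m + h1, n:n + w1] - tmpl).sum()
            if score > best[0]:
                best = (score, (m, n))
    return best

def resize_nn(img, scale):
    """Nearest-neighbour rescaling of a 2-D array."""
    h = max(1, int(round(img.shape[0] * scale)))
    w = max(1, int(round(img.shape[1] * scale)))
    rows = (np.arange(h) / scale).astype(int).clip(0, img.shape[0] - 1)
    cols = (np.arange(w) / scale).astype(int).clip(0, img.shape[1] - 1)
    return img[np.ix_(rows, cols)]

def multiscale_match(src, tmpl, scales=(1.0, 0.5)):
    """Scale the template by each factor in the scale list, match globally,
    collect the best matching set, and return the center of the overall best."""
    candidates = []
    for s in scales:
        t = resize_nn(tmpl, s)
        if t.shape[0] > src.shape[0] or t.shape[1] > src.shape[1]:
            continue                       # this scale cannot fit
        score, (m, n) = best_sad_match(src, t)
        candidates.append((score, m, n, t.shape))
    score, m, n, (h, w) = max(candidates)  # largest matching value wins
    return (m + h / 2.0, n + w / 2.0)      # center coordinates

# Toy data: the source contains the template at half scale, at (3, 3).
tmpl = np.arange(16, dtype=float).reshape(4, 4)
src = np.zeros((6, 6))
src[3:5, 3:5] = tmpl[::2, ::2]
print(multiscale_match(src, tmpl))  # (4.0, 4.0)
```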
  13. 一种手机应用测试平台,所述手机应用测试平台上包括待测试手机应用的测试脚本及测试所需图像资源,其特征在于,包括:A mobile phone application testing platform, comprising: a test script for a mobile phone application to be tested and an image resource required for testing, wherein the mobile application test platform comprises:
    测试资源下载单元,用于下载待测试手机应用的测试脚本及所述图像资源至被测试手机中;a test resource downloading unit, configured to download a test script of the mobile phone application to be tested and the image resource to the tested mobile phone;
    截图单元,用于对被测试手机屏幕上显示的待测试手机应用的测试图像进行截图和上传;a screenshot unit for taking a screenshot and uploading a test image of the mobile phone application to be tested displayed on the screen of the tested mobile phone;
    图像匹配单元，用于采用如权利要求1～12任一项所述的图像匹配的方法，将所述测试图像作为模板图像在相应的图像资源上进行图像匹配，查找出所述测试图像的最佳匹配图像的定位坐标；以及，an image matching unit, configured to perform image matching on the corresponding image resource using the test image as the template image, by the image matching method according to any one of claims 1 to 12, to find the positioning coordinates of the best matching image of the test image; and,
    测试单元,用于根据所述图像匹配单元查找的最佳匹配图像的定位坐标,启动对所述测试图像所关联的测试代码的测试,将所述定位坐标和测试结果数据反馈至被测试手机。And a test unit, configured to start testing the test code associated with the test image according to the positioning coordinates of the best matching image searched by the image matching unit, and feed back the positioning coordinate and the test result data to the tested mobile phone.
  14. 如权利要求13所述的手机应用测试平台，其特征在于，所述手机应用测试平台上设有多种通用接口，并针对所述通用接口设有相应的驱动层。The mobile phone application test platform according to claim 13, wherein the mobile phone application test platform is provided with a plurality of universal interfaces, and a corresponding driver layer is provided for each universal interface.
  15. 如权利要求13或14所述的手机应用测试平台，其特征在于，所述待测试手机应用为手机游戏应用；则所述手机应用测试平台为手机游戏测试平台。The mobile phone application test platform according to claim 13 or 14, wherein the mobile phone application to be tested is a mobile game application, and the mobile phone application test platform is accordingly a mobile game test platform.
  16. 如权利要求15所述的手机应用测试平台，其特征在于，所述平台还包括测试中心；所述测试单元还用于将测试结果数据传输至所述测试中心；所述测试结果数据包括待测试的手机型号信息、测试过程所产生的截图、CPU信息、内存信息、耗电信息和网卡流量信息。 The mobile phone application test platform according to claim 15, wherein the platform further comprises a test center; the test unit is further configured to transmit the test result data to the test center; the test result data includes the model information of the mobile phone to be tested, the screenshots generated during testing, CPU information, memory information, power consumption information, and network-card traffic information.
PCT/CN2015/087745 2014-10-20 2015-08-21 Image matching method and platform for testing of mobile phone applications WO2016062159A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410557254.3 2014-10-20
CN201410557254.3A CN105513038B (en) 2014-10-20 2014-10-20 Image matching method and mobile phone application test platform

Publications (1)

Publication Number Publication Date
WO2016062159A1 true WO2016062159A1 (en) 2016-04-28

Family

ID=55720996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/087745 WO2016062159A1 (en) 2014-10-20 2015-08-21 Image matching method and platform for testing of mobile phone applications

Country Status (2)

Country Link
CN (1) CN105513038B (en)
WO (1) WO2016062159A1 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109313708A (en) * 2017-12-22 2019-02-05 深圳配天智能技术研究院有限公司 Image matching method and vision system
CN109859225A (en) * 2018-12-24 2019-06-07 中国电子科技集团公司第二十研究所 A kind of unmanned plane scene matching aided navigation localization method based on improvement ORB Feature Points Matching
CN109919222A (en) * 2019-03-05 2019-06-21 巢湖学院 A kind of image matching method based on SIFT feature and guarantor's distortion mapping
CN110533647A (en) * 2019-08-28 2019-12-03 东北大学 A kind of liquid crystal display Mark independent positioning method based on line characteristic matching
CN110738222A (en) * 2018-07-18 2020-01-31 深圳兆日科技股份有限公司 Image matching method and device, computer equipment and storage medium
CN110851368A (en) * 2019-11-19 2020-02-28 天津车之家数据信息技术有限公司 Multi-device collaborative testing method and device, computing device and system
CN110929741A (en) * 2019-11-22 2020-03-27 腾讯科技(深圳)有限公司 Image feature descriptor extraction method, device, equipment and storage medium
CN111028231A (en) * 2019-12-27 2020-04-17 易思维(杭州)科技有限公司 Workpiece position acquisition system based on ARM and FPGA
CN111462196A (en) * 2020-03-03 2020-07-28 中国电子科技集团公司第二十八研究所 Remote sensing image matching method based on cuckoo search and Krawtchouk moment invariant
CN111640126A (en) * 2020-05-29 2020-09-08 成都金盘电子科大多媒体技术有限公司 Artificial intelligence diagnosis auxiliary method based on medical image
CN111724438A (en) * 2019-03-18 2020-09-29 阿里巴巴集团控股有限公司 Data processing method and device
CN112260882A (en) * 2019-07-02 2021-01-22 北京融核科技有限公司 Mobile application and network service integrated test device capable of being deployed rapidly and method thereof
CN112434705A (en) * 2020-11-09 2021-03-02 中国航空工业集团公司洛阳电光设备研究所 Real-time SIFT image matching method based on Gaussian pyramid grouping
CN112818989A (en) * 2021-02-04 2021-05-18 成都工业学院 Image matching method based on gradient amplitude random sampling
CN112990228A (en) * 2021-03-05 2021-06-18 浙江商汤科技开发有限公司 Image feature matching method and related device, equipment and storage medium
CN113158928A (en) * 2021-04-27 2021-07-23 浙江云奕科技有限公司 Image recognition-based anti-counterfeiting method for concrete test block
CN113222028A (en) * 2021-05-19 2021-08-06 中国电子科技集团公司第二十八研究所 Image feature point real-time matching method based on multi-scale neighborhood gradient model
WO2022267287A1 (en) * 2021-06-25 2022-12-29 浙江商汤科技开发有限公司 Image registration method and related apparatus, and device and storage medium
CN116612306A (en) * 2023-07-17 2023-08-18 山东顺发重工有限公司 Computer vision-based intelligent flange plate alignment method and system
CN117764912A (en) * 2023-11-08 2024-03-26 东莞市中钢模具有限公司 Visual inspection method for deformation abnormality of automobile part die casting die

Families Citing this family (32)

Publication number Priority date Publication date Assignee Title
CN106228194B (en) * 2016-08-05 2018-09-18 腾讯科技(深圳)有限公司 Image lookup method and device
CN106775701B (en) * 2016-12-09 2021-02-05 武汉中软通证信息技术有限公司 Client automatic evidence obtaining method and system
CN106898017B (en) * 2017-02-27 2019-05-31 网易(杭州)网络有限公司 The method, apparatus and terminal device of image local area for identification
CN109150571B (en) * 2017-06-27 2021-10-12 中国电信股份有限公司 Grid mapping method and device
CN107274442B (en) * 2017-07-04 2020-03-10 北京云测信息技术有限公司 Image identification method and device
CN107784306A (en) * 2017-09-19 2018-03-09 浙江师范大学 A kind of automatic shopping car based on OpenCV
CN107885661A (en) * 2017-11-08 2018-04-06 百度在线网络技术(北京)有限公司 The terminal transparency method of testing and system of Mobile solution, equipment, medium
CN109901988A (en) * 2017-12-11 2019-06-18 北京京东尚科信息技术有限公司 A kind of page elements localization method and device for automatic test
CN108009033B (en) * 2017-12-25 2021-07-13 北京奇虎科技有限公司 Touch simulation method and device and mobile terminal
CN108211363B (en) * 2018-02-08 2021-05-04 腾讯科技(深圳)有限公司 Information processing method and device
CN108416801B (en) * 2018-02-28 2022-02-22 哈尔滨工程大学 Har-SURF-RAN characteristic point matching method for stereoscopic vision three-dimensional reconstruction
CN109044398B (en) * 2018-06-07 2021-10-19 深圳华声医疗技术股份有限公司 Ultrasound system imaging method, device and computer readable storage medium
CN109376289B (en) * 2018-10-17 2020-06-30 北京云测信息技术有限公司 Method and device for determining target application ranking in application search result
CN109447148A (en) * 2018-10-24 2019-03-08 北京赢销通软件技术有限公司 The method and device of images match during a kind of script execution
CN109544663B (en) * 2018-11-09 2023-01-06 腾讯科技(深圳)有限公司 Virtual scene recognition and interaction key position matching method and device of application program
CN109766420B (en) * 2018-12-27 2023-12-15 新疆大学 High-precision matching algorithm for printed Uyghur image words
CN109767447B (en) * 2019-01-04 2021-03-02 腾讯科技(深圳)有限公司 Template matching method, device, equipment and medium
CN109766943B (en) * 2019-01-10 2020-08-21 哈尔滨工业大学(深圳) Template matching method and system based on global perception diversity measurement
CN110196152A (en) * 2019-03-29 2019-09-03 山东建筑大学 The method for diagnosing faults and system of large-scale landscape lamp group based on machine vision
CN110134816B (en) * 2019-05-20 2021-01-15 清华大学深圳研究生院 Single picture geographical positioning method and system based on voting smoothing
CN110415276B (en) * 2019-07-30 2022-04-05 北京字节跳动网络技术有限公司 Motion information calculation method and device and electronic equipment
CN111079730B (en) * 2019-11-20 2023-12-22 北京云聚智慧科技有限公司 Method for determining area of sample graph in interface graph and electronic equipment
CN113066121A (en) * 2019-12-31 2021-07-02 深圳迈瑞生物医疗电子股份有限公司 Image analysis system and method for identifying repeat cells
CN111413350A (en) * 2020-03-24 2020-07-14 江苏斯德雷特通光光纤有限公司 Method and device for detecting defects of optical fiber flat cable
CN111476780B (en) * 2020-04-07 2023-04-07 腾讯科技(深圳)有限公司 Image detection method and device, electronic equipment and storage medium
CN111832571B (en) * 2020-07-09 2021-03-05 哈尔滨市科佳通用机电股份有限公司 Automatic detection method for truck brake beam strut fault
CN112015650B (en) * 2020-08-28 2022-06-03 上海冰鉴信息科技有限公司 Event testing method and device based on computer vision
CN112203023B (en) * 2020-09-18 2023-09-12 西安拙河安见信息科技有限公司 Billion pixel video generation method and device, equipment and medium
CN112528761B (en) * 2020-11-24 2023-04-07 上海墨说科教设备有限公司 Method and system for extracting specific target in image, electronic device and storage medium
CN112569591B (en) * 2021-03-01 2021-05-18 腾讯科技(深圳)有限公司 Data processing method, device and equipment and readable storage medium
CN113537351B (en) * 2021-07-16 2022-06-24 重庆邮电大学 Remote sensing image coordinate matching method for mobile equipment shooting
CN115661472A (en) * 2022-11-15 2023-01-31 中国平安财产保险股份有限公司 Image duplicate checking method and device, computer equipment and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101147159A (en) * 2005-02-21 2008-03-19 三菱电机株式会社 Fast method of object detection by statistical template matching
CN101639858A (en) * 2009-08-21 2010-02-03 深圳创维数字技术股份有限公司 Image search method based on target area matching
CN101770582A (en) * 2008-12-26 2010-07-07 鸿富锦精密工业(深圳)有限公司 Image matching system and method
US20100296736A1 (en) * 2009-05-25 2010-11-25 Canon Kabushiki Kaisha Image search apparatus and method thereof
US20120139942A1 (en) * 2010-12-03 2012-06-07 Palanisamy Onankuttai Subbian Image registration system
CN103607558A (en) * 2013-11-04 2014-02-26 深圳市中瀛鑫科技股份有限公司 Video monitoring system, target matching method and apparatus thereof
CN103823758A (en) * 2014-03-13 2014-05-28 北京金山网络科技有限公司 Browser testing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1275201C (en) * 2001-09-25 2006-09-13 松下电器产业株式会社 Parameter estimation apparatus and data collating apparatus
WO2007130688A2 (en) * 2006-05-10 2007-11-15 Evolution Robotics, Inc. Mobile computing device with imaging capability
CN102263957B (en) * 2011-07-25 2013-07-03 北京航空航天大学 Search-window adaptive parallax estimation method
CN103955931A (en) * 2014-04-29 2014-07-30 江苏物联网研究发展中心 Image matching method and device

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109313708B (en) * 2017-12-22 2023-03-21 深圳配天智能技术研究院有限公司 Image matching method and vision system
CN109313708A (en) * 2017-12-22 2019-02-05 深圳配天智能技术研究院有限公司 Image matching method and vision system
CN110738222A (en) * 2018-07-18 2020-01-31 深圳兆日科技股份有限公司 Image matching method and device, computer equipment and storage medium
CN110738222B (en) * 2018-07-18 2022-12-06 深圳兆日科技股份有限公司 Image matching method and device, computer equipment and storage medium
CN109859225A (en) * 2018-12-24 2019-06-07 中国电子科技集团公司第二十研究所 Unmanned aerial vehicle scene-matching navigation localization method based on improved ORB feature point matching
CN109919222A (en) * 2019-03-05 2019-06-21 巢湖学院 Image matching method based on SIFT features and isometric mapping
CN111724438B (en) * 2019-03-18 2024-04-02 阿里巴巴集团控股有限公司 Data processing method and device
CN111724438A (en) * 2019-03-18 2020-09-29 阿里巴巴集团控股有限公司 Data processing method and device
CN112260882A (en) * 2019-07-02 2021-01-22 北京融核科技有限公司 Mobile application and network service integrated test device capable of being deployed rapidly and method thereof
CN112260882B (en) * 2019-07-02 2022-06-24 北京融核科技有限公司 Mobile application and network service integrated test device capable of being deployed rapidly and method thereof
CN110533647B (en) * 2019-08-28 2023-02-03 东北大学 Liquid crystal display Mark point positioning method based on line feature matching
CN110533647A (en) * 2019-08-28 2019-12-03 东北大学 Liquid crystal display Mark point positioning method based on line feature matching
CN110851368A (en) * 2019-11-19 2020-02-28 天津车之家数据信息技术有限公司 Multi-device collaborative testing method and device, computing device and system
CN110929741A (en) * 2019-11-22 2020-03-27 腾讯科技(深圳)有限公司 Image feature descriptor extraction method, device, equipment and storage medium
CN111028231A (en) * 2019-12-27 2020-04-17 易思维(杭州)科技有限公司 Workpiece position acquisition system based on ARM and FPGA
CN111028231B (en) * 2019-12-27 2023-06-30 易思维(杭州)科技有限公司 Workpiece position acquisition system based on ARM and FPGA
CN111462196A (en) * 2020-03-03 2020-07-28 中国电子科技集团公司第二十八研究所 Remote sensing image matching method based on cuckoo search and Krawtchouk moment invariant
CN111640126A (en) * 2020-05-29 2020-09-08 成都金盘电子科大多媒体技术有限公司 Artificial intelligence diagnosis auxiliary method based on medical image
CN111640126B (en) * 2020-05-29 2023-08-22 成都金盘电子科大多媒体技术有限公司 Artificial intelligent diagnosis auxiliary method based on medical image
CN112434705A (en) * 2020-11-09 2021-03-02 中国航空工业集团公司洛阳电光设备研究所 Real-time SIFT image matching method based on Gaussian pyramid grouping
CN112818989A (en) * 2021-02-04 2021-05-18 成都工业学院 Image matching method based on gradient amplitude random sampling
CN112818989B (en) * 2021-02-04 2023-10-03 成都工业学院 Image matching method based on gradient amplitude random sampling
CN112990228A (en) * 2021-03-05 2021-06-18 浙江商汤科技开发有限公司 Image feature matching method and related device, equipment and storage medium
CN112990228B (en) * 2021-03-05 2024-03-29 浙江商汤科技开发有限公司 Image feature matching method, related device, equipment and storage medium
CN113158928A (en) * 2021-04-27 2021-07-23 浙江云奕科技有限公司 Image recognition-based anti-counterfeiting method for concrete test block
CN113158928B (en) * 2021-04-27 2023-09-19 浙江云奕科技有限公司 Concrete test block anti-counterfeiting method based on image recognition
CN113222028A (en) * 2021-05-19 2021-08-06 中国电子科技集团公司第二十八研究所 Image feature point real-time matching method based on multi-scale neighborhood gradient model
CN113222028B (en) * 2021-05-19 2022-09-06 中国电子科技集团公司第二十八研究所 Image feature point real-time matching method based on multi-scale neighborhood gradient model
WO2022267287A1 (en) * 2021-06-25 2022-12-29 浙江商汤科技开发有限公司 Image registration method and related apparatus, and device and storage medium
CN116612306B (en) * 2023-07-17 2023-09-26 山东顺发重工有限公司 Computer vision-based intelligent flange plate alignment method and system
CN116612306A (en) * 2023-07-17 2023-08-18 山东顺发重工有限公司 Computer vision-based intelligent flange plate alignment method and system
CN117764912A (en) * 2023-11-08 2024-03-26 东莞市中钢模具有限公司 Visual inspection method for deformation abnormality of automobile part die casting die

Also Published As

Publication number Publication date
CN105513038A (en) 2016-04-20
CN105513038B (en) 2019-04-09

Similar Documents

Publication Publication Date Title
WO2016062159A1 (en) Image matching method and platform for testing of mobile phone applications
CN105701766B (en) Image matching method and device
US9519968B2 (en) Calibrating visual sensors using homography operators
US7616807B2 (en) System and method for using texture landmarks for improved markerless tracking in augmented reality applications
Palenichka et al. Automatic extraction of control points for the registration of optical satellite and LiDAR images
CN109977191B (en) Problem map detection method, device, electronic equipment and medium
US11551388B2 (en) Image modification using detected symmetry
CN111444781B (en) Water meter reading identification method, device and storage medium
US20080205769A1 (en) Apparatus, method and program product for matching with a template
CN106355197A (en) Navigation image matching filtering method based on K-means clustering algorithm
CN110222641B (en) Method and apparatus for recognizing image
Son et al. A multi-vision sensor-based fast localization system with image matching for challenging outdoor environments
CN115061769B (en) Self-iteration RPA interface element matching method and system for supporting cross-resolution
CN110910445A (en) Object size detection method and device, detection equipment and storage medium
US20230146924A1 (en) Neural network analysis of lfa test strips
CN104268550A (en) Feature extraction method and device
Mu et al. Finding autofocus region in low contrast surveillance images using CNN-based saliency algorithm
CN112233186A (en) Equipment air tightness detection camera self-calibration method based on image perception
CN105930813B (en) Method for detecting text in any natural scene
JP2019220163A (en) System and method for finding line with vision system
WO2019165626A1 (en) Methods and apparatus to match images using semantic features
CN114463534A (en) Target key point detection method, device, equipment and storage medium
Reji et al. Comparative analysis in satellite image registration
CN113537026A (en) Primitive detection method, device, equipment and medium in building plan
CN112036398A (en) Text correction method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15852536

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15852536

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/10/2017)
