CN112446850A - Adaptation test method and device and electronic equipment - Google Patents

Adaptation test method and device and electronic equipment

Info

Publication number
CN112446850A
Authority
CN
China
Prior art keywords
image
screen
detected
key feature
page
Prior art date
Legal status
Pending
Application number
CN201910749852.3A
Other languages
Chinese (zh)
Inventor
周小群
陈琴
张亚男
李子乐
倪伟
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910749852.3A
Publication of CN112446850A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/36: Preventing errors by testing or debugging software
    • G06F 11/3668: Software testing
    • G06F 11/3672: Test management
    • G06F 11/3688: Test management for test execution, e.g. scheduling of test suites
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 3/40: Scaling the whole image or part thereof
    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/507: Summing image-intensity values; Histogram projection analysis

Abstract

The embodiments of the present application disclose an adaptation test method and apparatus, and an electronic device. The method includes: obtaining a reference image and an image to be detected; determining personalized resource positions contained in the image to be detected and the reference image by computing the difference between the two images, and masking the image content in those resource positions by generating an image mask; and performing feature comparison between the masked image to be detected and the reference image, and determining, from the comparison result, the adaptation result of the target page on the device to be tested. With the embodiments of the present application, adaptation testing across many types of wireless terminal devices can be performed at low cost even when a page contains data personalized for the user.

Description

Adaptation test method and device and electronic equipment
Technical Field
The present application relates to the field of adaptation testing technologies, and in particular, to an adaptation testing method and apparatus, and an electronic device.
Background
As commodity object information service systems moved from the PC era to the wireless era, adapting pages to different device models has been a persistent obstacle to front-end quality assurance. The main reasons wireless quality is hard to guarantee are the fragmentation of device models and operating systems among wireless terminals such as mobile phones, and the fact that verification is still done manually. The inefficiency of manual adaptation testing is well known: during every large promotional campaign, a great number of activity pages and business iterations must go online, the functional and performance testing workload is already heavy, and communicating, fixing and re-verifying page defects consumes much of the testers' and developers' time. Because adaptation testing is simple and visual yet requires substantial time and labor, intelligent adaptation based on computer vision has gradually become the inevitable direction for wireless adaptation testing.
However, as the precise, personalized operating model of the Internet (often described as 'a thousand faces for a thousand users') becomes more widespread, personalized recommended content is now common in many pages of a commodity object information service system, which creates new difficulties for intelligent adaptation based on image comparison. The introduction of personalized recommendations can also cause data-driven problems such as 'missing pits' (some resource positions absent) and 'empty floors'. In short, because the same page presents different data to different users, covering such data-related problems in wireless adaptation testing is costly.
Therefore, when a page contains data personalized for the user, how to perform adaptation tests on many types of wireless terminal devices at low cost has become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The present application provides an adaptation test method and apparatus, and an electronic device, with which adaptation tests on many types of wireless terminal devices can be performed at low cost even when a page contains data personalized for the user.
The application provides the following scheme:
an adaptation test method comprising:
obtaining a reference image and an image to be detected;
determining personalized resource positions included in the image to be detected and the reference image by calculating the difference between the image to be detected and the reference image, and performing covering processing on image contents in the personalized resource positions in a mode of generating an image mask;
and comparing the covered image to be detected with the reference image, and determining the adaptation result of the target page in the equipment to be tested according to the feature comparison result.
An image stitching processing method comprises the following steps:
acquiring a plurality of screen capture images aiming at the same target page, wherein a superposition area exists between the screen capture images of adjacent screens;
extracting key feature points in a target area in the screen capture images of adjacent screens to obtain at least one group of successfully matched key feature points, wherein each group of key feature points comprises two key feature points which are respectively feature points with similarity meeting conditions in the target area in a previous screen and a next screen;
determining a page sliding distance from the previous screen to the next screen according to the distance between the key feature point in the previous screen and the bottom of the previous screen and the distance between the key feature point in the next screen and the top of the next screen aiming at the same group of key feature points;
and splicing the screen capture images of the previous screen and the next screen according to the page sliding distance.
An adaptation test device, comprising:
an image obtaining unit for obtaining a reference image and a to-be-detected image;
the covering processing unit is used for determining the individualized resource positions included in the image to be detected and the reference image by calculating the difference between the image to be detected and the reference image, and covering the image content in the individualized resource positions by generating an image mask;
and the characteristic comparison unit is used for comparing the covered to-be-detected image with the reference image and determining the adaptation result of the target page in the to-be-detected device according to the characteristic comparison result.
An image stitching processing apparatus comprising:
the screen capture image acquisition unit is used for acquiring a plurality of screen capture images aiming at the same target page, wherein an overlapped area exists between the screen capture images of adjacent screens;
the characteristic point extraction unit is used for extracting key characteristic points in a target area in the screen capture images of adjacent screens to obtain at least one group of successfully matched key characteristic points, wherein each group of key characteristic points comprises two key characteristic points which are respectively characteristic points with similarity meeting conditions in the target area in the previous screen and the next screen;
the page sliding distance determining unit is used for determining the page sliding distance from the previous screen to the next screen according to the distance between the key feature point in the previous screen and the bottom of the previous screen and the distance between the key feature point in the next screen and the top of the next screen aiming at the same group of key feature points;
and the splicing unit is used for splicing the screen capture images of the previous screen and the next screen according to the page sliding distance.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
obtaining a reference image and an image to be detected;
determining personalized resource positions included in the image to be detected and the reference image by calculating the difference between the image to be detected and the reference image, and performing covering processing on image contents in the personalized resource positions in a mode of generating an image mask;
and comparing the covered image to be detected with the reference image, and determining the adaptation result of the target page in the equipment to be tested according to the feature comparison result.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
acquiring a plurality of screen capture images aiming at the same target page, wherein a superposition area exists between the screen capture images of adjacent screens;
extracting key feature points in a target area in the screen capture images of adjacent screens to obtain at least one group of successfully matched key feature points, wherein each group of key feature points comprises two key feature points which are respectively feature points with similarity meeting conditions in the target area in a previous screen and a next screen;
determining a page sliding distance from the previous screen to the next screen according to the distance between the key feature point in the previous screen and the bottom of the previous screen and the distance between the key feature point in the next screen and the top of the next screen aiming at the same group of key feature points;
and splicing the screen capture images of the previous screen and the next screen according to the page sliding distance.
According to the specific embodiments provided herein, the present application discloses the following technical effects:
With the embodiments of the present application, how well a target page is adapted to a device to be tested can be assessed through feature comparison between an image to be detected and a reference image. For the personalized regions of the page, the difference image between the image to be detected and the reference image is determined first, the image content in the personalized resource positions is then masked by means of an image mask, and feature comparison is performed on the masked image to be detected and reference image. In this way, structural adaptation problems of the target page on the device to be tested can be identified more reliably, without interference from the content shown in the personalized regions.
Of course, it is not necessary for any product to achieve all of the above-described advantages at the same time for the practice of the present application.
Drawings
In order to explain the embodiments of the present application or the prior art more clearly, the drawings required by the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a flow chart of a first method provided by an embodiment of the present application;
FIGS. 3-1 to 3-4 are schematic views of the personalized-region processing procedure provided by an embodiment of the present application;
FIG. 4 is a flow chart of a second method provided by embodiments of the present application;
FIG. 5 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
fig. 7 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
In the embodiments of the present application, to perform computer-vision-based adaptation testing, a target page can first be displayed on a terminal device and checked manually; if this check passes, a reference image can be captured from how the page is displayed on that device. An image to be detected can then be obtained from how the same target page is displayed on another device, and the adaptation of the page on that device is judged by comparing the image to be detected with the reference image. However, such comparison-based adaptation testing presupposes that the image to be detected and the reference image are identical, or at least highly similar. Clearly, when the target page implements 'a thousand faces for a thousand users', a direct comparison is not feasible and would produce many false positives. The purpose of testing how a particular page is adapted to a target device is mainly to detect structural problems such as distorted or misaligned layout when the page is rendered on that device; which specific business object happens to appear in a given resource slot (colloquially, a 'pit') is generally not what the adaptation test cares about. Under 'a thousand faces for a thousand users', the pages served to different users are in fact consistent in structure; they differ only in which objects are shown in particular pits, or in the ordering of 'floors' (groups of resource positions holding business objects that share some dimension). Therefore, if the differences between the specific business objects in the pits can be masked out when comparing the image to be detected with the reference image, comparison-based adaptation testing can be performed much more effectively.
Based on the above analysis, in the embodiments of the present application, after the image to be detected and the reference image are obtained, the differences between the business objects in the resource positions of the personalized regions are masked first, and feature comparison is then performed between the processed image to be detected and the reference image, so that the adaptation result of the target page on the device to be tested can be determined from information such as the similarity between the two.
Specifically, a difference image between the image to be detected and the reference image is calculated; in a preferred implementation, the personalized resource positions in the image to be detected and in the reference image can be determined through denoising and through the circumscribed rectangles of the difference regions, and the image content in those resource positions is then covered by generating an image mask. In this way, the differences between resource positions caused by personalization are hidden. Feature comparison between the image to be detected and the reference image then reveals whether there is an obvious structural difference between them; if there is, the target page is not well adapted to the device to be tested and may need adjustment.
In a specific implementation, from the perspective of system architecture and referring to FIG. 1, an embodiment of the present application first provides a test server, which can be connected to a plurality of devices to be tested, with another device serving as the reference device. Each device to be tested and the reference device can have installed an application program carrying a test package, whose main purpose is to test how a target page implemented in that application adapts to each device. The server can send a test-start instruction to the reference device and the devices to be tested; each device opens the target page, performs screen capture and other processing by running preset scripts, and uploads the resulting images to the test server, which then completes the subsequent masking, feature comparison and other operations and outputs the adaptation result.
The following describes in detail a specific implementation provided by the embodiments of the present application.
Example one
First, in this embodiment, from the perspective of the foregoing test server, an adaptation test method is provided, and referring to fig. 2, the method may specifically include:
s210: obtaining a reference image and an image to be detected;
In a specific implementation, the reference image and the image to be detected can be obtained from how the target page is displayed on the reference device and on the device to be tested, respectively. The target page may be, for example, a page in an application that is about to go online. In a commodity object information service system, large promotional events are often held and corresponding event venue pages are provided; such a page is only online during the event, and before it goes online its adaptation to various devices is usually tested.
The reference device may specifically be a terminal device on which the target page has been verified in advance by manual inspection and found to be successfully adapted. That is, a commonly used device with the target application installed can be prepared beforehand, the application being a build that carries a test package. Before the target page goes online, it is first displayed on the reference device, and whether it exhibits structural problems there is judged by manual inspection or similar means; if not, its display on the reference device can serve as the standard against which the page is adaptation-tested on the other devices to be tested. In addition, because the target application on the reference device carries a test package, it can perform the operations required by a test, including capturing screenshots of the page, according to the instructions of the test server.
The device to be tested can be mobile terminal devices of various brands and models, and due to the fact that the sizes, the resolutions and the like of various devices are different, the same page needs to be adapted to various devices, and expected display effects can be obtained on various devices. In the embodiment of the present application, the target application program may also be installed on the device to be tested, and the target application program may also have a test package, and can execute operations required by a specific test according to an instruction of the test server, and the like.
The test server may be located on one or more computer devices, and particularly, when performing a test, the test server may provide a two-dimensional code and other modes, and provide information such as a website of a target page. The reference device can display a specific target page in a way that the specific target application scans the two-dimensional code. Certainly, for other devices to be tested, the test server can directly push information such as the website of the target page to the devices to be tested, so that the target page can be displayed on the devices to be tested.
In addition, as described above, since the target application installed on the reference device and the device to be tested are provided with the test package, after the target page is displayed, the processing such as screen capture of the target page can be realized by the script written in the test package.
Since the target page may be quite long, the script can control the page to slide vertically along its length during the screen capture operation and take multiple screenshots in the process, obtaining multiple screen capture images. The subsequent comparison operations are performed on the basis of these screenshots.
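As an illustration of the script-driven capture just described, the following is a minimal sketch using adb; the swipe coordinates, screen count and file paths are assumptions for the example, not values from the embodiment.

```python
import subprocess

def capture_page(screens=5, swipe=(540, 1600, 540, 600)):
    """Capture several screens of the current page, scrolling between captures."""
    files = []
    for i in range(screens):
        remote = f"/sdcard/screen_{i}.png"
        subprocess.run(["adb", "shell", "screencap", "-p", remote], check=True)
        subprocess.run(["adb", "pull", remote, f"screen_{i}.png"], check=True)
        files.append(f"screen_{i}.png")
        # scroll by a fixed distance so adjacent captures overlap
        x1, y1, x2, y2 = swipe
        subprocess.run(["adb", "shell", "input", "swipe",
                        str(x1), str(y1), str(x2), str(y2), "300"], check=True)
    return files
```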
It should be noted that, because both the sliding and the screen capturing of the target page are controlled by the script, the number of screenshots obtained while the same target page is displayed on different devices is the same, and the capture positions of corresponding screens can be fully aligned, so that the screenshots of the same screen are comparable. Of course, in a specific implementation the screenshots may also be obtained manually rather than under script control.
In the embodiments of the present application, the image to be detected and the reference image may be derived from the screenshots of the target page taken on the device to be tested and on the reference device. Specifically, each captured single-screen image can be used directly: the single-screen image obtained on the reference device serves as a single-screen reference image, and the single-screen image of the corresponding screen obtained on the device to be tested serves as a single-screen image to be detected, so that feature comparison is carried out screen by screen. Alternatively, the multi-screen images obtained on the reference device and the device to be tested can be stitched into a panoramic reference image and a panoramic image to be detected, and feature comparison is then performed on the panoramic images. Comparison based on panoramic images also checks more efficiently whether the page has adaptive-layout problems: if, say, a first-screen module of the page is not adaptively laid out, then during single-screen comparison the pages on different test devices slide the same script-controlled distance, the displayed content is no longer aligned, a large number of duplicate issues are reported, and manual triage becomes inefficient. In practice the two approaches can also be combined: comparison is first performed on the panoramic images to determine, from a global perspective, whether there are obvious page structuring problems or adaptive-layout problems of page modules; if the full-image comparison succeeds, i.e. no such problems exist, single-screen comparison is then carried out to look for finer structural problems. A specific panoramic stitching method is described in detail below.
Whether the comparison is based on single-screen images or on panoramic images, the image content in the personalized resource positions can be masked first, so that any remaining difference reflects structural problems.
S220: determining personalized resource positions included in the image to be detected and the reference image by calculating the difference between the image to be detected and the reference image, and performing covering processing on image contents in the personalized resource positions in a mode of generating an image mask;
after the reference image and the image to be detected are obtained specifically, the difference between the two images, that is, the difference between the image contents of the two images, can be calculated first, so that the personalized resource bit included in the image to be detected and the reference image can be determined, and then, the image contents in the personalized resource bit can be covered by generating an image mask.
After the reference image and the image to be detected are obtained, both exist only as pictures, so the positions of the personalized resources cannot be read off directly. However, since the image content shown in a resource position within a personalized area usually varies with the identity of the user, this very difference can be used to locate such resource positions in the embodiments of the present application.
In a specific implementation, the difference image between the image to be detected and the reference image can be calculated, followed by binarization and denoising. The difference image is the content by which the two images differ, obtained by comparison. Binarization sets the gray value of each pixel to either 0 or 255, so that the whole image shows a clear black-and-white effect: after the difference image is binarized, the parts that are identical between the two images appear black and the parts that differ appear white, making the common and differing parts easy to distinguish.
For example, suppose the reference image and the image to be detected are as shown in FIG. 3-1. For the "people leaderboard" area, the specific business objects shown in each resource position differ, so a direct comparison of the two images would report large differences even though they are identical or similar in overall structure. To this end, in the embodiments of the present application, the difference image between the reference image and the image to be detected is determined first; for the images of FIG. 3-1, the resulting difference image may be as shown in FIG. 3-2. To avoid interference from pixel-level differences that arise when the same content is rendered on devices of different models (and resolutions), morphological operations such as dilation and erosion, i.e. denoising, are applied after the difference image is obtained; the processed difference image is shown in FIG. 3-3.
And then, masking the image content in the personalized resource bit by generating an image mask. The image mask is a region or a process for controlling image processing by blocking (entirely or partially) an image to be processed with a selected image, graphic, or object. In the embodiment of the present application, a black rectangular region may be specifically used to block a specific image. That is, firstly, the circumscribed rectangle of the white connected region in the difference image can be determined, and then the black rectangular image is selected for shielding according to the area and the position of the circumscribed rectangle.
It should be noted, however, that since the goal is mainly to mask the image content inside specific resource positions, in a specific implementation the area of each image mask can be bounded by limiting the area of the circumscribed rectangle, so that no mask extends beyond a single resource position. Specifically, the area of each difference block within the personalized region can be determined by constraining the circumscribed rectangles of the difference regions in the binarized difference image, and the image masks are generated according to these block areas. For example, after image masking, the reference image and the image to be detected of FIG. 3-1 may look as shown in FIGS. 3-4. That is, each differing area is masked only within the extent of its resource position and never over a larger area, so the judgment of resource-position structuring problems is not affected.
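The following is a minimal sketch of the masking step just described, using OpenCV; the threshold values, kernel size and the per-mask area cap are illustrative assumptions standing in for the circumscribed-rectangle area limit, not the patented parameters.

```python
import cv2
import numpy as np

def mask_personalized_slots(ref_img, test_img, max_area_ratio=0.25):
    """Black out regions that differ between the reference and test images."""
    diff = cv2.absdiff(ref_img, test_img)                        # difference image
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)  # binarization
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # denoise (erode then dilate)
    binary = cv2.dilate(binary, kernel, iterations=2)            # merge nearby fragments

    h, w = gray.shape
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, cw, ch = cv2.boundingRect(c)                       # circumscribed rectangle
        if cw * ch <= max_area_ratio * w * h:                    # skip blocks too large for one slot
            cv2.rectangle(ref_img, (x, y), (x + cw, y + ch), (0, 0, 0), -1)
            cv2.rectangle(test_img, (x, y), (x + cw, y + ch), (0, 0, 0), -1)
    return ref_img, test_img
```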
S230: and comparing the covered image to be detected with the reference image, and determining the adaptation result of the target page in the equipment to be tested according to the feature comparison result.
After the image content in the personalized resource positions is masked in this way, no content differences from personalization remain between the image to be detected and the reference image, so comparing the processed images reflects whether structural differences exist between them. The feature comparison itself can be done in several ways. In one way, HOG (Histogram of Oriented Gradients) features can be used: the two masked images are each characterized as an HOG feature vector, the similarity is obtained by computing the cosine of the angle between the two vectors, and if the similarity is below a given threshold the adaptation of the page image is judged not to meet the requirement. This detects, for example, whether a resource position or a group of resource positions is missing, whether a resource position that should span the full width of the screen actually does, whether the aspect ratio of a resource position is correct, and so on.
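A hedged sketch of this feature comparison step is given below: HOG descriptors plus cosine similarity, with an assumed resize size and similarity threshold.

```python
import cv2
import numpy as np

def pages_match(ref_img, test_img, threshold=0.9):
    """Compare two masked page images by HOG features and cosine similarity."""
    size = (256, 512)                                   # assumed common size (width, height)
    hog = cv2.HOGDescriptor()
    ref_vec = hog.compute(cv2.resize(cv2.cvtColor(ref_img, cv2.COLOR_BGR2GRAY), size))
    test_vec = hog.compute(cv2.resize(cv2.cvtColor(test_img, cv2.COLOR_BGR2GRAY), size))
    cos_sim = float(np.dot(ref_vec.ravel(), test_vec.ravel()) /
                    (np.linalg.norm(ref_vec) * np.linalg.norm(test_vec) + 1e-9))
    return cos_sim >= threshold, cos_sim
```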
As described above, the reference image can be used in two ways. In one, after the display of the target page on the reference device and on the device to be tested has been captured by executing the script and multi-screen images have been obtained on each, the single-screen image from the reference device is used directly as the single-screen reference image and the corresponding single-screen image from the device to be tested as the single-screen image to be detected, and feature comparison is performed screen by screen. In the other, before the screen-by-screen comparison, the multi-screen images obtained on the reference device and on the device to be tested are each stitched to produce a panoramic reference image and a panoramic image to be detected; the panoramic reference image and the panoramic image to be detected are masked and then compared, and if the adaptation result indicates a successful adaptation, the screen-by-screen feature comparison of the reference image and the image to be detected can additionally be triggered. That is, a coarse comparison is performed on the panoramic images first, followed by a finer comparison based on the single-screen images.
In a specific implementation of the screen capture, overlapping areas can be left between the screenshots of adjacent screens, and the screens are then stitched based on the image features in those overlapping areas. Specifically, key feature points are extracted from a target area of the screenshots of adjacent screens to obtain at least one group of successfully matched key feature points, each group comprising two key feature points, namely the feature points in the target area of the previous screen and of the next screen whose similarity meets a condition. Then, for each group of key feature points, the page sliding distance from the previous screen to the next screen is determined from the distance between the key feature point in the previous screen and the bottom of that screen and the distance between the key feature point in the next screen and the top of that screen. Finally, the screenshots of the previous and the next screen are stitched according to the page sliding distance; in particular, the final sliding distance between the two screens can be obtained from the page sliding distances computed for each successfully matched group of feature points.
In a specific implementation, the consecutive single-screen images received as input can first be cropped, removing the fixed-height header ("ceiling") area. In addition, to eliminate interference from floating layers fixed at set positions within a single-screen page, only the lower half of the first screen and the upper half of the second screen need be taken as the areas from which feature points are extracted. Feature points can then be extracted in these designated areas of the two screens (for example with SURF, Speeded Up Robust Features, a feature extraction algorithm), and matched using FLANN (Fast Library for Approximate Nearest Neighbors) or a similar method.
Then, for each successfully matched group of key feature points, the distance di1 between the key point in the first-screen image and the bottom of that image and the distance di2 between the key point in the second-screen image and the top of that image are calculated, and the length of the page overlap between the two screens for this group is di = di1 + di2, giving d = {d1, d2, ..., di, ..., dn}. In a specific implementation many matched groups can be obtained; most of them are usually correct, but some mismatches may exist. Therefore, after the overlap lengths are obtained from all groups, the mode of the translation distances in d (i.e. the value supported by the most data) can be taken as the final translation displacement for stitching, that is, the length of the overlapping page in the two-screen images, eliminating the interference of a small amount of noise and mismatched feature points. The effective areas of the first and second screens are then cropped according to this final translation distance and stitched together.
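The stitching logic described above might look as follows; SURF availability (opencv-contrib), the FLANN parameters and the ratio test are assumptions for the sketch.

```python
import cv2
import numpy as np

def stitch_two_screens(prev_img, next_img):
    """Stitch two consecutive screenshots using the mode of per-match overlaps."""
    h_prev = prev_img.shape[0]
    prev_roi = prev_img[h_prev // 2:]             # lower half of previous screen
    next_roi = next_img[:next_img.shape[0] // 2]  # upper half of next screen

    surf = cv2.xfeatures2d.SURF_create()          # requires opencv-contrib with SURF enabled
    kp1, des1 = surf.detectAndCompute(cv2.cvtColor(prev_roi, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = surf.detectAndCompute(cv2.cvtColor(next_roi, cv2.COLOR_BGR2GRAY), None)

    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = [m for m, n in flann.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

    overlaps = []
    for m in matches:
        d1 = prev_roi.shape[0] - kp1[m.queryIdx].pt[1]  # distance to bottom of previous screen
        d2 = kp2[m.trainIdx].pt[1]                      # distance to top of next screen
        overlaps.append(int(round(d1 + d2)))            # overlapped page length for this pair

    overlap = int(np.bincount(overlaps).argmax())       # take the mode to reject mismatches
    return np.vstack([prev_img[:h_prev - overlap], next_img])
```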
In this way, the multi-screen images captured on the reference device can be stitched into a panoramic reference image, and those captured on the device to be tested into a panoramic image to be detected. This makes it possible to judge whether the page as a whole is displayed with misalignment or similar problems, and only one reference image and one image to be detected need to be compared, which is more efficient. Of course, because a panoramic image is relatively large, some slight differences may be overlooked during panoramic comparison since they occupy a small proportion of the whole image. Therefore, in a preferred embodiment, once the panoramic comparison is completed and no obvious difference is found, single-screen comparison can be used to judge whether finer differences exist.
In addition, the premise of the comparison based on the panoramic image or the single-screen image is that the reference image is accurate. However, in practical applications, since the reference image is identified by manual detection, some problems may be missed or false detection. For example, in the manual detection process, the resource bit at a certain position is not found to be missing, and in this case, a large number of misjudgments may occur in the comparison process using this as a reference image. Therefore, in the preferred embodiment of the present application, in addition to finding a possible problem in the image to be inspected by means of image comparison, an abnormal problem which may exist when the target page is displayed in the device to be inspected can be detected by analyzing the single-screen image to be inspected.
Wherein, in the process of analyzing only according to the single-screen image to be detected, different modes can be adopted to realize aiming at different types of problems to be detected. For example, for the problem of "empty pits" (the content in a certain resource bit is empty) or "pit dropping" (a certain resource bit or some resource bits are missing), the single-screen image to be detected may be analyzed by a preset target object deep learning model (which may be obtained through massive data training). That is, some pages with the problem of "empty pit" or "dropped pit" may be collected or produced in advance, and then the features of these problem areas are learned through a target detection method based on deep learning, so that it is possible to determine whether the problem exists in the actual page to be detected through the learned model.
In addition, for the "empty floor" problem (for example, the area where a group of resource positions should appear is missing its content), consecutive resource-position-group modules ("floors") can be identified from the single-screen image to be detected, and whether the content of a group is empty is then determined by judging whether the distance between adjacent modules is smaller than a threshold. That is, when a "floor" is not empty, the module normally contains a number of resource positions, each showing information about some business object; if the "floor" is empty, those resource positions are not displayed, so the distance between two adjacent "floors" becomes too small, and this characteristic can be used to determine whether an "empty floor" problem exists.
Specifically, when identifying consecutive resource-position-group modules from the single-screen image, several approaches are possible. In one of them, the DOM (Document Object Model) element information of the target page as displayed on the device to be tested can be obtained, and the target areas that match the title characteristics of resource position groups are then screened out of the DOM element information as the modules. The objects that make up a page (or document) are organized in a tree structure, and the standard model used to represent the objects in the document is known as the DOM; it is essentially an object-oriented description of the document. The DOM defines the objects needed to represent and modify a document, the behavior and properties of those objects, and the relationships between them. In other words, the content actually displayed in the page under inspection is described by DOM elements, so the most authentic content of the page can be obtained from the DOM element information. Moreover, since "floor" modules usually have obvious characteristics in the page, the information of the specific "floor" modules can be extracted from the DOM by those characteristics and used for further judgment.
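A minimal sketch of the "empty floor" check is shown below, assuming the floor modules have already been located (for example from DOM title features) and are given as vertical pixel ranges in reading order; the gap threshold is an assumed value.

```python
def find_empty_floors(floor_regions, min_gap=80):
    """floor_regions: list of (top_y, bottom_y) for consecutive floor title bars."""
    suspects = []
    for (_, bottom_prev), (top_next, _) in zip(floor_regions, floor_regions[1:]):
        if top_next - bottom_prev < min_gap:   # too little room for resource positions
            suspects.append((bottom_prev, top_next))
    return suspects
```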
In addition, for the problem of different pits showing the same object, the resource position regions can be identified from the single-screen image to be detected; the images in different resource position regions are then compared for similarity to determine whether the abnormal situation exists in which different resource positions are associated with the same target object information.
And particularly, when the resource bit region is identified from the single-screen image to be detected, multiple modes can be provided. For example, since the resource bit region usually has a relatively obvious feature, DOM element information of the target page when the target page is displayed in the device to be tested may also be obtained, and then the target region conforming to the resource bit feature is screened out from the DOM element information as the resource bit region.
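The duplicate-object check across resource positions could be sketched as below, assuming the resource position regions have already been cropped (for example from DOM coordinates); the HSV histogram comparison and the 0.95 threshold are illustrative assumptions, not the embodiment's exact similarity measure.

```python
import cv2
from itertools import combinations

def find_duplicate_slots(slot_images, threshold=0.95):
    """Return index pairs of resource position crops that look like the same object."""
    hists = []
    for img in slot_images:
        hist = cv2.calcHist([cv2.cvtColor(img, cv2.COLOR_BGR2HSV)], [0, 1], None,
                            [50, 60], [0, 180, 0, 256])
        hists.append(cv2.normalize(hist, hist).flatten())
    duplicates = []
    for (i, h1), (j, h2) in combinations(enumerate(hists), 2):
        if cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL) > threshold:
            duplicates.append((i, j))
    return duplicates
```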
Moreover, the possible problems also include abnormal copywriting appearing in the page. Therefore, text recognition can be performed on the image to be detected to obtain the copy it contains, and whether an abnormal-copy problem exists can then be determined by keyword matching.
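A hedged sketch of the copy check follows: OCR the screenshot and match the recognized text against a keyword blacklist. pytesseract and the keyword list are assumptions; the embodiment does not name a specific OCR engine.

```python
import pytesseract
from PIL import Image

ABNORMAL_KEYWORDS = ["null", "undefined", "NaN", "error"]   # illustrative blacklist

def find_abnormal_copy(image_path):
    """Return the blacklist keywords found in the OCR text of the screenshot."""
    text = pytesseract.image_to_string(Image.open(image_path), lang="chi_sim+eng")
    return [kw for kw in ABNORMAL_KEYWORDS if kw in text]
```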
The detection based on various possible problems of the single-screen inspection image can be performed before or after the comparison between the single-screen inspection image and the single-screen reference image. The problem detection based on the single-screen image to be detected can be used as a supplement of image detection based on comparison, and the possible adaptation problem of the target page in the equipment to be detected can be detected more comprehensively under the condition that the reference image has errors and the like.
Finally, it should be noted that the adaptation test function provided by the embodiments of the present application can be an optional function used in special cases (for example, when a personalized area exists in the target page); for ordinary target pages, the adaptation test can still be carried out with a conventional scheme. To this end, a selection switch can be provided in the test system and turned on or off automatically by the program. For example, during testing, the difference between the reference image and the image to be detected can be measured first: if the difference falls within a preset range, i.e. it is small, the switch is turned off and the conventional approach is used, so that the two images need not be masked; if the difference is larger, the switch is turned on and the scheme provided by the embodiments of the present application is used, improving efficiency and avoiding interference.
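The automatic switch described above might be sketched as follows; the changed-pixel ratio of 2% is an assumed threshold, not a value from the embodiment.

```python
import cv2
import numpy as np

def needs_masking(ref_img, test_img, diff_ratio=0.02):
    """Decide whether to enable the masking scheme before comparison."""
    gray = cv2.cvtColor(cv2.absdiff(ref_img, test_img), cv2.COLOR_BGR2GRAY)
    changed = np.count_nonzero(gray > 30) / gray.size
    return changed > diff_ratio   # True: mask personalized resource positions first
```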
In summary, with the embodiments of the present application, how well a target page is adapted to a device to be tested can be assessed through feature comparison between an image to be detected and a reference image. For the personalized regions of the page, the difference image between the image to be detected and the reference image is determined first, the image content in the personalized resource positions is then masked by means of an image mask, and feature comparison is performed on the masked images. In this way, structural adaptation problems of the target page on the device to be tested can be identified more reliably, without interference from the content shown in the personalized regions.
Example two
In the second embodiment, the implementation manner of image stitching mentioned in the first embodiment is separately described, and in some other related application scenarios, the method may also be used to implement the stitching of the screenshot images. Specifically, the second embodiment provides an image stitching processing method, and referring to fig. 4, the method may specifically include:
s410: acquiring a plurality of screen capture images aiming at the same target page, wherein a superposition area exists between the screen capture images of adjacent screens;
s420: extracting key feature points in a target area in the screen capture images of adjacent screens to obtain at least one group of successfully matched key feature points, wherein each group of key feature points comprises two key feature points which are respectively feature points with similarity meeting conditions in the target area in a previous screen and a next screen;
s430: determining a page sliding distance from the previous screen to the next screen according to the distance between the key feature point in the previous screen and the bottom of the previous screen and the distance between the key feature point in the next screen and the top of the next screen aiming at the same group of key feature points;
s440: and splicing the screen capture images of the previous screen and the next screen according to the page sliding distance.
For the parts of the second embodiment that are not described in detail, reference may be made to the descriptions of the first embodiment, and details are not repeated here.
Corresponding to the first embodiment, an embodiment of the present application further provides an adaptation testing apparatus, and referring to fig. 5, the apparatus may specifically include:
an image obtaining unit 510 for obtaining a reference image and a to-be-inspected image;
a covering processing unit 520, configured to determine, by calculating a difference between the to-be-detected image and the reference image, a personalized resource bit included in the to-be-detected image and the reference image, and perform covering processing on image content in the personalized resource bit by generating an image mask;
and the feature comparison unit 530 is configured to perform feature comparison on the covered to-be-detected image and the reference image, and determine an adaptation result of the target page in the to-be-detected device according to a feature comparison result.
Specifically, the image obtaining unit may specifically include:
the screen capturing subunit is used for capturing the display conditions of the target page in the reference equipment and the equipment to be tested in a script executing mode, and respectively obtaining a multi-screen image;
and the single-screen image comparison subunit is used for taking the single-screen image obtained on the reference device as a single-screen reference image and taking the single-screen image of the corresponding screen number obtained on the device to be tested as a single-screen image to be detected, so as to compare the features of the reference image and the image to be detected screen by screen.
In a specific implementation, the apparatus may further include:
the panoramic image obtaining unit is used for respectively splicing the multi-screen images obtained from the reference equipment and the equipment to be tested to obtain a panoramic reference image and a panoramic image to be tested before carrying out feature comparison on the reference image and the image to be tested by taking a single-screen image as a unit;
and the panoramic image comparison unit is used for carrying out the feature comparison after the panoramic reference image and the panoramic image to be detected are subjected to covering treatment, and triggering the feature comparison of the reference image taking the single-screen image as a unit and the image to be detected if the adaptation result shows that the adaptation is successful.
When screen capture is carried out, overlapping areas exist between screen capture images of adjacent screens;
the panoramic image obtaining unit may specifically include:
the characteristic point extraction subunit is used for extracting key characteristic points in a target area in the screen capture images of adjacent screens to obtain at least one group of successfully matched key characteristic points, wherein each group of key characteristic points comprises two key characteristic points which are respectively characteristic points with similarity meeting conditions in the target area in the previous screen and the next screen;
the page sliding distance determining subunit is configured to determine, for the same group of key feature points, a page sliding distance from a previous screen to a subsequent screen according to a distance between a key feature point in the previous screen and the bottom of the previous screen and a distance between a key feature point in the subsequent screen and the top of the subsequent screen;
and the splicing subunit is used for splicing the screen capture images of the previous screen and the next screen according to the page sliding distance.
Specifically, the covering processing unit may specifically include:
the difference image obtaining subunit is used for calculating to obtain a difference image of the to-be-detected image and the reference image, and performing binarization and denoising processing;
and the image mask determining subunit is used for determining the area of the difference image in the personalized region by limiting the area of the circumscribed rectangle of the difference image, generating the image mask according to the area of the difference image, and covering the image content in the personalized resource position by using the image mask.
In addition, the apparatus may further include:
and the single-screen image analysis unit is used for analyzing the single-screen image to be detected and detecting the possible abnormal problems of the target page when the target page is displayed in the equipment to be detected.
Specifically, the single-screen image analysis unit may be specifically configured to:
and analyzing the single-screen image to be detected through a preset target object deep learning model, and determining whether the abnormal problem that the content in the resource bit is empty or the resource bit is lacked exists.
Or, the single-screen image analysis unit may specifically include:
a resource bit group module determining subunit for identifying consecutive resource bit group modules from the single screen image;
and the distance judgment subunit is used for determining whether the abnormal problem that the content of the resource bit group is empty exists or not by judging whether the distance between the adjacent resource bit group models is smaller than a threshold value or not.
Wherein the determining of the sub-unit by the resource bit group module specifically may include:
a DOM element information obtaining subunit, configured to obtain DOM element information of the document object model when the target page is displayed in the device to be tested;
and the determining subunit is used for screening out a target area which accords with the title characteristics of the resource bit group from the DOM element information and taking the target area as the resource bit group module.
In addition, the determining the sub-unit by the resource bit group module may specifically include:
a resource bit region identification subunit, configured to identify a resource bit region from the single-screen image;
and the abnormal resource bit identification subunit is used for comparing the similarity of the images in different resource bit regions to determine whether an abnormal problem that different resource bits are associated with the same target object information exists.
The resource bit region identification subunit may specifically include:
a DOM element information obtaining subunit, configured to obtain DOM element information of the document object model when the target page is displayed in the device to be tested;
and the resource bit determining subunit is used for screening out a target area meeting the characteristics of the resource bit from the DOM element information as the resource bit area.
Furthermore, the determining the subunit by the resource bit group module may specifically include:
the file obtaining subunit is used for carrying out character recognition on the image to be detected to obtain file information contained in the image to be detected;
and the abnormal case judging subunit is used for judging whether the abnormal problem containing the abnormal case exists or not in a keyword matching mode.
Corresponding to the second embodiment, an embodiment of the present application further provides an image stitching processing apparatus, and referring to fig. 6, the apparatus may specifically include:
a screen capture image obtaining unit 610, configured to obtain multiple screen capture images for a same target page, where overlapping regions exist between screen capture images of adjacent screens;
a feature point extraction unit 620, configured to extract key feature points in a target region in captured images of adjacent screens to obtain at least one group of successfully matched key feature points, where each group of key feature points includes two key feature points, which are feature points whose similarity in the target region in a previous screen and a next screen meets a condition;
a page sliding distance determining unit 630, configured to determine, for the same group of key feature points, a page sliding distance from a previous screen to a next screen according to a distance between a key feature point in the previous screen and the bottom of the previous screen and according to a distance between a key feature point in the next screen and the top of the next screen;
and the splicing unit 640 is configured to splice the screen capture images of the previous screen and the next screen according to the page sliding distance.
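The following sketch illustrates one way the units of this apparatus could work together, using ORB feature matching from OpenCV. The band ratio that bounds the target region, the use of the median over the ten best matches, and the variable names are assumptions made for illustration only.

```python
import cv2
import numpy as np

def stitch_adjacent_screens(prev_screen, next_screen, band_ratio=0.25):
    """Match key feature points inside the target regions (bottom band of the
    previous screen, top band of the next screen), derive the page sliding
    distance, and splice the two screenshots."""
    h_prev, h_next = prev_screen.shape[0], next_screen.shape[0]
    band = int(h_prev * band_ratio)

    prev_band = prev_screen[h_prev - band:, :]
    next_band = next_screen[:band, :]

    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(prev_band, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(next_band, cv2.COLOR_BGR2GRAY), None)
    if des1 is None or des2 is None:
        raise ValueError("no key feature points found in the target regions")

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if not matches:
        raise ValueError("no successfully matched key feature points")

    # For each matched group: the distance from the point to the bottom of the
    # previous screen plus the distance from its counterpart to the top of the
    # next screen equals the overlap height; the page sliding distance is the
    # screen height minus that overlap.
    overlaps = []
    for m in matches[:10]:
        y_prev = (h_prev - band) + kp1[m.queryIdx].pt[1]   # y in the full previous screen
        dist_to_bottom = h_prev - y_prev
        dist_to_top = kp2[m.trainIdx].pt[1]
        overlaps.append(dist_to_bottom + dist_to_top)
    overlap = int(round(float(np.median(overlaps))))
    overlap = max(0, min(overlap, h_next))

    # Splice: keep the previous screen and append the non-overlapping part of the next
    return np.vstack([prev_screen, next_screen[overlap:, :]])
```

Chaining this function over consecutive screenshots would yield the panoramic page image used for a full-page comparison.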
In addition, an embodiment of the present application further provides an electronic device, including:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
obtaining a reference image and an image to be detected;
determining personalized resource positions included in the image to be detected and the reference image by calculating the difference between the two images, and covering the image content in the personalized resource positions by generating an image mask;
and comparing the covered image to be detected with the reference image, and determining the adaptation result of the target page on the device to be tested according to the feature comparison result.
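The exact feature comparison is left open above. As one hedged realisation, the sketch below judges the adaptation by the ratio of pixels that still differ after the personalized positions have been covered in both images; the pixel threshold, the tolerance, and the resize fallback are illustrative assumptions.

```python
import cv2
import numpy as np

def adaptation_result(reference_covered, candidate_covered,
                      pixel_thresh=30, diff_ratio_limit=0.02):
    """Compare the covered image to be detected against the covered reference
    image and report whether the target page adapts correctly on the device
    under test."""
    ref_gray = cv2.cvtColor(reference_covered, cv2.COLOR_BGR2GRAY)
    cand_gray = cv2.cvtColor(candidate_covered, cv2.COLOR_BGR2GRAY)
    if cand_gray.shape != ref_gray.shape:
        # Simplification: scale to a common size before comparing
        cand_gray = cv2.resize(cand_gray, (ref_gray.shape[1], ref_gray.shape[0]))
    diff = cv2.absdiff(ref_gray, cand_gray)
    _, binary = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
    diff_ratio = float(np.count_nonzero(binary)) / binary.size
    return diff_ratio <= diff_ratio_limit, diff_ratio
```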
In addition, another electronic device is provided, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
acquiring a plurality of screen capture images of the same target page, wherein an overlapping area exists between the screen capture images of adjacent screens;
extracting key feature points in a target area of the screen capture images of adjacent screens to obtain at least one group of successfully matched key feature points, wherein each group of key feature points comprises two key feature points, which are feature points in the target area of the previous screen and of the next screen whose similarity satisfies a condition;
for the same group of key feature points, determining a page sliding distance from the previous screen to the next screen according to the distance between the key feature point in the previous screen and the bottom of the previous screen and the distance between the key feature point in the next screen and the top of the next screen;
and splicing the screen capture images of the previous screen and the next screen according to the page sliding distance.
Fig. 7 illustrates an architecture of an electronic device, which may include, in particular, a processor 710, a video display adapter 711, a disk drive 712, an input/output interface 713, a network interface 714, and a memory 720. The processor 710, the video display adapter 711, the disk drive 712, the input/output interface 713, the network interface 714, and the memory 720 may be communicatively coupled via a communication bus 730.
The processor 710 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solution provided in the present application.
The memory 720 may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 720 may store an operating system 721 for controlling the operation of the electronic device 700 and a basic input/output system (BIOS) for controlling low-level operations of the electronic device 700. In addition, a web browser 723, a data storage management system 724, an adaptation test processing system 725, and the like may also be stored. The adaptation test processing system 725 may be an application program that implements the operations of the foregoing steps in this embodiment. In short, when the technical solution provided by the present application is implemented in software or firmware, the relevant program code is stored in the memory 720 and called by the processor 710 for execution.
The input/output interface 713 is used for connecting an input/output module to enable information input and output. The I/O module may be configured as a component within the device (not shown in the figure) or may be external to the device to provide the corresponding functions. The input devices may include a keyboard, a mouse, a touch screen, a microphone, and various sensors, and the output devices may include a display, a speaker, a vibrator, an indicator light, and the like.
The network interface 714 is used for connecting a communication module (not shown in the figure) to enable communication between this device and other devices. The communication module may communicate in a wired manner (for example, USB or a network cable) or in a wireless manner (for example, a mobile network, Wi-Fi, or Bluetooth).
Bus 730 includes a path that transfers information between the various components of the device, such as processor 710, video display adapter 711, disk drive 712, input/output interface 713, network interface 714, and memory 720.
In addition, the electronic device 700 may also obtain information of specific derivation conditions from the virtual resource object derivation condition information database 741, for use in making condition judgment, and the like.
It should be noted that although the above-mentioned devices only show the processor 710, the video display adapter 711, the disk drive 712, the input/output interface 713, the network interface 714, the memory 720, the bus 730, etc., in a specific implementation, the devices may also include other components necessary for normal operation. Furthermore, it will be understood by those skilled in the art that the apparatus described above may also include only the components necessary to implement the solution of the present application, and not necessarily all of the components shown in the figures.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by means of software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the present application may be embodied, in essence or in part, in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments or in parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The adaptation test method, apparatus, and electronic device provided by the present application have been described in detail above. A specific example has been used in this description to explain the principle and implementation of the application, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the scope of application. In view of the above, this description should not be construed as limiting the application.

Claims (17)

1. An adaptation test method, comprising:
obtaining a reference image and an image to be detected;
determining personalized resource positions included in the image to be detected and the reference image by calculating the difference between the two images, and covering the image content in the personalized resource positions by generating an image mask;
and comparing the covered image to be detected with the reference image, and determining the adaptation result of the target page on the device to be tested according to the feature comparison result.
2. The method of claim 1,
the obtaining of the reference image and the image to be detected comprises:
capturing, by executing a script, how the target page is displayed on the reference device and on the device to be tested, and obtaining multi-screen images from each;
and taking the single-screen image obtained on the reference device as the single-screen reference image, and taking the single-screen image of the corresponding screen number obtained on the device to be tested as the single-screen image to be detected, so as to compare the features of the reference image and the image to be detected taking the single-screen image as the unit.
3. The method of claim 2, further comprising:
before the feature comparison between the reference image and the image to be detected taking the single-screen image as the unit, splicing the multi-screen images respectively obtained from the reference device and the device to be tested, to obtain a panoramic reference image and a panoramic image to be detected;
and performing the covering processing on the panoramic reference image and the panoramic image to be detected and comparing their features, and triggering the feature comparison between the reference image and the image to be detected taking the single-screen image as the unit if the adaptation result indicates that the adaptation is successful.
4. The method of claim 3,
when screen capture is carried out, overlapping areas exist between screen capture images of adjacent screens;
the splicing process comprises:
extracting key feature points in a target area of the screen capture images of adjacent screens to obtain at least one group of successfully matched key feature points, wherein each group of key feature points comprises two key feature points, which are feature points in the target area of the previous screen and of the next screen whose similarity satisfies a condition;
for the same group of key feature points, determining a page sliding distance from the previous screen to the next screen according to the distance between the key feature point in the previous screen and the bottom of the previous screen and the distance between the key feature point in the next screen and the top of the next screen;
and splicing the screen capture images of the previous screen and the next screen according to the page sliding distance.
5. The method according to claim 1 or 3,
the covering processing comprises:
computing a difference image between the image to be detected and the reference image, and performing binarization and denoising on the difference image;
and determining the area of the difference image that falls within the personalized region by constraining the circumscribed rectangle of the difference image, generating the image mask from that area, and covering the image content in the personalized resource position with the image mask.
6. The method of claim 2, further comprising:
analyzing the single-screen image to be detected, and detecting abnormal problems that may occur when the target page is displayed on the device to be tested.
7. The method of claim 6,
the detecting of abnormal problems that may occur when the target page is displayed on the device to be tested comprises:
analyzing the single-screen image to be detected through a preset deep learning model for target objects, and determining whether there is an abnormal problem in which the content of a resource bit is empty or a resource bit is missing.
8. The method of claim 6,
the detecting of abnormal problems that may occur when the target page is displayed on the device to be tested comprises:
identifying consecutive resource bit group modules from the single-screen image;
and determining whether there is an abnormal problem in which the content of a resource bit group is empty, by judging whether the distance between adjacent resource bit group modules is smaller than a threshold value.
9. The method of claim 8,
the identifying of consecutive resource bit group modules from the single-screen image comprises:
obtaining document object model (DOM) element information of the target page when the target page is displayed on the device to be tested;
and screening out, from the DOM element information, a target area that matches the title features of a resource bit group, as the resource bit group module.
10. The method of claim 6,
the detecting of abnormal problems that may occur when the target page is displayed on the device to be tested comprises:
identifying resource bit regions from the single-screen image;
and determining whether there is an abnormal problem in which different resource bits are associated with the same target object information, by comparing the similarity of the images in the different resource bit regions.
11. The method of claim 10,
the identifying of resource bit regions from the single-screen image comprises:
obtaining document object model (DOM) element information of the target page when the target page is displayed on the device to be tested;
and screening out, from the DOM element information, a target area that matches the features of a resource bit, as the resource bit region.
12. The method of claim 6,
the detecting of abnormal problems that may occur when the target page is displayed on the device to be tested comprises:
performing character recognition on the image to be detected to obtain the text (copy) information it contains;
and judging, by means of keyword matching, whether there is an abnormal problem of abnormal text being contained.
13. An image stitching processing method, comprising:
acquiring a plurality of screen capture images of the same target page, wherein an overlapping area exists between the screen capture images of adjacent screens;
extracting key feature points in a target area of the screen capture images of adjacent screens to obtain at least one group of successfully matched key feature points, wherein each group of key feature points comprises two key feature points, which are feature points in the target area of the previous screen and of the next screen whose similarity satisfies a condition;
for the same group of key feature points, determining a page sliding distance from the previous screen to the next screen according to the distance between the key feature point in the previous screen and the bottom of the previous screen and the distance between the key feature point in the next screen and the top of the next screen;
and splicing the screen capture images of the previous screen and the next screen according to the page sliding distance.
14. An adaptation testing device, comprising:
an image obtaining unit, configured to obtain a reference image and an image to be detected;
a covering processing unit, configured to determine personalized resource positions included in the image to be detected and the reference image by calculating the difference between the two images, and to cover the image content in the personalized resource positions by generating an image mask;
and a feature comparison unit, configured to compare the covered image to be detected with the reference image and to determine the adaptation result of the target page on the device to be tested according to the feature comparison result.
15. An image stitching processing apparatus, characterized by comprising:
a screen capture image acquisition unit, configured to acquire a plurality of screen capture images of the same target page, wherein an overlapping area exists between the screen capture images of adjacent screens;
a feature point extraction unit, configured to extract key feature points in a target area of the screen capture images of adjacent screens to obtain at least one group of successfully matched key feature points, wherein each group of key feature points comprises two key feature points, which are feature points in the target area of the previous screen and of the next screen whose similarity satisfies a condition;
a page sliding distance determining unit, configured to determine, for the same group of key feature points, a page sliding distance from the previous screen to the next screen according to the distance between the key feature point in the previous screen and the bottom of the previous screen and the distance between the key feature point in the next screen and the top of the next screen;
and a splicing unit, configured to splice the screen capture images of the previous screen and the next screen according to the page sliding distance.
16. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
obtaining a reference image and an image to be detected;
determining personalized resource positions included in the image to be detected and the reference image by calculating the difference between the two images, and covering the image content in the personalized resource positions by generating an image mask;
and comparing the covered image to be detected with the reference image, and determining the adaptation result of the target page on the device to be tested according to the feature comparison result.
17. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
acquiring a plurality of screen capture images of the same target page, wherein an overlapping area exists between the screen capture images of adjacent screens;
extracting key feature points in a target area of the screen capture images of adjacent screens to obtain at least one group of successfully matched key feature points, wherein each group of key feature points comprises two key feature points, which are feature points in the target area of the previous screen and of the next screen whose similarity satisfies a condition;
for the same group of key feature points, determining a page sliding distance from the previous screen to the next screen according to the distance between the key feature point in the previous screen and the bottom of the previous screen and the distance between the key feature point in the next screen and the top of the next screen;
and splicing the screen capture images of the previous screen and the next screen according to the page sliding distance.
CN201910749852.3A 2019-08-14 2019-08-14 Adaptation test method and device and electronic equipment Pending CN112446850A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910749852.3A CN112446850A (en) 2019-08-14 2019-08-14 Adaptation test method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910749852.3A CN112446850A (en) 2019-08-14 2019-08-14 Adaptation test method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112446850A true CN112446850A (en) 2021-03-05

Family

ID=74742155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910749852.3A Pending CN112446850A (en) 2019-08-14 2019-08-14 Adaptation test method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112446850A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008116557A2 (en) * 2007-03-24 2008-10-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device and method for adapting a mask image
US20110231823A1 (en) * 2010-03-22 2011-09-22 Lukas Fryc Automated visual testing
US20140189491A1 (en) * 2013-01-03 2014-07-03 Browserbite Oü Visual cross-browser layout testing method and system therefor
CN104133683A (en) * 2014-07-31 2014-11-05 上海二三四五网络科技股份有限公司 Screenshot obtaining method and device
JP2017162120A (en) * 2016-03-08 2017-09-14 三菱電機株式会社 Information processor, information processing method, and information processing program
CN109634788A (en) * 2017-10-09 2019-04-16 阿里巴巴集团控股有限公司 A kind of terminal adaptation verification method and system, terminal
CN109949295A (en) * 2019-03-21 2019-06-28 中国工商银行股份有限公司 Otherness detection method, device and the computer storage medium of browsing device net page

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Xu; WANG Di; ZHANG Yuan; YANG Min: "UI automated functional testing using an image comparison method", Computer Applications and Software, no. 10 *
ZHANG Yanzhu; WANG Tao: "Research on a fast registration algorithm for sequence images", Journal of Shenyang Ligong University, no. 03 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113360378A (en) * 2021-06-04 2021-09-07 北京房江湖科技有限公司 Regression testing method and device for application program for generating VR scene

Similar Documents

Publication Publication Date Title
CN107025174B (en) Method, device and readable storage medium for user interface anomaly test of equipment
CN108664364B (en) Terminal testing method and device
CN109857652A (en) A kind of automated testing method of user interface, terminal device and medium
CN110175609B (en) Interface element detection method, device and equipment
CN112818456B (en) Layer configuration method, electronic equipment and related products
CN111310826B (en) Method and device for detecting labeling abnormality of sample set and electronic equipment
CN112988557A (en) Search box positioning method, data acquisition device and medium
CN111309618A (en) Page element positioning method, page testing method and related device
CN112308069A (en) Click test method, device, equipment and storage medium for software interface
CN113657361A (en) Page abnormity detection method and device and electronic equipment
CN111339368B (en) Video retrieval method and device based on video fingerprint and electronic equipment
US11200694B2 (en) Apparatus and method for extracting object information
CN110765893B (en) Drawing file identification method, electronic equipment and related product
CN112052702A (en) Method and device for identifying two-dimensional code
US10299117B2 (en) Method for authenticating a mobile device and establishing a direct mirroring connection between the authenticated mobile device and a target screen device
CN110110110A (en) One kind is to scheme to search drawing method, device, electronic equipment and storage medium
CN112615873B (en) Internet of things equipment safety detection method, equipment, storage medium and device
CN112446850A (en) Adaptation test method and device and electronic equipment
CN115269359A (en) Terminal interface testing method and device
CN113138916A (en) Automatic testing method and system for picture structuring algorithm based on labeled sample
CN111949356A (en) Popup window processing method and device and electronic equipment
CN110992299A (en) Method and device for detecting browser compatibility
US20170277722A1 (en) Search service providing apparatus, system, method, and computer program
CN115018783A (en) Video watermark detection method and device, electronic equipment and storage medium
US11657489B2 (en) Segmentation of continuous dynamic scans

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination