CN116881175B - Application compatibility evaluation method and device, electronic equipment and storage medium

Application compatibility evaluation method and device, electronic equipment and storage medium

Info

Publication number
CN116881175B
Authority
CN
China
Prior art keywords
video
application
function
segment
performance data
Prior art date
Legal status
Active
Application number
CN202311152973.2A
Other languages
Chinese (zh)
Other versions
CN116881175A (en)
Inventor
Name withheld at the inventor's request
Current Assignee
Nfs China Software Co ltd
Original Assignee
Nfs China Software Co ltd
Priority date
Filing date
Publication date
Application filed by Nfs China Software Co ltd
Priority to CN202311152973.2A
Publication of CN116881175A
Application granted
Publication of CN116881175B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3692Test management for test results analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3696Methods or tools to render software testable
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The application discloses an application compatibility performance evaluation method and device, an electronic device, and a storage medium, belonging to the field of computer technology and intended to solve the problem of low application performance evaluation efficiency. The method comprises the following steps: acquiring application operation videos recorded for an application to be evaluated under different test environments; processing the application operation videos with a video local feature alignment technique to acquire performance data of the application to be evaluated under the different test environments; and performing compatibility performance evaluation of the application to be evaluated according to the performance data. Based on the application operation videos of the application to be evaluated, the method uses video local feature alignment to align video segments having the same operation function and extracts performance data from them, so that application compatibility performance is evaluated automatically and the efficiency of application compatibility evaluation is improved.

Description

Application compatibility evaluation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an application compatibility performance evaluation method, an apparatus, an electronic device, and a storage medium.
Background
Existing application compatibility performance evaluation is generally measured by performance indexes such as response time, number of concurrent users, throughput, and resource utilization rate. These four performance indexes fall mainly into two aspects: system resource utilization and system behavior (such as response time and throughput). They are correlated with one another and together reflect different aspects of performance. For example, response time, maximum number of concurrent users, throughput, and resource utilization can be used to measure, respectively, the software's timeliness, expansion capacity and capacity, processing capability, and operating state. The shorter the response time, the more concurrent users carried, the larger the throughput, and the fewer resources occupied, the better the system performance; conversely, the worse the performance.
In the prior art, the performance index analysis referenced by general application compatibility evaluation, for example application compatibility testing for domestic operating systems, mainly examines the response time of application software under two different test environments: a domestic desktop operating system and a native operating system. Existing compatibility tests typically proceed in one of two ways: manual testing, or testing with the aid of a third-party compatibility performance testing tool. Manual testing is inefficient, and automated testing with a third-party compatibility performance testing tool suffers at least from the drawbacks that, owing to the unpredictability of application responses, the accuracy of the performance evaluation is difficult to fully guarantee and interactive performance cannot be effectively evaluated.
It can be seen that there is still a need for improvement in the prior art application compatibility evaluation methods.
Disclosure of Invention
The embodiment of the application provides an application compatibility evaluation method and device, electronic equipment and a storage medium, which can improve the application compatibility evaluation efficiency.
In a first aspect, an embodiment of the present application provides an application compatibility evaluation method, including:
acquiring application operation videos which are respectively recorded under different test environments for an application to be evaluated;
processing the application operation video by adopting a video local feature alignment technology to acquire performance data of the application to be evaluated under different test environments;
and carrying out compatibility performance evaluation on the application to be evaluated according to the performance data.
In a second aspect, an embodiment of the present application provides an application compatibility evaluation apparatus, including:
the application operation video acquisition module is used for acquiring application operation videos which are respectively recorded under different test environments for the application to be evaluated;
the performance data acquisition module is used for processing the application operation video by adopting a video local feature alignment technology to acquire performance data of the application to be evaluated under different test environments;
And the compatibility performance evaluation module is used for evaluating the compatibility performance of the application to be evaluated according to the performance data.
In a third aspect, the embodiment of the present application further discloses an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the method for evaluating application compatibility according to the embodiment of the present application when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the application compatibility evaluation method disclosed in the embodiments of the present application.
According to the application compatibility evaluation method disclosed in the embodiments of the application, application operation videos recorded for the application to be evaluated under different test environments are acquired; the application operation videos are processed with a video local feature alignment technique to acquire performance data of the application to be evaluated under the different test environments; and compatibility performance evaluation of the application to be evaluated is performed according to the performance data. Based on the application operation videos of the application to be evaluated, the method uses video local feature alignment to align video segments having the same operation function and extracts performance data from them, so that the compatibility performance of the application is evaluated automatically and the efficiency of application compatibility evaluation is improved.
The foregoing is merely an overview of the technical solution of the present application. In order that the technical means of the application may be more clearly understood and implemented in accordance with the contents of the description, and to make the above and other objects, features and advantages of the application more apparent, specific embodiments are described below.
Drawings
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
FIG. 1 is a flow chart of an application compatibility evaluation method disclosed in an embodiment of the present application;
FIG. 2 is a flowchart illustrating steps for obtaining performance data in an application compatibility evaluation method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application system architecture of an application compatibility evaluation method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an application compatibility evaluation device according to an embodiment of the present application;
FIG. 5 is a second schematic diagram of an application compatibility evaluation device according to an embodiment of the present application;
FIG. 6 schematically shows a block diagram of an electronic device for performing the method according to the application; and
FIG. 7 schematically shows a memory unit for holding or carrying program code for implementing the method according to the application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application discloses an application compatibility evaluation method, as shown in fig. 1, which comprises the following steps: steps 110 to 130.
Step 110, acquiring application operation videos which are respectively recorded under different test environments for the application to be evaluated.
And 120, processing the application operation video by adopting a video local feature alignment technology to acquire performance data of the application to be evaluated under different test environments.
And 130, performing compatibility performance evaluation on the application to be evaluated according to the performance data.
Specific embodiments of each of these steps are illustrated below.
In an embodiment of the present application, the application to be evaluated may be a desktop application with user-interaction functions, for example an industrial design application. The functions of such an application are generally not random but determined functional responses, which more readily reveal performance gaps between different environments. The different test environments may be, for example, different operating systems; in particular, they may be a compatible platform and a native operating-system platform.
In the embodiment of the application, the two different test environments are denoted as a first test environment and a second test environment respectively, and the compatibility performance evaluation method for the application to be evaluated is described on this basis.
First, it is necessary to operate the application to be evaluated in the first test environment and the second test environment, respectively, and record, for the operation in each test environment, the video of the operation process and the interface feedback of the application to be evaluated, which is noted as "application operation video". For example, the application to be evaluated is run on a compatible platform and a native operating system platform respectively, the same operation is executed on the application to be evaluated, and meanwhile, application operation videos of each platform are recorded respectively.
Optionally, the acquiring application operation videos recorded under different test environments for the application to be evaluated includes: and acquiring application operation videos which are respectively recorded aiming at typical operation scenes of the application to be evaluated in different test environments.
Given that a desktop application generally supports several operations, in the embodiment of the present application a typical job scenario is first selected for recording the application operation video, in order to improve evaluation efficiency. The typical job scenario is determined according to the specific job scenarios of different desktop applications.
And then, based on application operation videos recorded in typical operation scenes under different test environments, acquiring performance data corresponding to each application operation video.
In the embodiment of the application, the video of the typical operation scene can be decomposed into independent functional fragments, so that the performance data are acquired according to the corresponding functional fragments one by one aiming at the video of the same typical operation scene under different test environments, and are used for carrying out compatibility performance evaluation so as to further improve the accuracy of the evaluation.
Optionally, as shown in fig. 2, in step 120, the processing the application operation video by using a video local feature alignment technique to obtain performance data of the application to be evaluated in the different test environments includes: substeps 1201-1203.
The following is an illustration of specific embodiments of the individual sub-steps.
And step 1201, performing functional segment alignment processing on each application operation video based on the video local features to obtain a single-function video segment group with the same single operation behavior under different test environments.
In the embodiment of the present application, one operation behavior of the application, such as a mouse click, model parameter input, or execution of a specific script, is defined as a "single operation behavior". The same single operation behavior means: the same single operation behavior that triggers the application to perform the same action and obtain the same operation effect. Optionally, image recognition may be performed on the image frames included in an application operation video to identify the video image frames in which each operation is performed, and thereby determine the video segment corresponding to each operation as the single-function video segment of that operation. Then, function alignment is performed on the single-function video segments from different test environments to obtain single-function video segment groups with the same operation behavior under the different test environments.
Optionally, in the step 1201, performing functional segment alignment processing on each of the application operation videos based on the video local feature to obtain a single-functional video segment group with the same single operation behavior under different test environments, where the method includes: substeps S11-S12.
Specific embodiments of each sub-step are set forth below.
And S11, performing function segmentation processing on each application operation video to obtain single-function video fragments with single operation behaviors under different test environments.
In some embodiments of the present application, the application operation videos are subjected to functional segmentation processing by combining image recognition and image processing technologies, so as to obtain single-function video segments with single operation behaviors under different test environments.
Optionally, the performing the function segmentation processing on each application operation video in the sub-step S11 to obtain a single-function video segment with a single operation behavior under different test environments includes: substeps S111-S115. Specific embodiments of the various sub-steps are set forth below.
In the substep S111, the starting video position corresponding to the preset operation behavior in each application operation video is obtained by respectively performing the preset operation behavior identification on each application operation video.
In the foregoing substep S111, by performing image recognition on each application operation video, a starting image frame of a preset operation behavior (such as a click behavior) in each application operation video is recognized, and a video position of the starting image frame in the application operation video is used as a starting video position corresponding to the preset operation behavior in the application operation video.
In some embodiments of the present application, an image-based behavior recognition algorithm may be used to recognize the preset operation behaviors in each application operation video; alternatively, a template-matching-based method may be used to recognize the preset operation behaviors in each application operation video.
Optionally, the obtaining, by performing preset operation behavior recognition on each application operation video, a starting video position corresponding to the preset operation behavior in each application operation video includes: image comparison is carried out on the image frames in the application operation video and template images of preset operation behaviors, so that target image frames matched with the template images in the application operation video are obtained; and obtaining a starting video position corresponding to a preset operation behavior according to the time position of the target image frame in the application operation video.
In the implementation, the image comparison is performed on the image frame in the application operation video and the template image of the preset operation behavior to obtain the target image frame matched with the template image in the application operation video, which specifically may include:
(1) And acquiring template images of each preset operation behavior, and converting the template images into gray images for storage.
For example, a template image when a user clicks an application interface, a template image when a user inputs model parameters at the application interface, a template image when a user executes a specific script at the application interface, and the like are acquired.
(2) When the application operation video function segmentation processing is carried out, each image frame included in each application operation video is respectively obtained and used as a target image, and the target image is converted into a gray level image.
(3) And calculating the average value of the template images of each preset operation behavior.
For example, an average value of the template image is obtained by calculating an average value of pixel values of each pixel point in the template image.
(4) For each pixel point (x, y) in the target image, calculate the value R(x, y) of the normalized cross-correlation function between the neighborhood of (x, y) and the template image.
(5) Find the maximum of the normalized cross-correlation values R(x, y) and return the corresponding displacement; this displacement is the position in the target image most similar to the template image.
The specific method of calculating the normalized cross-correlation value R(x, y) between the neighborhood of pixel point (x, y) and the template image is prior art and is not described in detail here.
The maximum normalized cross-correlation value can be located with a maximum-indexing method, taking the position of the pixel point with the maximum value as the displacement.
(6) Post-process the most similar positions of the target image and the template image, for example with non-maximum suppression and connectivity detection, to obtain a more accurate matching result between the target image and the template image.
Taking a preset operation behavior as an example of a mouse click action, a mouse highlighting tool Pointer Focus and a template matching algorithm can be used to achieve this goal. For example, using a Pointer Focus can achieve the effect of converting the operation of mouse clicking into a distinct animation at the time of video recording; and performing template matching by using a predefined operation template such as a mouse action and the like to realize the identification of the mouse clicking operation, so as to obtain a target image frame matched with the template image.
After the target image frame corresponding to the preset operation is identified, the target image frame with the forefront time position in the application operation video is further determined, and the target image frame with the forefront time position is marked, for example, the time position of the target image frame in the application operation video is written into a file. The marked time position of the target image frame in the application operation video is the initial video position of a single-function video segment corresponding to the current preset operation.
According to the method, the preset operation behaviors of each application operation video are respectively identified, and the starting video positions of one or more single-function video fragments of each preset operation behavior included in each application operation video can be obtained.
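By way of illustration only, the following sketch shows what such template-matching recognition of starting video positions could look like using OpenCV's normalized cross-correlation matching. The function name, the match threshold, and the treatment of the first matching frame of each run of matches as a starting position are assumptions of this example, not the patent's implementation.

```python
# Illustrative sketch only (not the patented implementation): locate the
# starting video positions of a preset operation behavior (e.g., a mouse
# click highlighted by PointerFocus) via normalized cross-correlation
# template matching with OpenCV.
import cv2

def find_behavior_start_positions(video_path: str, template_path: str,
                                  threshold: float = 0.9) -> list[float]:
    """Return timestamps (seconds) where the template starts matching."""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)  # template as gray image
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    starts, frame_idx, prev_hit = [], 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)          # target as gray image
        # Normalized cross-correlation over the frame; the maximum response
        # marks the position most similar to the template.
        result = cv2.matchTemplate(gray, template, cv2.TM_CCORR_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        hit = max_val >= threshold
        if hit and not prev_hit:   # first frame of a new occurrence = starting position
            starts.append(frame_idx / fps)
        prev_hit = hit
        frame_idx += 1
    cap.release()
    return starts
```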
And step S112, acquiring an initial segment video clip corresponding to a single operation behavior according to the initial video position corresponding to each preset operation behavior in each application operation video.
Optionally, the obtaining the initial segment video segment corresponding to the single operation behavior according to the initial video position corresponding to each preset operation behavior in each application operation video includes: for each initial video position except for the first initial video position in initial video positions corresponding to preset operation behaviors in application operation videos, taking the time position of the previous video image frame of the initial video position in the application operation videos as an end video position; and taking the end time position of the application operation video as an end video position; and taking the application operation video from each starting video position to the backward adjacent ending video position as a first segmented video segment corresponding to a single operation behavior.
For a given application operation video, the video is preliminarily segmented according to the starting video positions determined for the preset operation behaviors in it. Specifically, starting from the first starting video position of the application operation video, each starting video position included in the video is taken in turn as the current starting video position; the video image frame immediately preceding the next starting video position is determined, and its time position in the video is taken as the current end video position. The section of the application operation video from the current starting video position to the current end video position (including the video image frame at the current starting video position but excluding the video image frame at the next starting video position) is taken as the initially segmented video segment of the single operation behavior corresponding to the current starting video position; the video image frame at the current starting video position is the first frame of that segment, and the single operation behavior is the preset operation behavior recognized at that frame. When the current starting video position is the last starting video position included in the application operation video, the section from it to the end of the video is taken as its initially segmented video segment, which completes the preliminary segmentation of the application operation video.
According to the preliminary segmentation method, an application operation video can be segmented to obtain a plurality of initially segmented video segments. Each initial segment video segment corresponds to a single operation behavior, and the single operation behavior is a preset operation behavior corresponding to a first video image frame of the initial segment video segment. A single operational action may correspond to a plurality of initially segmented video segments.
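A minimal sketch of this preliminary segmentation rule, assuming sorted start timestamps (e.g., as returned by the previous sketch) and the video end time:

```python
# Each start position opens a segment that ends just before the next start
# position, or at the end of the video for the last start position.
def preliminary_segments(starts: list[float], video_end: float) -> list[tuple[float, float]]:
    segments = []
    for i, start in enumerate(starts):
        end = starts[i + 1] if i + 1 < len(starts) else video_end
        segments.append((start, end))  # end is exclusive: the next segment's first frame
    return segments
```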
And a substep S113, performing image processing on the primary segment video segment, and determining a steady-state picture image frame in the primary segment video segment.
During video recording, the human execution time of a preset operation behavior varies, so the same operation behavior may, for external reasons, take different amounts of application execution time, introducing redundant time into the measured performance. For example, how long a user holds a mouse click may differ, and so may the time from the start of the click to the application's corresponding feedback. To eliminate the influence of such external factors and improve the accuracy of the response performance data collected for a single operation behavior, redundancy-elimination processing is further performed on each initially segmented video segment to remove redundant video image frames.
In some embodiments of the present application, the initially segmented video segment is cut again by locating its steady-state picture, that is, the stable picture shown after the application to be evaluated has entered the corresponding function following execution of the preset operation behavior. The redundant portion of the measured performance caused by artifacts such as delay and waiting is thereby cut off, yielding a redundancy-free single-function video segment for the single operation behavior.
Optionally, the performing image processing on the primary segmented video segment, determining a steady-state image frame in the primary segmented video segment includes: acquiring a first set comprising image frames in the initial segment video segment; and determining the steady-state picture image frames in the initial segment video segment according to the video local characteristic inter-frame difference of the two adjacent frames in the first set.
In some embodiments of the present application, the steady-state picture image frames in the initially segmented video segment may be determined as follows.
First, a set VS of image frames is created, storing image frames in the initially segmented video segment.
The key frames in the initial segment video segments may be extracted to obtain a set VS of image frames of the initial segment video segments, e.g., denoted as a "first set". Of course, all frames in the initially segmented video segment may also be extracted to obtain the first set.
And secondly, establishing candidate frame set cds.
Thirdly, traverse the first set VS: starting from the first image frame of the initially segmented video segment, in the time order of the image frames, take each image frame in turn as the current image frame and calculate the inter-frame difference of the video local features between the current image frame and the next image frame; if the calculated difference is greater than a specified threshold TH, add the current frame to the candidate frame set cds. Continue until all image frames in the first set VS have been traversed. The video local feature may be pixel intensity, in which case the pixel-intensity inter-frame difference between the current image frame and the next image frame is calculated.
Fourth, a key frame set KF is initialized.
Step five, traversing the candidate frame set cds from front to back, and calculating the maximum difference diff_max of local pixel intensities between two image frames of every 1 frame in the candidate frame set cds; if the calculated maximum difference value diff_max of the local pixel intensities is larger than the specified threshold value TH, adding the image frame corresponding to the maximum difference value diff_max of the local pixel intensities larger than the specified threshold value TH into a key frame set KF, and ending the traversal process.
The two image frames every 1 frame may refer to two image frames of the candidate frame set cds that are separated by 1 image frame.
Specifically, starting from the first image frame of the candidate frame set cds, take it as the current image frame and calculate diff_max between it and the third image frame. If diff_max is greater than the specified threshold TH, add the current image frame (i.e., the first image frame) to the key frame set KF and end the traversal; if diff_max is not greater than TH, continue traversing the subsequent image frames. Take the second image frame as the current image frame and calculate diff_max between it and the fourth image frame; if diff_max is greater than TH, add the current image frame (i.e., the second image frame) to KF and end the traversal, otherwise do not add it and continue with the next image frame. Then take the third image frame as the current image frame and calculate diff_max between it and the fifth image frame; and so on, until an image frame with diff_max greater than the specified threshold TH is traversed. If all image frames in the candidate frame set cds are traversed without finding an image frame with diff_max greater than TH, no subsequent operation is performed on the initially segmented video segment, and the segment is discarded.
And sixthly, taking the image frames in the keyframe set KF as steady-state picture image frames in the initial segment video segment.
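The traversal of the third and fifth steps can be sketched as follows; treating the maximum absolute pixel-intensity difference as the "video local feature inter-frame difference", along with the function names, is an assumption of this illustration.

```python
import numpy as np

def frame_diff(a: np.ndarray, b: np.ndarray) -> float:
    """Maximum local pixel-intensity difference between two gray frames."""
    return float(np.max(np.abs(a.astype(int) - b.astype(int))))

def steady_state_frames(vs: list[np.ndarray], th: float) -> list[int]:
    # Third step: frames whose difference to the next frame exceeds TH
    # become the candidate frame set cds (stored as indices into vs).
    cds = [i for i in range(len(vs) - 1) if frame_diff(vs[i], vs[i + 1]) > th]
    # Fifth step: compare candidate frames separated by one frame in cds;
    # the first pair exceeding TH yields the key frame, ending the traversal.
    kf: list[int] = []
    for pos in range(len(cds) - 2):
        if frame_diff(vs[cds[pos]], vs[cds[pos + 2]]) > th:
            kf.append(cds[pos])
            break
    return kf  # empty result: discard this initially segmented video segment
```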
Sub-step S114, determining a redundant video start position of the primary segment video segment according to the steady-state picture image frame.
In some embodiments of the present application, the starting position of the redundant video in the primary segment video segment may be determined according to the time position of the steady-state picture image frame in the primary segment video segment. For example, the video image frame next to the last steady-state picture image frame in the initial segment video clip may be used as the redundant video start position.
And step 115, using the video segment before the start position of the redundant video in the initial segment video segment as a single-function video segment corresponding to a single operation behavior in a corresponding test environment. The corresponding test environment refers to a test environment corresponding to an application operation video to which the initial segment video segment belongs.
Further, a video before (excluding) the redundant video start position in the primary segment video segment is determined as a single function video segment corresponding to a single operation behavior.
According to the method, redundancy elimination processing is carried out on each primary segmented video segment, and at least one single-function video segment included in each application operation video can be obtained. The single-function video clips are video clips after redundancy removal, and each single-function video clip corresponds to a single operation behavior. The duration of the single-function video segment can be used as the response time of the compatible application function on the compatible system, and can characterize the performance parameters of the application to be evaluated for the single operation behavior corresponding to the single-function video segment.
Next, a single-function video clip group corresponding to each preset operation behavior in different test environments needs to be determined, so that response performance of the application to be evaluated for the same operation behavior is evaluated based on a plurality of single-function video clips in different test environments of the same operation behavior.
And S12, performing similarity comparison on the single-function video clips corresponding to different test environments based on the video local features to obtain a single-function video clip group with the same single operation behavior under different test environments.
In some embodiments of the present application, similarity comparison may be performed on the single-function video segments in different test environments, so as to determine the similarity between each single-function video segment obtained in the first test environment and each single-function video segment obtained in the second test environment, and the single-function video segments whose similarity satisfies the preset condition are used as a group of single-function video segments.
During application running, taking the click operation as an example, clicking different positions on an application page triggers the application to execute different operations. Application performance can therefore be accurately evaluated only by comparing, across different test environments, video segments of click actions that click the same position and trigger the application to execute the same operation.
On the other hand, clicking the same position of an application page under different running conditions may trigger the application to produce different execution effects. For example, clicking the same position of a page on different list pages of an application triggers the application to read and display the data of different entries, and the performance of reading and displaying different entries is not comparable; that is, video segments in which the same click operation yields different operation effects are not suitable for performance evaluation.
In the step, through similarity comparison of the single-function video clips, the single-function video clips respectively corresponding to the same single operation behavior under different test environments are found to form a single-function video clip group.
In some embodiments of the present application, single-function video segment alignment may be performed based on the manner in which the video local features match. For example, the video local features of the redundancy-free single-function video segments can be extracted based on a perceptual hash algorithm, and then similarity comparison is performed based on the extracted video local features to obtain single-function video segments with the same single operation behavior and the same operation effect under two different test environments, so that performance data are further extracted.
Optionally, the step S12 of performing similarity comparison on the single-function video segments corresponding to different test environments based on the video local features to obtain a single-function video segment group with the same single operation behavior under different test environments includes: substeps S121-S125.
Step S121, respectively generating a second set corresponding to each single-function video segment based on a perceptual hash algorithm; the second set includes image features corresponding to all video image frames in the single-function video clip.
Specifically, for each single-function video segment, the image characteristics of each video image frame included in the single-function video segment are calculated based on a perceptual hash algorithm, and the image characteristics of all video image frames included in the single-function video segment are used as segment characteristics of the single-function video segment and stored in a second set corresponding to the single-function video segment. Thus, a second set of segment features corresponding to each single-function video segment for storing the single-function video segment can be obtained.
When the segment features of each single-function video segment are generated based on the perceptual hash algorithm, each video image frame of the segment is converted from the pixel domain to the frequency domain via the DCT (Discrete Cosine Transform), and only the elements in the upper-left corner area of the coefficient matrix are retained to compute a hash value that characterizes the video image frame, which serves as the image feature of that frame.
In the embodiment of the application, the specific steps for respectively calculating the image characteristics of each video image frame included in the single-function video clip based on the perceptual hash algorithm are as follows.
In the first step, perform image conversion on one image frame of the single-function video segment: convert the image f(x, y) whose features are to be generated into a gray-scale image I(x, y).
In the second step, calculate the DCT coefficient matrix of the gray-scale image I(x, y) according to the following formula:

$F(u,v) = c(u)\,c(v)\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1} I(x,y)\cos\frac{(2x+1)u\pi}{2X}\cos\frac{(2y+1)v\pi}{2Y}$

where (x, y) denotes the spatial position of a pixel, I(x, y) denotes a pixel point of the input image frame, X and Y are the numbers of pixel rows and columns of the image frame, c(u) and c(v) are the compensation coefficients of row u and column v respectively, representing the weights of the frequency components of the image frame in the frequency domain, F(u, v) denotes the DCT-transformed value of the input pixels, and F denotes the DCT coefficient matrix.
In the third step, obtain the upper-left corner coefficient matrix: select the low-frequency portion in the upper-left corner of the DCT coefficient matrix F as the upper-left corner coefficient matrix F_L, where F_L represents the low-frequency component matrix of the DCT coefficient matrix F.
In the fourth step, calculate the mean of the upper-left corner coefficient matrix: compute the mean m of the upper-left corner coefficients.
In the fifth step, generate the digital feature f.
First clear the digital feature f and, for example, set the pointer variable i = 0.
Traverse the upper-left corner coefficient matrix F_L in a specified order, denoting the current element as e. Compare the current element with the upper-left corner coefficient mean m and set the value of the current feature bit f[i] according to the comparison result. For example, if the current element e is greater than the mean m, set the value of the current feature bit f[i] to 0; if the current element e is less than the mean m, set the value of f[i] to 1. Increment the pointer variable i by 1 to update the feature-bit pointer, and then compute the feature of the next position, until the traversal of all elements in the upper-left corner coefficient matrix F_L ends. At this point, f is the image feature of the current image frame.
According to the method, the image characteristics of each video image frame in the single-function video clip can be obtained. The image features of all video image frames in a single-function video clip may constitute a set of clip features for the single-function video clip, denoted as a "second set".
Alternatively, other methods known in the art may be used to calculate the image characteristics of each video image frame in a single-function video clip. In the embodiment of the present application, the method for calculating the image characteristics of each video image frame is not limited.
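A minimal sketch of such a DCT-based perceptual hash follows. The 32×32 resize and 8×8 upper-left block are assumptions (the text leaves the sizes open); the greater-than-mean → 0 convention follows the fifth step above.

```python
import cv2
import numpy as np

def phash(frame_gray: np.ndarray, block: int = 8) -> int:
    img = cv2.resize(frame_gray, (32, 32)).astype(np.float32)
    dct = cv2.dct(img)             # full DCT coefficient matrix F
    low = dct[:block, :block]      # upper-left low-frequency matrix F_L
    mean = float(low.mean())       # mean m of the retained coefficients
    bits = 0
    for e in low.flatten():        # traverse F_L in row-major order
        bits = (bits << 1) | (0 if e > mean else 1)
    return bits                    # 64-bit digital feature f
```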
And a substep S122, wherein, for each second set corresponding to the single-function video segment, feature blocks are performed on each image feature in the second set, so as to obtain feature blocks corresponding to each image feature.
For each second set, the image features of the video image frames obtained by perceptual hashing can each be cut evenly into d equal-length feature blocks. For example, for a second set FB containing a plurality of image features {f1, f2, ..., fn}, each image feature is divided into d feature blocks of equal length: assuming each image feature f has length L and is divided into d feature blocks, each feature block has length L/d. That is, the first image feature f1 can be segmented into feature blocks 1 to d.
And step S123, establishing an inverted index corresponding to each image feature according to the feature block corresponding to each image feature.
For a given Hamming distance threshold H, taking d−H out of the d feature blocks obtained by cutting yields g combination modes; g is calculated according to the formula $g = C_d^{\,d-H} = \binom{d}{d-H}$. Then, the inverted index of each image feature is established according to the structure of the inverted index.
In an embodiment of the application, the inverted index is built on the feature block. For example, an inverted index may be constructed based on a combination of the above feature blocks.
Taking the example of constructing an inverted index based on the above combinations of feature blocks, each combination will have a corresponding feature number. For each combination of feature blocks, the value of the combination may be used as a key for the inverted index, and the feature number corresponding to the combination may be used as the value of the index. The structure of the inverted index is similar to a mapping table in that the key is the value of the combination of feature blocks and the value is the feature number of the image feature that contains the combination. The value of the combination of the feature blocks may be generated in a preset manner, for example, may be a spliced feature value of the features of the feature blocks in the combination; the feature number is used to identify the image feature and may be an address or identifier of the image feature.
The specific embodiment of the inverted index is referred to in the prior art, and will not be described herein. When using an inverted index, for a value of some combination, the inverted index can quickly find feature numbers for image features that contain the value, which are used to identify potentially similar image features. Thus, in an embodiment of the present application, after a feature block is obtained and g combinations of feature blocks are calculated, an inverted index is constructed based on the values of the g combinations to index the corresponding image features.
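A sketch of such a feature-block inverted index, under the same assumptions as the pHash sketch (64-bit features split into d equal blocks; names illustrative). Each key combines the chosen block positions with their values and maps to the feature numbers containing that combination; by the pigeonhole principle, two features within Hamming distance H agree exactly on at least one such combination.

```python
from collections import defaultdict
from itertools import combinations

def split_blocks(bits: int, length: int = 64, d: int = 8) -> list[int]:
    step = length // d
    return [(bits >> (length - (i + 1) * step)) & ((1 << step) - 1)
            for i in range(d)]

def build_inverted_index(features: list[int], d: int = 8, h: int = 2) -> dict:
    index = defaultdict(list)
    for num, feat in enumerate(features):            # num = feature number
        blocks = split_blocks(feat, d=d)
        for combo in combinations(range(d), d - h):  # g = C(d, d-H) combinations
            key = (combo, tuple(blocks[i] for i in combo))
            index[key].append(num)                   # key -> feature numbers
    return index
```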
And S124, comparing the local feature similarity of the video to the second set corresponding to each single-function video segment under different test environments based on the inverted index of the image feature in the second set corresponding to each single-function video segment, so as to obtain the feature similarity between the second sets corresponding to any two single-function video segments under different test environments.
Then, for the current image feature to be compared, traverse the index keys in the inverted index to find target image features containing a matching feature-block combination, compare each target image feature with the current image feature bit by bit, and thereby search for image features similar to the current image feature to be compared.
The feature block-based comparison method is described below in conjunction with a specific example.
In the first step, the image features in the second set FA of the single-function video segment of a certain operation behavior in the first test environment, and in the second set FB of the single-function video segment of an operation behavior in the second test environment, are each cut into d blocks. Here it may be assumed that the second set FB is the feature set with the larger number of image features and the second set FA the one with the smaller number.
And secondly, establishing an inverted index for the image features in the second set FB with a large number of image features, and accelerating the comparison process.
For example, each image feature included in the second set FB is equally divided into d feature blocks of equal length, respectively. Taking the second set FB as an example, where n image features are { f1, f2, & gt, fn } respectively, d feature blocks of the image feature f1, d feature blocks of the image feature f2, …, and d feature blocks of the image feature fn will be obtained.
And then, establishing an inverted index of each image feature according to the feature block corresponding to each image feature. The specific embodiment of creating the inverted index of the image feature refers to the aforementioned sub-step S123, and will not be described here again.
After creating the inverted index of each image feature, for each image feature contained in the second set FB, an inverted index structure is obtained with the combination of feature blocks as keys and the feature number of the image feature as a value.
Third, initialize a counter, denoted N_sim, to record the number of similar image features in the two second sets (i.e., FA and FB).
Fourth, take from the second set FA the image feature f_a corresponding to one video image frame. Traverse the keys of the inverted index to find keys whose feature-block combination appears in f_a; according to the inverted index structure, look up the values indexed by those keys, i.e., the feature numbers. Then perform feature similarity comparison between f_a and the image features in the second set FB corresponding to the retrieved feature numbers, and record the similarity comparison results of f_a against those image features in FB.
Based on the inverted index structure, a key of an image feature can index multiple values, thereby locating multiple candidate image features to compare. This expands the range of video image frames covered by the feature comparison and helps improve the accuracy of the second-set feature similarity computed from the comparison results.
Optionally, the similarity comparison may be performed by computing the Hamming distance between image features. If the Hamming distance between two image features satisfies the set threshold, the recorded number of similar image features in the two second sets is updated, i.e., the counter N_sim is incremented by 1, and this comparison exits. The Hamming-distance condition can be written as:

$\mathrm{HD}(f_a, f_j) = \sum_{k} \left[\, f_a[k] \neq f_j[k] \,\right] \le \tau$

where $\tau$ denotes the set threshold and $f_j$ denotes image feature j in the second set FB.
Fifth, take the image feature of the next video image frame from the second set FA and repeat the fourth step until the comparison of all image features in FA is completed; finally, output the number N_sim of similar image features in the two second sets FA and FB.
Sixth, the feature similarity of the two second sets FA and FB is calculated using the following formula:

$S_{FA,FB} = \dfrac{N_{sim}}{\min\left(\mathrm{len}(FA),\ \mathrm{len}(FB)\right)}$

where len(FA) and len(FB) denote the numbers of image features in the second sets FA and FB respectively, $N_{sim}$ is the number of similar image features output in the fifth step, $S_{FA,FB}$ denotes the feature similarity of the second sets FA and FB, and min() takes the minimum.
According to the method, for the second set of single-function video clips of any single operation behavior in the first test environment, the feature similarity of the second set of single-function video clips of various single operation behaviors in other test environments can be calculated respectively. And then, the feature similarity between the second set of the single-function video clips of any single operation behavior in any test environment and the second set of the single-function video clips of various single operation behaviors in other test environments can be calculated, namely, the feature similarity between the second sets corresponding to any two single-function video clips in different test environments is calculated.
The above method for calculating the feature similarity of the second sets of single-function videos is a preferred similarity calculation method. In other embodiments of the present application, other video feature similarity calculation methods may be used to calculate the feature similarity between the second sets of single-function videos, such as the Self-taught Hashing (STH) algorithm; these are not enumerated one by one in the embodiments of the present application.
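Tying the fourth to sixth steps together, the following sketch computes the set-level similarity, reusing split_blocks and the index from the previous sketch; the counter name n_sim and the choice τ = H are assumptions of this illustration.

```python
from itertools import combinations

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def set_similarity(fa: list[int], fb: list[int], index: dict,
                   d: int = 8, h: int = 2, tau: int = 2) -> float:
    n_sim = 0
    for feat in fa:                                   # fourth step: probe each FA feature
        blocks = split_blocks(feat, d=d)              # from the inverted-index sketch
        candidates: set[int] = set()
        for combo in combinations(range(d), d - h):   # traverse the index keys
            key = (combo, tuple(blocks[i] for i in combo))
            candidates.update(index.get(key, ()))
        if any(hamming(feat, fb[j]) <= tau for j in candidates):
            n_sim += 1                                # one similar feature counted
    return n_sim / min(len(fa), len(fb))              # sixth step: S_FA,FB
```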
And step S125, obtaining the single-function video fragment group with the same single operation behavior under different test environments according to the feature similarity between the second sets corresponding to any two single-function video fragments under different test environments.
After obtaining the feature similarity between the second sets corresponding to every two single-function video clips, for the second set of single-function video clips of a certain operation behavior under one test environment (such as a first test environment) of the application to be evaluated, finding the second sets of single-function video clips under other test environments (such as the second test environment) with the feature similarity meeting the preset condition, and putting the found single-function video clips corresponding to the second sets meeting the preset condition into the same single-function video clip group. The single-function video clip group records the operation behaviors and the operation effects corresponding to the single-function video clips in the group, and can be considered as a plurality of single-function video clips with the same operation behavior under different test environments. The feature similarity satisfies a preset condition may be: the feature similarity is larger than a preset similarity threshold, or the feature similarity is maximum, and the like.
For example: in the first test environment, a second set FA of single-function video clips SA of the operation behavior A; according to the method, under the second test environment, the feature similarity between the second set FB of the single-function video clips SB of the operation behavior A and the FA meets the preset condition, and then the single-function video clips SA and the single-function video clips SB are placed into a single-function video clip group corresponding to the operation behavior A; if the feature similarity between the second set FC of the single-function video clips SC of the operation behavior A and the FA does not meet the preset condition under the second test environment, the single-function video clips SC are not put into the single-function video clip group corresponding to the operation behavior A; the group of single-function video clips corresponding to the operation behavior a under different test environments may include { single-function video clip SA, single-function video clip SB }.
After the single-function video segment groups corresponding to the same operation behavior under different test environments are obtained, single-function performance data corresponding to each single-function video segment in each single-function video segment group are further obtained.
Sub-step 1202, obtaining single-function performance data corresponding to each single-function video clip in each single-function video clip group.
And for each single-function video segment group, acquiring the performance data of each single-function video segment in the single-function video segments included in the group, wherein the performance data is the single-function performance data of the operation function corresponding to the operation behavior in the single-function video segment.
Optionally, the obtaining the single-function performance data corresponding to each single-function video clip in each single-function video clip group includes: and acquiring the duration of each single-function video segment in each single-function video segment group, and taking the duration as single-function performance data corresponding to each single-function video segment. The duration may be a playing duration of the single-function video clip.
Sub-step 1203, obtaining performance data of the application to be evaluated under the same test environment according to the single-function performance data of all single-function video clips corresponding to the same test environment in all the single-function video clip groups.
As previously described, the performance data may be: duration of the single-function video clip. Correspondingly, according to the single-function performance data of all the single-function video clips corresponding to the same test environment in all the single-function video clip groups, the obtaining the performance data of the application to be evaluated under the same test environment may include: and taking the sum of the time lengths of all the single-function video clips of the application operation video recorded in the appointed test environment in all the single-function video clip groups as the performance data of the application to be evaluated in the appointed test environment. For example, the sum of the durations of all the single-function video clips of the application operation video recorded in the first test environment in the M single-function video clip groups obtained in the previous step may be used as the performance data of the application to be evaluated in the first test environment; and taking the sum of the time lengths of the single-function video clips recorded in the second test environment in the M single-function video clip groups obtained in the previous step as performance data of the application to be evaluated in the second test environment. Wherein M is an integer greater than 1.
After the performance data of the application to be evaluated under the different test environments (such as the first test environment and the second test environment) are obtained, step 130 is executed to perform application compatibility performance evaluation on the application to be evaluated according to these performance data. Take as an example the case where the performance data of the application to be evaluated in a specified test environment is the sum T_sum of the durations of the M single-function video clips, in the M single-function video clip groups, of the application operation video recorded in that test environment. After the sums T_sum under the different test environments are obtained, each T_sum serves as the performance data, and the compatibility performance of the application to be evaluated can be evaluated by comparing the values of T_sum under the different test environments. For example, the smaller the difference between the T_sum values of the application to be evaluated under the different test environments, the better its cross-platform compatibility may be considered; conversely, a large difference indicates poor cross-platform compatibility.
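A minimal runnable sketch of this aggregation and comparison, with a hypothetical Clip record standing in for a single-function video clip and its single-function performance data:

```python
from dataclasses import dataclass

@dataclass
class Clip:               # hypothetical clip record
    env: str              # test environment the video was recorded in
    duration: float       # single-function performance data (seconds)

def environment_duration_sum(clip_groups, env):
    """T_sum: total duration of all single-function video clips recorded
    in test environment `env`, over all M single-function clip groups."""
    return sum(c.duration for group in clip_groups
               for c in group if c.env == env)

# Example: two clip groups, each holding one clip per environment.
groups = [[Clip("first", 3.2), Clip("second", 4.1)],
          [Clip("first", 2.0), Clip("second", 2.4)]]
gap = abs(environment_duration_sum(groups, "first")
          - environment_duration_sum(groups, "second"))
# A smaller gap suggests better cross-platform compatibility.
```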
The application compatibility evaluation method disclosed by the embodiment of the application can be applied to an application compatibility evaluation system shown in fig. 3. As shown in fig. 3, the application compatibility evaluation system includes: a typical job scenario video input module 310, a video local feature based functional segment alignment module 320, a performance data acquisition and comparison assessment module 330.
In the typical job scenario video input module 310, a typical job scenario of the application to be evaluated may be input, and an application operation video of the application to be evaluated under that typical job scenario in each evaluation test environment may be acquired. For example, the test environments may be a domestic desktop operating system and the original operating system; the user operates the application to be evaluated in both test environments and records an application operation video of the typical job scenario in each.
In the functional segment alignment module 320 based on the video local feature, the functional segment alignment process is performed on each application operation video based on the video local feature, so as to obtain a single functional video segment group with the same single operation behavior under different test environments. Specifically, the video local feature-based functional segment alignment module 320 may include the following three sub-modules: an operation behavior recognition sub-module based on image recognition, a steady-state picture extraction and redundancy removal sub-module and a functional segment division alignment sub-module based on video local characteristics.
The operation behavior recognition sub-module is used for obtaining initial video positions corresponding to preset operation behaviors in each application operation video by respectively carrying out preset operation behavior recognition on each application operation video, and obtaining initial segmentation video fragments corresponding to single operation behaviors according to the initial video positions corresponding to each preset operation behavior in each application operation video.
And the steady-state picture extraction and redundancy elimination sub-module is used for carrying out image processing on the primary segment video segments, determining steady-state picture image frames in the primary segment video segments, then determining the redundant video starting positions of the primary segment video segments according to the steady-state picture image frames, and finally taking the video segments in the primary segment video segments before the redundant video starting positions as single-function video segments corresponding to single operation behaviors under corresponding test environments.
The video local feature-based functional segment division alignment sub-module is used for carrying out similarity comparison on the single-function video segments corresponding to different test environments based on the video local features to obtain single-function video segment groups with the same single operation behavior under different test environments.
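One plausible reading of the steady-state picture extraction performed by the second sub-module above is frame differencing: a picture is treated as steady once it stops changing over several consecutive frames, and the first steady frame marks the redundant video starting position. The sketch below follows that reading; the threshold and window size are assumptions rather than values from the embodiment:

```python
import cv2
import numpy as np

def redundant_video_start(frames, diff_threshold=2.0, hold=5):
    """Index of the first steady-state frame, taken here as the start of
    the redundant portion of a primary segmented video segment.
    `frames` are BGR images; the thresholds are assumptions."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    for i in range(len(gray) - hold):
        # steady state: picture stops changing for `hold` consecutive frames
        if all(np.mean(cv2.absdiff(gray[j], gray[j + 1])) < diff_threshold
               for j in range(i, i + hold)):
            return i
    return len(frames)  # no steady state found: nothing is trimmed
```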
The performance data acquisition and comparison evaluation module 330 is configured to acquire the single-function performance data corresponding to each single-function video segment in each single-function video segment group, and to acquire the performance data of the application to be evaluated under the same test environment according to the single-function performance data of all single-function video segments corresponding to the same test environment in all the single-function video segment groups; it is further configured to perform compatibility performance evaluation on the application to be evaluated according to the performance data.
According to the application compatibility evaluation method disclosed by the embodiment of the application, application operation videos which are respectively recorded under different test environments for the application to be evaluated are obtained; processing the application operation video by adopting a video local feature alignment technology to acquire performance data of the application to be evaluated under different test environments; and carrying out compatibility performance evaluation on the application to be evaluated according to the performance data, realizing application operation video based on the application to be evaluated, aligning video clips of the same operation function by adopting a video local feature alignment technology, extracting the performance data, and automatically carrying out compatibility performance evaluation on the application, thereby improving the efficiency of application compatibility evaluation.
Compatible-application performance evaluation is realized by comparing the performance of an application on the compatible platform with its performance on the original operating system platform. When purely manual test evaluation is adopted, as in the prior art, every desktop application must be evaluated by hand; because desktop applications are numerous and varied, this is inefficient and consumes a large amount of manpower and time. With the application compatibility performance evaluation method disclosed in the embodiment of the application, only the video recording is performed manually, following the same test case script during testing; the compatibility performance data are then extracted automatically through video analysis, processing and comparison techniques to form the compatible-application performance evaluation result, so the efficiency is higher.
On the other hand, compared with evaluating application compatibility by means of a third-party compatibility testing tool as in the prior art, the application compatibility evaluation method disclosed in the embodiment of the application performs image analysis and feature matching on single-function video segments of single operation behaviors in a typical job scenario, and performs the application compatibility evaluation on the operation performance data reflected by single-function video segments of the same operation, so the accuracy is higher. Moreover, apart from recording the application operation videos, the video processing and performance evaluation processes are executed fully automatically, so the operation is simpler.
Further, when performing functional segment alignment processing on each application operation video based on video local features to obtain single-function video segment groups of the same single operation behavior under different test environments, the embodiment of the application first obtains single-function video segments by segmenting on single operation behaviors, and then compares the similarity of the single-function video segments corresponding to different test environments based on the video local features, aligning the single-function video segments by function; the alignment result is therefore more accurate and the alignment faster.
The effects of the present application will be further described with reference to simulation experiments.
The hardware platform of the simulation experiment is as follows: the processor is an Intel(R) Core(TM) i9-10900K with a main frequency of 3.70 GHz and 64 GB of memory, and the test environments are, respectively, Windows 10 Professional and Python 3.8. The data sets used for the simulation experiments are a public data set and a constructed data set.
Wherein the constructed dataset comprises:
(1) A set of specific business-process videos recorded by testers during testing; the sample format is mp4, with 6 samples;
(2) Videos with durations of 25-80 s; the videos under test are formed by applying common video operations such as filters, out-of-order clipping, watermarking and frame extraction.
The public data set is the CC_WEB_VIDEO data set. The data set is divided into 24 categories according to the search keywords. In each category, the most popular video is taken as the original video and the other videos as videos under test; on average, 27% of the videos are duplicates or near-duplicates of the original video, each duplicate version having been uploaded by a user without additional manual processing. Video durations range from 5 s to 10 min; there are 12,790 videos in total, with a total duration of 800 h.
On these data sets, the video segment similarity calculation method disclosed in the embodiment of the application was compared with three methods: MFH (Multiple Feature Hashing), STH (Self-Taught Hashing) and SPH (Spectral Hashing). The results show that, compared with MFH, STH and SPH, the method disclosed in the embodiment of the application improves the mean average precision by 1.4%, 2% and 2.3% respectively, and shortens the calculation time by 24%, 32% and 16% respectively, improving the detection efficiency.
The simulation experiment results show that compared with the existing video similarity calculation technology, the video local feature alignment technology disclosed by the embodiment of the application has higher average accuracy and shorter calculation response time due to the optimized traversal method.
The embodiment of the application also discloses an application compatibility evaluation device, as shown in fig. 4, comprising:
the application operation video acquisition module 410 is configured to acquire application operation videos that are recorded under different test environments for an application to be evaluated.
And the performance data acquisition module 420 is configured to process the application operation video by using a video local feature alignment technology, and acquire performance data of the application to be evaluated under the different test environments respectively.
And the compatibility performance evaluation module 430 is configured to perform compatibility performance evaluation on the application to be evaluated according to the performance data.
Optionally, as shown in fig. 5, the performance data obtaining module 420 further includes:
the single-function video segment alignment submodule 4201 is configured to perform functional segment alignment processing on each application operation video based on the local video feature, so as to obtain a single-function video segment group with the same single operation behavior under different test environments.
The performance data obtaining sub-module 4202 is configured to obtain single-function performance data corresponding to each single-function video clip in each single-function video clip group.
The performance data obtaining submodule 4202 is further configured to obtain performance data of the application to be evaluated under the same test environment according to single-function performance data of all single-function video clips corresponding to the same test environment in all single-function video clip groups.
Optionally, the single-function video clip alignment submodule 4201 is further configured to:
performing function segmentation processing on each application operation video to obtain single-function video fragments of single operation behaviors under different test environments;
and carrying out similarity comparison on the single-function video clips corresponding to different test environments based on the video local features to obtain a single-function video clip group with the same single operation behavior under different test environments.
Optionally, the performing the function segmentation processing on each application operation video to obtain a single-function video segment with a single operation behavior under different test environments includes:
respectively carrying out preset operation behavior identification on each application operation video to obtain a starting video position corresponding to the preset operation behavior in each application operation video;
Acquiring a primary segmented video segment corresponding to a single operation behavior according to a starting video position corresponding to each preset operation behavior in each application operation video;
performing image processing on the initial segment video segment to determine a steady-state picture image frame in the initial segment video segment;
determining a redundant video starting position of the primary segment video segment according to the steady-state picture image frame;
and taking the video segment before the redundant video starting position in the initial segment video segment as a single-function video segment corresponding to a single operation behavior under a corresponding test environment.
Optionally, the performing similarity comparison on the single-function video segments corresponding to different test environments based on the video local features, to obtain a single-function video segment group with the same single operation behavior under different test environments, includes:
generating a second set corresponding to each single-function video segment based on a perceptual hash algorithm, wherein the second set comprises image features of all video image frames in the corresponding single-function video segment;
performing feature blocking on each image feature in the second set aiming at the second set corresponding to each single-function video segment to obtain a feature block corresponding to each image feature;
Establishing an inverted index corresponding to each image feature according to the feature block corresponding to each image feature;
based on the inverted index of the image features in the second set corresponding to each single-function video segment, comparing the local feature similarity of the video to the second set corresponding to each single-function video segment under different test environments to obtain the feature similarity between the second sets corresponding to any two single-function video segments under different test environments;
and acquiring the single-function video fragment group with the same single operation behavior under different test environments according to the feature similarity between the second sets corresponding to any two single-function video fragments under different test environments.
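The feature blocking and inverted index described above can be sketched as follows, under the assumption that each image feature is a 64-bit perceptual hash split into four 16-bit blocks (the hash width and block size are illustrative choices, not values fixed by the embodiment). Frames sharing at least one identical block become candidates for the full similarity comparison, which is what avoids exhaustive traversal:

```python
from collections import defaultdict

BLOCK_COUNT = 4  # assumed: 64-bit hash split into four 16-bit blocks

def split_blocks(h64: int):
    """Feature blocking: (block index, block value) pairs of a 64-bit
    perceptual hash; identical pairs can be matched exactly."""
    return [(i, (h64 >> (16 * i)) & 0xFFFF) for i in range(BLOCK_COUNT)]

def build_inverted_index(second_set):
    """second_set: one 64-bit perceptual hash per video image frame."""
    index = defaultdict(list)
    for frame_id, h in enumerate(second_set):
        for block in split_blocks(h):
            index[block].append(frame_id)
    return index

def candidate_frames(index, h64: int):
    """Frames sharing at least one block with the query hash; only these
    candidates need the full Hamming-distance comparison, avoiding an
    exhaustive traversal of all frame pairs."""
    candidates = set()
    for block in split_blocks(h64):
        candidates.update(index.get(block, ()))
    return candidates

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")
```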
Optionally, the performance data obtaining sub-module 4202 is further configured to:
and acquiring the duration of each single-function video segment in each single-function video segment group, and taking the duration as single-function performance data corresponding to each single-function video segment.
Optionally, the obtaining the initial segment video segment corresponding to the single operation behavior according to the initial video position corresponding to each preset operation behavior in each application operation video includes:
For each starting video position, except the first, among the starting video positions corresponding to the preset operation behaviors in an application operation video, the time position of the video image frame immediately preceding that starting video position in the application operation video is taken as an end video position; the end time position of the application operation video is also taken as an end video position. The portion of the application operation video from each starting video position to the backward-adjacent end video position is then taken as the primary segmented video segment corresponding to a single operation behavior.
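A minimal sketch of this boundary derivation, with positions expressed in seconds and the "previous video image frame" approximated by one frame interval at an assumed frame rate:

```python
FRAME_INTERVAL = 1 / 25  # assumed frame interval at 25 fps

def primary_segments(start_positions, video_end):
    """(start, end) boundaries of the primary segmented video segments.
    Each segment ends at the frame before the next behavior starts;
    the last segment ends at the end of the operation video."""
    starts = sorted(start_positions)
    segments = []
    for i, start in enumerate(starts):
        if i + 1 < len(starts):
            end = starts[i + 1] - FRAME_INTERVAL
        else:
            end = video_end
        segments.append((start, end))
    return segments
```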
The application compatibility evaluation device disclosed in the embodiment of the present application is used to implement the application compatibility evaluation method described in the embodiment of the present application, and specific implementation manners of each module of the device are not repeated, and reference may be made to specific implementation manners of corresponding steps in the method embodiment.
According to the application compatibility evaluation device disclosed by the embodiment of the application, application operation videos which are respectively recorded under different test environments for the application to be evaluated are obtained; processing the application operation video by adopting a video local feature alignment technology to acquire performance data of the application to be evaluated under different test environments; and carrying out compatibility performance evaluation on the application to be evaluated according to the performance data. The method realizes application operation video based on the application to be evaluated, adopts the video local feature alignment technology to align video segments with the same operation function, and extracts performance data, thereby automatically evaluating the compatibility performance of the application and improving the efficiency of the compatibility evaluation of the application.
Compatible-application performance evaluation is realized by comparing the performance of an application on the compatible platform with its performance on the original operating system platform. When purely manual test evaluation is adopted, as in the prior art, every desktop application must be evaluated by hand; because desktop applications are numerous and varied, this is inefficient and consumes a large amount of manpower and time. With the application compatibility performance evaluation method disclosed in the embodiment of the application, only the video recording is performed manually, following the same test case script during testing; the compatibility performance data are then extracted automatically through video analysis processing and comparative analysis to form the compatible-application performance evaluation result, so the efficiency is higher.
On the other hand, compared with the prior art in which application compatibility is tested by means of a third-party compatibility testing tool, the application compatibility evaluation device disclosed in the embodiment of the application performs image analysis and feature matching on single-function video segments of single operation behaviors in a typical job scenario, and performs the application compatibility evaluation on the operation performance data reflected by single-function video segments of the same operation, so the accuracy is higher. Moreover, apart from recording the application operation videos, the video processing and performance evaluation processes are executed fully automatically, so the operation is simpler.
Further, when performing functional segment alignment processing on each application operation video based on video local features to obtain single-function video segment groups of the same operation behavior under different test environments, the embodiment of the application first obtains single-function video segments by segmenting on single operation behaviors, and then compares the similarity of the single-function video segments corresponding to different test environments based on the video local features, aligning the single-function video segments by function; the alignment result is therefore more accurate and the alignment faster.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other. For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The foregoing has described in detail the application compatibility evaluation method and apparatus provided by the present application, and specific examples have been used herein to illustrate the principles and embodiments of the present application; the above description of the examples is only intended to aid in understanding the method and its core idea. Meanwhile, those skilled in the art may vary the specific embodiments and the application scope in accordance with the ideas of the present application; in view of the above, the contents of this description should not be construed as limiting the present application.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the present application without undue burden.
Various component embodiments of the application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an electronic device according to embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
For example, fig. 6 shows an electronic device in which the method according to the application may be implemented. The electronic device may be a PC, a mobile terminal, a personal digital assistant, a tablet computer, etc. The electronic device conventionally comprises a processor 610, a memory 620, and program code 630 stored on the memory 620 and executable on the processor 610; the processor 610 implements the method described in the above embodiments when the program code 630 is executed. The memory 620 may be a computer program product or a computer readable medium, such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk or a ROM. The memory 620 has a storage space 6201 for the program code 630 of a computer program for performing any of the method steps described above. For example, the storage space 6201 may include individual computer programs for implementing the various steps of the above methods. The program code 630 is computer readable code. These computer programs may be read from or written to one or more computer program products, which comprise a program code carrier such as a hard disk, a compact disc (CD), a memory card or a floppy disk. The computer program comprises computer readable code which, when run on an electronic device, causes the electronic device to perform the method according to the above-described embodiments.
The embodiment of the application also discloses a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, implements the steps of the application compatibility evaluation method according to the embodiment of the application.
Such a computer program product may be a computer readable storage medium, which may have memory segments, memory spaces, etc. arranged similarly to the memory 620 in the electronic device shown in fig. 6. The program code may be stored in the computer readable storage medium, for example, in a suitable form. The computer readable storage medium is typically a portable or fixed storage unit as described with reference to fig. 7. In general, the memory unit comprises computer readable code 630', which computer readable code 630' is code that is read by a processor, which code, when executed by the processor, implements the steps of the method described above.
Reference herein to "one embodiment," "an embodiment," or "one or more embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the application. Furthermore, it is noted that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (7)

1. An application compatibility evaluation method, the method comprising:
acquiring application operation videos which are respectively recorded under different test environments for an application to be evaluated;
performing function segmentation processing on each application operation video to obtain single-function video fragments of single operation behaviors under different test environments;
performing similarity comparison on the single-function video clips corresponding to different test environments based on the video local features to obtain a single-function video clip group of the same single operation behavior under different test environments;
acquiring single-function performance data corresponding to each single-function video clip in each single-function video clip group;
acquiring performance data of the application to be evaluated under the same test environment according to the single-function performance data of all single-function video clips corresponding to the same test environment in all single-function video clip groups;
according to the performance data, carrying out compatibility performance evaluation on the application to be evaluated; the processing of the application operation videos by functional segmentation to obtain single-functional video fragments with single operation behaviors under different test environments comprises the following steps:
Respectively carrying out preset operation behavior identification on each application operation video to obtain a starting video position corresponding to the preset operation behavior in each application operation video;
acquiring a primary segmented video segment corresponding to a single operation behavior according to a starting video position corresponding to each preset operation behavior in each application operation video;
performing image processing on the initial segment video segment to determine a steady-state picture image frame in the initial segment video segment;
determining a redundant video starting position of the primary segment video segment according to the steady-state picture image frame;
and taking the video segment before the redundant video starting position in the initial segment video segment as a single-function video segment corresponding to a single operation behavior under a corresponding test environment.
2. The method according to claim 1, wherein the performing similarity comparison on the single-function video segments corresponding to different test environments based on the video local features to obtain a single-function video segment group with the same single operation behavior under different test environments includes:
generating a second set corresponding to each single-function video segment based on a perceptual hash algorithm, wherein the second set comprises image features of all video image frames in the corresponding single-function video segment;
Performing feature blocking on each image feature in the second set aiming at the second set corresponding to each single-function video segment to obtain a feature block corresponding to each image feature;
establishing an inverted index corresponding to each image feature according to the feature block corresponding to each image feature;
based on the inverted index of the image features in the second set corresponding to each single-function video segment, comparing the local feature similarity of the video to the second set corresponding to each single-function video segment under different test environments to obtain the feature similarity between the second sets corresponding to any two single-function video segments under different test environments;
and acquiring the single-function video fragment group with the same single operation behavior under different test environments according to the feature similarity between the second sets corresponding to any two single-function video fragments under different test environments.
3. The method according to claim 1, wherein the obtaining single-function performance data corresponding to each single-function video clip in each single-function video clip group includes:
and acquiring the duration of each single-function video segment in each single-function video segment group, and taking the duration as single-function performance data corresponding to each single-function video segment.
4. The method of claim 1, wherein the obtaining the initial segment video segment corresponding to the single operation behavior according to the initial video position corresponding to each preset operation behavior in each application operation video comprises:
for each initial video position except for the first initial video position in initial video positions corresponding to preset operation behaviors in application operation videos, taking the time position of the previous video image frame of the initial video position in the application operation videos as an end video position; and taking the end time position of the application operation video as an end video position;
and taking the application operation video from each starting video position to the backward adjacent ending video position as a first segmented video segment corresponding to a single operation behavior.
5. An application compatibility evaluation apparatus, the apparatus comprising:
the application operation video acquisition module is used for acquiring application operation videos which are respectively recorded under different test environments for the application to be evaluated;
the performance data acquisition module is used for processing the application operation video by adopting a video local feature alignment technology to acquire performance data of the application to be evaluated under different test environments;
The compatibility performance evaluation module is used for evaluating the compatibility performance of the application to be evaluated according to the performance data; wherein,
the performance data acquisition module further includes:
the single-function video segment alignment submodule is used for carrying out function segmentation processing on each application operation video to obtain single-function video segments with single operation behaviors under different test environments; the similarity comparison is carried out on the single-function video clips corresponding to different test environments based on the video local characteristics, and a single-function video clip group with the same single operation behavior under different test environments is obtained;
the performance data acquisition sub-module is used for acquiring single-function performance data corresponding to each single-function video clip in each single-function video clip group;
the performance data acquisition sub-module is further configured to acquire performance data of the application to be evaluated under the same test environment according to single-function performance data of all single-function video clips corresponding to the same test environment in all single-function video clip groups; the processing of the application operation videos by functional segmentation to obtain single-functional video fragments with single operation behaviors under different test environments comprises the following steps:
Respectively carrying out preset operation behavior identification on each application operation video to obtain a starting video position corresponding to the preset operation behavior in each application operation video;
acquiring a primary segmented video segment corresponding to a single operation behavior according to a starting video position corresponding to each preset operation behavior in each application operation video;
performing image processing on the initial segment video segment to determine a steady-state picture image frame in the initial segment video segment;
determining a redundant video starting position of the primary segment video segment according to the steady-state picture image frame;
and taking the video segment before the redundant video starting position in the initial segment video segment as a single-function video segment corresponding to a single operation behavior under a corresponding test environment.
6. An electronic device comprising a memory, a processor and program code stored on the memory and executable on the processor, wherein the processor implements the application compatibility evaluation method of any one of claims 1 to 4 when executing the program code.
7. A computer-readable storage medium, on which a program code is stored, characterized in that the program code, when being executed by a processor, implements the steps of the application compatibility evaluation method of any one of claims 1 to 4.
CN202311152973.2A 2023-09-08 2023-09-08 Application compatibility evaluation method and device, electronic equipment and storage medium Active CN116881175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311152973.2A CN116881175B (en) 2023-09-08 2023-09-08 Application compatibility evaluation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116881175A (en) 2023-10-13
CN116881175B (en) 2023-11-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant