CN105913429A - Calculating method for visual perception response time delay index of intelligent terminal user - Google Patents

Calculating method for visual perception response time delay index of intelligent terminal user

Info

Publication number
CN105913429A
CN105913429A CN201610223448.9A CN201610223448A
Authority
CN
China
Prior art keywords
image sequence
sequence
input image
picture numbers
response time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610223448.9A
Other languages
Chinese (zh)
Other versions
CN105913429B (en)
Inventor
魏然
果敢
张蔚敏
李露
张玉凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Academy of Information and Communications Technology CAICT
Original Assignee
China Academy of Telecommunications Research CATR
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Academy of Telecommunications Research CATR filed Critical China Academy of Telecommunications Research CATR
Priority to CN201610223448.9A priority Critical patent/CN105913429B/en
Publication of CN105913429A publication Critical patent/CN105913429A/en
Application granted granted Critical
Publication of CN105913429B publication Critical patent/CN105913429B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a calculation method for a visual perception response time delay index of an intelligent terminal user. The calculation method comprises: performing grayscale processing on an original image sequence to generate an input image sequence; performing target detection in the input image sequence, and obtaining the coordinates of the target in the input images in which the target is detected; calculating the extremum of the coordinates, and recording the sequence number of the input image corresponding to the extremum as a first image number; screening groups of first subsequences from the input image sequence according to a first adaptive threshold, and determining a third image number by sorting the groups of first subsequences; screening groups of second subsequences from the input image sequence according to a second adaptive threshold, and determining a second image number from the groups of second subsequences according to the first image number and the third image number; calculating a terminal response time according to the first image number and the second image number; and calculating an application response time according to the third image number and the second image number.

Description

Calculation method for a visual perception response time delay index of an intelligent terminal user
Technical field
The present invention relates to mobile terminal evaluation technology, and in particular to a calculation method for a visual perception response time delay index of an intelligent terminal user.
Background art
The rapid evolution of intelligent terminals, and of smartphones in particular, calls for evaluation systems and evaluation methods adapted to them. Testing must move beyond the traditional focus on communication capability and study test methods oriented towards the terminal as a carrier of services and as a service platform. User experience is the core index for comprehensively and effectively evaluating an intelligent terminal product as a service carrier and service platform, and testing based on user experience has gained general acceptance in the industry.
At present there is no comprehensive domestic evaluation system or evaluation capability for this purpose. Research on user-experience-based evaluation of intelligent terminals, and the establishment of an evaluation system and capability for intelligent terminal user experience, can provide test recommendations for industry user-experience evaluation standards, can be used in terminal market-entry testing and industry certification testing, and encourages manufacturers to actively improve user experience.
User experience is the whole of a user's impressions before, during and after using a product or system, including emotions, expectations, preferences, cognitive impressions, physiological and psychological reactions, behaviours and outcomes. The three factors affecting user experience are the system, the user and the usage environment. User experience can therefore be defined as all the subjective feelings produced while the user interacts with a product or service.
In intelligent terminal user experience, and in the field of visual perception in particular, the mainstream test method is evaluation by user panels. For an index such as perceived speed there is no objective quantitative description, which leads to unstable and non-repeatable test results that are sensitive to conditions such as the test population and the type of terminal under test.
Prior-art methods for evaluating user experience mainly fall into two categories. The first is embedded response-performance evaluation: the input and output times are obtained by reading log records to derive the related delay performance, and smoothness (i.e. the response delay index) is obtained by installing test software and counting the frames per unit time while the terminal system interface or an application runs. This test method takes a long time to read the logs and requires third-party software access; its response delay precision is low, and it cannot distinguish the terminal response time from the application response time.
The second is to fix the terminal under test on an operating platform equipped with a high-speed camera and to use a robotic arm to open an application while timing starts. The high-speed camera uploads the captured pictures to a server for image comparison, timing stops when a picture matches the normal picture of the fully responded application page, and the time difference is taken as the application response time. In this method the matching template for the application must be captured manually in advance, the matching threshold is not fixed, the terminal response time cannot be distinguished from the application response time, and the test cannot be repeated continuously.
Summary of the invention
The main purpose of the embodiments of the present invention is to provide a calculation method for a visual perception response time delay index of an intelligent terminal user, so as to solve the problems in the prior art, calculate the user visual perception response time delay index objectively, and improve the calculation precision of the response delay index.
To achieve these goals, an embodiment of the present invention provides a calculation method for a visual perception response time delay index of an intelligent terminal user. The calculation method comprises: performing grayscale processing on an original image sequence to generate an input image sequence; performing target detection in the input image sequence, and obtaining the coordinates of the target in the input images in which the target is detected; calculating the extremum of the coordinates, and recording the sequence number of the input image corresponding to the extremum as a first image number; screening groups of first subsequences from the input image sequence according to a first adaptive threshold, and sorting the groups of first subsequences to determine a third image number; screening groups of second subsequences from the input image sequence according to a second adaptive threshold, and determining a second image number from the groups of second subsequences according to the first image number and the third image number; calculating a terminal response time according to the first image number and the second image number; and calculating an application response time according to the third image number and the second image number.
In one embodiment, performing grayscale processing on the original image sequence to generate the input image sequence specifically comprises: performing grayscale processing on the original image sequence, and generating the gray-level histogram of the original image sequence; judging, according to the gray-level histogram, whether the contrast of the grayscale-processed original image sequence is less than or equal to a first preset threshold; if so, normalizing the grayscale-processed original image sequence to generate the input image sequence; otherwise, taking the grayscale-processed original image sequence as the input image sequence.
In one embodiment, the grayscale-processed original image sequence is normalized by the following formula:
I_i(x, y) = (I_i(x, y) - I_min) / (I_max - I_min) * step,
where x is the abscissa of the i-th frame of the input image sequence; y is the ordinate of the i-th frame of the input image sequence; I_i(x, y) is the gray value of the i-th frame of the input image sequence; I_min is the minimum gray value of the pixels in the i-th frame of the input image sequence; I_max is the maximum gray value of the pixels in the i-th frame of the input image sequence; and step is the stretching coefficient, 0 < step < 255.
In one embodiment, performing target detection in the input image sequence specifically comprises: performing robotic-arm target detection on the input image sequence by template matching, and obtaining the input images in which the target is detected; or performing finger target detection on the input image sequence by the Otsu threshold method, and obtaining the input images in which the target is detected.
In one embodiment, the step of determining the first adaptive threshold and the second adaptive threshold comprises: calculating a difference image sequence from the i-th frame and the (i+1)-th frame of the input image sequence, further calculating the variance of the difference image sequence, and generating a variance sequence; selecting a group of a preset number of variances from the head of the variance sequence to form a first group of variances, and selecting a group of the preset number of variances from the tail of the variance sequence to form a second group of variances; setting the maximum of the first group of variances as a first sub-adaptive threshold, setting the minimum of the first group of variances as a second sub-adaptive threshold, and determining the first sub-adaptive threshold and the second sub-adaptive threshold as the first adaptive threshold; setting the maximum of the second group of variances as a third sub-adaptive threshold, setting the minimum of the second group of variances as a fourth sub-adaptive threshold, and determining the third sub-adaptive threshold and the fourth sub-adaptive threshold as the second adaptive threshold.
In one embodiment, screening groups of first subsequences from the input image sequence according to the first adaptive threshold and sorting the groups of first subsequences to determine the third image number specifically comprises: forming a first position array position_end from the sequence numbers of the variances in the variance sequence that are greater than the fourth sub-adaptive threshold and less than the third sub-adaptive threshold; differencing the first position array position_end to generate a first difference result Diff_end(j), where Diff_end(j) = position_end(j+1) - position_end(j) and j is a position in the variance sequence; adding 1 to each position of the first difference result Diff_end(j) whose value is greater than a second preset threshold, and recording it in a screen-refresh judgment array Pos; when the screen-refresh judgment array Pos is empty, recording the sequence numbers for which Diff_end(j) is less than a third preset threshold in a first tail array p_end; and determining the sequence number of the input image corresponding to the minimum of the first tail array p_end as the third image number.
In one embodiment, when the screen-refresh judgment array Pos is not empty, the first maximum frame3start in the screen-refresh judgment array Pos is obtained; the sequence numbers for which the first difference result Diff_end is less than the third preset threshold are recorded in a second tail array p'_end; the sequence numbers in the second tail array p'_end that are greater than or equal to the sequence number corresponding to the first maximum frame3start are recorded in a third tail array p_end1; and the minimum of the third tail array p_end1 is determined as the third image number.
In one embodiment, the difference image sequence is calculated by the following formula:
D_i(x, y) = I_{i+1}(x, y) - I_i(x, y),
where x is the abscissa of the input image sequence; y is the ordinate of the input image sequence; I_i(x, y) is the gray value of the i-th frame of the input image sequence; and I_{i+1}(x, y) is the gray value of the (i+1)-th frame of the input image sequence. The variance of the difference image sequence is calculated by the following formula:
Var_i = (1/(n-1)) * Σ_{x=1..h} Σ_{y=1..w} (D_i(x, y) - D̄_i)²,
where n = h*w, h and w are respectively the height and width of the input image sequence, and D̄_i is the mean of D_i(x, y) over all pixels.
In one embodiment, screening groups of second subsequences from the input image sequence according to the second adaptive threshold and determining the second image number from the groups of second subsequences according to the first image number and the third image number specifically comprises: median-filtering the variance sequence with an adaptive template to generate a filtered variance sequence, the size of the adaptive template being template = (frame3 - frame1) % 10, where frame1 is the first image number and frame3 is the third image number; obtaining the filtered variance sequence numbers of the variances in the filtered variance sequence that are greater than the second sub-adaptive threshold and less than the first sub-adaptive threshold, and further forming a second position array position_start from the filtered variance sequence numbers that are greater than the first image number and less than the third image number; differencing the second position array position_start to generate a second difference result Diff_start(k), where Diff_start(k) = position_start(k+1) - position_start(k) and k is a position in the variance sequence; recording the sequence numbers for which Diff_start(k) is less than the third preset threshold in a head array p_start; and determining the sequence number of the input image corresponding to the maximum of the head array p_start as the second image number.
In one embodiment, calculating the terminal response time according to the first image number and the second image number specifically comprises: the terminal response time is equal to the difference between the time corresponding to the second image number and the time corresponding to the first image number.
In one embodiment, calculating the application response time according to the third image number and the second image number specifically comprises: the application response time is equal to the difference between the time corresponding to the third image number and the time corresponding to the second image number.
The beneficial effect of the embodiments of the present invention is that the response delay index in user visual perception is made objective and quantitative, the calculated results are more accurate, the evaluation can be repeated with this method, and valuable ideas and suggestions are provided for the development of intelligent terminal evaluation systems and the construction of evaluation platforms.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is a flow chart of the calculation method for the visual perception response time delay index of an intelligent terminal user according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the contrast enhancement result after normalization according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the target detection result according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the image changes around the image corresponding to the first image number according to an embodiment of the present invention;
Fig. 5 shows variance change curves of the input image sequence according to an embodiment of the present invention;
Fig. 6A to Fig. 6I are schematic diagrams of the image changes around the image corresponding to the third image number according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the median filtering result according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the image changes around the image corresponding to the second image number according to an embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
The embodiments of the present invention provide a calculation method for a visual perception response time delay index of an intelligent terminal user. The present invention is described in detail below with reference to the drawings.
An embodiment of the present invention provides a calculation method for a visual perception response time delay index of an intelligent terminal user. As shown in Fig. 1, the calculation method mainly comprises the following steps:
Step S101: perform grayscale processing on the original image sequence to generate an input image sequence;
Step S102: perform target detection in the input image sequence, and obtain the coordinates of the target in the input images in which the target is detected;
Step S103: calculate the extremum of the coordinates, and record the sequence number of the input image corresponding to the extremum as the first image number;
Step S104: screen groups of first subsequences from the input image sequence according to a first adaptive threshold, and determine a third image number by sorting the groups of first subsequences;
Step S105: screen groups of second subsequences from the input image sequence according to a second adaptive threshold, and determine a second image number from the groups of second subsequences according to the first image number and the third image number;
Step S106: calculate the terminal response time according to the first image number and the second image number;
Step S107: calculate the application response time according to the third image number and the second image number.
Through steps S101 to S107, the calculation method for the intelligent terminal user visual perception response time delay index of the embodiment of the present invention preprocesses the input image sequence by grayscale processing and obtains the target position by target detection, thereby judging the moment at which the application is triggered. By computing the variance curve of the preprocessed image sequence and identifying its steady segments, the moment at which the intelligent terminal begins to respond is calculated, and an adaptive threshold method is used to judge the moment at which the response is complete. The terminal response time in the response delay index is the difference between the moment at which the terminal begins to respond and the moment at which the application is triggered; the application response time is the difference between the moment at which the response is complete and the moment at which the terminal begins to respond. The calculation method of the embodiment of the present invention therefore makes the response delay index in user visual perception (namely the terminal response time and the application response time described above) objective and quantitative, the calculated results are more accurate, the evaluation can be repeated with this method, and valuable ideas and suggestions are provided for the development of intelligent terminal evaluation systems and the construction of evaluation platforms.
Each step of the calculation method for the intelligent terminal user visual perception response time delay index of the embodiment of the present invention is described in detail below with reference to the drawings.
In the above step S101, grayscale processing is performed on the original image sequence to generate the input image sequence.
The N frames of the original image sequence obtained by high-speed acquisition are grayscale-processed, and I_i denotes the i-th frame after grayscale processing. The gray-level histogram of image I_i is computed. When the range of the gray levels (i.e. the contrast of the image) is less than or equal to a first preset threshold (this first preset threshold judges the difference between the maximum gray value and the minimum gray value in the i-th frame; in this embodiment it may be set to 50, but the invention is not limited to this), a normalization function is used to stretch the gray-level range of image I_i with a common scale to enhance the contrast. The contrast normalization stretch function is:
I_i(x, y) = (I_i(x, y) - I_min) / (I_max - I_min) * step,
where x is the abscissa of the i-th frame of the input image sequence; y is the ordinate of the i-th frame of the input image sequence; I_i(x, y) is the gray value of the i-th frame of the input image sequence; I_min is the minimum gray value of the pixels in the i-th frame of the input image sequence; I_max is the maximum gray value of the pixels in the i-th frame of the input image sequence; and step is the stretching coefficient, 0 < step < 255. The concrete value of step can be adjusted as actually needed and is usually a large value close to 255 (for example 200). The normalized image sequence is taken as the input image sequence. The result of the normalization is shown in Fig. 2, where Fig. 2(a) shows the image before normalization and Fig. 2(b) shows the image after normalization.
When the range of the gray levels is greater than the first preset threshold, the grayscale-processed image sequence can be used directly as the input image sequence.
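As an illustration only and not part of the claimed method, step S101 could be sketched in Python with OpenCV and NumPy as follows; the contrast threshold of 50 and the stretching coefficient of 200 are the example values given above, and the function name preprocess_frame is our own.

```python
import cv2
import numpy as np

CONTRAST_THRESHOLD = 50   # example first preset threshold from the text
STEP = 200                # example stretching coefficient from the text

def preprocess_frame(frame_bgr):
    """Grayscale a captured frame and stretch its contrast if the gray range is narrow."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    i_min, i_max = float(gray.min()), float(gray.max())
    if i_max - i_min <= CONTRAST_THRESHOLD:                    # low contrast: normalize
        gray = (gray - i_min) / (i_max - i_min + 1e-6) * STEP  # (I - Imin)/(Imax - Imin) * step
    return gray.astype(np.uint8)

# input_sequence = [preprocess_frame(f) for f in original_sequence]
```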
In the above step S102, target detection is performed in the input image sequence, and the coordinates of the target are obtained in the input images in which the target is detected.
Specifically, robotic-arm target detection is performed on the input image sequence by template matching to obtain the input images in which the target is detected, or finger target detection is performed on the input image sequence by the Otsu threshold method to obtain the input images in which the target is detected. As shown in Fig. 3, Fig. 3(a) shows the image to be detected and Fig. 3(b) shows the target detection result, i.e. the image containing the target (for example a finger). Further, the coordinates of the target in the input images in which the target is detected are obtained.
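The two detection routes of step S102 could look roughly like the sketch below; the prepared template image, the 0.8 matching cut-off and the assumption that the finger appears darker than the screen are illustrative assumptions rather than details given in this document.

```python
import cv2
import numpy as np

def detect_arm(gray, template):
    """Template matching for the robotic arm; returns (x, y) of the best match or None."""
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val > 0.8 else None          # 0.8 is an assumed confidence cut-off

def detect_finger(gray):
    """Otsu thresholding for the finger; returns the centroid of the segmented region or None."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.mean()), int(ys.mean())
```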
After the coordinates of the target have been obtained, in the above step S103 the extremum of the target coordinates is calculated and the sequence number of the image corresponding to this extremum is obtained. The image corresponding to this sequence number is the image at the moment at which the application is triggered, and its sequence number is recorded as the first image number, denoted frame1. As shown in Fig. 4, Fig. 4(a) shows an image while the finger moves towards the application (Phone), Fig. 4(b) shows the moment at which the finger presses the application (Phone) (the coordinate corresponding to the finger position detected by the Otsu threshold method at this moment is the above extremum), Fig. 4(c) shows the moment at which the finger is about to leave the application (Phone), and Fig. 4(d) shows an image while the finger leaves the application (Phone); Fig. 4(e) shows an image while the robotic arm moves towards the application (Phone), Fig. 4(f) shows the moment at which the robotic arm presses the application (Phone) (the coordinate corresponding to the robotic-arm position detected by template matching at this moment is the above extremum), Fig. 4(g) shows the moment at which the robotic arm is about to leave the application (Phone), and Fig. 4(h) shows an image while the robotic arm leaves the application (Phone).
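One possible reading of step S103 is sketched below; whether the press corresponds to the maximum or the minimum of the vertical coordinate depends on the camera orientation, so taking the maximum here is an assumption.

```python
def find_trigger_frame(coords):
    """coords: list of (frame_index, (x, y)) for the frames in which the target was detected.

    Returns frame1, the frame where the target's vertical coordinate reaches its extremum,
    here assumed to be the largest y (the lowest point of the press in image coordinates).
    """
    frame1, _ = max(coords, key=lambda item: item[1][1])
    return frame1
```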
In the above step S104, groups of first subsequences are screened from the input image sequence according to the first adaptive threshold, and the third image number is determined by sorting them.
For the above input image sequence, the variance sequence Var of the input image sequence is first computed.
Specifically, the difference image sequence is calculated by the following formula:
D_i(x, y) = I_{i+1}(x, y) - I_i(x, y),
where x is the abscissa of the input image sequence; y is the ordinate of the input image sequence; I_i(x, y) is the gray value of the i-th frame of the input image sequence; and I_{i+1}(x, y) is the gray value of the (i+1)-th frame of the input image sequence.
Then, the variance of the difference image sequence is calculated by the following formula:
Var_i = (1/(n-1)) * Σ_{x=1..h} Σ_{y=1..w} (D_i(x, y) - D̄_i)²,
where n = h*w, h and w are respectively the height and width of the input image sequence, and D̄_i is the mean of D_i(x, y) over all pixels. The change curve of this variance may be as shown in Fig. 5(a) and Fig. 5(b), which correspond to the variance changes of the interface images when two different intelligent terminals trigger an application: Fig. 5(a) shows the variance change curve of the interface images of a terminal without continued screen refresh when an application is triggered, and Fig. 5(b) shows the variance change curve of the interface images of a terminal with continued screen refresh when an application is triggered.
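As a minimal sketch under the same assumptions, the variance sequence is the per-pixel sample variance of each frame-to-frame difference image:

```python
import numpy as np

def variance_sequence(frames):
    """frames: list of equally sized grayscale frames (NumPy arrays)."""
    var = []
    for cur, nxt in zip(frames[:-1], frames[1:]):
        d = nxt.astype(np.float64) - cur.astype(np.float64)   # D_i = I_{i+1} - I_i
        var.append(d.var(ddof=1))                             # (1/(n-1)) * sum((D_i - mean)^2)
    return np.array(var)
```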
Specifically, the adaptive thresholds (including the first adaptive threshold and the second adaptive threshold) are obtained by the following procedure: a group of a preset number of variances is selected from the head of the above variance sequence to form a first group of variances, and a group of the preset number of variances is selected from the tail of the variance sequence to form a second group of variances; the maximum of the first group of variances is set as the first sub-adaptive threshold, the minimum of the first group of variances is set as the second sub-adaptive threshold, and the first sub-adaptive threshold and the second sub-adaptive threshold are determined as the first adaptive threshold; the maximum of the second group of variances is set as the third sub-adaptive threshold, the minimum of the second group of variances is set as the fourth sub-adaptive threshold, and the third sub-adaptive threshold and the fourth sub-adaptive threshold are determined as the second adaptive threshold.
For example, in one embodiment the preset number is 15: the 15 variances at the head of the variance sequence Var_i form the above first group of variances, and the 15 variances at its tail form the above second group of variances. Correspondingly, the above first sub-adaptive threshold is start_max = max{Var_1, Var_2, Var_3, ..., Var_15}, the above second sub-adaptive threshold is start_min = min{Var_1, Var_2, Var_3, ..., Var_15}, the above third sub-adaptive threshold is end_max = max{Var_{N-15}, Var_{N-14}, Var_{N-13}, ..., Var_{N-1}}, and the above fourth sub-adaptive threshold is end_min = min{Var_{N-15}, Var_{N-14}, Var_{N-13}, ..., Var_{N-1}}.
It should be noted that the above uses 15 as the preset number, but in practical applications the value of this preset number can be adjusted as actually needed, and the present invention is not limited to this.
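A sketch of these thresholds, keeping the example group size of 15:

```python
import numpy as np

def adaptive_thresholds(var, group=15):
    """Returns ((start_max, start_min), (end_max, end_min)) from the head and tail of var."""
    head, tail = var[:group], var[-group:]
    first = (float(head.max()), float(head.min()))    # first adaptive threshold (sub-thresholds 1 and 2)
    second = (float(tail.max()), float(tail.min()))   # second adaptive threshold (sub-thresholds 3 and 4)
    return first, second
```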
After the application has reached the responded state, the screen shows one of two states. In the first, as shown in Fig. 6A, Fig. 6B and Fig. 6C, there is no continued screen refresh: Fig. 6A and Fig. 6B show the interface during the response of the application (Phone), and Fig. 6C shows the moment at which the interface has just finished responding. In the second, as shown in Fig. 6D to Fig. 6I, there is continued screen refresh: Fig. 6D and Fig. 6E show the interface during the response of the application (Phone), Fig. 6F shows the moment at which the interface has just finished responding, and Fig. 6G, Fig. 6H and Fig. 6I show the continued screen refresh of the terminal interface that persists after the application has finished responding.
For the variance sequence Var, the array position_end records all subscripts j for which Var(j) > end_min and Var(j) < end_max; the values in position_end are the possible positions of the response-complete moment. Let Diff_end(j) be the result of differencing position_end. The positions at which Diff_end(j) is greater than a second preset threshold (for example 10; this can be adjusted as actually needed, and the present invention is not limited to this) are recorded, and each such position plus 1 is stored in the array Pos. If the array Pos is empty, the above screen-refresh situation exists; in this case the points where Diff_end(j) ≤ 2 are found, the corresponding subscripts are recorded in the array p_end, and the image corresponding to the minimum of all the qualifying subscripts, min(p_end), is the position at which the application has just finished responding, i.e. the third image number, frame3 = min(p_end). If the array Pos is not empty, the above screen-refresh situation does not exist; in this case the possible starting position of the third image number, frame3start, is first obtained, where frame3start = max(Pos); the points where Diff_end(j) ≤ 2 are found and the corresponding sequence numbers are recorded in the array p_end; the array p_end1 stores all values of the corresponding sequence numbers that are greater than or equal to the sequence number corresponding to frame3start; and the third image number is the image corresponding to the minimum of all the qualifying positions, min(p_end1), i.e. the position at which the application has just finished responding, frame3 = min(p_end1).
Take the content shown in Fig. 5 as an example. Fig. 5(a) shows the variance change curve of the interface images of a terminal without continued screen refresh when an application is triggered. In this case the possible starting position frame3start of the third image number is first obtained via the adaptive thresholds. As shown in Fig. 5(a), the data contained in the array Pos are the positions, within the steady segments at sequence numbers 100-110, 150-160 and 250-260 (taking each steady segment as containing 10 variances), at which the difference result is greater than 10. The variance at sequence number 110 occupies the 10th position within the sequence numbers 100-110, 150-160, 250-260, and the variance at sequence number 160 occupies the 20th position; therefore Diff_end(10) = position_end(11) - position_end(10) = 150 - 110 = 40 and Diff_end(20) = position_end(21) - position_end(20) = 250 - 160 = 90. Both Diff_end(10) and Diff_end(20) are greater than the second preset threshold (10), so positions 10 and 20 plus 1 are stored in the array Pos, and the data in Pos are then 11 and 21. Now frame3start = max(Pos) = 21. Then the points where Diff_end(j) ≤ 2 are found and the corresponding sequence numbers are recorded in the array p_end; the array p_end1 stores all values of the corresponding sequence numbers greater than or equal to the sequence number corresponding to frame3start, where the sequence number corresponding to frame3start is the one corresponding to 21, i.e. 250 in Fig. 5(a). The third image number is the image corresponding to the minimum of all the qualifying positions, min(p_end1), i.e. the position at which the application has just finished responding, frame3 = min(p_end1).
Fig. 5(b) shows the variance change curve of the interface images of a terminal with continued screen refresh when an application is triggered. In this case the points where Diff_end(j) ≤ 2 are found first and the corresponding subscripts are recorded in the array p_end. As shown in Fig. 5(b), assuming that the starting point of the final steady segment is sequence number 160, the subscripts stored in this array are 160-300. The image corresponding to the minimum of all the subscripts meeting the above requirement, min(p_end) (namely 160), is the position at which the application has just finished responding, i.e. the third image number, frame3 = min(p_end).
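A sketch of this end-of-response search, using the example constants above (a jump threshold of 10 and a steady gap of at most 2); reading the recorded "sequence numbers" as frame indices taken from position_end is our interpretation of the bookkeeping, not something stated explicitly.

```python
import numpy as np

def find_response_complete(var, end_min, end_max, jump_threshold=10, steady_gap=2):
    """Returns frame3, the frame at which the application has just finished responding."""
    position_end = np.where((var > end_min) & (var < end_max))[0]
    diff_end = np.diff(position_end)
    pos = np.where(diff_end > jump_threshold)[0] + 1          # screen-refresh judgment array Pos
    steady_frames = position_end[np.where(diff_end <= steady_gap)[0]]
    if pos.size == 0:                                         # continued screen refresh exists
        return int(steady_frames.min())
    frame3_start = position_end[pos.max()]                    # sequence number corresponding to max(Pos)
    return int(steady_frames[steady_frames >= frame3_start].min())
```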
In the above step S105, groups of second subsequences are screened from the input image sequence according to the second adaptive threshold, and the second image number is determined from them according to the first image number and the third image number.
First, the variance sequence Var is median-filtered with a template of adaptive size to remove speckle noise; the size of the adaptive template is template = (frame3 - frame1) % 10, and the filtered result is denoted Varmed. The result of removing speckle noise by median filtering is shown in Fig. 7, where Fig. 7(a) shows the waveform of the original variance sequence and Fig. 7(b) shows the waveform after median-filter denoising. The array position_start records all subscripts k for which Varmed(k) > start_min and Varmed(k) < start_max and which further satisfy frame1 < k < frame3; the values in position_start are the possible positions of the begin-to-respond moment. Let Diff_start be the result of differencing position_start; the points where Diff_start(j) ≤ 2 are found and the corresponding sequence numbers are recorded in the array p_start. The image corresponding to the maximum of all the qualifying positions, max(p_start), corresponds to the position at which the intelligent terminal has just responded, i.e. the second image number, frame2 = max(p_start). The changes of the interface images while the intelligent terminal completes its response are shown in Fig. 8, where Fig. 8(a), Fig. 8(b) and Fig. 8(c) show the interface changes of a terminal without continued screen refresh completing the response, and Fig. 8(d), Fig. 8(e) and Fig. 8(f) show those of a terminal with continued screen refresh completing the response (the light and dark stripes produced by the screen refresh keep moving). Fig. 8(a) and Fig. 8(d) show the interface when the terminal begins to respond, Fig. 8(b) and Fig. 8(e) show the interface when the terminal has just responded, and Fig. 8(c) and Fig. 8(f) show images during the continuous interface changes while the terminal responds to the application.
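Step S105 could then be sketched as follows; scipy's median_filter stands in for the adaptive-template median filtering, and clamping the template size to at least 1 is our own guard rather than part of the described method.

```python
import numpy as np
from scipy.ndimage import median_filter

def find_terminal_response(var, start_min, start_max, frame1, frame3, steady_gap=2):
    """Returns frame2, the frame at which the terminal has just responded."""
    template = max((frame3 - frame1) % 10, 1)                 # adaptive template size from the text
    var_med = median_filter(var, size=template)               # remove speckle noise
    k = np.arange(len(var_med))
    mask = (var_med > start_min) & (var_med < start_max) & (k > frame1) & (k < frame3)
    position_start = np.where(mask)[0]
    diff_start = np.diff(position_start)
    steady_frames = position_start[np.where(diff_start <= steady_gap)[0]]
    return int(steady_frames.max())
```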
After the first image number, the second image number and the third image number have been obtained by the above steps, the response delay index is calculated.
It should be noted that in the above steps S104 and S105 the severity of the changes of the image sequence is obtained by computing the variance between the frames of the input image sequence. In practical applications, however, the severity of the image sequence changes can also be represented by other calculations, for example by differentiating the gray-value change curve of the image sequence at each of its turning points and obtaining the change of the gray values by derivation, thereby obtaining the change severity of each frame of the input image sequence, or by obtaining the change severity of each frame of the input image sequence by other calculation methods; the present invention is not limited to this.
In the above step S106, the terminal response time is calculated according to the first image number and the second image number. Specifically, the terminal response time is equal to the difference between the time corresponding to the second image number and the time corresponding to the first image number.
In the above step S107, the application response time is calculated according to the third image number and the second image number. Specifically, the application response time is equal to the difference between the time corresponding to the third image number and the time corresponding to the second image number.
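With a known capture frame rate, image numbers convert directly to times; the 1000 fps value in the sketch below is only an assumed example of a high-speed camera rate.

```python
def response_delays(frame1, frame2, frame3, fps=1000.0):
    """Returns (terminal response time, application response time) in seconds."""
    terminal_response_time = (frame2 - frame1) / fps       # step S106
    application_response_time = (frame3 - frame2) / fps    # step S107
    return terminal_response_time, application_response_time
```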
The present invention uses a high-speed acquisition camera to obtain the image sequence of the interaction with the intelligent terminal, detects the state of the intelligent terminal by analysing the collected image sequence, and uses simple and widely applicable target detection and variance-index calculations to automate the calculation of the response delay index in the intelligent terminal user visual perception experience. The present invention provides an effective tool for practical applications such as the formulation and testing of objective performance indices of intelligent terminal user experience, and has broad market prospects and application value.
A person of ordinary skill in the art can understand that all or part of the steps for implementing the method of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc.
The specific embodiments described above further explain the purpose, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit the protection scope of the present invention; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (11)

1. A calculation method for a visual perception response time delay index of an intelligent terminal user, characterised in that the calculation method comprises:
performing grayscale processing on an original image sequence to generate an input image sequence;
performing target detection in the input image sequence, and obtaining the coordinates of the target in the input images in which the target is detected;
calculating the extremum of the coordinates, and recording the sequence number of the input image corresponding to the extremum as a first image number;
screening groups of first subsequences from the input image sequence according to a first adaptive threshold, and sorting the groups of first subsequences to determine a third image number;
screening groups of second subsequences from the input image sequence according to a second adaptive threshold, and determining a second image number from the groups of second subsequences according to the first image number and the third image number;
calculating a terminal response time according to the first image number and the second image number; and
calculating an application response time according to the third image number and the second image number.
2. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 1, characterised in that performing grayscale processing on the original image sequence to generate the input image sequence specifically comprises:
performing grayscale processing on the original image sequence, and generating the gray-level histogram of the original image sequence;
judging, according to the gray-level histogram, whether the contrast of the grayscale-processed original image sequence is less than or equal to a first preset threshold;
if so, normalizing the grayscale-processed original image sequence to generate the input image sequence; otherwise, taking the grayscale-processed original image sequence as the input image sequence.
3. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 2, characterised in that the grayscale-processed original image sequence is normalized by the following formula:
I_i(x, y) = (I_i(x, y) - I_min) / (I_max - I_min) * step,
where x is the abscissa of the i-th frame of the input image sequence; y is the ordinate of the i-th frame of the input image sequence; I_i(x, y) is the gray value of the i-th frame of the input image sequence; I_min is the minimum gray value of the pixels in the i-th frame of the input image sequence; I_max is the maximum gray value of the pixels in the i-th frame of the input image sequence; and step is the stretching coefficient, 0 < step < 255.
4. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 1, characterised in that performing target detection in the input image sequence specifically comprises:
performing robotic-arm target detection on the input image sequence by template matching, and obtaining the input images in which the target is detected; or
performing finger target detection on the input image sequence by the Otsu threshold method, and obtaining the input images in which the target is detected.
5. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 1, characterised in that the step of determining the first adaptive threshold and the second adaptive threshold comprises:
calculating a difference image sequence from the i-th frame and the (i+1)-th frame of the input image sequence, further calculating the variance of the difference image sequence, and generating a variance sequence;
selecting a group of a preset number of variances from the head of the variance sequence to form a first group of variances, and selecting a group of the preset number of variances from the tail of the variance sequence to form a second group of variances;
setting the maximum of the first group of variances as a first sub-adaptive threshold, setting the minimum of the first group of variances as a second sub-adaptive threshold, and determining the first sub-adaptive threshold and the second sub-adaptive threshold as the first adaptive threshold;
setting the maximum of the second group of variances as a third sub-adaptive threshold, setting the minimum of the second group of variances as a fourth sub-adaptive threshold, and determining the third sub-adaptive threshold and the fourth sub-adaptive threshold as the second adaptive threshold.
6. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 5, characterised in that screening groups of first subsequences from the input image sequence according to the first adaptive threshold and sorting the groups of first subsequences to determine the third image number specifically comprises:
forming a first position array position_end from the sequence numbers of the variances in the variance sequence that are greater than the fourth sub-adaptive threshold and less than the third sub-adaptive threshold;
differencing the first position array position_end to generate a first difference result Diff_end(j), where Diff_end(j) = position_end(j+1) - position_end(j) and j is a position in the variance sequence;
adding 1 to each position of the first difference result Diff_end(j) whose value is greater than a second preset threshold, and recording it in a screen-refresh judgment array Pos;
when the screen-refresh judgment array Pos is empty, recording the sequence numbers for which the first difference result Diff_end(j) is less than a third preset threshold in a first tail array p_end;
determining the sequence number of the input image corresponding to the minimum of the first tail array p_end as the third image number.
7. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 6, characterised in that, when the screen-refresh judgment array Pos is not empty, the first maximum frame3start in the screen-refresh judgment array Pos is obtained;
the sequence numbers for which the first difference result Diff_end is less than the third preset threshold are recorded in a second tail array p'_end;
the sequence numbers in the second tail array p'_end that are greater than or equal to the sequence number corresponding to the first maximum frame3start are recorded in a third tail array p_end1;
the minimum of the third tail array p_end1 is determined as the third image number.
8. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 6 or 7, characterised in that the difference image sequence is calculated by the following formula:
D_i(x, y) = I_{i+1}(x, y) - I_i(x, y),
where x is the abscissa of the input image sequence; y is the ordinate of the input image sequence; I_i(x, y) is the gray value of the i-th frame of the input image sequence; and I_{i+1}(x, y) is the gray value of the (i+1)-th frame of the input image sequence; and
the variance of the difference image sequence is calculated by the following formula:
Var_i = (1/(n-1)) * Σ_{x=1..h} Σ_{y=1..w} (D_i(x, y) - D̄_i)²,
where n = h*w, h and w are respectively the height and width of the input image sequence, and D̄_i is the mean of D_i(x, y) over all pixels.
9. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 6 or 7, characterised in that screening groups of second subsequences from the input image sequence according to the second adaptive threshold and determining the second image number from the groups of second subsequences according to the first image number and the third image number specifically comprises:
median-filtering the variance sequence with an adaptive template to generate a filtered variance sequence, the size of the adaptive template being: template = (frame3 - frame1) % 10, where frame1 is the first image number and frame3 is the third image number;
obtaining the filtered variance sequence numbers of the variances in the filtered variance sequence that are greater than the second sub-adaptive threshold and less than the first sub-adaptive threshold, and further forming a second position array position_start from the filtered variance sequence numbers that are greater than the first image number and less than the third image number;
differencing the second position array position_start to generate a second difference result Diff_start(k), where Diff_start(k) = position_start(k+1) - position_start(k) and k is a position in the variance sequence;
recording the sequence numbers for which the second difference result Diff_start(k) is less than the third preset threshold in a head array p_start;
determining the sequence number of the input image corresponding to the maximum of the head array p_start as the second image number.
10. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 9, characterised in that calculating the terminal response time according to the first image number and the second image number specifically comprises:
the terminal response time is equal to the difference between the time corresponding to the second image number and the time corresponding to the first image number.
11. The calculation method for a visual perception response time delay index of an intelligent terminal user according to claim 10, characterised in that calculating the application response time according to the third image number and the second image number specifically comprises:
the application response time is equal to the difference between the time corresponding to the third image number and the time corresponding to the second image number.
CN201610223448.9A 2016-04-12 2016-04-12 The calculation method of intelligent terminal user visual perception response time delay index Active CN105913429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610223448.9A CN105913429B (en) 2016-04-12 2016-04-12 The calculation method of intelligent terminal user visual perception response time delay index

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610223448.9A CN105913429B (en) 2016-04-12 2016-04-12 The calculation method of intelligent terminal user visual perception response time delay index

Publications (2)

Publication Number Publication Date
CN105913429A true CN105913429A (en) 2016-08-31
CN105913429B CN105913429B (en) 2019-03-08

Family

ID=56745817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610223448.9A Active CN105913429B (en) 2016-04-12 2016-04-12 The calculation method of intelligent terminal user visual perception response time delay index

Country Status (1)

Country Link
CN (1) CN105913429B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107329883A (en) * 2017-06-19 2017-11-07 中国信息通信研究院 The automatic calculating method and system of intelligent terminal application program interaction response time delay
CN107449977A (en) * 2016-12-29 2017-12-08 福建奥通迈胜电力科技有限公司 A kind of split-second precision difference computational methods based on image comparison technology
CN111179408A (en) * 2018-11-12 2020-05-19 北京物语科技有限公司 Method and apparatus for three-dimensional modeling
CN111858318A (en) * 2020-06-30 2020-10-30 北京百度网讯科技有限公司 Response time testing method, device, equipment and computer storage medium
CN113334392A (en) * 2021-08-06 2021-09-03 成都博恩思医学机器人有限公司 Mechanical arm anti-collision method and device, robot and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090119731A1 (en) * 2002-12-10 2009-05-07 Onlive, Inc. System for acceleration of web page delivery
CN101668223A (en) * 2009-09-07 2010-03-10 航天恒星科技有限公司 Method for measuring image transmission time delay
CN103684889A (en) * 2012-08-29 2014-03-26 云联(北京)信息技术有限公司 Cloud computing-based speed measurement method applied to user terminal
CN104679512A (en) * 2015-02-12 2015-06-03 腾讯科技(深圳)有限公司 Method and device for acquiring window program response time
CN104965773A (en) * 2015-07-09 2015-10-07 网易(杭州)网络有限公司 Terminal, jamming detection method, device as well as game jamming detection method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090119731A1 (en) * 2002-12-10 2009-05-07 Onlive, Inc. System for acceleration of web page delivery
CN101668223A (en) * 2009-09-07 2010-03-10 航天恒星科技有限公司 Method for measuring image transmission time delay
CN103684889A (en) * 2012-08-29 2014-03-26 云联(北京)信息技术有限公司 Cloud computing-based speed measurement method applied to user terminal
CN104679512A (en) * 2015-02-12 2015-06-03 腾讯科技(深圳)有限公司 Method and device for acquiring window program response time
CN104965773A (en) * 2015-07-09 2015-10-07 网易(杭州)网络有限公司 Terminal, jamming detection method, device as well as game jamming detection method and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107449977A (en) * 2016-12-29 2017-12-08 福建奥通迈胜电力科技有限公司 A kind of split-second precision difference computational methods based on image comparison technology
CN107329883A (en) * 2017-06-19 2017-11-07 中国信息通信研究院 The automatic calculating method and system of intelligent terminal application program interaction response time delay
CN111179408A (en) * 2018-11-12 2020-05-19 北京物语科技有限公司 Method and apparatus for three-dimensional modeling
CN111179408B (en) * 2018-11-12 2024-04-12 北京物语科技有限公司 Three-dimensional modeling method and equipment
CN111858318A (en) * 2020-06-30 2020-10-30 北京百度网讯科技有限公司 Response time testing method, device, equipment and computer storage medium
CN111858318B (en) * 2020-06-30 2024-04-02 北京百度网讯科技有限公司 Response time testing method, device, equipment and computer storage medium
CN113334392A (en) * 2021-08-06 2021-09-03 成都博恩思医学机器人有限公司 Mechanical arm anti-collision method and device, robot and storage medium

Also Published As

Publication number Publication date
CN105913429B (en) 2019-03-08

Similar Documents

Publication Publication Date Title
CN105913429A (en) Calculating method for visual perception response time delay index of intelligent terminal user
CN109242135B (en) Model operation method, device and business server
CN102955902B (en) Method and system for evaluating reliability of radar simulation equipment
WO2020143377A1 (en) Industry recognition model determination method and apparatus
CN106846362A (en) A kind of target detection tracking method and device
US8811750B2 (en) Apparatus and method for extracting edge in image
CN113379680B (en) Defect detection method, defect detection device, electronic device and computer readable storage medium
CN104978578A (en) Mobile phone photo taking text image quality evaluation method
CN106127775A (en) Measurement for Digital Image Definition and device
CN110633222A (en) Method and device for determining regression test case
CN110059700A (en) The recognition methods of image moire fringes, device, computer equipment and storage medium
CN104867225A (en) Banknote face orientation identification method and apparatus
CN110288599A (en) A kind of dead pixel detection method, device, electronic equipment and storage medium
CN103093458A (en) Detecting method and detecting device for key frame
CN105225523A (en) A kind of parking space state detection method and device
CN113887126A (en) Welding spot quality analysis method and device, terminal equipment and medium
CN102713974B (en) Learning device, recognition device, study recognition system and study recognition device
CN108062821B (en) Edge detection method and currency detection equipment
CN112183469B (en) Method for identifying congestion degree of public transportation and self-adaptive adjustment
CN116051185B (en) Advertisement position data abnormality detection and screening method
CN113850523A (en) ESG index determining method based on data completion and related product
CN113989632A (en) Bridge detection method and device for remote sensing image, electronic equipment and storage medium
CN112800923A (en) Human body image quality detection method and device, electronic equipment and storage medium
CN109945075A (en) A kind of water supply line leakiness detection method and device
CN109859162B (en) Automatic testing method for periodic stripes of industrial camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211230

Address after: 100191 No. 40, Haidian District, Beijing, Xueyuan Road

Patentee after: CHINA ACADEMY OF INFORMATION AND COMMUNICATIONS

Address before: 100191 No. 52 Garden North Road, Beijing, Haidian District

Patentee before: CHINA ACADEME OF TELECOMMUNICATION RESEARCH OF MIIT