CN117762752A - Terminal performance detection method and device, electronic equipment and storage medium


Info

Publication number
CN117762752A
Authority
CN
China
Prior art keywords
frame
video
video frame
touch operation
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311666277.3A
Other languages
Chinese (zh)
Inventor
魏卓 (Wei Zhuo)
李川 (Li Chuan)
王正一 (Wang Zhengyi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd
Priority to CN202311666277.3A
Publication of CN117762752A
Legal status: Pending


Abstract

The invention relates to a terminal performance detection method and device, an electronic device and a storage medium. The method comprises: acquiring a video picture obtained by shooting the display screen of a target terminal; acquiring a first video frame and a second video frame from the video picture, wherein the first video frame is the video frame corresponding to the moment the display screen receives a touch operation, and the second video frame is the video frame corresponding to the moment the picture content in the video picture changes in response to the touch operation; and determining the terminal performance of the target terminal based on the first video frame and the second video frame. The target terminal can thus be detected without professional detection software, which greatly improves detection efficiency.

Description

Terminal performance detection method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method and device for detecting terminal performance, an electronic device and a storage medium.
Background
With the continuous development of automotive technology, the in-vehicle system has gradually evolved from a simple radio into an intelligent terminal integrating navigation, vehicle control, video entertainment and other functions.
In the related art, evaluating such an in-vehicle terminal requires the involvement of multiple parties, so detection efficiency is low.
Disclosure of Invention
The disclosure provides a terminal performance detection method, a terminal performance detection device, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a terminal performance detection method, including:
acquiring a video picture obtained by shooting the display screen of a target terminal;
acquiring a first video frame and a second video frame in the video picture; the first video frame is the video frame corresponding to the moment the display screen receives a touch operation; the second video frame is the video frame corresponding to the moment the picture content in the video picture changes in response to the touch operation;
and determining the terminal performance of the target terminal based on the first video frame and the second video frame.
According to another aspect of the present disclosure, there is provided a terminal performance detection apparatus including:
the video picture acquisition module is used for acquiring a video picture obtained by shooting the display screen of a target terminal;
the video frame acquisition module is used for acquiring a first video frame and a second video frame in the video picture; the first video frame is the video frame corresponding to the moment the display screen receives a touch operation; the second video frame is the video frame corresponding to the moment the picture content in the video picture changes in response to the touch operation;
And the terminal performance determining module is used for determining the terminal performance of the target terminal based on the first video frame and the second video frame.
According to a third aspect of the present disclosure, an electronic device is provided. The electronic device includes: a memory and a processor, the memory having stored thereon a computer program, the processor implementing the method as described above when executing the program.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the above-described method of the present disclosure.
According to the terminal performance detection method and device, the electronic equipment and the storage medium, a video picture obtained by shooting the display screen of a target terminal is acquired; a first video frame corresponding to the moment the display screen receives a touch operation and a second video frame corresponding to the moment the picture changes in response to the touch operation are obtained; and the terminal performance of the target terminal is determined based on the first video frame and the second video frame. The target terminal can thus be detected without professional detection software, which greatly improves detection efficiency.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 is a schematic view of a scenario for performance detection of a car machine according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic illustration of a car machine with an infrared correlation frame according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flowchart of a terminal performance detection method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic block diagram of functional modules of a terminal performance detection apparatus according to an exemplary embodiment of the present disclosure;
FIG. 5 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram of a computer system according to an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration only and are not intended to limit its scope of protection.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below. It should be noted that the terms "first", "second" and the like in this disclosure are merely used to distinguish between different devices, modules or units and are not used to define an order or interdependence of the functions performed by them.
It should be noted that the modifications "a" and "a plurality of" mentioned in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, scope of use, usage scenarios, etc. of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the requested operation will require obtaining and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server or storage medium, that executes the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by way of, for example, a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control for the user to choose "agree" or "disagree" to providing personal information to the electronic device. It will be appreciated that the above notification and user-authorization process is merely illustrative and does not limit the implementations of the present disclosure; other ways of satisfying the relevant laws and regulations may also be applied to implementations of the present disclosure.
In order to improve the efficiency of terminal performance detection, embodiments of the present disclosure capture images of a terminal's display screen to obtain a video picture of that display screen. For example, a high-frame-rate camera shoots the display screen of the terminal to obtain a video picture containing it. The terminal may be an electronic product such as a car machine (an in-vehicle infotainment terminal), a mobile phone or a tablet computer; the embodiment is not limited thereto. For convenience of explanation, the car machine is taken as the example terminal in the embodiments, but the embodiments are not limited to this.
As shown in fig. 1, fig. 1 is a schematic view of a scenario for performance detection of a car machine according to an exemplary embodiment of the present disclosure. A camera 12, which may be a high-frame-rate camera, may be mounted directly in front of the car machine 11.
In an embodiment, the user may perform a touch operation on the car machine 11 with a finger, such as a clicking, zooming or sliding operation. The camera 12 acquires a video picture of the user operating the car machine 11 by capturing images that include the display screen of the car machine 11. The performance of the car machine 11 can then be obtained by processing, such as recognition, of the acquired video picture.
In the embodiment, by detecting motion in the video picture, when the user's hand is detected making a triggering motion on the display screen of the car machine 11, it can be confirmed that the car machine 11 has received the user's triggering operation.
When the user's hand is recognized as triggering the display screen, timing starts, corresponding to, say, time 1; when a change in the screen content of the display screen of the car machine 11 is detected, timing stops, corresponding to time 2. The response time delay of the car machine 11, that is, the duration between the moment the car machine 11 receives the user's operation and the moment it reacts, can then be obtained as the duration between time 1 and time 2. Alternatively, the frame number of the video frame at which the hand's triggering operation on the display screen of the car machine 11 is detected, say frame number m, and the frame number of the video frame at which the content change in the display screen is detected, say frame number n, may be acquired. The inter-frame interval is obtained from the frame rate of the video picture, so the duration between the moment the car machine 11 receives the trigger operation and the moment it responds, i.e. the response time delay, can be obtained based on the inter-frame interval, frame number m and frame number n. Here m and n are positive integers, and n is greater than m.
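As a minimal sketch of this computation in Python (the function name and signature are illustrative, not given in the patent):

def response_delay_ms(m: int, n: int, fps: float) -> float:
    """Delay between the touch frame m and the first changed frame n."""
    assert 0 < m < n, "frame n (screen change) must follow frame m (touch)"
    frame_interval_ms = 1000.0 / fps  # inter-frame interval from the frame rate
    return (n - m) * frame_interval_ms

# Example: touch seen at frame 120, first change at frame 138, 60 fps capture
print(response_delay_ms(120, 138, 60.0))  # -> 300.0 ms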
Such an embodiment can evaluate the performance of the car machine 11 through the above response time delay. For example, the response time delay may be divided in advance into a plurality of grade intervals, say grade 1, grade 2 and grade 3, and the performance of the car machine 11 determined from the grade interval into which its response time delay falls.
In the embodiment, the response time delays of the car machine 11 under different operations can further be distinguished, for example the response time delay corresponding to a clicking operation, a sliding operation or a zooming operation, so that the response time delay of the car machine 11 under each operation type is determined separately, and the performance of the car machine 11 is then evaluated comprehensively based on these response time delays.
The response time delay in the above embodiment may specifically be the power-on or power-off time delay of the car machine 11, for example how long the car machine takes to react after a power-on or power-off operation is performed on it. It may also be the response delay of starting a certain app (application program) on the car machine 11, for example the delay obtained by clicking an app on the display screen of the car machine 11 to start it. Alternatively, it may be the response time delay of the display content on the display screen of the car machine 11 when a related operation is performed, for example when a map on the display screen is operated.
In an embodiment, for the related operations above, such as sliding or zooming, when two adjacent frames within the operation are identical, it is determined that a stuck frame appears and the frame is counted as such. The number of stuck frames and the total number of frames in a unit time are obtained, and the stuck ratio of the car machine during the operation is calculated by dividing the number of stuck frames by the total number of frames; the size of the stuck ratio can serve as an evaluation index of the performance of the car machine 11. In the embodiment, the stuck ratio and the response time delay can be used together, as two different dimensions, to characterize the performance of the car machine 11, or each can be used to evaluate it separately. This avoids the low detection efficiency caused by evaluating performance through dedicated evaluation software, and thus improves detection efficiency.
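A minimal sketch of the stuck-ratio computation in Python, assuming grayscale frames and treating a frame as stuck when it is pixel-identical to its predecessor (the exact identity test is our assumption):

import numpy as np

def stuck_ratio(frames: list) -> float:
    """Fraction of frames identical to the previous frame (counted as stuck)."""
    if len(frames) < 2:
        return 0.0
    stuck = sum(np.array_equal(prev, cur)   # stuck: no change between neighbours
                for prev, cur in zip(frames, frames[1:]))
    return stuck / len(frames)              # stuck frames / total frames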
In the embodiment, besides recognizing the user's hand in the video picture to determine the corresponding operation type, such as a clicking, sliding or zooming operation, an infrared correlation frame 13 may be disposed on the car machine 11, and the type of the hand's operation on the display screen of the car machine 11 detected through the infrared correlation frame. For example, the infrared correlation frame may be provided on the edge sides of the display screen of the car machine 11, such as the upper and lower sides or the left and right sides.
In the embodiment, an infrared correlation frame 13 of matching size can be mounted close to the surface of the car machine 11, so that the surface of the car machine is covered by infrared light. The infrared correlation frame 13 is a symmetrical structure comprising infrared emitting tubes on one side and infrared receiving tubes on the other: invisible infrared light is continuously emitted from the emitting tubes and received by the horizontally symmetrical receiving tubes. When a finger operates the car machine, its opacity blocks part of the coverage area of the infrared correlation frame, so a signal generated by the frame is transmitted back to the host computer system, indicating that a related operation is currently being performed on the car machine. Meanwhile, operation actions can be distinguished by the number of blocked beams and the change in the blocked positions, and different signals are returned, yielding touch operation types such as clicking, sliding or two-finger zooming. After receiving the different finger operation signals, the host computer can indicate them by lighting different numbers of LED (light-emitting diode) lamps.
Therefore, by detecting which LED lamps are lit in the video frame, it can be determined whether the car machine 11 has received the user's touch operation on the display screen, and further, from the detected number of lit LED lamps, the type of the user's touch operation, such as clicking, sliding or two-finger zooming, can be determined.
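For example, the correspondence between the number of lit LED lamps and the touch operation type can be stored as a simple lookup table; the concrete assignment below is a hypothetical Python sketch, since the patent only states that different operation types light different numbers of lamps:

# Hypothetical LED-count -> operation-type table; the specific assignment is
# an assumption, not fixed by the patent text.
LED_COUNT_TO_OPERATION = {
    1: "click",
    2: "slide",
    3: "two-finger zoom",
}

def touch_operation_type(lit_led_count: int) -> str:
    return LED_COUNT_TO_OPERATION.get(lit_led_count, "unknown")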
In an embodiment, the high-frame-rate camera 12 may be installed facing the car machine 11 and record the operation process at a shooting frame rate of not less than 60 frames/second, so as to obtain a video picture including the display screen of the car machine 11. The video picture may capture the finger's operation, the lighting of the corresponding number of LED lamps, and the change in the picture content of the display screen of the car machine 11.
In the embodiment provided in the present disclosure, in order to identify whether the picture content in the above video picture changes, and thus determine whether the car machine 11 has responded to the user's operation, the video picture may be preprocessed. Specifically, the video shot by the high-frame-rate camera is extracted frame by frame, and, to reduce the amount of data processed, each frame is converted from the RGB color space to the grayscale color space. To ensure a uniform number of pixels for subsequent inter-frame comparison, the resolution of each frame can further be uniformly scaled and fixed at a specific resolution, such as 1920×1080.
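A minimal OpenCV sketch of this preprocessing, assuming the footage is read from a file (the function and its defaults are illustrative):

import cv2

def preprocess(video_path: str, size=(1920, 1080)):
    """Extract frames, convert each to grayscale, and fix the resolution."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # color -> gray cuts data volume
        frames.append(cv2.resize(gray, size))           # uniform resolution for comparison
    cap.release()
    return frames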
In the embodiment, when the user performs a related operation on the car machine 11, for example a clicking operation, a slight vibration may be generated, so the shot video frames may shake. For example, in a captured video frame, the rectangular display screen of the terminal may appear trapezoidal because of frame shake. In this case, an affine transformation can be applied to the frame according to the deviation of the inter-frame feature points, so that the screen in the video frame is uniformly restored to a rectangle, suppressing the shake. Finally, the region of the car machine in the picture is marked as a region of interest (ROI), and subsequent analysis is focused on that region.
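A sketch of the shake suppression with OpenCV, assuming feature tracking is used to estimate the inter-frame offsets; the detector, tracker and estimator below are our choices, not specified in the patent:

import cv2
import numpy as np

def deshake(prev_gray: np.ndarray, cur_gray: np.ndarray) -> np.ndarray:
    """Warp cur_gray back onto prev_gray using feature-point offsets."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  pts_prev, None)
    good = status.flatten() == 1
    # Affine transform implied by the deviation of the tracked feature points
    matrix, _ = cv2.estimateAffinePartial2D(pts_cur[good], pts_prev[good])
    h, w = cur_gray.shape
    return cv2.warpAffine(cur_gray, matrix, (w, h))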
Based on the above embodiment, in the process of detecting the LED lamps, whether LEDs exist in the picture may be detected frame by frame through an LED detection model trained on a neural network. The model effectively resists the influence of changing ambient light on LED detection. The detection also covers whether the LED lamps are lit and how many are lit. When at least one LED lamp is lit, the car machine is being operated.
The training samples can comprise a plurality of images, each of which can contain an LED lamp together with a label marking whether the LED lamp is lit. A preset neural network model is trained on the training samples, and training stops when the stopping condition is met, yielding the LED detection model. The LED detection model can then detect the lighting state of the LED lamps in an image.
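The patent does not disclose the network architecture; purely as an assumption, a minimal lit/unlit classifier over cropped LED-lamp patches could be sketched in PyTorch as follows:

import torch
import torch.nn as nn

class LedClassifier(nn.Module):
    """Sketch of a lit/unlit classifier for cropped LED-lamp patches;
    the architecture is assumed, not taken from the patent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # two classes: lit / unlit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)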
In an embodiment, when the user operates the display screen of the car machine 11, in order to avoid the influence of non-target objects on the video picture, the target image area in each video frame may be segmented. For example, the non-target object may be a hand, clothing or an ornament, while the target object is the display content of the display screen. Taking the hand as the non-target object, the hand area in each video frame may be segmented out; the embodiment is not limited thereto. The video picture can be processed by a hand recognition and segmentation model trained on a neural network. The model handles situations that may occur in recorded car-machine evaluation footage, such as a partially visible palm or occlusion by clothing. It also effectively resists the influence of illumination changes during shooting and distinguishes a human-like image shown inside the car machine's picture from a real human hand. While completing hand recognition, the model segments the hand region at pixel level along its contour. By segmenting the hands in adjacent frames and filling the regions with black, the hands are erased from both frames simultaneously, so that the subsequent judgment of picture changes on the car machine is not affected.
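Given a segmentation mask from such a model, erasing the hand reduces to filling the masked pixels with black, for example (names illustrative):

import numpy as np

def erase_hand(frame: np.ndarray, hand_mask: np.ndarray) -> np.ndarray:
    """Black out the pixels the segmentation model marked as hand."""
    out = frame.copy()
    out[hand_mask > 0] = 0   # fill the segmented hand region with black
    return out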
In the embodiment, adjacent frames are compared pixel by pixel at the same coordinates to detect whether the picture has changed noticeably between frames. Meanwhile, in order to further reduce the unavoidable noise in the shot picture and better amplify the differences between pixels, dilation, erosion and binarization are applied to the pixel-wise comparison result, eliminating comparison noise and highlighting the differing pixels. Combining the sensitivity of the human eye to pixel changes with the fact that a picture change on the car machine necessarily occupies a pixel area of a certain size, the ratio of differing pixels to the whole picture is calculated; when the ratio exceeds a certain threshold, it can be judged that the picture has changed between the frames.
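A minimal sketch of this inter-frame comparison with OpenCV; the two thresholds are placeholders to be tuned, not values from the patent:

import cv2
import numpy as np

def frame_changed(prev: np.ndarray, cur: np.ndarray,
                  pixel_threshold: int = 25,
                  ratio_threshold: float = 0.005) -> bool:
    """Decide whether the picture changed between two aligned grayscale frames."""
    diff = cv2.absdiff(prev, cur)
    _, binary = cv2.threshold(diff, pixel_threshold, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.erode(binary, kernel)   # erosion suppresses isolated noise
    binary = cv2.dilate(binary, kernel)  # dilation re-amplifies real differences
    changed = np.count_nonzero(binary) / binary.size
    return changed > ratio_threshold     # changed pixels must cover a minimum area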
Based on the above embodiments, the embodiments of the present disclosure provide a terminal performance detection method, as shown in fig. 3, which may include the following steps:
in step S310, a video picture obtained by shooting the display screen of a target terminal is acquired.
In the embodiment, the target terminal may be a terminal such as a mobile phone, a tablet computer or a car machine; the description below takes the car machine of the above embodiments as the specific example, but the embodiment is not limited thereto.
The high-frame-rate camera is arranged directly in front of the car machine, and when the car machine receives a touch operation, it is shot by the camera, so that a video picture containing the display screen of the car machine is obtained.
In step S320, a first video frame and a second video frame in a video picture are acquired.
In step S330, the terminal performance of the target terminal is determined based on the first video frame and the second video frame.
The first video frame is the video frame corresponding to the moment the display screen receives the touch operation, and the second video frame is the video frame corresponding to the moment the picture content in the video picture changes in response to the touch operation.
In an embodiment, when the touch operation on the target terminal is detected, the first video frame is acquired; its frame number m may be recorded, or timing may be started to obtain time 1. When a change in the picture content of the video picture is detected, the frame number n of the corresponding second video frame can be obtained, or timing stopped, yielding time 2.
In this way, the response time delay of the target terminal can be determined from the duration between time 1 and time 2. Alternatively, the frame rate of the video picture is acquired, and the inter-frame interval is determined based on the frame rate; the difference between the frame numbers of the first video frame and the second video frame is acquired, the response time delay of the target terminal is determined based on that difference and the inter-frame interval, and the terminal performance of the target terminal is then determined based on the response time delay. The performance of the target terminal is thus obtained through the measured response time delay.
In addition, the stuck ratio of the car machine during the operation can be determined by acquiring the stuck frames during the operation, their number and the total number of frames, so as to determine the terminal performance of the target terminal. For example, the total number of video frames between the first video frame and the second video frame can be acquired, together with the number of stuck frames contained in that set of video frames. The stuck ratio of the display screen is then obtained from the total number of video frames and the number of stuck frames, and the stuck degree of the target terminal is determined based on the stuck ratio. Reference may be made to the above embodiments; details are not repeated here.
According to the terminal performance detection method provided by the embodiments of the present disclosure, a video picture obtained by shooting the display screen of a target terminal is acquired; a first video frame corresponding to the moment the display screen receives a touch operation and a second video frame corresponding to the moment the picture changes in response to the touch operation are obtained; and the terminal performance of the target terminal is determined based on the first video frame and the second video frame. The target terminal can thus be detected without professional detection software, which greatly improves detection efficiency.
Based on the above embodiment, in still another embodiment provided in the present disclosure, in order to determine the terminal performance of the target terminal, the step S330 may specifically further include the following steps:
In step S331, a touch operation type corresponding to the touch operation is obtained, and a terminal performance corresponding to the target terminal under the touch operation type is determined.
In an embodiment, the touch operation type corresponding to the touch operation may further be detected, and the terminal performance corresponding to that touch operation type determined. For example, the target terminal has a corresponding terminal performance under a click operation, a slide operation and a zoom operation, respectively.
In an embodiment, the corresponding operation type may be identified from the operation action of the user's hand in the video picture. Alternatively, with reference to fig. 2, an infrared correlation frame and a plurality of indicator lamps may be disposed in the edge area of the display screen of the target terminal; when a touch operation on the display screen is detected, the infrared correlation frame lights a number of indicator lamps corresponding to the touch operation type. The identification of the operation type can thereby be converted into counting the lit indicator lamps in the video picture, different touch operation types lighting different numbers of indicator lamps; the indicator lamps may specifically be the LED lamps.
The video picture thus contains the display area of the indicator lamps, and the corresponding touch operation type is determined from the number of lit indicator lamps in that area. When indicator lamps in the lit state are present in the display area, their number can be obtained, and the touch operation type corresponding to the touch operation determined based on that number. Establishing the correspondence between touch operation types and numbers of lit indicator lamps in advance makes the touch operation type easy to detect.
In the embodiment, when determining the terminal performance of the target terminal under a touch operation type, the response time delay under that operation type can be obtained in combination with the detected lighting of the indicator lamps, so as to determine the terminal performance of the target terminal. For example, when an indicator lamp is detected to light up, timing starts, yielding a first moment; when a change in the picture content of the video picture is detected, timing stops, yielding a second moment; the response time delay of the target terminal under the touch operation type is determined based on the first moment and the second moment, and the terminal performance of the target terminal is determined based on the response time delay.
In an embodiment, when the touch operation is a click operation, the operation object of the click operation in the display screen may be acquired, the response time delay of the operation object under the click operation obtained, and that response time delay taken as the response time delay of the target terminal.
For example, the response delay of the operation object may be acquired and taken as the response delay of the target terminal: by clicking an app on the target terminal, the duration the app needs to start is acquired, which is the app-start response time delay. Similarly, the power-off or power-on delay of the target terminal may be obtained, or the response time delay of operating related content on its display screen, for example a zoom operation on a map. The specific response time delay obtained serves as the response time delay of the target terminal, through which the terminal performance is determined.
In an embodiment, since the target terminal is usually fixed in place by means of a stand or the like, when the user performs a touch operation on it with a finger, the force applied may cause the target terminal to shake slightly, so the video picture shot by the camera may shake; the shaking frames in the video picture therefore need to be processed. Upon detecting a shaking frame in the video picture, target feature points in the video picture are acquired, offset information of the target feature points is obtained, and the shaking frame is transformed based on the offset information. For example, by extracting target feature points in the video frames and acquiring their offsets, an affine transformation is applied to the video frames to eliminate the shake in the video picture.
In an embodiment, in order to accurately detect changes in the picture content of the video picture after a touch operation is identified in it, the method may further include the following steps:
in step S340, frame-by-frame extraction is performed on the video picture, and the video picture is converted into a binarized image having a fixed resolution frame by frame.
In step S350, the binarized images corresponding to the respective frames are compared to obtain the similarity between adjacent frames, and the second video frame corresponding to the moment the picture content in the video picture changes is determined based on the similarity.
In the embodiment, by frame-by-frame extraction of the video picture, each frame is converted from a color image into a grayscale image and then into a binarized image. By comparing the similarity between the binarized images corresponding to adjacent frames, it can be determined whether the picture content has changed between them: for example, when the similarity is smaller than a threshold, the picture content between the adjacent frames is judged to have changed; otherwise it is judged unchanged.
In an embodiment, the pixel difference between two adjacent video frames may be obtained, for example by subtracting the pixels of the subsequent frame from those of the previous frame; the difference is denoised and binarized to obtain the difference features between the two frames. Whether the picture content has changed between the adjacent frames is then determined, for example, from the proportion of the difference features in the whole video frame.
In the embodiment, noise in the video picture can further be removed through erosion, dilation and the like, so that the differences between adjacent frames can be compared more accurately.
In the embodiment provided by the disclosure, in order to obtain the similarity between adjacent frames more accurately, image segmentation may be performed on the video frames in the video picture to obtain the target image area containing a non-target object. Color filling is applied to the target image area, and the similarity between adjacent frames is acquired after the color filling.
For example, the region of the hand in the video picture is segmented out and filled with black; when comparing the picture content of two adjacent frames, the hand region (the black region) can then be ignored and excluded from the comparison, eliminating the influence of the hand on the video picture.
Where each functional module is divided according to its corresponding function, the embodiment of the present disclosure provides a terminal performance detection device, which may be a server, a terminal, or a chip applied in a server. Fig. 4 is a schematic block diagram of the functional modules of a terminal performance detection device according to an exemplary embodiment of the present disclosure. As shown in fig. 4, the terminal performance detection device includes:
A video picture obtaining module 10, configured to obtain a video picture obtained by capturing a display screen including a target terminal;
a video frame acquisition module 20, configured to acquire a first video frame and a second video frame in the video picture; the first video frame is the video frame corresponding to the moment the display screen receives a touch operation; the second video frame is the video frame corresponding to the moment the picture content in the video picture changes in response to the touch operation;
a terminal capability determining module 30, configured to determine a terminal capability of the target terminal based on the first video frame and the second video frame.
In yet another embodiment provided in the present disclosure, the terminal performance determining module is specifically configured to:
and acquiring a touch operation type corresponding to the touch operation, and determining the terminal performance corresponding to the target terminal under the touch operation type.
In yet another embodiment provided by the present disclosure, the video picture comprises a display area of indicator lamps; the terminal performance determining module is specifically configured to:
when indicator lamps in the lit state are detected in the display area, acquire the number of indicator lamps in the lit state;
And determine the touch operation type corresponding to the touch operation based on the number.
In still another embodiment provided by the present disclosure, an infrared correlation frame and a plurality of indicator lamps are disposed in the edge area of the display screen, and the infrared correlation frame is configured to light a corresponding number of indicator lamps according to the touch operation type when a touch operation on the display screen is detected.
In yet another embodiment provided by the present disclosure, the apparatus further comprises:
the feature point acquisition module is used for acquiring target feature points in the video picture when a shaking frame is detected in the video picture;
And the transformation processing module is used for acquiring offset information of the target feature points and transforming the shaking frame based on the offset information.
In yet another embodiment provided by the present disclosure, the apparatus further comprises:
the image processing module is used for extracting the video picture frame by frame and converting it frame by frame into a binarized image with a fixed resolution;
And the video frame determining module is used for comparing the binarized images corresponding to the respective frames to obtain the similarity between adjacent frames, and determining, based on the similarity, the second video frame corresponding to the moment the picture content in the video picture changes.
In yet another embodiment provided by the present disclosure, the video frame determination module is specifically further configured to:
performing image segmentation processing on the video frames in the video picture to obtain a target image area containing a non-target object;
and performing color filling processing on the target image area, and acquiring the similarity between adjacent frames after the color filling processing.
In yet another embodiment provided in the present disclosure, the terminal performance determining module is specifically further configured to:
acquiring the total number of video frames between the first video frame and the second video frame, and acquiring the number of stuck frames contained in that set of video frames;
obtaining a stuck ratio based on the total number of video frames and the number of stuck frames;
And determining the stuck degree of the target terminal based on the stuck ratio, and determining the terminal performance of the target terminal based on the stuck degree.
In yet another embodiment provided in the present disclosure, the terminal performance determining module is specifically further configured to:
acquiring the frame rate of the video picture, and determining an inter-frame interval based on the frame rate;
acquiring a difference value of corresponding frame numbers between the first video frame and the second video frame;
And determining response time delay of the target terminal based on the difference value and the inter-frame interval, and determining terminal performance of the target terminal based on the response time delay.
In yet another embodiment provided in the present disclosure, the terminal performance determining module is specifically further configured to:
when the indication lamp is detected to be on, starting timing to obtain a first moment;
stopping timing when detecting that the picture content in the video picture changes, and obtaining a second moment;
and determining response time delay corresponding to the target terminal under the touch operation type based on the first time and the second time, and determining terminal performance of the target terminal based on the response time delay.
In yet another embodiment provided in the present disclosure, the terminal performance determining module is specifically further configured to:
when the touch operation is a click operation, acquiring an operation object of the click operation in the display screen;
and acquiring the response time delay of the operation object under the clicking operation, and taking the response time delay of the operation object as the response time delay of the target terminal.
For the device portion, which corresponds to the above method embodiments, refer to the description of those embodiments; details are not repeated here.
According to the terminal performance detection device provided by the embodiments of the present disclosure, a video picture obtained by shooting the display screen of a target terminal is acquired; a first video frame corresponding to the moment the display screen receives a touch operation and a second video frame corresponding to the moment the picture changes in response to the touch operation are obtained; and the terminal performance of the target terminal is determined based on the first video frame and the second video frame. The target terminal can thus be detected without professional detection software, which greatly improves detection efficiency.
The embodiment of the disclosure also provides an electronic device, including: at least one processor; a memory for storing the at least one processor-executable instruction; wherein the at least one processor is configured to execute the instructions to implement the above-described methods disclosed by embodiments of the present disclosure.
Fig. 5 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in fig. 5, the electronic device 1800 includes at least one processor 1801 and a memory 1802 coupled to the processor 1801, the processor 1801 may perform corresponding steps in the above-described methods disclosed by embodiments of the present disclosure.
The processor 1801 may also be referred to as a central processing unit (CPU), and may be an integrated circuit chip with signal processing capabilities. The steps of the above-described methods disclosed in the embodiments of the present disclosure may be accomplished by integrated logic circuits of hardware in the processor 1801 or by instructions in the form of software. The processor 1801 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the embodiments of the present disclosure may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software modules may reside in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers, located in the memory 1802. The processor 1801 reads the information in the memory 1802 and, in combination with its hardware, performs the steps of the method described above.
In addition, when various operations/processes according to the present disclosure are implemented by software and/or firmware, a program constituting that software may be installed from a storage medium or a network onto a computer system having a dedicated hardware structure, for example the computer system 1900 shown in fig. 6, which is capable of performing various functions, including the functions described above, when the various programs are installed. Fig. 6 is a block diagram of a computer system according to an exemplary embodiment of the present disclosure.
Computer system 1900 is intended to represent various forms of digital electronic computing devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the computer system 1900 includes a computing unit 1901, and the computing unit 1901 may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1902 or a computer program loaded from a storage unit 1908 into a Random Access Memory (RAM) 1903. In the RAM 1903, various programs and data required for the operation of the computer system 1900 may also be stored. The computing unit 1901, ROM 1902, and RAM 1903 are connected to each other via a bus 1904. An input/output (I/O) interface 1905 is also connected to bus 1904.
Various components in computer system 1900 are connected to I/O interface 1905, including: an input unit 1906, an output unit 1907, a storage unit 1908, and a communication unit 1909. The input unit 1906 may be any type of device capable of inputting information to the computer system 1900; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device. The output unit 1907 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 1908 may include, but is not limited to, magnetic disks and optical disks. The communication unit 1909 allows the computer system 1900 to exchange information/data with other devices over a network, such as the Internet, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth(TM) devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 1901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1901 performs the various methods and processes described above. For example, in some embodiments, the above-described methods disclosed by embodiments of the present disclosure may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1908. In some embodiments, some or all of the computer programs may be loaded and/or installed onto the electronic device via ROM 1902 and/or communication unit 1909. In some embodiments, the computing unit 1901 may be configured to perform the above-described methods of the disclosed embodiments by any other suitable means (e.g., by means of firmware).
The disclosed embodiments also provide a computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the above-described method disclosed by the disclosed embodiments.
A computer readable storage medium in embodiments of the present disclosure may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium described above can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specifically, the computer-readable storage medium described above may include one or more wire-based electrical connections, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The disclosed embodiments also provide a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-described methods of the disclosed embodiments.
In an embodiment of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of remote computers, the remote computers may be connected to the user computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to external computers.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules, components or units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a module, component or unit does not, in some cases, constitute a limitation of the module, component or unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The above description is merely an illustration of some embodiments of the present disclosure and of the principles of the technology applied. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (14)

1. A method for detecting terminal performance, the method comprising:
acquiring a video picture obtained by shooting the display screen of a target terminal;
acquiring a first video frame and a second video frame in the video picture; the first video frame comprises the video frame corresponding to the moment the display screen receives a touch operation; the second video frame comprises the video frame corresponding to the moment the picture content in the video picture changes in response to the touch operation;
and determining the terminal performance of the target terminal based on the first video frame and the second video frame.
2. The method of claim 1, wherein determining the terminal performance of the target terminal comprises:
acquiring a touch operation type corresponding to the touch operation, and determining the terminal performance of the target terminal under the touch operation type.
3. The method of claim 2, wherein the video picture comprises a display area of indicator lights;
and acquiring the touch operation type corresponding to the touch operation comprises:
when indicator lights in a lit state are detected in the display area, acquiring the number of indicator lights in the lit state; and
determining the touch operation type corresponding to the touch operation based on the number.
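Purely as an illustration (the claim itself specifies no implementation), the indicator-light counting of claim 3 might look like the sketch below, using OpenCV; the region of interest, brightness threshold, minimum blob area, and the count-to-type mapping are all assumptions of this sketch, not values from the patent:

```python
import cv2

# Hypothetical mapping from the number of lit lights to the touch type;
# the patent does not specify the encoding.
TOUCH_TYPES = {1: "click", 2: "swipe", 3: "long_press"}

def count_lit_indicators(frame, roi, brightness_thresh=200, min_area=20):
    """Count indicator lights in a lit state inside the display area `roi` (x, y, w, h)."""
    x, y, w, h = roi
    gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # A lit LED appears as a small saturated blob; threshold on brightness.
    _, mask = cv2.threshold(gray, brightness_thresh, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; count blobs above a minimal area.
    return sum(1 for i in range(1, n_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area)

def touch_type_from_frame(frame, roi):
    """Map the lit-light count to a touch operation type (None if no light is lit)."""
    return TOUCH_TYPES.get(count_lit_indicators(frame, roi))
```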
4. The method according to claim 3, wherein an infrared correlation frame (an infrared emitter-receiver frame) and a plurality of indicator lights are arranged in an edge area of the display screen, and the infrared correlation frame is configured to light a corresponding number of indicator lights according to the touch operation type when a touch operation on the display screen is detected.
5. The method according to claim 1, wherein the method further comprises:
when it is detected that a jittered frame exists in the video picture, acquiring target feature points in the video picture; and
acquiring offset information of the target feature points, and performing transformation processing on the jittered frame based on the offset information.
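One way the de-jitter step of claim 5 could be realized, sketched here under the assumption that sparse corners tracked by optical flow stand in for the "target feature points" and that a similarity transform suffices; the feature-detector parameters are illustrative:

```python
import cv2

def stabilize_frame(reference, jittered):
    """Warp `jittered` back onto `reference` using feature-point offsets."""
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    jit_gray = cv2.cvtColor(jittered, cv2.COLOR_BGR2GRAY)
    # Target feature points in the reference frame (screen corners, bezel, ...).
    pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    # Offset information: where those points moved to in the jittered frame.
    moved, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, jit_gray, pts, None)
    good = status.ravel() == 1
    # Estimate a rigid transform from the offsets and warp the frame back.
    m, _ = cv2.estimateAffinePartial2D(moved[good], pts[good])
    h, w = reference.shape[:2]
    return cv2.warpAffine(jittered, m, (w, h))
```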
6. The method according to claim 1, wherein the method further comprises:
extracting frames from the video picture one by one, and converting each frame into a binarized image of fixed resolution; and
comparing the binarized images corresponding to the respective frames to obtain the similarity between adjacent frames, and determining, based on the similarity, the second video frame corresponding to the moment the picture content in the video picture changes.
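A minimal sketch of the extract-binarize-compare pipeline of claim 6, assuming OpenCV; the fixed resolution, Otsu thresholding, pixel-agreement similarity measure, and change threshold are illustrative choices, not values fixed by the patent:

```python
import cv2
import numpy as np

FIXED_SIZE = (320, 180)   # assumed "fixed resolution"
CHANGE_THRESHOLD = 0.98   # assumed: below this similarity, the content changed

def binarize(frame):
    """Convert a frame to a fixed-resolution binarized image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, FIXED_SIZE)
    _, b = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return b

def find_second_frame(video_path, first_frame_idx):
    """Index of the first frame after `first_frame_idx` whose content differs
    from its predecessor, i.e. a candidate second video frame."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, first_frame_idx)
    ok, frame = cap.read()
    prev_bin, idx = binarize(frame), first_frame_idx
    while True:
        ok, frame = cap.read()
        if not ok:                 # end of video, no change detected
            cap.release()
            return None
        idx += 1
        cur_bin = binarize(frame)
        # Similarity: fraction of pixels on which adjacent frames agree.
        if np.mean(prev_bin == cur_bin) < CHANGE_THRESHOLD:
            cap.release()
            return idx
        prev_bin = cur_bin
```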
7. The method of claim 6, wherein the obtaining the similarity between adjacent frames comprises:
performing image segmentation processing on the video frames in the video picture to obtain a target image area containing a non-target object;
and performing color filling processing on the target image area, and acquiring the similarity between adjacent frames after the color filling processing.
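Claim 7's fill-before-comparison step might look like the following sketch, where the segmentation is simplified to a precomputed mask of the non-target area (for example a clock or a looping animation that changes regardless of the touch); a real implementation could substitute any segmentation method:

```python
def mask_non_target(frame, non_target_mask, fill_color=(0, 0, 0)):
    """Fill the segmented non-target area with a flat color so that its
    changes do not lower the adjacent-frame similarity."""
    out = frame.copy()
    out[non_target_mask > 0] = fill_color
    return out
```

Feeding `binarize(mask_non_target(frame, mask))` into the claim-6 comparison would then compute the similarity only over the remaining picture content.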
8. The method of claim 1, wherein determining the terminal performance of the target terminal comprises:
acquiring the total number of video frames between the first video frame and the second video frame, and acquiring the number of stuck (stuttering) frames contained in that set of video frames;
obtaining a stuck-frame ratio based on the total number of video frames and the number of stuck frames; and
determining a degree of stutter of the target terminal based on the stuck-frame ratio, and determining the terminal performance of the target terminal based on the degree of stutter.
9. The method of claim 1, wherein determining the terminal performance of the target terminal comprises:
acquiring the frame rate of the video picture, and determining an inter-frame interval based on the frame rate;
acquiring the difference between the frame numbers corresponding to the first video frame and the second video frame; and
determining a response time delay of the target terminal based on the difference and the inter-frame interval, and determining the terminal performance of the target terminal based on the response time delay.
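Claim 9 reduces to frame arithmetic: with capture frame rate f, the inter-frame interval is 1/f, and the response time delay is (n2 − n1)/f. For example, at 60 fps a gap of 27 frames corresponds to 27/60 = 450 ms. A sketch:

```python
def response_delay_ms(first_frame_idx, second_frame_idx, fps):
    """Response time delay in milliseconds from the frame-number difference
    and the inter-frame interval (1 / fps)."""
    inter_frame_interval = 1.0 / fps
    return (second_frame_idx - first_frame_idx) * inter_frame_interval * 1000.0

# e.g. response_delay_ms(100, 127, 60) -> 450.0
```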
10. The method of claim 3, wherein determining the terminal performance corresponding to the target terminal under the touch operation type comprises:
when an indicator light is detected to turn on, starting timing to obtain a first moment;
when the picture content in the video picture is detected to change, stopping timing to obtain a second moment; and
determining the response time delay corresponding to the target terminal under the touch operation type based on the first moment and the second moment, and determining the terminal performance of the target terminal based on the response time delay.
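The timing variant of claim 10 can be expressed in terms of the earlier sketches; `first_lit_frame` below is a hypothetical helper (it would scan frames with the claim-3 light counter until a light turns on), and `find_second_frame` is the claim-6 sketch:

```python
def timed_response_delay_ms(video_path, roi, fps):
    """First moment: the frame where an indicator light turns on.
    Second moment: the frame where the picture content changes.
    The delay is the difference between the two moments."""
    t1_frame = first_lit_frame(video_path, roi)      # hypothetical helper
    t2_frame = find_second_frame(video_path, t1_frame)
    first_moment, second_moment = t1_frame / fps, t2_frame / fps
    return (second_moment - first_moment) * 1000.0   # milliseconds
```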
11. The method according to claim 8 or 9, wherein determining the response time delay of the target terminal comprises:
when the touch operation is a click operation, acquiring the operation object of the click operation in the display screen; and
acquiring the response time delay of the operation object under the click operation, and taking the response time delay of the operation object as the response time delay of the target terminal.
12. A terminal performance detection apparatus, the apparatus comprising:
a video picture acquisition module, configured to acquire a video picture obtained by photographing a scene including the display screen of a target terminal;
a video frame acquisition module, configured to acquire a first video frame and a second video frame in the video picture, wherein the first video frame comprises a video frame corresponding to the moment a touch operation is performed on the display screen, and the second video frame comprises a video frame corresponding to the moment the picture content in the video picture changes in response to the touch operation; and
a terminal performance determining module, configured to determine the terminal performance of the target terminal based on the first video frame and the second video frame.
13. An electronic device, comprising:
at least one processor; and
a memory for storing instructions executable by the at least one processor;
wherein the at least one processor is configured to execute the instructions to implement the method of any one of claims 1-11.
14. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of claims 1-11.
CN202311666277.3A 2023-12-06 2023-12-06 Terminal performance detection method and device, electronic equipment and storage medium Pending CN117762752A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311666277.3A CN117762752A (en) 2023-12-06 2023-12-06 Terminal performance detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311666277.3A CN117762752A (en) 2023-12-06 2023-12-06 Terminal performance detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117762752A 2024-03-26

Family

ID=90324741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311666277.3A Pending CN117762752A (en) 2023-12-06 2023-12-06 Terminal performance detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117762752A (en)


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination