CN109240875B - Stuck analysis method and system - Google Patents

Stuck analysis method and system

Info

Publication number
CN109240875B
CN109240875B (application CN201810764779.2A)
Authority
CN
China
Prior art keywords
stuck
scene
probability
gesture
gesture track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810764779.2A
Other languages
Chinese (zh)
Other versions
CN109240875A (en)
Inventor
张润琦
李琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810764779.2A priority Critical patent/CN109240875B/en
Publication of CN109240875A publication Critical patent/CN109240875A/en
Application granted granted Critical
Publication of CN109240875B publication Critical patent/CN109240875B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3051Monitoring arrangements for monitoring the configuration of the computing system or of the computing system component, e.g. monitoring the presence of processing resources, peripherals, I/O links, software programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3447Performance evaluation by modeling

Abstract

The invention discloses a stuck analysis method and system, wherein the method comprises the following steps: receiving a gesture track record uploaded by a client; inputting the gesture track record into a pre-trained stuck analysis model to obtain a stuck analysis result, wherein the stuck analysis result comprises the probability that the gesture track record corresponds to each stuck scene; and obtaining a current stuck scene based on the stuck analysis result. The stuck scene can be accurately determined, so that the cause of the stuck state can be handled in real time and loss-stopping efficiency can be optimized.

Description

Stuck analysis method and system
[ technical field ]
The invention relates to computer application technology, and in particular to a stuck analysis method and system.
[ background of the invention ]
With the continuous development of smartphone technology, the smartphone has become an indispensable part of people's daily life, and its use is mainly reflected in the use of various APPs. It is not hard to see that a good user experience is the basis of all other goals and the key to quality assurance. For example, the playability of a mobile-game APP is mainly limited by stuck (lag) states. Research has shown that as the performance of users' phones improves, the bottleneck behind such stuck states is fundamentally shifting: the main restricting factors have changed from the user's phone performance to three dimensions, namely APP quality, machine-room environment and peak concurrency.
The online stuck detection technology commonly used in the industry at present relies on heavy overhead on the mobile client device: long-lived connections consume large amounts of the user device's battery and data traffic and can make the phone heat up, so the harm caused by this technology far exceeds its expected value.
As the main restricting factors change, the main bottleneck of the stuck problem no longer lies in the client device, so it can no longer be expected that monitoring the client alone will accurately locate stuck problems; doing so is like climbing a tree to catch fish.
When the client device performs stuck monitoring and reporting, the observation points are nothing more than the client device's own system resources, such as the CPU and RAM. Even when the cause of the stuck state does lie in the client, these observation points cannot accurately warn of the problem. For example, an iOS device may still run smoothly with memory occupancy at 90%, while some Android systems may already be stuck when memory occupancy exceeds 80%. There are no fewer than dozens of mainstream Android ROMs on the market, and their update frequency keeps increasing, so it is impossible to fully adapt to every ROM. It is therefore difficult to find reliable dimensions and thresholds for monitoring and early warning based on client system resources. If the false-alarm rate stays high for a long time, the monitoring items lose their warning value, and when a real risk appears, engineers who have grown numb to alarms find it hard to react quickly.
In summary, a new online quality monitoring technology is urgently needed, one that targets the new service pain points and covers the dimensions that traditional client-side stuck detection cannot.
[ summary of the invention ]
Aspects of the application provide a stuck analysis method, system, device and storage medium, which can accurately determine the corresponding stuck scene, so that the cause of the stuck state can be handled in real time and loss-stopping efficiency is optimized.
In one aspect of the present invention, a method for stuck analysis is provided, the method comprising:
receiving a gesture track record uploaded by a client;
inputting the gesture track record into a pre-trained stuck analysis model to obtain a stuck analysis result, wherein the stuck analysis result comprises the probability that the gesture track record corresponds to each stuck scene;
and obtaining a current stuck scene based on the stuck analysis result.
The above-described aspects and any possible implementation further provide an implementation, where the stuck analysis model is obtained by training a preset neural network.
The above-described aspect and any possible implementation further provide an implementation, where the stuck analysis model is trained by the following training steps:
taking gesture track records collected under different stuck scenes and corresponding stuck scene labels as training samples;
and training the neural network by using a machine learning method based on the training sample, a preset classification loss function and a back propagation algorithm to obtain the stuck analysis model.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where obtaining a current stuck scene based on the stuck analysis result includes:
determining whether the probability that the gesture track record does not belong to a stuck scene is the maximum probability of the stuck analysis result;
and if the probability is not the maximum probability, selecting the probability from the probabilities of the gesture tracks corresponding to the various stuck scenes according to the numerical value of the probability, and taking the stuck scene corresponding to the selected probability as the current stuck scene.
As to the above-mentioned aspect and any possible implementation manner, further providing an implementation manner, wherein selecting the probability from the probabilities of the gesture track record corresponding to each stuck scene further includes:
sorting the probabilities of the gesture track record corresponding to the various stuck scenes in descending order to obtain a probability sequence;
a preset number of probabilities is chosen starting from the head of the probability sequence.
As to the above-mentioned aspect and any possible implementation manner, further providing an implementation manner, wherein selecting the probability from the probabilities of the gesture track record corresponding to each stuck scene further includes:
and selecting the probabilities not less than a probability threshold from the probabilities of the gesture track record corresponding to the various stuck scenes.
The above-described aspects and any possible implementations further provide an implementation, where the stuck scene includes:
a weak-network stuck scene, a CPU-intensive scene and an APP flash-back (crash) scene.
The above-described aspect and any possible implementation manner further provide an implementation manner, where the taking gesture track records collected under different stuck scenes and corresponding stuck scene labels as training samples includes:
constructing a simulated stuck scene;
collecting gesture track records of internal test users in the simulated stuck scene;
and labeling the collected gesture track records with the labels of the corresponding stuck scenes.
In another aspect of the present invention, there is provided a stuck analysis system, the system comprising:
the receiving unit is used for receiving the gesture track record uploaded by the client;
the analysis unit is used for inputting the gesture track record into a pre-trained stuck analysis model to obtain a stuck analysis result, wherein the stuck analysis result comprises the probability that the gesture track record corresponds to each stuck scene;
and the output unit is used for obtaining the current stuck scene based on the stuck analysis result.
The above-described aspects and any possible implementation further provide an implementation, where the stuck analysis model is obtained by training a preset neural network.
There is further provided, in accordance with the above-mentioned aspect and any possible implementation, an implementation in which the system further comprises a training unit configured to:
Taking gesture track records collected under different stuck scenes and corresponding stuck scene labels as training samples;
and train the neural network by using a machine learning method based on the training sample, a preset classification loss function and a back propagation algorithm to obtain the stuck analysis model.
The above-described aspect and any possible implementation further provide an implementation, where the output unit includes:
the determining subunit is used for determining whether the probability that the gesture track record does not belong to the stuck scene is the maximum probability of the stuck analysis result;
and the generating subunit is used for selecting the probability from the probabilities of the gesture tracks corresponding to the various stuck scenes according to the numerical value of the probability if the probability is not the maximum probability, and taking the stuck scene corresponding to the selected probability as the current stuck scene.
The above-mentioned aspect and any possible implementation further provide an implementation, where the generating subunit is specifically configured to:
sorting the probabilities of the gesture track record corresponding to the various stuck scenes in descending order to obtain a probability sequence;
a preset number of probabilities is chosen starting from the head of the probability sequence.
The above-mentioned aspect and any possible implementation further provide an implementation, where the generating subunit is specifically configured to:
and selecting the probabilities not less than a probability threshold from the probabilities of the gesture track record corresponding to the various stuck scenes.
The above-described aspects and any possible implementations further provide an implementation, where the stuck scene includes:
a weak-network stuck scene, a CPU-intensive scene and an APP flash-back (crash) scene.
The above-described aspect and any possible implementation further provide an implementation, where the training unit is specifically configured to:
construct a simulated stuck scene;
collect gesture track records of internal test users in the simulated stuck scene;
and label the collected gesture track records with the labels of the corresponding stuck scenes.
In another aspect of the present invention, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program.
In another aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method as set forth above.
Based on the introduction, the scheme of the invention can accurately judge the corresponding stuck scene so as to process the stuck reason in real time and optimize the loss stopping efficiency.
[ description of the drawings ]
FIG. 1 is a flow chart of a method of stuck analysis according to the present invention;
FIG. 2 is a block diagram of a stuck analysis system according to the present invention;
fig. 3 illustrates a block diagram of an exemplary computer system/server 012 suitable for use in implementing embodiments of the invention.
[ detailed description of the embodiments ]
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of an embodiment of the stuck analysis method of the present invention, as shown in fig. 1, including the following steps:
step S11, receiving a gesture track record uploaded by the client;
step S12, inputting the gesture track record into a pre-trained stuck analysis model to obtain a stuck analysis result, wherein the stuck analysis result comprises the probability that the gesture track record corresponds to each stuck scene;
and step S13, obtaining the current stuck scene based on the stuck analysis result.
In one preferred implementation of step S11,
The server receives, in real time or periodically, the user's gesture track records sent by a client installed on the user's mobile phone.
Preferably, the client is the APP for which stuck analysis is to be performed, or a separate client that only records and uploads the user's gesture tracks in the APP for which stuck analysis is to be performed.
Preferably, the server receives the user's gesture track records sent by the client in real time and segments them into groups by a fixed time period, for example 2 minutes.
Preferably, the server instead receives the gesture track records already grouped by the client into segments of that time period.
Preferably, each time segment contains a plurality of gesture tracks.
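As a rough illustration of this grouping, the following sketch splits a stream of raw gesture events into fixed 2-minute windows. It is a minimal sketch only: the GestureEvent structure, its field names and the window length are assumptions made for illustration, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class GestureEvent:
    timestamp: float  # seconds since epoch, as reported by the client
    kind: str         # e.g. "swipe", "tap", "long_press"
    x: float          # normalized screen coordinates in [0, 1]
    y: float

def group_by_window(events: List[GestureEvent],
                    window_sec: float = 120.0) -> Dict[int, List[GestureEvent]]:
    """Split a stream of gesture events into fixed-length windows
    (2 minutes by default), one group per window index."""
    groups: Dict[int, List[GestureEvent]] = {}
    for ev in events:
        groups.setdefault(int(ev.timestamp // window_sec), []).append(ev)
    return groups
```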
In one preferred implementation of step S12,
Preferably, the gesture track records are preprocessed; for example, each group of gesture tracks is converted into an N-dimensional vector [D0, D1, ..., Dn], where each D represents a feature dimension, including but not limited to: within the group, the number of swipe operations, the swipe operation frequency, the tap operation frequency, the long-press operation frequency, and the gesture position region (the screen is commonly divided into 4 or 9 regions, as required). Data normalization is then applied to the N-dimensional vector so that each feature dimension falls within the (0-1) interval.
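The feature construction and normalization described above could look roughly like the sketch below. The exact feature set, its ordering, and the use of min-max scaling are assumptions; the description only requires that each group of tracks becomes an N-dimensional vector whose dimensions are scaled to the 0-1 range.

```python
from typing import List, Tuple

# One gesture operation: (kind, x, y), with x and y as normalized screen coordinates in [0, 1].
GestureOp = Tuple[str, float, float]
KINDS = ("swipe", "tap", "long_press")

def to_feature_vector(ops: List[GestureOp], window_sec: float = 120.0,
                      grid: int = 2) -> List[float]:
    """Build the vector [D0, D1, ..., Dn]: per-kind operation counts, per-kind
    frequencies, and a screen-region histogram (grid=2 -> 4 regions, grid=3 -> 9)."""
    counts = {k: 0 for k in KINDS}
    regions = [0.0] * (grid * grid)
    for kind, x, y in ops:
        if kind in counts:
            counts[kind] += 1
        col = min(int(x * grid), grid - 1)
        row = min(int(y * grid), grid - 1)
        regions[row * grid + col] += 1.0
    freqs = [counts[k] / window_sec for k in KINDS]   # operations per second
    return [float(counts[k]) for k in KINDS] + freqs + regions

def min_max_normalize(vectors: List[List[float]]) -> List[List[float]]:
    """Scale every feature dimension into the [0, 1] range across a batch."""
    lo = [min(col) for col in zip(*vectors)]
    hi = [max(col) for col in zip(*vectors)]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(vec, lo, hi)] for vec in vectors]
```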
And inputting the preprocessed gesture track record into a pre-trained stuck analysis model to obtain a stuck analysis result.
Preferably, the stuck analysis model is obtained by training a preset neural network, where the neural network comprises a convolutional layer, a pooling layer, a fully-connected layer and a loss layer, and may be, for example, a Bayesian network, a CNN or a DNN.
And step A, acquiring gesture track records collected under different stuck scenes and corresponding stuck scene labels as training samples.
Preferably, a simulation experiment is constructed and controllable abnormal scenes are preset: a request-congestion scene is constructed, for example, through an Nginx reverse proxy; a gradual change from a smooth network to a weak-network scene is simulated through router speed limiting; a CPU-intensive scene is constructed through preset floating-point operations; and a controllable crash scene is constructed by having the cloud side issue abnormal data. The dedicated scene executor is deployed in Agent form on a test device, such as a mobile phone on which the APP to be tested is installed. Any given scene can be triggered or closed dynamically and controlled at any time. Internal test users try out the APP without being notified of the abnormal conditions in advance, so that the most realistic gesture track records are collected, and each collected record is automatically tagged with the label of the corresponding scene.
Through this automatic labeling technique, the cost of producing training data can be greatly reduced and its efficiency greatly improved.
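The control flow for triggering a simulated scene and tagging the collected records might look like the following sketch. The scene names, the placeholder executors and the "not_stuck" baseline label are assumptions for illustration; the real executors (Nginx rules, router limits, cloud-issued abnormal data) live outside this code.

```python
from typing import Callable, Dict, List, Tuple

# Placeholder executors: a real Agent would toggle an Nginx congestion rule,
# a router speed limit, a floating-point load loop, or cloud-issued abnormal
# data; here each pair only prints what it would trigger and close.
SCENE_EXECUTORS: Dict[str, Tuple[Callable[[], None], Callable[[], None]]] = {
    "weak_network":  (lambda: print("apply router speed limit"),
                      lambda: print("remove router speed limit")),
    "cpu_intensive": (lambda: print("start floating-point load"),
                      lambda: print("stop floating-point load")),
    "app_crash":     (lambda: print("push abnormal data from cloud"),
                      lambda: print("stop pushing abnormal data")),
}

def run_labelled_session(collect_records: Callable[[], List[list]],
                         scene: str) -> List[Tuple[list, str]]:
    """Trigger one simulated stuck scene, collect the gesture track records
    produced while it is active, and tag each record with the scene label."""
    trigger, close = SCENE_EXECUTORS[scene]
    trigger()
    try:
        records = collect_records()   # groups uploaded by the APP under test
    finally:
        close()                       # the scene can be switched off at any time
    return [(rec, scene) for rec in records]

def build_training_set(collect_records: Callable[[], List[list]]) -> List[Tuple[list, str]]:
    """Collect automatically labelled samples for every simulated scene,
    plus a baseline of records collected with no scene active."""
    samples: List[Tuple[list, str]] = []
    for scene in SCENE_EXECUTORS:
        samples.extend(run_labelled_session(collect_records, scene))
    samples.extend((rec, "not_stuck") for rec in collect_records())
    return samples
```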
Step B, taking gesture track records collected under different stuck scenes and corresponding stuck scene labels as training samples;
Preferably, the gesture track records are preprocessed; for example, each group of gesture tracks is converted into an N-dimensional vector [D0, D1, ..., Dn], where each D represents a feature dimension, including but not limited to: within the group, the number of swipe operations, the swipe operation frequency, the tap operation frequency, the long-press operation frequency, and the gesture position region (the screen is commonly divided into 4 or 9 regions, as required). Data normalization is then applied to the N-dimensional vector so that each feature dimension falls within the (0-1) interval.
And step C, training the neural network by using a machine learning method based on the training sample, a preset classification loss function and a back propagation algorithm to obtain the stuck analysis model.
Preferably, the training sample is input into a preset neural network to obtain a first recognition result corresponding to the sample, the execution end may determine a difference between the first recognition result and a label corresponding to the sample by using a preset classification loss function, and a preset back propagation algorithm is used to adjust parameters in the preset neural network according to the difference.
It should be noted that the classification loss function may be any of various loss functions used for classification (for example, a Hinge Loss function or a Softmax Loss function). During training, the classification loss function constrains how and in which direction the convolution kernels are modified, and the training goal is to minimize the value of the classification loss function. Therefore, the parameters of the convolutional neural network obtained after training are the parameters corresponding to the minimum value of the classification loss function.
In addition, the back propagation algorithm may also be referred to as an error back propagation algorithm or an error inverse propagation algorithm. The learning process of the back propagation algorithm consists of a forward propagation process and a back propagation process. In a feed-forward network, an input signal is input through an input layer, calculated by a hidden layer, and output by an output layer. The output value is compared with the labeled value, if there is an error, the error is reversely propagated from the output layer to the input layer, and in the process, the neuron weight (such as the parameter of the convolution kernel in the convolution layer) can be adjusted by using a gradient descent algorithm (such as a random gradient descent algorithm).
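A minimal training sketch under these assumptions is given below: a small fully-connected network in PyTorch, stochastic gradient descent, and cross-entropy as the classification loss. The description does not fix the architecture, optimizer, feature dimension or any hyper-parameter, so all of these values are illustrative.

```python
import torch
import torch.nn as nn

NUM_FEATURES = 11   # dimension of the preprocessed gesture vector (assumed)
NUM_CLASSES = 4     # weak-network, CPU-intensive, APP crash, plus "not stuck"

# Stand-in for the "preset neural network".
model = nn.Sequential(
    nn.Linear(NUM_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, NUM_CLASSES),  # logits; softmax is applied at inference time
)
criterion = nn.CrossEntropyLoss()                          # classification loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # gradient descent

def train(features: torch.Tensor, labels: torch.Tensor, epochs: int = 50) -> None:
    """features: (num_samples, NUM_FEATURES) float32; labels: (num_samples,) int64."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(features), labels)  # prediction vs. label difference
        loss.backward()                            # error back-propagation
        optimizer.step()                           # adjust the network parameters

def scene_probabilities(feature_vec: torch.Tensor) -> torch.Tensor:
    """Probability of the gesture record corresponding to each scene."""
    with torch.no_grad():
        return torch.softmax(model(feature_vec), dim=-1)
```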
In one preferred implementation of step S13,
preferably, obtaining the current stuck scene based on the stuck analysis result includes: determining whether the probability that the gesture track record does not belong to a stuck scene is the maximum probability of the stuck analysis result; and if the probability is not the maximum probability, selecting the probability from the probabilities of the gesture tracks corresponding to the various stuck scenes according to the numerical value of the probability, and taking the stuck scene corresponding to the selected probability as the current stuck scene.
Because a stuck state may not be limited to a single scene but may be a mixture of multiple scenes:
Preferably, selecting the probability from the probabilities of the gesture track record corresponding to the respective stuck scenes further includes: sorting the probabilities of the gesture track record corresponding to the various stuck scenes in descending order to obtain a probability sequence; and choosing a preset number of probabilities starting from the head of the probability sequence.
Preferably, selecting the probability from the probabilities of the gesture track record corresponding to the respective stuck scenes further includes: selecting, from the probabilities of the gesture track record corresponding to the various stuck scenes, the probabilities not less than a probability threshold.
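The two selection strategies above (top-k and threshold) can be sketched as follows. The "not_stuck" label name, k and the threshold value are assumptions; the probabilities are whatever the stuck analysis model outputs per scene.

```python
from typing import Dict, List, Optional, Tuple

NOT_STUCK = "not_stuck"   # label for "does not belong to any stuck scene" (assumed name)

def current_stuck_scenes(probs: Dict[str, float], top_k: int = 2,
                         threshold: Optional[float] = None) -> List[Tuple[str, float]]:
    """probs maps every label (including NOT_STUCK) to its probability.
    If NOT_STUCK has the maximum probability, report no stuck scene; otherwise
    select scenes by a probability threshold (if given) or by top-k."""
    if max(probs, key=lambda k: probs[k]) == NOT_STUCK:
        return []                                    # not currently stuck
    ranked = sorted(((s, p) for s, p in probs.items() if s != NOT_STUCK),
                    key=lambda sp: sp[1], reverse=True)
    if threshold is not None:
        return [(s, p) for s, p in ranked if p >= threshold]
    return ranked[:top_k]

# Example: a mixed weak-network / CPU-intensive stuck state.
print(current_stuck_scenes({"weak_network": 0.45, "cpu_intensive": 0.35,
                            "app_crash": 0.05, NOT_STUCK: 0.15}))
```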
In a preferred implementation of this embodiment,
the server can obtain a countermeasure through preset logic according to the current stuck scene.
For example, if the current state is stuck and the stuck scene is a CPU-intensive scene, feedback is sent to the load-balancing host according to the preset logic so that new trial-play requests are not connected to hosts with high CPU utilization; similarly, effective operations such as dynamic capacity expansion can be triggered. In this way the user's stuck pain point is sensed and addressed at the earliest possible moment.
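The "preset logic" could be as simple as a scene-to-action table, as in the sketch below. Only the CPU-intensive case is taken from the example above; the countermeasures shown for the other scenes, and all function names, are assumptions.

```python
from typing import Callable, Dict

def avoid_busy_hosts() -> None:
    # Tell the load balancer not to route new trial-play requests to hosts
    # with high CPU utilization (placeholder for the real feedback call).
    print("load balancer: skip hosts with high CPU utilization")

def expand_capacity() -> None:
    print("trigger dynamic capacity expansion")

def alert_on_call() -> None:
    print("notify the on-call engineer of an APP crash scene")

# The "preset logic": one countermeasure per detected stuck scene.
COUNTERMEASURES: Dict[str, Callable[[], None]] = {
    "cpu_intensive": avoid_busy_hosts,
    "weak_network":  expand_capacity,
    "app_crash":     alert_on_call,
}

def handle_stuck_scene(scene: str) -> None:
    action = COUNTERMEASURES.get(scene)
    if action is not None:
        action()
```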
Preferably, the server may record the occurrence time of each stuck scene, provide asynchronous analysis capability (typified by HDFS, the Hadoop distributed file system), and serve as an aid to operational decisions. A timed task is used as the trigger, for example at midnight each day, to analyze all of the previous day's data and output an analysis result (a minimal aggregation sketch is given after the list below), which includes:
the pause rate-time distribution trend guides accurate capacity expansion section time, guarantees user experience, and can be used as a core sensing component of dynamic storage capacity;
the stuck rate vs. geographic distribution trend, which guides precise location of the weakest ("short-board") machine rooms; combined with a single-variable rule, a horizontal comparison between bad and good machine rooms finally locates the fundamental optimization points such as CDN, host configuration and network bandwidth; and the stuck rate vs. APP distribution trend, which guides precise removal of APPs whose stability falls below the stuck threshold, or periodic removal of the APPs in the worst 20% tail of the distribution, so that the trial-play experience is kept at a high level.
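A minimal sketch of such a daily aggregation job follows. The record format and dimension names are assumptions; the real pipeline would run asynchronously over HDFS rather than in memory.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

# One record per analyzed gesture group: (hour_of_day, machine_room, app_id, is_stuck).
Record = Tuple[int, str, str, bool]

def daily_stuck_report(records: Iterable[Record]) -> Dict[str, Dict[str, float]]:
    """Aggregate the previous day's records into stuck-rate distributions by
    hour, machine room and APP, as inputs for the three trends listed above."""
    counters: Dict[str, Dict[str, list]] = {
        "by_hour": defaultdict(lambda: [0, 0]),
        "by_room": defaultdict(lambda: [0, 0]),
        "by_app":  defaultdict(lambda: [0, 0]),
    }
    for hour, room, app, is_stuck in records:
        for dim, key in (("by_hour", f"{hour:02d}"), ("by_room", room), ("by_app", app)):
            counters[dim][key][0] += int(is_stuck)   # stuck count
            counters[dim][key][1] += 1               # total count
    return {dim: {key: stuck / total for key, (stuck, total) in groups.items()}
            for dim, groups in counters.items()}
```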
According to the embodiment of the invention, the gesture characteristics of the user can be extracted through machine learning according to the gesture track record of the user, and the corresponding stuck scene can be accurately judged, so that the stuck reason can be processed in real time, and the loss stopping efficiency is optimized.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of method embodiments, and the embodiments of the present invention are further described below by way of apparatus embodiments.
Fig. 2 is a structural diagram of an embodiment of the stuck analysis system of the present invention, as shown in fig. 2, including:
the receiving unit 21 is configured to receive a gesture track record uploaded by the client;
the analysis unit 22 is configured to input the gesture trajectory record into a pre-trained stuck analysis model to obtain a stuck analysis result, where the stuck analysis result includes probabilities of the gesture trajectory record corresponding to each stuck scene;
and the output unit 23 is configured to obtain a current stuck scene based on the stuck analysis result.
In a preferred implementation of the receiving unit 21,
The receiving unit 21 receives, in real time or periodically, the user's gesture track records sent by a client installed on the user's mobile phone.
Preferably, the client is the APP for which stuck analysis is to be performed, or a separate client that only records and uploads the user's gesture tracks in the APP for which stuck analysis is to be performed.
Preferably, the receiving unit 21 receives the user's gesture track records sent by the client in real time and segments them into groups by a fixed time period, for example 2 minutes.
Preferably, the receiving unit 21 instead receives the gesture track records already grouped by the client into segments of that time period.
Preferably, each time segment contains a plurality of gesture tracks.
In a preferred implementation of the analysis unit 22,
Preferably, the gesture track records are preprocessed; for example, each group of gesture tracks is converted into an N-dimensional vector [D0, D1, ..., Dn], where each D represents a feature dimension, including but not limited to: within the group, the number of swipe operations, the swipe operation frequency, the tap operation frequency, the long-press operation frequency, and the gesture position region (the screen is commonly divided into 4 or 9 regions, as required). Data normalization is then applied to the N-dimensional vector so that each feature dimension falls within the (0-1) interval.
And inputting the preprocessed gesture track record into a pre-trained stuck analysis model to obtain a stuck analysis result.
Preferably, the system further includes a training unit 24, and the stuck analysis model is obtained by the training unit 24 through training a preset neural network, where the neural network includes a convolutional layer, a pooling layer, a fully-connected layer and a loss layer, and may be, for example, a Bayesian network, a CNN or a DNN.
And step A, acquiring gesture track records collected under different stuck scenes and corresponding stuck scene labels as training samples.
Preferably, a simulation experiment is constructed and controllable abnormal scenes are preset: a request-congestion scene is constructed, for example, through an Nginx reverse proxy; a gradual change from a smooth network to a weak-network scene is simulated through router speed limiting; a CPU-intensive scene is constructed through preset floating-point operations; and a controllable crash scene is constructed by having the cloud side issue abnormal data. The dedicated scene executor is deployed in Agent form on a test device, such as a mobile phone on which the APP to be tested is installed. Any given scene can be triggered or closed dynamically and controlled at any time. Internal test users try out the APP without being notified of the abnormal conditions in advance, so that the most realistic gesture track records are collected, and each collected record is automatically tagged with the label of the corresponding scene.
Through this automatic labeling technique, the cost of producing training data can be greatly reduced and its efficiency greatly improved.
Step B, taking gesture track records collected under different stuck scenes and corresponding stuck scene labels as training samples;
Preferably, the gesture track records are preprocessed; for example, each group of gesture tracks is converted into an N-dimensional vector [D0, D1, ..., Dn], where each D represents a feature dimension, including but not limited to: within the group, the number of swipe operations, the swipe operation frequency, the tap operation frequency, the long-press operation frequency, and the gesture position region (the screen is commonly divided into 4 or 9 regions, as required). Data normalization is then applied to the N-dimensional vector so that each feature dimension falls within the (0-1) interval.
And step C, training the neural network by using a machine learning method based on the training sample, a preset classification loss function and a back propagation algorithm to obtain the stuck analysis model.
Preferably, the analysis unit 22 inputs the training sample into a preset neural network to obtain a first recognition result corresponding to the sample, the execution end may determine a difference between the first recognition result and a label corresponding to the sample by using a preset classification loss function, and adjust a parameter in the preset neural network by using a preset back propagation algorithm according to the difference.
It should be noted that the classification loss function may be any of various loss functions used for classification (for example, a Hinge Loss function or a Softmax Loss function). During training, the classification loss function constrains how and in which direction the convolution kernels are modified, and the training goal is to minimize the value of the classification loss function. Therefore, the parameters of the convolutional neural network obtained after training are the parameters corresponding to the minimum value of the classification loss function.
In addition, the back propagation algorithm may also be referred to as an error back propagation algorithm or an error inverse propagation algorithm. The learning process of the back propagation algorithm consists of a forward propagation process and a back propagation process. In a feed-forward network, an input signal is input through an input layer, calculated by a hidden layer, and output by an output layer. The output value is compared with the labeled value, if there is an error, the error is reversely propagated from the output layer to the input layer, and in the process, the neuron weight (such as the parameter of the convolution kernel in the convolution layer) can be adjusted by using a gradient descent algorithm (such as a random gradient descent algorithm).
In a preferred implementation of the output unit 23,
preferably, the output unit 23 obtains the current katon scene based on the result of the katon analysis, and includes: determining whether the probability that the gesture track record does not belong to a stuck scene is the maximum probability of the stuck analysis result; and if the probability is not the maximum probability, selecting the probability from the probabilities of the gesture tracks corresponding to the various stuck scenes according to the numerical value of the probability, and taking the stuck scene corresponding to the selected probability as the current stuck scene.
Because a stuck state may not be limited to a single scene but may be a mixture of multiple scenes:
Preferably, selecting the probability from the probabilities of the gesture track record corresponding to the respective stuck scenes further includes: sorting the probabilities of the gesture track record corresponding to the various stuck scenes in descending order to obtain a probability sequence; and choosing a preset number of probabilities starting from the head of the probability sequence.
Preferably, selecting the probability from the probabilities of the gesture track record corresponding to the respective stuck scenes further includes: selecting, from the probabilities of the gesture track record corresponding to the various stuck scenes, the probabilities not less than a probability threshold.
In a preferred implementation of this embodiment,
the server may further include a decision unit 25, configured to obtain a countermeasure through a preset logic according to the current stuck scene.
For example, if the current state is stuck and the stuck scene is a CPU-intensive scene, feedback is sent to the load-balancing host according to the preset logic so that new trial-play requests are not connected to hosts with high CPU utilization; similarly, effective operations such as dynamic capacity expansion can be triggered. In this way the user's stuck pain point is sensed and addressed at the earliest possible moment.
Preferably, the decision unit 25 may further record the occurrence time of each stuck scene, provide asynchronous analysis capability (typified by HDFS, the Hadoop distributed file system), and serve as an aid to operational decisions. A timed task is used as the trigger, for example at midnight each day, to analyze all of the previous day's data and output an analysis result, which includes:
the stuck rate vs. time distribution trend, which guides the precise choice of the capacity-expansion time window, safeguards user experience, and can serve as a core sensing component for dynamic capacity expansion;
the stuck rate vs. geographic distribution trend, which guides precise location of the weakest ("short-board") machine rooms; combined with a single-variable rule, a horizontal comparison between bad and good machine rooms finally locates the fundamental optimization points such as CDN, host configuration and network bandwidth; and the stuck rate vs. APP distribution trend, which guides precise removal of APPs whose stability falls below the stuck threshold, or periodic removal of the APPs in the worst 20% tail of the distribution, so that the trial-play experience is kept at a high level.
According to the embodiment of the invention, the gesture characteristics of the user can be extracted through machine learning according to the gesture track record of the user, and the corresponding stuck scene can be accurately judged, so that the stuck reason can be processed in real time, and the loss stopping efficiency is optimized.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the server described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processor, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Fig. 3 illustrates a block diagram of an exemplary computer system/server 012 suitable for use in implementing embodiments of the invention. The computer system/server 012 shown in fig. 3 is only an example, and should not bring any limitations to the function and the scope of use of the embodiments of the present invention.
As shown in fig. 3, the computer system/server 012 is embodied as a general purpose computing device. The components of computer system/server 012 may include, but are not limited to: one or more processors or processing units 016, a system memory 028, and a bus 018 that couples various system components including the system memory 028 and the processing units 016.
Bus 018 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer system/server 012 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 012 and includes both volatile and nonvolatile media, removable and non-removable media.
System memory 028 can include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)030 and/or cache memory 032. The computer system/server 012 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 034 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 3, commonly referred to as a "hard drive"). Although not shown in FIG. 3, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be connected to bus 018 via one or more data media interfaces. Memory 028 can include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the present invention.
Program/utility 040 having a set (at least one) of program modules 042 can be stored, for example, in memory 028, such program modules 042 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof might include an implementation of a network environment. Program modules 042 generally perform the functions and/or methodologies of embodiments of the present invention as described herein.
The computer system/server 012 may also communicate with one or more external devices 014 (e.g., keyboard, pointing device, display 024, etc.). In the present invention, the computer system/server 012 communicates with an external radar device, and may also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (e.g., network card, modem, etc.) that enables the computer system/server 012 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 022. Also, the computer system/server 012 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 020. As shown in fig. 3, the network adapter 020 communicates with the other modules of the computer system/server 012 via bus 018. It should be appreciated that although not shown in fig. 3, other hardware and/or software modules may be used in conjunction with the computer system/server 012, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 016 executes programs stored in the system memory 028 to perform the functions and/or methods of the described embodiments of the present invention.
The computer program described above may be provided in a computer storage medium encoded with a computer program that, when executed by one or more computers, causes the one or more computers to perform the method flows and/or apparatus operations shown in the above-described embodiments of the invention.
With the development of time and technology, the meaning of media is more and more extensive, and the propagation path of computer programs is not limited to tangible media any more, and can also be downloaded from a network directly and the like. Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processor, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (16)

1. A method of stuck analysis, the method comprising:
receiving a gesture track record uploaded by a client;
inputting the gesture track record into a stuck analysis model obtained in advance based on neural network training to obtain a stuck analysis result, wherein the stuck analysis result comprises the probability that the gesture track record corresponds to each stuck scene; wherein the stuck scene is determined based on different abnormal scenes;
and obtaining a current stuck scene based on the stuck analysis result.
2. The method of claim 1, wherein the stuck analysis model is trained by the following training steps:
taking gesture track records collected under different stuck scenes and corresponding stuck scene labels as training samples;
and training the neural network by using a machine learning method based on the training sample, a preset classification loss function and a back propagation algorithm to obtain the stuck analysis model.
3. The method of claim 1, wherein deriving a current stuck scene based on the stuck analysis result comprises:
determining whether the probability that the gesture track record does not belong to a stuck scene is the maximum probability of the stuck analysis result;
and if the probability is not the maximum probability, selecting the probability from the probabilities of the gesture tracks corresponding to the various stuck scenes according to the numerical value of the probability, and taking the stuck scene corresponding to the selected probability as the current stuck scene.
4. The method of claim 3, wherein selecting the probability from the probabilities of the gesture trajectory corresponding to the respective stuck scene further comprises:
sorting the probabilities of the gesture track record corresponding to the various stuck scenes in descending order to obtain a probability sequence;
a preset number of probabilities is chosen starting from the head of the probability sequence.
5. The method of claim 3, wherein selecting the probability from the probabilities of the gesture trajectory corresponding to the respective stuck scene further comprises:
and selecting the probabilities not less than a probability threshold from the probabilities of the gesture track record corresponding to the various stuck scenes.
6. The method of claim 1, wherein the stuck scene comprises:
a weak-network stuck scene, a CPU-intensive scene and an APP flash-back (crash) scene.
7. The method of claim 2, wherein the taking as training samples the gesture track records and corresponding stuck scene labels collected under different stuck scenes comprises:
constructing a simulated stuck scene;
collecting gesture track records of internal test users in the simulated stuck scene;
and labeling the collected gesture track records with the labels of the corresponding stuck scenes.
8. A stuck analysis system, the system comprising:
the receiving unit is used for receiving the gesture track record uploaded by the client;
the analysis unit is used for inputting the gesture track record into a stuck analysis model obtained in advance based on neural network training to obtain a stuck analysis result, wherein the stuck analysis result comprises the probability that the gesture track record corresponds to each stuck scene; wherein the stuck scene is determined based on different abnormal scenes;
and the output unit is used for obtaining the current stuck scene based on the stuck analysis result.
9. The system of claim 8, further comprising a training unit, configured to take gesture track records collected under different stuck scenes and corresponding stuck scene labels as training samples;
and train the neural network by using a machine learning method based on the training sample, a preset classification loss function and a back propagation algorithm to obtain the stuck analysis model.
10. The system of claim 8, wherein the output unit comprises:
the determining subunit is used for determining whether the probability that the gesture track record does not belong to the stuck scene is the maximum probability of the stuck analysis result;
and the generating subunit is used for selecting the probability from the probabilities of the gesture tracks corresponding to the various stuck scenes according to the numerical value of the probability if the probability is not the maximum probability, and taking the stuck scene corresponding to the selected probability as the current stuck scene.
11. The system according to claim 10, wherein the generating subunit is specifically configured to:
sorting the probabilities of the gesture track record corresponding to the various stuck scenes in descending order to obtain a probability sequence;
a preset number of probabilities is chosen starting from the head of the probability sequence.
12. The system according to claim 10, wherein the generating subunit is specifically configured to:
and selecting the probabilities not less than a probability threshold from the probabilities of the gesture track record corresponding to the various stuck scenes.
13. The system of claim 8, wherein the stuck scene comprises:
a weak-network stuck scene, a CPU-intensive scene and an APP flash-back (crash) scene.
14. The system of claim 9, wherein the training unit is specifically configured to:
construct a simulated stuck scene;
collect gesture track records of internal test users in the simulated stuck scene;
and label the collected gesture track records with the labels of the corresponding stuck scenes.
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method of any one of claims 1 to 7.
16. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201810764779.2A 2018-07-12 2018-07-12 Stuck analysis method and system Active CN109240875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810764779.2A CN109240875B (en) 2018-07-12 2018-07-12 Stuck analysis method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810764779.2A CN109240875B (en) 2018-07-12 2018-07-12 Stuck analysis method and system

Publications (2)

Publication Number Publication Date
CN109240875A CN109240875A (en) 2019-01-18
CN109240875B true CN109240875B (en) 2022-05-03

Family

ID=65072577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810764779.2A Active CN109240875B (en) 2018-07-12 2018-07-12 Canton analysis method and system

Country Status (1)

Country Link
CN (1) CN109240875B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109756762B (en) * 2019-01-29 2020-10-02 北京奇艺世纪科技有限公司 Method and device for determining terminal category
CN110311806B (en) * 2019-06-06 2020-11-10 上海交通大学 Mobile application program interface response delay diagnosis method, system and terminal
CN112445687A (en) * 2019-08-30 2021-03-05 深信服科技股份有限公司 Blocking detection method of computing equipment and related device
CN110908864A (en) * 2019-11-11 2020-03-24 腾讯科技(深圳)有限公司 Equipment blocking processing method, device, equipment and medium
CN110888781B (en) * 2019-11-21 2021-11-16 腾讯科技(深圳)有限公司 Application blockage detection method and detection device
CN111260335A (en) * 2020-02-12 2020-06-09 上海发才网络信息技术有限公司 Fission type benefit sharing mode for human resource service promotion
CN113453076B (en) * 2020-03-24 2023-07-14 中国移动通信集团河北有限公司 User video service quality evaluation method, device, computing equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104869203A (en) * 2015-06-18 2015-08-26 广东欧珀移动通信有限公司 Unsmooth running testing method and device, and testing equipment
CN105260117A (en) * 2015-09-30 2016-01-20 小米科技有限责任公司 Application control method and apparatus
CN105389252A (en) * 2015-10-16 2016-03-09 华为技术有限公司 Method and device for feeding back test problem
CN105637497A (en) * 2013-07-12 2016-06-01 谷歌公司 Methods and systems for performance monitoring for mobile applications
CN106940805A (en) * 2017-03-06 2017-07-11 江南大学 A kind of group behavior analysis method based on mobile phone sensor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8298853B2 (en) * 2010-08-10 2012-10-30 International Business Machines Corporation CMOS pixel sensor cells with poly spacer transfer gates and methods of manufacture

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105637497A (en) * 2013-07-12 2016-06-01 谷歌公司 Methods and systems for performance monitoring for mobile applications
CN104869203A (en) * 2015-06-18 2015-08-26 广东欧珀移动通信有限公司 Unsmooth running testing method and device, and testing equipment
CN105260117A (en) * 2015-09-30 2016-01-20 小米科技有限责任公司 Application control method and apparatus
CN105389252A (en) * 2015-10-16 2016-03-09 华为技术有限公司 Method and device for feeding back test problem
CN106940805A (en) * 2017-03-06 2017-07-11 江南大学 A kind of group behavior analysis method based on mobile phone sensor

Also Published As

Publication number Publication date
CN109240875A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109240875B (en) Stuck analysis method and system
CN109241418B (en) Abnormal user identification method and device based on random forest, equipment and medium
US10565442B2 (en) Picture recognition method and apparatus, computer device and computer- readable medium
US11645554B2 (en) Method and apparatus for recognizing a low-quality article based on artificial intelligence, device and medium
US11075862B2 (en) Evaluating retraining recommendations for an automated conversational service
US20190333118A1 (en) Cognitive product and service rating generation via passive collection of user feedback
CN110442712B (en) Risk determination method, risk determination device, server and text examination system
CN109447156B (en) Method and apparatus for generating a model
US20170316328A1 (en) Pollution prediction
CN110363220B (en) Behavior class detection method and device, electronic equipment and computer readable medium
US20190286698A1 (en) Dialog flow evaluation
US11321165B2 (en) Data selection and sampling system for log parsing and anomaly detection in cloud microservices
CN114925938B (en) Electric energy meter running state prediction method and device based on self-adaptive SVM model
CN112883990A (en) Data classification method and device, computer storage medium and electronic equipment
CN110909005B (en) Model feature analysis method, device, equipment and medium
CN113282920B (en) Log abnormality detection method, device, computer equipment and storage medium
CN111125658A (en) Method, device, server and storage medium for identifying fraudulent users
US20220214948A1 (en) Unsupervised log data anomaly detection
CN111385659A (en) Video recommendation method, device, equipment and storage medium
US20180025062A1 (en) Data searching apparatus
CN112434178A (en) Image classification method and device, electronic equipment and storage medium
CN114202224B (en) Method, apparatus, medium for detecting weld quality in a production environment
CN110059180B (en) Article author identity recognition and evaluation model training method and device and storage medium
CN110674839B (en) Abnormal user identification method and device, storage medium and electronic equipment
CN109408531B (en) Method and device for detecting slow-falling data, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant