CN113419621A - Abnormal behavior detection method, page and gesture processing method, device and electronic equipment - Google Patents

Info

Publication number
CN113419621A
Authority
CN
China
Prior art keywords
gesture
user
image
page
data
Prior art date
Legal status
Granted
Application number
CN202011271295.8A
Other languages
Chinese (zh)
Other versions
CN113419621B (en)
Inventor
卜佳俊
黄刚
周晟
张怡远
刘美含
杜鑫明
王炜
徐筝
吴越
张震
张建锋
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Application filed by Zhejiang University ZJU
Publication of CN113419621A
Application granted
Publication of CN113419621B
Legal status: Active

Classifications

    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures (under G06F 3/01, input arrangements for interaction between user and computer)
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23: Clustering techniques
    • G06N 20/00: Machine learning

Abstract

The embodiments of the invention provide an abnormal behavior detection method, a page and gesture processing method and device, and electronic equipment. The method includes: acquiring gesture trajectory data of a user on a designated page; drawing a gesture image corresponding to the designated page according to the gesture trajectory data; and performing feature analysis on the gesture image to generate a detection result of abnormal user behavior. In the embodiments of the invention, the collected gesture trajectory data of the user on the designated page is converted into a gesture image corresponding to the page, and abnormal user behavior is then discovered based on image feature analysis, so abnormal user behavior can be discovered more quickly and accurately.

Description

Abnormal behavior detection method, page and gesture processing method, device and electronic equipment
Technical Field
The application relates to an abnormal behavior detection method, a page and gesture processing method and device, and electronic equipment, and belongs to the field of computer technology.
Background
A user may run into various experience obstacles or blocking problems while using App software, causing abnormal user behaviors: for example, when a page is stuck, the user may refresh the page over and over, or when a button or input box in the page fails, the user may click it repeatedly. When these abnormal behaviors occur, the user's experience of the APP is greatly degraded. Therefore, analyzing and discovering such experience obstacles and blocking problems in time plays an important role in improving the App experience and, in turn, user click-through rate, usage rate, and the like.
In the prior art, abnormal user behavior is mostly detected by auxiliary judgment based on information from other data sources; for example, a problem with accessing a page is discovered only after page-open requests have been detected many times in succession. Judging user behavior abnormality from such indirect data sources often introduces serious lag, so neither the abnormal behavior nor the underlying experience obstacle or blocking problem is discovered in time.
Disclosure of Invention
The embodiments of the invention provide an abnormal behavior detection method, a page and gesture processing method and device, and electronic equipment capable of discovering abnormalities in user behavior.
In order to achieve the above object, an embodiment of the present invention provides a method for detecting an abnormal behavior of a user, including:
acquiring gesture track data of a user on a designated page;
drawing a gesture image corresponding to the designated page according to the gesture track data;
and carrying out image feature analysis on the gesture image to generate a detection result of the abnormal behavior of the user.
The embodiment of the present invention further provides a device for detecting an abnormal behavior of a user, including:
the gesture track data acquisition module is used for acquiring gesture track data of a user on a specified page;
the gesture image generation module is used for drawing a gesture image corresponding to the specified page according to the gesture track data;
and the detection result generation module is used for carrying out image characteristic analysis on the gesture image and generating a detection result of the abnormal behavior of the user.
An embodiment of the present invention further provides an electronic device, including:
a memory for storing a program;
and the processor is used for running the program stored in the memory to execute the above method for detecting abnormal user behavior.
The embodiment of the invention also provides a page processing method, which comprises the following steps:
acquiring gesture track data of a user on a page, which is submitted by at least one terminal;
determining a detection result of the abnormal behavior of the user according to the gesture track data;
and executing page updating processing and/or switching a page interaction mode according to the detection result.
The embodiment of the invention also provides a page processing method, which comprises the following steps:
detecting gesture track data of a user on a page;
determining a detection result of the abnormal behavior of the user according to the gesture track data;
and according to the detection result of the abnormal behavior of the user, acquiring page updating data corresponding to the detection result and/or a command for switching a page interaction mode, and executing page updating and/or page interaction mode switching.
An embodiment of the present invention further provides an electronic device, including:
a memory for storing a program;
and the processor is used for running the program stored in the memory to execute the above page processing method.
The embodiment of the invention also provides a method for processing the non-contact gesture, which comprises the following steps:
acquiring gesture track data of a non-contact gesture of a user;
determining a detection result of the non-contact gesture of the user according to the gesture track data;
and switching the interaction mode of the equipment and/or guiding the user by the non-contact gesture according to the detection result of the non-contact gesture of the user.
An embodiment of the present invention further provides an electronic device, including:
a memory for storing a program;
and the processor is used for running the program stored in the memory to execute the above method for processing non-contact gestures.
In the embodiments of the invention, the collected gesture trajectory data of the user on the designated page is converted into a gesture image corresponding to the page, and abnormal user behavior is then discovered based on image feature analysis, so abnormal user behavior can be discovered more quickly and accurately. Further, the detected problem with the page can be handled accordingly, for example by switching the page interaction mode or repairing the page.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Fig. 1 is a schematic diagram illustrating a data processing principle of a method for detecting abnormal user behavior according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a system architecture to which the user abnormal behavior detection method according to the embodiment of the present invention is applied;
FIG. 3 is a schematic diagram illustrating the principle of drawing a gesture image according to an embodiment of the invention;
FIG. 4 is a flowchart illustrating a method for detecting abnormal user behavior according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for detecting abnormal behavior of a user according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Gesture trajectory data is data generated by a user's gesture operations while using an APP; generally, the user performs various gesture operations on a page in order to carry out various information interactions. Gesture trajectory data may include information in two dimensions, space and time. The space dimension may be the position information of the gesture motion within the page area, specifically the coordinate information of the motion trajectory corresponding to the gesture motion; the time dimension may be the time corresponding to the gesture motion, specifically the start and end times of the gesture motion, the time information corresponding to each point on the motion trajectory, and the like. Because touch-screen devices account for a very large proportion of existing terminal devices, the user's gesture behavior on an application page directly reflects the user's state while using the application, and gesture trajectory data can fully depict the user experience at the page level. The embodiments of the invention provide a technical scheme for detecting abnormal user behavior based on the user's gesture trajectory data on a page, so that the user's usage obstacles and blocking problems can be discovered in time.
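By way of illustration only, the following Python sketch shows one possible in-memory representation of such gesture trajectory data; the field names and types are assumptions for exposition, not a format prescribed by this application.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class GestureTrace:
        gesture_type: str                  # e.g. "click", "single_finger_swipe"
        points: List[Tuple[float, float]]  # space dimension: (x, y) coordinates in the page area
        timestamps: List[float]            # time dimension: one timestamp per trajectory point

    @dataclass
    class PageSession:
        page_id: str                       # identifies the designated page
        traces: List[GestureTrace] = field(default_factory=list)  # from page enter to page leave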
Fig. 1 is a schematic diagram illustrating the data processing principle of the method for detecting abnormal user behavior according to an embodiment of the present invention. The designated page on the far left may be an APP page on a user terminal or a browser page; the user may perform operations such as clicking, double clicking, single-finger sliding and multi-finger sliding on it, and the operation trajectory may be a point, a straight line or a curve. In practical applications, the gesture trajectory data from the time the user enters the designated page to the time the user leaves it can be collected as a detection sample, forming a relatively complete record of gesture operations on the designated page, the better to judge whether behavior abnormality exists when the user uses the page. In the embodiments of the invention, user behavior abnormality may refer to usage obstacles and blocking problems caused by various problems occurring in the page on the user terminal or in the server-side system corresponding to the page.
In the embodiments of the invention, the gesture trajectory data is not analyzed directly; instead it is converted into a corresponding gesture image, and an image analysis model then detects abnormal user behavior based on the image feature data. Because gesture motions on a designated page have strong trajectory and position characteristics, converting them into gesture images makes the intrinsic information they contain easier to mine. The image analysis model can be an image clustering model based on unsupervised learning or an image classification model based on supervised learning.
For the clustering model, gesture images corresponding to massive user gesture trajectory data samples can be clustered in advance as cluster samples to form a clustering space. As described above, the gesture image corresponding to the gesture trajectory data from the time the user enters the designated page to the time the user leaves it may be used as a cluster sample; alternatively, the gesture image corresponding to the user's gesture trajectory data over a fixed period may be intercepted as a cluster sample, and the specific manner may be determined as needed. Clustering massive samples forms a sample space that serves as the reference for judging normal user behavior: since normal user behavior is assumed to be the majority case, most samples form relatively concentrated cluster distributions; for example, several cluster regions may appear in the sample space. Based on the formed clustering space, after the collected user gesture trajectory data is turned into a gesture image, the gesture image is clustered as a detection sample, and whether the user behavior is abnormal is determined according to the position of the detection sample in the clustering space. Specifically, abnormal user behavior may be identified according to the distance from the detection sample to the cluster centers in the clustering space and/or the distribution density of the sample data around the detection sample; for example, if the detection sample is far from the cluster centers (there may be multiple cluster centers), it is an isolated point in the clustering space, and the user behavior may be judged abnormal.
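A minimal sketch of such a clustering-based judgment, assuming scikit-learn, feature vectors already extracted from the gesture images, and an illustrative distance threshold; the application does not prescribe k-means or any particular parameter values.

    import numpy as np
    from sklearn.cluster import KMeans

    def build_cluster_space(sample_features: np.ndarray, k: int = 8) -> KMeans:
        """Cluster image-feature vectors of massive historical gesture images."""
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(sample_features)

    def is_isolated(model: KMeans, detection_sample: np.ndarray, threshold: float = 3.0) -> bool:
        """A detection sample far from every cluster center is treated as an
        isolated point, i.e. the user behavior is judged abnormal."""
        distances = np.linalg.norm(model.cluster_centers_ - detection_sample, axis=1)
        return float(distances.min()) > threshold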
In addition, for the image classification model, the model first needs to be trained: under normal APP usage conditions, a large number of user gesture trajectory data samples are collected and the corresponding gesture images are generated, in the same manner as above, as training data. The trained image classification model then classifies the gesture image corresponding to a detection sample to judge whether the user behavior is abnormal. Furthermore, gesture trajectory data samples corresponding to different types of user behavior abnormality can be prepared as training data, so that the model can not only judge whether the user behavior is abnormal but also give a detection result with a specific abnormality class; for example, through training, the model can distinguish user behavior abnormality caused by page loading problems from that caused by page button failure.
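A minimal sketch of the supervised alternative under the same assumptions; the anomaly class labels and the choice of a random forest over extracted feature vectors (rather than, say, a neural network over raw images) are illustrative.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    CLASSES = ["normal", "page_load_problem", "button_failure"]  # assumed anomaly types

    def train_classifier(features: np.ndarray, labels: np.ndarray) -> RandomForestClassifier:
        """features: one image-feature vector per training gesture image;
        labels: indices into CLASSES."""
        return RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)

    def classify(clf: RandomForestClassifier, sample: np.ndarray) -> str:
        return CLASSES[int(clf.predict(sample.reshape(1, -1))[0])]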
The above has introduced the basic principle by which the embodiments of the invention detect abnormal user behavior. The technical scheme can be applied to a cloud service platform; fig. 2 is a schematic diagram of a system architecture to which the method for detecting abnormal user behavior is applied. The user consumes the services of the cloud service platform through an APP installed on the user terminal. The user terminal can continuously report the gesture trajectory data of the user's APP usage to the cloud service platform; a gesture image generation module in the platform converts the gesture trajectory data into a gesture image in the manner described above, an image feature extraction module extracts image feature data, and an image analysis model processes that data to obtain the detection result of abnormal user behavior. Because the cloud service platform faces a large number of user terminals, when it detects abnormal user behavior across many terminals, this may indicate that a page function on the platform side has a problem or that the page interaction mode is unsuitable for the current users, and the platform can handle this in time. For example, the platform side may have lost page data or have a defective page function; in this case, page update processing can be performed, sending new page data to the terminal side or adding a page patch file, so that the terminal-side page is updated in time to solve the problem in the current page. As another example, some page interaction modes may not suit certain users: some users with visual impairment cannot accurately identify many functional elements in the page, causing operational obstacles. In this case, an instruction may be sent to the user terminal to switch the page interaction mode, for example to voice-command operation, or to enlarge the display size of the functional areas in the page, and so on.
In the following, several processing manners for converting a user's gesture trajectory data into a gesture image are described in detail. The basic principle common to them is that the gesture trajectory data is converted into an image with color characteristics corresponding to the page, where color and position information embody feature information such as the order of gestures, the number of times gestures pass through certain areas, and the distribution of gesture trajectories within the page. Specifically, the following three modes may be adopted:
1) color overlay mode
Whether the color overlay mode or the color mixing mode is adopted, colors need to be allocated in advance over the time range covered by the gesture trajectory data, with different times corresponding to different colors. A specific allocation may make the color change with time in a certain trend; for example, the color may go from light to dark over time along a color spectrum, i.e., the earliest gesture motion trajectory corresponds to a light color and the latest to a dark color. To facilitate color processing, a layer may be allocated for each gesture motion trajectory, with all layers composited at the end.
In the color overlay mode, the color corresponding to a gesture motion drawn later covers the color corresponding to a gesture motion drawn earlier. To support this processing, the gesture trajectory data needs to include the position information and timestamp corresponding to the user's gesture motions on the designated page. The timestamp thus both determines the color corresponding to the gesture motion trajectory and carries the ordering information.
Fig. 3 is a schematic diagram illustrating the principle of drawing a gesture image according to an embodiment of the present invention. In fig. 3, multiple gesture motions are represented by different arrow tracks, and different filling patterns stand in for different colors so the tracks can be distinguished. Note that in practical applications these filling patterns correspond to actual colors; the figure only illustrates the technical principle. In the color overlay mode, the track of a later gesture motion overlaps the earlier track: as the superposition progresses from the single gesture A1 to gesture A1+A2+A3, in the state A1+A2 the filling pattern of gesture A2 (i.e., the color of gesture A2) is displayed at the intersection of the gesture tracks, and in the state A1+A2+A3 the filling pattern of gesture A3 (i.e., the color of gesture A3) is displayed at the intersection of the three gesture tracks. The mutually covering color distribution and image features carry feature information of the time and space dimensions of the user's gesture operations; that is, the image accumulates the user's gesture characteristics in time and space over a period. Presenting these characteristics as image features allows deep mining based on machine learning to analyze abnormal states. A property of the color overlay mode is that, where gesture motion tracks intersect, the features of the last gesture motion are highlighted.
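A minimal sketch of the color overlay mode using Pillow, reusing the GestureTrace records sketched earlier; the grayscale light-to-dark ramp and the 224x224 canvas stand in for a real color spectrum and page-matched image range.

    from PIL import Image, ImageDraw

    def draw_overlay(traces, size=(224, 224)) -> Image.Image:
        """Draw strokes in timestamp order so that a later stroke's color
        covers an earlier one where trajectories intersect."""
        t0 = min(t for tr in traces for t in tr.timestamps)
        t1 = max(t for tr in traces for t in tr.timestamps)
        img = Image.new("RGB", size, "white")
        pen = ImageDraw.Draw(img)
        for tr in sorted(traces, key=lambda tr: tr.timestamps[0]):
            # light-to-dark ramp: early strokes light, late strokes dark
            level = int(230 * (1 - (tr.timestamps[0] - t0) / max(t1 - t0, 1e-9)))
            pen.line(tr.points, fill=(level, level, level), width=3)
        return img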
2) Color blending mode
The color mixing mode requires the same gesture trajectory data content and color allocation as the color overlay mode. The difference is that in the color mixing mode, the color corresponding to a gesture motion drawn later is mixed with the color corresponding to a gesture motion drawn earlier to produce the final drawn color; that is, if a position is passed by several gesture trajectories in succession, the final color at that position is the mixture of the colors corresponding to those gestures. A specific mixing method is to average the color values.
As can be seen from fig. 3, the difference from the color overlay mode is that a blended state is presented where gestures intersect. For example, at the intersection of gestures B1+B2, a blend of the filling patterns of gesture B1 and gesture B2 is presented (i.e., the mix of their colors), and likewise at the intersection of gestures B1+B2+B3, a blend of the three gesture fillings is presented (the mix of the colors of gestures B1, B2 and B3). Like the color overlay mode, the color mixing mode carries information of the time and space dimensions of the gesture motions; its characteristic, however, is that where gesture motion tracks intersect, the features of all the gesture motions are retained through color mixing.
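A minimal sketch of the color mixing mode under the same assumptions, averaging color values where strokes cross as described above; rasterizing each stroke to a mask and accumulating in NumPy is one implementation choice.

    import numpy as np
    from PIL import Image, ImageDraw

    def draw_blended(traces, size=(224, 224)) -> Image.Image:
        """Where several strokes pass the same pixel, keep the average of all
        contributing colors instead of letting the last stroke win."""
        acc = np.zeros((size[1], size[0], 3), dtype=np.float64)  # summed color values
        hits = np.zeros((size[1], size[0]), dtype=np.int64)      # strokes per pixel
        t0 = min(t for tr in traces for t in tr.timestamps)
        t1 = max(t for tr in traces for t in tr.timestamps)
        for tr in traces:
            level = int(230 * (1 - (tr.timestamps[0] - t0) / max(t1 - t0, 1e-9)))
            mask = Image.new("L", size, 0)
            ImageDraw.Draw(mask).line(tr.points, fill=255, width=3)
            m = np.array(mask) > 0
            acc[m] += level          # grayscale stand-in: same value on all channels
            hits[m] += 1
        out = np.full((size[1], size[0], 3), 255.0)  # untouched pixels stay white
        covered = hits > 0
        out[covered] = acc[covered] / hits[covered][:, None]
        return Image.fromarray(out.astype(np.uint8))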
3) Thermodynamic diagram mode
The thermodynamic diagram mode differs from the color overlay and color mixing modes in that it does not consider the order of the user's gesture motions but focuses on information such as the distribution density and overlap count of gesture motions; accordingly, the gesture trajectory data need not contain timestamp information. In the thermodynamic diagram mode, within the image range matched to the designated page, the number of times each pixel is passed by gesture motions is counted according to the position information corresponding to the gesture motions; colors are then allocated according to the overlap counts and the gesture image is drawn, specifically with heavily overlapped parts corresponding to darker colors and lightly overlapped parts to lighter colors. Allocating different colors to positions with different overlap counts yields image features that capture where the user's gesture trajectories pass frequently and where they pass rarely.
As can be seen from fig. 3, in the thermodynamic diagram mode the gesture motions are not color-distinguished by chronological order, so gestures C1, C2 and C3 in fig. 3 all use the same filling. At the intersections of gesture motion tracks, the more intersections, the darker the color: as shown, the color at the intersection of gestures C1+C2+C3 is deeper than at the intersection formed by gestures C1+C2. Compared with the first two modes, the thermodynamic diagram mode emphasizes the distribution of gesture trajectories and the degree to which they intersect.
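A minimal sketch of the thermodynamic diagram mode; timestamps are ignored, and the grayscale darkening (capped at five overlaps) is an illustrative stand-in for a real heat palette.

    import numpy as np
    from PIL import Image, ImageDraw

    def draw_heatmap(traces, size=(224, 224)) -> Image.Image:
        """Count how many strokes pass each pixel; more overlaps -> darker pixel."""
        counts = np.zeros((size[1], size[0]), dtype=np.int64)
        for tr in traces:
            mask = Image.new("L", size, 0)
            ImageDraw.Draw(mask).line(tr.points, fill=255, width=3)
            counts += (np.array(mask) > 0).astype(np.int64)
        # untouched pixels stay white; shade bottoms out at five overlaps
        shade = np.where(counts > 0, 255 - np.minimum(counts, 5) * 45, 255)
        return Image.fromarray(shade.astype(np.uint8), mode="L")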
The above gesture image drawing modes convert gesture motion features into image features through different mechanisms; a detection result for the corresponding user behavior abnormality is then obtained with an artificial intelligence algorithm based on analyzing and mining the image features. In practical applications, any one of the above modes, or a mixture of them, may be used.
It should be noted that, to facilitate image analysis, the specifications of the gesture images need to be unified; since the screen types of user terminals may differ greatly, the gesture images can be converted to the required uniform specification through resolution conversion. Specifically, the resolution information of the user terminal screen may be obtained first; then, according to this original resolution and the preset standard resolution for drawing uniform gesture images, the position information from the user terminal is converted into position information adapted to the uniform-specification gesture image, forming a uniform gesture image for subsequent image feature analysis.
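A minimal sketch of this resolution conversion; the 224x224 standard resolution is an assumed value.

    def normalize_point(x: float, y: float,
                        screen_w: int, screen_h: int,
                        std_w: int = 224, std_h: int = 224) -> tuple:
        """Rescale a coordinate reported by the terminal into the unified
        gesture-image coordinate system before drawing."""
        return x * std_w / screen_w, y * std_h / screen_h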
With the method for detecting abnormal user behavior of the embodiments of the invention, the collected gesture trajectory data of the user on the designated page is converted into a gesture image corresponding to the page, and abnormal user behavior is then discovered based on image feature analysis. On one hand, gesture trajectory data reflects the user's precise, real-time behavior, so detection based on it can discover abnormal user behavior within a short time and has better timeliness. In addition, some user behavior abnormalities may be caused by the user performing an incorrect gesture motion; for example, the standard gesture is circling clockwise, but the user actually circles counterclockwise. In this case, having recognized the abnormal behavior, the device may prompt the user with the correct gesture and use the camera to capture and guide the user's gesture motion: for example, a voice prompt "please circle clockwise" may be played, and if the camera then captures the user still circling counterclockwise, a further voice prompt "please circle in the opposite direction" may be played, guiding the user to complete the correct gesture operation. This mode can be applied to IoT (Internet of Things) devices equipped with cameras.
As an application scenario of the embodiments of the invention, the method can detect whether the user currently using an APP has a visual impairment, so that the APP page can be switched to an interaction mode suitable for visually impaired users. For example, a user with very weak eyesight often cannot see page information clearly and may therefore perform many zoom operations, page sliding operations, and the like in order to view the detailed contents of a page; however, some pages may not support zooming, or the page content layout may be so dense that the user must keep zooming or moving in every direction to finish reading. Such operations can be identified by the method of the embodiments of the invention and reported to the service platform corresponding to the APP, so that a page suitable for visually impaired users can be sent to the user terminal: for example, the current page content can be split across multiple pages with a zoom function added, or the control buttons on the APP page can be enlarged. Page assistance functions may also be added, for example sending a control instruction to the APP on the user terminal to trigger voice playback of the content. The above processing may also be performed on the user terminal itself, i.e., the APP itself carries out the visual impairment detection, mode switching, and so on.
Through the above visual-impairment detection and obstacle-removal processing, users with visual impairment can interact with APP pages more easily, which improves their user experience.
The method for detecting abnormal user behavior in the embodiments of the invention can detect contact gesture operations on a touch screen as well as non-contact gesture behaviors, for example those collected through the camera of a mobile phone, tablet, laptop or desktop computer, or through sensing devices such as magnetic induction. Here gesture behavior is understood broadly and may include body motions beyond the hands; for example, a smart TV performs various operations by recognizing human body motions (e.g., raising an arm, turning the body). In some cases these non-contact gesture behaviors may also be abnormal: the user may be too far from the camera, may move too fast, or an object may interfere between the user and the device, so the device cannot recognize the user's gesture behavior and thus cannot execute the corresponding instruction. The method of the embodiments of the invention can quickly identify such behavior abnormality and prompt the user to resolve it, for example by switching the device to a voice control mode.
In addition, the embodiments of the invention can help disabled users, such as people with hand disabilities or without arms. A person with a hand disability generally uses a device with non-contact gesture functionality, where a sensor such as a camera detects the user's body motions to trigger device operations. However, non-contact gesture recognition may be limited or obstructed by conditions such as ambient light, the distance between the user and the device's camera, or objects blocking the view, and may also be affected by the physical condition of the disabled person, since some users cannot perform certain gesture motions. For these situations, based on the detection of non-contact gesture abnormality, the method of the embodiments of the invention helps disabled users remove the obstacle through mode switching or through voice and/or visual guidance, so that the device can be used smoothly.
The technical solution of the present invention is further illustrated by some specific examples.
Example one
Fig. 4 is a flowchart illustrating a method for detecting abnormal user behavior according to an embodiment of the present invention. The method may be applied to a cloud service platform: the user uses the service provided by the platform through an APP installed on the user terminal, the terminal continuously reports the gesture trajectory data of the user's APP usage to the platform, and the platform performs abnormality detection on the reported data. The method may also be applied on the user terminal side, performing the abnormality detection on the collected user gesture trajectory data and then reporting the detection result to the cloud service platform. Specifically, the method may include:
s101: and acquiring gesture track data of the user on a specified page. In order to generate a subsequent gesture image, the gesture trajectory data at least includes position information corresponding to a gesture action of the user on the designated page, and in addition, the gesture trajectory data may further include timestamp information corresponding to the gesture action and other auxiliary information, such as a type of the gesture action, a single-finger swipe, a multi-finger swipe, and the like. The designated page may be a page in APP used by the user, a page of a web browser, or the like. The position information may be stored in the form of two-dimensional planar coordinates comprising coordinates of a start point and an end point of the gesture action, the start point coordinates and the end point coordinates being the same for the click operation. In the embodiment of the present invention, the step may specifically be to acquire gesture trajectory data of the user from entering the designated page to leaving the designated page, that is, taking the gesture trajectory data of the user from entering the designated page to leaving the designated page as a detection sample, so that a gesture operation behavior that is more complete for the designated page may be formed, so as to better determine whether a behavior abnormality exists when the user uses the page.
S102: and drawing a gesture image corresponding to the designated page according to the gesture track data. In the aspect of drawing a gesture image, gesture track data of a user can be converted into an image which has color characteristics and corresponds to a page, and the characteristic information such as the sequence of gestures, the number of times that the gestures pass through in certain areas, the distribution condition of gesture tracks in a page range and the like can be embodied through the color and position information.
When timestamp information is carried, the gesture image may be drawn in the following way: within the image range matched to the designated page, drawing at the positions given by the position information with colors selected according to the timestamps, where the mapping between color and time can be preset to follow a preset color variation trend over the time range covered by the gesture trajectory data. A specific allocation may make the color change with time in a certain trend; for example, the color may go from light to dark along a color spectrum, i.e., the earliest gesture motion trajectory corresponds to a light color and the latest to a dark color.
Specifically, in the process of drawing the gesture image, according to the sequence of the timestamps, a coverage mode or a color mixing mode is adopted for drawing, wherein in the coverage mode, a color corresponding to a gesture action drawn later covers a color corresponding to a gesture action drawn earlier, and in the color mixing mode, a color corresponding to a gesture action drawn later and a color corresponding to a gesture action drawn earlier are mixed to be used as a final drawn color, and a specific color mixing mode may be a mode of averaging color values. Therefore, the time stamp determines the color corresponding to the gesture motion track on one hand, and carries the sequence information on the other hand.
In addition to the color overlay mode and the color blend mode, a thermal map mode may also be employed. The thermodynamic diagram mode does not need to consider the sequence of the gesture actions of the user, but focuses on information such as the distribution density and the superposition times of the gesture actions, and accordingly gesture track data can not contain timestamp information. In the thermodynamic diagram mode, counting the times of superposition of each pixel point by gesture actions according to the position information corresponding to the gesture actions on the image range matched with the designated page; and performing color distribution according to the superposition times, drawing the gesture image, and particularly enabling the part with the large superposition times to correspond to a darker color and the part with the small superposition times to correspond to a lighter color.
In practical applications, the specifications of the gesture images need to be unified for the convenience of image analysis processing, but because the screen types of the user terminals used by the users may have great differences, the gesture images can be converted into the specifications required for unification through resolution conversion. Specifically, first resolution information of a screen of the user terminal may be acquired, then the position information on the user terminal may be converted into position information adapted to the gesture image according to the first resolution information and preset second resolution information for drawing the gesture image, and then the gesture image corresponding to the designated page may be drawn according to gesture trajectory data including the converted position information.
S103: performing image feature analysis on the gesture image to generate the detection result of abnormal user behavior. The feature analysis here is an analysis of image features accumulated from the user's gesture trajectories; it is equivalent to converting the user's dynamic gesture operations into planar image features and mining their correlation with abnormal behavior from the viewpoint of image feature processing. The image features can be color distribution, graphic shape, frequency-domain information of the image, and the like; by training a machine learning model, the association between image features and abnormal behaviors can be established, so that abnormal behavior can be detected based on image feature analysis.
Specifically, the image feature analysis processing for the gesture image can be realized by adopting an unsupervised learning-based image clustering model or a supervised learning-based image classification model. Wherein, the image characteristic data extracted from the gesture image is used as a detection sample.
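A minimal sketch of one way to extract such image feature data; a per-channel color histogram is one plausible feature (the text above also mentions graphic shape and frequency-domain information), not a mandated choice.

    import numpy as np
    from PIL import Image

    def image_features(img: Image.Image, bins: int = 16) -> np.ndarray:
        """Turn a gesture image into a fixed-length feature vector: per-channel
        color histograms, normalized and concatenated."""
        arr = np.array(img.convert("RGB"))
        hists = [np.histogram(arr[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
        vec = np.concatenate(hists).astype(np.float64)
        return vec / max(vec.sum(), 1.0)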
In the case of performing processing using a clustering model, this step S103 may include: extracting image features of the gesture image to generate image feature data; clustering processing is carried out in a preset clustering space by taking the image characteristic data as a detection sample; and generating a detection result of the abnormal behavior of the user according to the distance from the detection sample to the clustering center in the clustering space and/or the distribution density of the sample data around the detection sample. The basic processing mode of clustering is to map the image features into a sample in a clustering space, the sample exists in a vector form, and whether the user behavior abnormity exists is determined by observing the relationship between the vector and other vectors in the clustering space. Therefore, under the condition of using a clustering mode, gesture images corresponding to massive user gesture track data samples can be taken as clustering samples in advance to be clustered to form a clustering space, and then the detection samples are judged based on the clustering space.
In the case of using an image classification model, this step S103 may include: extracting image features of the gesture image to generate image feature data; and classifying the image characteristic data by using an image classification model to generate a detection result of whether the abnormal behavior exists and/or the type of the abnormal behavior. In order to use the image classification model for detection, the image classification model needs to be trained in advance, and specifically, under the condition of normal use based on the APP, a large number of user gesture trajectory data samples are collected and corresponding gesture images are generated as training data to train the image classification model. And then classifying the gesture images corresponding to the detection samples by using the trained image classification model so as to judge whether the user behaviors are abnormal. In addition, gesture track data samples corresponding to different types of user behavior abnormity can be made to serve as training data, so that the image classification model can not only judge whether the user behavior is abnormal, but also give a detection result of specific abnormal classification.
With the method for detecting abnormal user behavior provided by this embodiment, the collected gesture trajectory data of the user on the designated page is converted into gesture image data corresponding to the page, and abnormal user behavior is then discovered based on image feature analysis, so abnormal user behavior can be discovered more quickly and accurately.
Example two
Fig. 5 is a schematic structural diagram of a device for detecting abnormal user behavior according to an embodiment of the present invention. The device may be applied to a cloud service platform: the user uses the service provided by the platform through an APP installed on the user terminal, the terminal continuously reports the gesture trajectory data of the user's APP usage to the platform, and the platform performs abnormality detection on the reported data. The device may also be applied on the user terminal side, performing the abnormality detection on the collected user gesture trajectory data and then reporting the detection result to the cloud service platform. Specifically, the device may include:
and the gesture track data acquisition module 11 is configured to acquire gesture track data of the user on a specified page. The gesture trajectory data at least comprises position information corresponding to the gesture action of the user on the designated page, and in addition, the gesture trajectory data also can comprise timestamp information corresponding to the gesture action and other auxiliary information, such as the type of the gesture action, clicking, single-finger sliding, multi-finger sliding and the like. The designated page may be a page in APP used by the user, a page of a web browser, or the like. The position information may be stored in the form of two-dimensional planar coordinates comprising coordinates of a start point and an end point of the gesture action, the start point coordinates and the end point coordinates being the same for the click operation. In the embodiment of the invention, the acquiring of the gesture track data can comprise acquiring gesture track data of a user from entering a specified page to leaving the specified page.
And the gesture image generation module 12 is configured to draw a gesture image corresponding to the designated page according to the gesture trajectory data. In the aspect of drawing a gesture image, gesture track data of a user can be converted into an image which has color characteristics and corresponds to a page, and the characteristic information such as the sequence of gestures, the number of times that the gestures pass through in certain areas, the distribution condition of gesture tracks in a page range and the like can be embodied through the color and position information.
When timestamp information is carried, the gesture image may be drawn in the following way: within the image range matched to the designated page, drawing at the positions given by the position information with colors selected according to the timestamps, where the mapping between color and time can be preset to follow a preset color variation trend over the time range covered by the gesture trajectory data. A specific allocation may make the color change with time in a certain trend; for example, the color may go from light to dark along a color spectrum, i.e., the earliest gesture motion trajectory corresponds to a light color and the latest to a dark color.
Specifically, in the process of drawing the gesture image, according to the sequence of the timestamps, a coverage mode or a color mixing mode is adopted for drawing, wherein in the coverage mode, a color corresponding to a gesture action drawn later covers a color corresponding to a gesture action drawn earlier, and in the color mixing mode, a color corresponding to a gesture action drawn later and a color corresponding to a gesture action drawn earlier are mixed to be used as a final drawn color, and a specific color mixing mode may be a mode of averaging color values. Therefore, the time stamp determines the color corresponding to the gesture motion track on one hand, and carries the sequence information on the other hand.
In addition to the color overlay mode and the color blend mode, a thermal map mode may also be employed. The thermodynamic diagram mode does not need to consider the sequence of the gesture actions of the user, but focuses on information such as the distribution density and the superposition times of the gesture actions, and accordingly gesture track data can not contain timestamp information. In the thermodynamic diagram mode, counting the times of superposition of each pixel point by gesture actions according to the position information corresponding to the gesture actions on the image range matched with the designated page; and performing color distribution according to the superposition times, drawing the gesture image, and particularly enabling the part with the large superposition times to correspond to a darker color and the part with the small superposition times to correspond to a lighter color. The technical principle of the modes adopted by the three drawing gesture images can be seen from the previous exemplary description in conjunction with fig. 3.
In practical applications, the specifications of the gesture images need to be unified for the convenience of image analysis processing, but because the screen types of the user terminals used by the users may have great differences, the gesture images can be converted into the specifications required for unification through resolution conversion. Specifically, first resolution information of a screen of the user terminal may be acquired, then the position information on the user terminal may be converted into position information adapted to the gesture image according to the first resolution information and preset second resolution information for drawing the gesture image, and then the gesture image corresponding to the designated page may be drawn according to gesture trajectory data including the converted position information.
The detection result generation module 13 is configured to perform image feature analysis on the gesture image and generate the detection result of abnormal user behavior. The feature analysis here is an analysis of image features accumulated from the user's gesture trajectories; it is equivalent to converting the user's dynamic gesture operations into planar image features and mining their correlation with abnormal behavior from the viewpoint of image feature processing. The image features can be color distribution, graphic shape, frequency-domain information of the image, and the like; by training a machine learning model, the association between image features and abnormal behaviors can be established, so that abnormal behavior can be detected based on image feature analysis.
Specifically, the image feature analysis processing for the gesture image can be realized by adopting an unsupervised learning-based image clustering model or a supervised learning-based image classification model. Wherein, the image characteristic data extracted from the gesture image is used as a detection sample.
In the case of processing using a clustering model, the detection processing may include: extracting image features of the gesture image to generate image feature data; clustering processing is carried out in a preset clustering space by taking the image characteristic data as a detection sample; and generating a detection result of the abnormal behavior of the user according to the distance from the detection sample to the clustering center in the clustering space and/or the distribution density of the sample data around the detection sample.
In the case of using an image classification model, the detection process may include: extracting image features of the gesture image to generate image feature data; and classifying the image characteristic data by using an image classification model to generate a detection result of whether the abnormal behavior exists and/or the type of the abnormal behavior.
The detailed description of the above processing procedure, the detailed description of the technical principle, and the detailed analysis of the technical effect are described in the foregoing embodiments, and are not repeated herein.
With the device for detecting abnormal user behavior provided by this embodiment, the collected gesture trajectory data of the user on the designated page is converted into a gesture image corresponding to the page, and abnormal user behavior is then discovered based on image feature analysis, so abnormal user behavior can be discovered more quickly and accurately.
EXAMPLE III
The present embodiment provides a page processing method, which can perform abnormality detection based on the detection method of abnormal user behavior in the foregoing embodiments, and execute handling processing when abnormal user behavior is found. The page processing method can be applied to a server side, and the method can comprise the following steps:
s201: and acquiring gesture track data of the user on the page, which is submitted by at least one terminal. The APP on the terminal can continuously report the gesture track data to the server side, so that the server can acquire the gesture track data reported by the APPs of the terminals, and subsequent user behavior abnormity detection is performed.
S202: and determining the detection result of the abnormal behavior of the user according to the gesture track data. Specifically, the manner of obtaining the detection result based on the gesture trajectory data may be the manner described in the foregoing embodiment, and the corresponding gesture image is generated according to the gesture trajectory data, and then the image feature analysis is performed on the gesture image, so as to generate the detection result of the abnormal behavior of the user. For a detailed implementation, reference may be made to the detailed description of the foregoing embodiments.
S203: executing page update processing and/or switching the page interaction mode according to the detection result. On the server side, based on the gesture trajectory data reported by the terminals, abnormality may be found across many users or for an individual user. For example, when the server side detects that page sticking has caused abnormal user behavior, the server may actively perform page update processing, resending new page data to all terminals, or only to the terminals showing the abnormality, to update the current page, and trimming traffic-heavy content units from the resent page data to reduce the sticking. As another example, when some page function is found to have failed, the page may be diagnosed to determine the cause of the failure, and a patch file issued to the APP on the user terminal in time to restore the function. As another example, some page interaction modes may not suit certain users: some users with visual impairment cannot accurately identify many functional elements in the page, causing operational obstacles. In this case, an instruction may be sent to the user terminal to switch the page interaction mode, for example to voice-command operation, voice content playback, or an enlarged display of the functional areas in the page.
On the other hand, the page processing method of the embodiment may also be applied to a terminal side, that is, a processing module for performing user abnormal behavior recognition based on gesture trajectory data is embedded in the terminal APP, and handling processing is performed according to a detection result. In addition, the terminal can also complete the identification of the abnormal behavior of the user based on the gesture track data through the interaction with the server. Specifically, the method may include:
s301: and detecting gesture track data of the user on the page. The terminal can record gesture actions of the user in the process of using the APP through the system log, so that gesture track data is formed. Of course, the recording and use of the user operation by the terminal or APP is performed on the premise of obtaining the user's explicit authorization.
S302: and determining the detection result of the abnormal behavior of the user according to the gesture track data. The terminal can finish the detection of the abnormal behavior of the user through interaction with the server, specifically, the terminal reports the gesture track data to the server, and then receives a detection result of the abnormal behavior of the user, which is returned by the server and generated based on image feature analysis of a gesture image corresponding to the gesture track data. In addition, the detection of the abnormal behavior of the user can also be completed locally at the terminal, specifically, the terminal generates a corresponding gesture image according to the gesture trajectory data, and then performs image feature analysis on the gesture image to generate a detection result of the abnormal behavior of the user. For a detailed implementation, reference may be made to the detailed description of the foregoing embodiments.
S303: according to the detection result of the user's abnormal behavior, acquiring page update data corresponding to the detection result and/or an instruction to switch the page interaction mode, and performing the page update and/or the interaction mode switch. The mechanism for handling the detection result is the same as on the server side, except that on the terminal side, when a page update is found to be needed, the terminal APP must actively initiate a request to the server; it may report the detection result at the same time to trigger the server to update the page or issue a page patch file. In addition, for a visually impaired user, the terminal can detect the behavioral abnormality in time and switch the page interaction mode, for example to voice-command operation or voice content playback, or by enlarging the display size of functional areas on the page.
With the page processing method above, the user's abnormal behavior is detected based on the user's gesture trajectory data, and the page causing the abnormal behavior is adjusted in time, thereby removing the user's usage barrier and improving the experience of using the APP.
Example four
The embodiment of the invention provides a method for processing a non-contact gesture. The method may be applied to a device having a non-contact gesture function, or to a server communicating with such a device: the server acquires, through interaction with the device, the non-contact gesture trajectory data collected by the device, and sends instructions to the device based on its processing result. Specifically, the method may include:
S401: acquiring gesture trajectory data of the user's non-contact gesture. The embodiment of the invention can detect not only contact gesture operations on a touch screen but also non-contact gesture behaviors, collected for example through the camera of a mobile phone, tablet, notebook or desktop computer, or through sensing devices such as magnetic induction sensors. Here, gesture behavior is understood broadly and may include body actions other than hand movements; for example, a smart TV may trigger various operations by recognizing body motions such as raising an arm or turning the body.
S402: determining the detection result of the user's non-contact gesture according to the gesture trajectory data. In some cases these non-contact gesture behaviors may also be abnormal: the user may be too far from the camera, may move too fast, or an object may be interfering between the user and the device, so that the device cannot recognize the instruction corresponding to the user's non-contact gesture. Such anomalies can be detected with the method for detecting abnormal user behavior of the foregoing embodiments: a gesture image is drawn from the trajectory of the user's non-contact gesture collected by the device within a certain time range. For a non-contact gesture, the image need not correspond to a designated page; instead, the trajectory within the time period may be drawn directly on a canvas of a preset size, using any of the three drawing modes mentioned in the foregoing embodiments.
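One possible reading of this drawing step is sketched below: the trajectory is drawn on a fixed-size canvas with stroke colours shifting from blue to red across the covered time range. The canvas size and the particular colour trend are assumptions for the example.

from PIL import Image, ImageDraw

def draw_trajectory(points, size=(256, 256)):
    # Draw (x, y, t) points, with x and y normalised to [0, 1], as line
    # segments on a black canvas; colour encodes time (blue -> red).
    # Assumes the trajectory contains at least two points.
    img = Image.new("RGB", size, "black")
    draw = ImageDraw.Draw(img)
    t0, t1 = points[0][2], points[-1][2]
    span = max(t1 - t0, 1e-9)
    for (x0, y0, ta), (x1, y1, _tb) in zip(points, points[1:]):
        frac = (ta - t0) / span  # 0.0 at the start, 1.0 at the end
        color = (int(255 * frac), 0, int(255 * (1 - frac)))
        draw.line(
            [x0 * size[0], y0 * size[1], x1 * size[0], y1 * size[1]],
            fill=color,
            width=2,
        )
    return img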
S403: switching the interaction mode of the device and/or guiding the user's non-contact gesture according to the detection result. For example, after a non-contact gesture abnormality is detected, the device may be switched to a voice control mode. In addition, the user may be guided by on-screen or voice prompts to perform the correct gesture actions.
In addition, the embodiment of the invention can also help people with disabilities, such as users with hand impairments or without arms. A person with a hand impairment typically uses a device with a non-contact gesture function, in which a sensor such as a camera detects the user's body movements to trigger device operations. However, recognition of non-contact gestures is subject to constraints and obstacles such as ambient light, the distance between the user and the device's camera, and whether an object blocks the view; it is also affected by the user's physical condition, and some users may be unable to perform certain gesture actions. For these situations, the method of the embodiment of the invention detects the non-contact gesture abnormality and then helps the user remove the barrier by switching modes or by guiding the user through voice and/or visual prompts, so that the device can be used smoothly.
Example five
The foregoing embodiments describe the method for detecting abnormal user behavior, the method for processing non-contact gestures, and the page processing method, together with their process flows and device structures. The functions of these methods and devices may be implemented by an electronic device, as shown in Fig. 6, which is a schematic structural diagram of the electronic device according to an embodiment of the present invention. The electronic device specifically includes: a memory 110 and a processor 120.
The memory 110 is used for storing a program.
In addition to the programs described above, the memory 110 may also be configured to store other various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 110 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The processor 120, coupled to the memory 110, is configured to execute the program in the memory 110 to perform the operation steps of the method for detecting the abnormal behavior of the user and/or the method for processing the page and/or the method for processing the non-contact gesture described in the foregoing embodiments.
Furthermore, the processor 120 may also include various modules described in the foregoing embodiments to perform detection processing of user abnormal behavior and/or page processing and/or processing of non-contact gestures, and the memory 110 may be used, for example, to store data required and/or output by these modules to perform operations.
The detailed description of the above processing procedure, the detailed description of the technical principle, and the detailed analysis of the technical effect are described in the foregoing embodiments, and are not repeated herein.
Further, as shown in Fig. 6, the electronic device may further include: a communication component 130, a power component 140, an audio component 150, a display 160, and other components. Only some of the components are schematically shown in the figure, which does not mean that the electronic device includes only those components.
The communication component 130 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device may access a wireless network based on a communication standard, such as WiFi, or a mobile communication network such as 2G, 3G, 4G/LTE or 5G, or a combination thereof. In an exemplary embodiment, the communication component 130 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 130 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power supply component 140 provides power to the various components of the electronic device. The power components 140 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for an electronic device.
The audio component 150 is configured to output and/or input audio signals. For example, the audio component 150 includes a Microphone (MIC) configured to receive external audio signals when the electronic device is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 110 or transmitted via the communication component 130. In some embodiments, audio assembly 150 also includes a speaker for outputting audio signals.
The display 160 includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by program instructions controlling the relevant hardware. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the method embodiments described above. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (20)

1. A method for detecting abnormal user behavior comprises the following steps:
acquiring gesture track data of a user on a designated page;
drawing a gesture image corresponding to the designated page according to the gesture track data;
and carrying out image feature analysis on the gesture image to generate a detection result of the abnormal behavior of the user.
2. The method of claim 1, wherein the gesture track data includes position information and a timestamp corresponding to a gesture action of the user on the designated page, and wherein drawing the gesture image corresponding to the designated page according to the gesture track data includes:
selecting a color corresponding to the timestamp, and drawing the gesture image according to the position information in an image range matched with the designated page, wherein the colors are distributed according to a preset color variation trend over time within the time range covered by the gesture track data.
3. The method according to claim 2, wherein, in the process of drawing the gesture image, drawing is performed in an overlay mode or a color mixing mode according to the sequence of the timestamps;
in the overlay mode, the color corresponding to a gesture action drawn later covers the color corresponding to a gesture action drawn earlier;
in the color mixing mode, the color corresponding to a gesture action drawn later is mixed with the color corresponding to a gesture action drawn earlier to form the finally drawn color.
4. The method of claim 1, wherein the gesture track data includes position information corresponding to gesture actions of the user on the designated page, and wherein drawing the gesture image corresponding to the designated page according to the gesture track data includes:
counting, over an image range matched with the designated page, the number of times each pixel point is superposed by the gesture actions according to the position information corresponding to the gesture actions;
and drawing the gesture image according to the colors corresponding to the superposition counts.
5. The method of claim 1, wherein obtaining gesture trajectory data of a user on a specified page comprises: acquiring gesture track data of a user from entering the specified page to leaving the specified page.
6. The method of claim 1, wherein the gesture trajectory data includes position information corresponding to a gesture action of the user on a designated page of a user terminal screen, and the drawing a gesture image corresponding to the designated page according to the gesture trajectory data includes:
acquiring first resolution information of a user terminal screen;
converting the position information on the user terminal into position information adapted to the gesture image according to the first resolution information and preset second resolution information used for drawing the gesture image;
and drawing a gesture image corresponding to the appointed page according to the gesture track data containing the converted position information.
7. The method of claim 1, wherein performing image feature analysis on the gesture image to generate a detection result of an abnormal behavior of a user comprises:
extracting image features of the gesture image to generate image feature data;
performing clustering processing in a preset clustering space with the image feature data as a detection sample;
and generating a detection result of the abnormal behavior of the user according to the distance from the detection sample to the clustering center in the clustering space and/or the distribution density of the sample data around the detection sample.
8. The method of claim 1, wherein performing image feature analysis on the gesture image to generate a detection result of an abnormal behavior of a user comprises:
extracting image features of the gesture image to generate image feature data;
and classifying the image feature data by using an image classification model to generate a detection result of whether an abnormal behavior exists and/or the type of the abnormal behavior.
9. A device for detecting abnormal behavior of a user, comprising:
the gesture track data acquisition module is used for acquiring gesture track data of a user on a specified page;
the gesture image generation module is used for drawing a gesture image corresponding to the specified page according to the gesture track data;
and the detection result generation module is used for performing image feature analysis on the gesture image and generating a detection result of the abnormal behavior of the user.
10. The apparatus of claim 9, wherein the gesture track data includes position information and a timestamp corresponding to a gesture action of the user on the specified page, and wherein drawing the gesture image corresponding to the specified page according to the gesture track data includes:
selecting a color corresponding to the timestamp, and drawing the gesture image according to the position information in an image range matched with the specified page, wherein the colors are distributed according to a preset color variation trend over time within the time range covered by the gesture track data.
11. The apparatus of claim 9, wherein performing the image feature analysis on the gesture image and identifying the abnormal behavior of the user comprises:
extracting image features of the gesture image to generate image feature data;
performing clustering processing in a preset clustering space with the image feature data as a detection sample;
and identifying the abnormal behavior of the user according to the distance from the detection sample to the clustering center in the clustering space and/or the distribution density of sample data around the detection sample.
12. The apparatus of claim 9, wherein obtaining gesture trajectory data of a user on a specified page comprises: acquiring gesture track data of a user from entering the specified page to leaving the specified page.
13. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to perform the method for detecting abnormal user behavior according to any one of claims 1 to 8.
14. A page processing method includes:
acquiring gesture track data of a user on a page, which is submitted by at least one terminal;
determining a detection result of the abnormal behavior of the user according to the gesture track data;
and executing page updating processing and/or switching a page interaction mode according to the detection result.
15. The method of claim 14, wherein determining a detection result of the user's abnormal behavior from the gesture trajectory data comprises:
generating a corresponding gesture image according to the gesture track data;
and carrying out image feature analysis on the gesture image to generate a detection result of the abnormal behavior of the user.
16. A page processing method includes:
detecting gesture track data of a user on a page;
determining a detection result of the abnormal behavior of the user according to the gesture track data;
and according to the detection result of the abnormal behavior of the user, acquiring page updating data corresponding to the detection result and/or a command for switching a page interaction mode, and executing page updating and/or page interaction mode switching.
17. The method of claim 16, wherein determining a detection result of the user's abnormal behavior from the gesture trajectory data comprises:
reporting the gesture track data to a server, and receiving a detection result of the abnormal behavior of the user returned by the server, the detection result being generated by performing image feature analysis on a gesture image corresponding to the gesture track data;
or,
generating a corresponding gesture image according to the gesture track data;
and carrying out image feature analysis on the gesture image to generate a detection result of the abnormal behavior of the user.
18. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to perform the page processing method of any one of claims 14 to 17.
19. A method for processing non-contact gestures comprises the following steps:
acquiring gesture track data of a non-contact gesture of a user;
determining a detection result of the non-contact gesture of the user according to the gesture track data;
and switching the interaction mode of the equipment and/or guiding the user by the non-contact gesture according to the detection result of the non-contact gesture of the user.
20. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to perform the method of processing a non-contact gesture according to claim 19.
CN202011271295.8A 2020-11-12 2020-11-13 Abnormal behavior detection method, page and gesture processing method, device and electronic equipment Active CN113419621B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020112658722 2020-11-12
CN202011265872 2020-11-12

Publications (2)

Publication Number Publication Date
CN113419621A true CN113419621A (en) 2021-09-21
CN113419621B CN113419621B (en) 2023-05-02

Family

ID=77711721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011271295.8A Active CN113419621B (en) 2020-11-12 2020-11-13 Abnormal behavior detection method, page and gesture processing method, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113419621B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114077529A (en) * 2022-01-19 2022-02-22 荣耀终端有限公司 Log uploading method and device, electronic equipment and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104679434A (en) * 2015-03-17 2015-06-03 北京合生共济投资咨询有限责任公司 Method for switching client pages
CN104933408A (en) * 2015-06-09 2015-09-23 深圳先进技术研究院 Hand gesture recognition method and system
CN105740823A (en) * 2016-02-01 2016-07-06 北京高科中天技术股份有限公司 Dynamic gesture trace recognition method based on depth convolution neural network
CN105807900A (en) * 2014-12-30 2016-07-27 丰唐物联技术(深圳)有限公司 Non-contact type gesture control method and intelligent terminal
CN108921101A (en) * 2018-07-04 2018-11-30 百度在线网络技术(北京)有限公司 Processing method, equipment and readable storage medium storing program for executing based on gesture identification control instruction
CN111382599A (en) * 2018-12-27 2020-07-07 北京搜狗科技发展有限公司 Image processing method and device and electronic equipment


Also Published As

Publication number Publication date
CN113419621B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN110321047B (en) Display control method and device
US20120110516A1 (en) Position aware gestures with visual feedback as input method
WO2014189685A1 (en) Motion-assisted visual language for human computer interfaces
CN109800153A (en) Mobile application test method and device, electronic equipment, storage medium
EP3933552B1 (en) Method and device for determining gaze position of user, storage medium, and electronic apparatus
US11704142B2 (en) Computer application with built in training capability
CN103677597A (en) Terminal equipment and same-screen display method and system
CN110070551A (en) Rendering method, device and the electronic equipment of video image
CN113076903A (en) Target behavior detection method and system, computer equipment and machine readable medium
CN113419621B (en) Abnormal behavior detection method, page and gesture processing method, device and electronic equipment
US11257186B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
CN103164281B (en) Control to gather method and the electronic equipment of image
CN104133553A (en) Method and device for showing webpage content
US20230393649A1 (en) Method and device for inputting information
CN103870146A (en) Information processing method and electronic equipment
CN103543824B (en) Gesture input system and method
CN112148171B (en) Interface switching method and device and electronic equipment
CN110262864B (en) Application processing method and device, storage medium and terminal
CN112005296A (en) Selecting displays using machine learning
CN113377220B (en) Information storage method and device
Suriya et al. An Efficient Artificial Intelligence based Human-Machine Interaction System
CN114967927B (en) Intelligent gesture interaction method based on image processing
US11675496B2 (en) Apparatus, display system, and display control method
US20230376122A1 (en) Interface displaying method, apparatus, device and medium
Mohamed Eye-gesture control of computer systems via artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant