CN113419621B - Abnormal behavior detection method, page and gesture processing method, device and electronic equipment - Google Patents

Abnormal behavior detection method, page and gesture processing method, device and electronic equipment

Info

Publication number
CN113419621B
CN113419621B (application CN202011271295.8A)
Authority
CN
China
Prior art keywords
gesture
user
image
page
track data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011271295.8A
Other languages
Chinese (zh)
Other versions
CN113419621A (en)
Inventor
卜佳俊
黄刚
周晟
张怡远
刘美含
杜鑫明
王炜
徐筝
吴越
张震
张建锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Publication of CN113419621A
Application granted
Publication of CN113419621B
Legal status: Active (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23: Clustering techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Abstract

Embodiments of the invention provide an abnormal behavior detection method, a page processing method, a gesture processing method, corresponding devices, and electronic equipment. The method includes: acquiring gesture track data of a user on a designated page; drawing a gesture image corresponding to the designated page according to the gesture track data; and performing feature analysis on the gesture image to generate a detection result for abnormal user behavior. By converting the collected gesture track data of a user on a designated page into a gesture image corresponding to that page, and then discovering abnormal user behavior through image feature analysis, the embodiments can detect abnormal user behavior more quickly and accurately.

Description

Abnormal behavior detection method, page and gesture processing method, device and electronic equipment
Technical Field
The application relates to an abnormal behavior detection method, a page processing method, a gesture processing method and device, and electronic equipment, and belongs to the field of computer technology.
Background
While using App software, users may run into various experience barriers or lag problems that trigger abnormal behavior: for example, when a page freezes, a user may repeatedly try to refresh it, or when a button or input box on the page stops working, the user may click it again and again. Such abnormal behavior greatly degrades the user's experience of the APP. Timely analysis and discovery of these experience barriers and lag problems therefore plays an important role in improving the App experience and, in turn, in raising user click-through and usage rates.
In the prior art, abnormal user behavior is mostly detected indirectly through information from other data sources; for example, an access problem with a page is discovered only after many consecutive page-open requests have been observed. Judging abnormal user behavior from such indirect data sources often suffers from serious lag: the abnormality is not discovered in time, and consequently neither are the underlying experience barriers or lag problems.
Disclosure of Invention
Embodiments of the invention provide an abnormal behavior detection method, page and gesture processing methods, devices, and electronic equipment, with the aim of discovering abnormal user behavior in a timely manner.
In order to achieve the above object, an embodiment of the present invention provides a method for detecting abnormal user behavior, including:
acquiring gesture track data of a user on a designated page;
drawing a gesture image corresponding to the designated page according to the gesture track data;
and performing image feature analysis on the gesture image to generate a detection result for abnormal user behavior.
An embodiment of the invention also provides a device for detecting abnormal user behavior, including:
a gesture track data acquisition module, used for acquiring gesture track data of a user on a designated page;
a gesture image generation module, used for drawing a gesture image corresponding to the designated page according to the gesture track data;
and a detection result generation module, used for performing image feature analysis on the gesture image and generating a detection result for abnormal user behavior.
The embodiment of the invention also provides electronic equipment, which comprises:
a memory for storing a program;
and the processor is used for running the program stored in the memory so as to execute the detection method of the abnormal behavior of the user.
The embodiment of the invention also provides a page processing method, which comprises the following steps:
acquiring gesture track data of a user on a page, submitted by at least one terminal;
determining a detection result for abnormal user behavior according to the gesture track data;
and executing page update processing and/or switching the page interaction mode according to the detection result.
The embodiment of the invention also provides a page processing method, which comprises the following steps:
detecting gesture track data of a user on a page;
determining a detection result of the abnormal behavior of the user according to the gesture track data;
and, according to the detection result of the abnormal user behavior, acquiring page update data corresponding to the detection result and/or an instruction to switch the page interaction mode, and executing the page update and/or the interaction-mode switch.
The embodiment of the invention also provides electronic equipment, which comprises:
a memory for storing a program;
and the processor is used for running the program stored in the memory so as to execute the page processing method.
The embodiment of the invention also provides a non-contact gesture processing method, which comprises the following steps:
acquiring gesture track data of a non-contact gesture of a user;
determining a detection result of the non-contact gesture of the user according to the gesture track data;
and switching the interaction mode of the equipment and/or guiding the non-contact gesture of the user according to the detection result of the non-contact gesture of the user.
The embodiment of the invention also provides electronic equipment, which comprises:
a memory for storing a program;
and the processor is used for running the program stored in the memory to execute the processing method of the non-contact gesture.
According to the embodiments of the invention, the collected gesture track data of a user on a designated page is converted into a gesture image corresponding to that page, and abnormal user behavior is then discovered through image feature analysis, so that it can be found more quickly and accurately. Further, problems detected on the page can be remedied accordingly, for example by switching the page interaction mode or repairing the page.
The foregoing is only an overview of the technical solutions of the present invention. To make its technical means clearer, so that it can be implemented according to the contents of the specification, and to make the above and other objects, features and advantages of the present invention more readily apparent, embodiments are described in detail below.
Drawings
FIG. 1 is a schematic diagram of data processing principle of a method for detecting abnormal behavior of a user according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system architecture to which the user abnormal behavior detection method according to the embodiment of the present invention is applied;
FIG. 3 is a schematic diagram of drawing a gesture image according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for detecting abnormal behavior of a user according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a device for detecting abnormal behavior of a user according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Gesture track data is the data generated by a user's gesture operations while using an APP; typically, the user performs various gesture operations on a page in order to carry out various information interactions. Gesture track data can include information in two dimensions, time and space. The spatial dimension is the position of the gesture action within the page area, specifically the coordinates of the motion track corresponding to the gesture action; the temporal dimension is the time associated with the gesture action, specifically its start and end times, the time corresponding to each point on the motion track, and so on. Because touch-screen devices account for a very large share of today's terminals, a user's gesture behavior on an application page directly reflects the user's state while using the application, so gesture track data can fully reflect the user experience at the page level. Embodiments of the invention therefore provide a technical scheme for detecting abnormal user behavior based on the user's gesture track data on a page, so that usage barriers and lag problems are discovered in a timely manner.
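As an illustration of the two dimensions just described, the track of one gesture action could be held in a structure like the following. This is only a sketch: the patent does not prescribe a data format, and the field names and sample values here are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GestureTrack:
    """One gesture action on a page: sampled (x, y) points plus per-point timestamps."""
    points: List[Tuple[float, float]]  # spatial dimension: coordinates inside the page area
    timestamps: List[int]              # temporal dimension: sample time of each point, in ms

    def duration_ms(self) -> int:
        # start and end times of the gesture action
        return self.timestamps[-1] - self.timestamps[0]

# A single-finger swipe sampled at three points over 33 ms (illustrative values):
swipe = GestureTrack(points=[(10.0, 20.0), (15.0, 40.0), (20.0, 60.0)],
                     timestamps=[0, 16, 33])
```

One detection sample would then be the list of all such tracks collected between page entry and page exit.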
Fig. 1 is a schematic diagram of the data-processing principle of the method for detecting abnormal user behavior according to an embodiment of the present invention. The designated page at the far left may be an APP page or a browser page on the user terminal, on which the user may perform operations such as single clicks, double clicks, single-finger slides and multi-finger slides; the resulting operation track may be a point, a straight line, or a curve. In practice, the gesture track data from the moment the user enters the designated page until the moment the user leaves it can be collected as one detection sample, forming a complete record of gesture operations on that page and making it easier to judge whether the user behaved abnormally while using it. In the embodiments of the invention, abnormal user behavior refers to the usage barriers and lag problems caused by various defects of the page on the user terminal, or of the server-side system behind the page.
In the embodiments of the invention, the gesture track data is not analyzed directly; it is first converted into a corresponding gesture image, and an image analysis model then detects abnormal user behavior based on the image feature data. Because gestures on a designated page carry strong track and position characteristics, converting them into a gesture image makes the information inherent in them easier to mine. The image analysis model may be implemented as an image clustering model based on unsupervised learning, or as an image classification model based on supervised learning.
For the clustering model, a large number of gesture images corresponding to users' gesture track data samples can be clustered in advance to form a clustering space. As described above, the gesture image corresponding to the gesture track data from page entry to page exit may serve as one clustering sample; alternatively, the gesture image corresponding to a user's gesture track data over a fixed time period may be used, the specific choice depending on requirements. Clustering a large number of such samples yields a sample space that serves as the standard for judging normal user behavior: since user behavior is normal in the majority of cases, most samples form relatively concentrated clusters, i.e. the sample space contains regions where many samples are densely gathered. A newly acquired user's gesture track data is then turned into a gesture image and placed, as a detection sample, into this clustering space, and whether the user's behavior is abnormal is determined from the detection sample's position within it.
Specifically, abnormal user behavior can be identified from the distance between the detection sample and the cluster centers in the clustering space and/or from the distribution density of the sample data around the detection sample. For example, if the detection sample is far from the cluster centers (there may be several), it is an isolated point in the clustering space, and the user's behavior can be judged abnormal. In some cases, however, a detection sample is far from the cluster centers yet still has many samples around it: those samples have a low distribution density and cover a large range, so many samples of that cluster are also far from its center. In such a case the detection sample can still be considered normal, with no abnormality in the user's behavior.
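The distance-plus-density rule above can be sketched minimally in Python; the thresholds (`center_dist`, `density_radius`, `min_neighbors`) are hypothetical tuning parameters, not values from the patent.

```python
import math

def is_abnormal(sample, centers, neighbors,
                center_dist=5.0, density_radius=3.0, min_neighbors=3):
    """Flag a detection sample as abnormal when it is far from every cluster
    center AND only sparsely surrounded by other samples in feature space.
    A sample that is far from the centers but sits inside a loose,
    low-density cluster is still treated as normal."""
    far_from_all = all(math.dist(sample, c) > center_dist for c in centers)
    nearby = sum(1 for n in neighbors if math.dist(sample, n) <= density_radius)
    return far_from_all and nearby < min_neighbors
```

For instance, a sample far from the only center with no neighbors is an isolated point, while the same sample surrounded by several nearby samples is not.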
The image classification model, by contrast, must be trained first: with the APP working normally, a large number of users' gesture track data samples are collected in the same manner as above, and the corresponding gesture images are generated as training data for the classification model. The trained model then classifies the gesture image of a detection sample to judge whether the user's behavior is abnormal. Furthermore, gesture track data samples corresponding to different types of user-behavior abnormality can be prepared as training data, so that the model not only judges whether behavior is abnormal but also outputs a specific abnormality class; for example, it can be trained to distinguish abnormalities caused by page-loading problems from those caused by failed page buttons.
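As a stand-in for the trained image classification model, a nearest-centroid rule over image-feature vectors illustrates the multi-class output described above. The label names mirror the anomaly types in the text, but the centroid values and two-dimensional feature space are invented for illustration only.

```python
import math

def classify(features, centroids):
    """Assign the label of the closest class centroid in image-feature space.
    A real system would use a trained classifier; this is only a sketch."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# Hypothetical class centroids learned from labeled gesture images:
CENTROIDS = {
    "normal":              (0.0, 0.0),
    "page_load_anomaly":   (5.0, 5.0),   # e.g. frantic refresh swipes
    "dead_button_anomaly": (-5.0, 5.0),  # e.g. repeated taps on one spot
}
```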
The above introduces the basic principle by which the embodiments of the invention detect abnormal user behavior. The technical scheme can be applied to a cloud service platform; fig. 2 is a schematic diagram of a system architecture to which the detection method is applied. Users consume the services of the cloud service platform through an APP installed on their terminals. A user terminal can continuously report the APP's gesture track data to the platform, where a gesture image generation module converts it into gesture images in the manner described above; an image feature extraction module then extracts image feature data, which the image analysis model processes to produce the detection result for abnormal user behavior. Because the platform serves a large number of terminals, detecting abnormal user behavior on many of them at once indicates either that a page function on the platform side is faulty or that the page's interaction mode does not suit the current users, so the problem can be handled promptly. For example, the platform side may have lost page data or page functions; in that case a page update can be performed, sending new page data or a page patch file to the terminal side so that the page there is updated in time and the problem resolved.
As another example, the interaction mode of some pages may not suit certain users: for some visually impaired people, many functional areas of a page cannot be accurately identified, creating an operation barrier. In this case an instruction can be sent to the user terminal to switch the page interaction mode, for example to voice-command operation, or to enlarge the display size of the page's functional areas.
The following describes in detail several ways in which the embodiments of the invention convert a user's gesture track data into a gesture image. Their common principle is to convert the gesture track data into an image, with color features, corresponding to the page, using color and position to express feature information such as the order of the gestures, how many times gestures passed through particular areas, and how the gesture tracks are distributed across the page. Specifically, the following three modes can be used:
1) Color overlay mode
In both the color overlay mode and the color mixing mode, colors must first be pre-allocated over the time range covered by the gesture track data, so that different times correspond to different colors. A specific allocation scheme can make the color change with a definite trend over time; for example, the color can move from light to dark along a color spectrum, so that the earliest gesture motion tracks are drawn in light colors and the latest in dark ones. One layer can be allocated per gesture motion track, with all layers composited at the end.
In the color overlay mode, the color of a gesture action drawn later covers the color of one drawn earlier. To support this mode, the gesture track data must contain both the position information and the timestamp of each of the user's gesture actions on the designated page: the timestamp determines the color of the gesture motion track and also carries the ordering information.
Fig. 3 is a schematic diagram of drawing a gesture image according to an embodiment of the present invention. It shows several gesture actions as tracks with different arrows, using different filling patterns to stand in for different colors so that the tracks of different gesture actions can be distinguished; note that in practice these filling patterns would be actual colors, and the figure is only a schematic illustration of the principle. In the color overlay mode, the track of a later gesture action covers the earlier ones. Following the superposition from the single gesture A1 to A1+A2+A3: in the A1+A2 state, the intersection of the two tracks shows the filling pattern (i.e. the color) of gesture A2, and in the A1+A2+A3 state, the intersection of the three tracks shows the filling pattern (i.e. the color) of gesture A3. The color distribution and the mutually covering image features carry the temporal and spatial characteristics of the user's gesture operations, i.e. the user's gesture features in time and space accumulated over the period covered by the image; presenting these characteristics as image features allows deep mining by machine learning to analyze abnormal states. By construction, the color overlay mode highlights the characteristics of the last gesture action wherever gesture action tracks meet.
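A minimal sketch of the color overlay mode under simplifying assumptions: integer gray levels stand in for colors, and each track is a list of pixel coordinates. The light-to-dark mapping over time and the last-writer-wins rule follow the description above; the canvas size and shade values are illustrative.

```python
def time_to_shade(t, t0, t1):
    """Pre-allocate a color over the covered time range: map a timestamp to a
    gray level that darkens over time (255 = light/early, 0 = dark/late)."""
    frac = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
    return round(255 * (1.0 - frac))

def draw_overlay(tracks, width, height, background=None):
    """tracks: [(shade, [(x, y), ...]), ...] in chronological order.
    In overlay mode the color of a later gesture covers an earlier one."""
    canvas = [[background] * width for _ in range(height)]
    for shade, points in tracks:
        for x, y in points:
            canvas[y][x] = shade  # last writer wins at track crossings
    return canvas
```

At a crossing pixel the canvas keeps only the later gesture's shade, which is exactly the behavior the A1+A2+A3 example illustrates.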
2) Color mixing mode
The color mixing mode requires the same gesture track data content as the color overlay mode. The difference is that in the mixing mode, the color of a gesture action drawn later is blended with the color of one drawn earlier, and the blend is used as the final drawn color: if a position is crossed by several gesture tracks in succession, its final color is the mixture of the colors of all those gestures. One concrete mixing rule is to average the color values.
As can be seen from fig. 3, unlike in the color overlay mode, the gesture intersections present a mixed state. For example, the intersection of gestures B1+B2 presents a mixture of the filling patterns of B1 and B2 (i.e. a mix of their colors), and likewise the intersection of B1+B2+B3 presents a mixture of all three (the colors of B1, B2 and B3). Like the overlay mode, the mixing mode carries both the temporal and the spatial information of the gesture actions; its distinguishing trait is that, where gesture action tracks intersect, color mixing preserves the characteristics of every gesture action.
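The mixing mode differs from the overlay sketch only in how crossings are resolved; here is a sketch using the value-averaging rule mentioned above, again with gray levels standing in for colors.

```python
def draw_mix(tracks, width, height, background=0):
    """Mix mode: if several gesture tracks pass the same position, the final
    color there is the average of their color values (one simple mixing rule)."""
    hits = {}
    for shade, points in tracks:
        for x, y in points:
            hits.setdefault((x, y), []).append(shade)
    canvas = [[background] * width for _ in range(height)]
    for (x, y), shades in hits.items():
        canvas[y][x] = sum(shades) // len(shades)  # every gesture keeps a share
    return canvas
```

A pixel crossed by shades 100 and 200 ends up at 150, so no gesture's contribution is lost at the intersection.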
3) Thermodynamic diagram pattern
The thermodynamic diagram mode differs from both the color overlay and color mixing modes: it does not consider the order of the user's gesture actions, but focuses instead on information such as their distribution density and overlap counts, so the gesture track data need not contain timestamp information. In this mode, over an image range matched to the designated page, the number of times each pixel is crossed by gesture actions is counted from the gestures' position information; colors are then allocated according to these overlap counts and the gesture image is drawn, with heavily overlapped parts mapped to darker colors and lightly overlapped parts to lighter ones. By assigning different colors to positions with different overlap counts, the image features capture where the user's gesture tracks passed more often and where they passed less often.
As can be seen from fig. 3, in the thermodynamic diagram mode the gestures are not color-differentiated by chronological order, so gestures C1, C2 and C3 all use the same filling. At the intersections of the gesture motion tracks, the more tracks cross, the darker the color: as shown, the color at the C1+C2+C3 intersection is deeper than at the intersection formed by C1+C2 alone. Compared with the first two modes, the thermodynamic diagram mode emphasizes the distribution of the gesture tracks and their degree of intersection.
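The thermodynamic diagram mode reduces to per-pixel overlap counting; a sketch follows, with timestamps deliberately absent from the input since this mode does not need them. Mapping counts to actual colors is left out.

```python
def draw_heatmap(tracks, width, height):
    """Heat-map mode: count, per pixel, how many gesture tracks passed it.
    Higher counts would then be rendered in darker colors."""
    counts = [[0] * width for _ in range(height)]
    for points in tracks:          # each track is just a list of (x, y) points
        for x, y in points:
            counts[y][x] += 1
    return counts
```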
The above drawing modes convert gesture-action characteristics into image features through different mechanisms; the image features are then analyzed and mined with artificial-intelligence algorithms to produce the corresponding detection results for abnormal user behavior. In practice, any one of the modes, or a combination of them, may be employed.
It should be noted that, to simplify image analysis, the gesture images need a unified specification; since the screens of different users' terminals can differ greatly, the gesture images can be brought to the unified specification through resolution conversion. Specifically, the resolution of the user terminal's screen is obtained first; then, from that original resolution and a preset standard resolution for drawing uniformly sized gesture images, the position information on the user terminal is converted into position information fitting the unified specification, producing gesture images of uniform size for the subsequent image feature analysis.
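The resolution conversion can be sketched as a simple coordinate rescaling; the 1080x1920 standard resolution below is an assumed value, not one specified by the patent.

```python
def normalize(x, y, screen_w, screen_h, std_w=1080, std_h=1920):
    """Convert a touch coordinate from the terminal's own screen resolution
    to the unified standard resolution used for drawing gesture images."""
    return (x * std_w / screen_w, y * std_h / screen_h)
```

Applying this to every point of every track before drawing yields gesture images of one size regardless of the source device.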
In the method for detecting abnormal user behavior described above, the collected gesture track data of a user on a designated page is converted into a gesture image corresponding to that page, and abnormal user behavior is then discovered through image feature analysis. On one hand, because gesture track data reflects the user's behavior accurately and in real time, detecting it can reveal abnormal behavior within a short time, giving good timeliness. On the other hand, mapping the gesture track data into a gesture image allows deeper features linking the data and the page to be mined through image processing, so abnormal behavior is discovered more quickly and accurately; moreover, since this detection associates the gesture track data with a specific page, the problematic page can be located quickly. In addition, some abnormal user behavior may be caused by incorrect gesture actions; for example, the standard gesture may be drawing a circle clockwise while the user actually circles counter-clockwise. In such a case, once the abnormal action is recognized, the correct gesture can be prompted to the user, and a camera can capture the user's gesture actions to guide them: for instance, a voice prompt "please circle clockwise" can be given, and if the camera then observes the user still circling counter-clockwise, a further prompt "please circle in the opposite direction" can be issued, guiding the user to complete the correct gesture operation. This mode can be applied to IOT (Internet of Things) devices equipped with cameras.
As one application scenario of the embodiments of the invention, the method can detect whether the user currently operating the APP is visually impaired, so that the APP page can be switched to an interaction mode suitable for visually impaired users. Users with poor eyesight often cannot see page information clearly and may therefore perform many zooming and page-sliding operations to view the page's details; but some pages do not support zooming, or lay out so much content that the user must zoom, or pan left, right, up and down, continuously to finish reading. The method of the embodiments can recognize such operation behavior and report it to the service platform behind the APP, which can then send the user terminal pages suited to visually impaired users, for example by splitting the current page content across several pages, adding a zoom function, or enlarging the control buttons on the APP page; page assistance functions can also be added, such as sending control instructions to the terminal's APP to trigger voice playback of content. These processes can equally be performed by the APP itself, i.e. the detection of visual impairment and the mode switching can be executed on the user terminal.
Through the above detection of visual impairment and the barrier-removing processing, visually impaired users can interact with APP pages more easily, improving their experience.
The method for detecting abnormal user behavior can be used not only for touch-gesture operations on a touch screen but also for non-contact gesture behavior, for example behavior captured by the camera of a mobile phone, tablet, notebook or desktop computer, or by sensing equipment such as magnetic induction. Gesture behavior is understood broadly here and can include limb actions beyond the hands; a smart TV, for instance, performs various operations by recognizing human actions such as lifting an arm or turning the body. Such non-contact gesture behavior is prone to abnormal situations: the user may be too far from the camera, may move too quickly, or an object may sit between the user and the device, so the device cannot recognize the gesture and therefore cannot execute the corresponding instruction. The method of the embodiments of the invention can quickly identify these abnormal behaviors and prompt the user to resolve them, for example by switching the device to a voice-control mode.
In addition, the embodiment of the present invention can help disabled users, such as people with hand disabilities or without arms. Such users typically rely on devices with non-contact gesture functions, where device operations are triggered by detecting the user's limb movements with a sensor such as a camera. However, non-contact gesture recognition is limited by conditions such as ambient light, the distance between the user and the device camera, and whether objects block the view; it is also affected by the user's physical condition, since some disabled users cannot perform certain gesture actions. For these situations, the method provided by the embodiment of the present invention detects non-contact gesture anomalies and then helps disabled users remove the obstacle through mode switching or voice and/or visual guidance, so that the device can be used smoothly.
The technical solution of the present invention is further described below through specific embodiments.
Example 1
Fig. 4 is a schematic flowchart of a method for detecting abnormal user behavior according to an embodiment of the present invention. The method may be applied to a cloud service platform: a user accesses services provided by the platform through an APP installed on a user terminal, the terminal continuously reports the user's gesture track data to the platform, and the platform performs anomaly detection on the reported data. Alternatively, the method may run on the user terminal side, which performs the anomaly detection on the collected gesture track data and then reports the detection result to the cloud service platform. Specifically, the method may include:
S101: gesture track data of a user on a designated page is obtained. In order to generate the subsequent gesture image, the gesture track data at least includes location information corresponding to a gesture action of the user on the designated page, and in addition, the gesture track data may further include timestamp information corresponding to the gesture action and other auxiliary information, for example, a type of the gesture action, such as clicking, single-finger sliding, multi-finger sliding, and the like. The designated page may be a page in the APP used by the user, or may be a page of a web browser. The position information may be stored in the form of two-dimensional planar coordinates, including coordinates of a start point and an end point of the gesture motion, which are the same for a click operation. In the embodiment of the invention, the step can be specifically to acquire gesture track data of the user from entering the specified page to leaving the specified page, namely, the gesture track data of the user from entering the specified page to leaving the specified page is taken as a detection sample, so that gesture operation behaviors which are more complete for the specified page can be formed, and whether the user has abnormal behaviors when using the page can be better judged.
S102: and drawing a gesture image corresponding to the specified page according to the gesture track data. In the aspect of drawing the gesture image, the gesture track data of the user can be converted into an image with color characteristics and corresponding to the page, and characteristic information such as the sequence of the gestures, the times of gesture passing in certain areas, the distribution condition of the gesture track in the page range and the like can be embodied through the color and position information.
When timestamp information is available, the gesture image may be drawn as follows: for each position, select a color corresponding to its timestamp and draw it onto an image range matched to the designated page. The mapping between color and time can be preset so that, over the time range covered by the gesture track data, colors follow a predetermined trend over time; for example, the color may move from light to dark along a color spectrum, so that early gesture tracks appear in light colors and late tracks in dark colors.
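A minimal sketch of the light-to-dark timestamp mapping described above, using a grayscale ramp as the assumed "color spectrum" (the real mapping could use any spectrum):

```python
def timestamp_to_color(t_ms, t_min, t_max):
    """Map a timestamp to a grayscale shade: early gestures light, late dark."""
    if t_max == t_min:
        frac = 0.0
    else:
        frac = (t_ms - t_min) / (t_max - t_min)
    shade = int(255 - frac * 255)  # 255 = lightest, 0 = darkest
    return (shade, shade, shade)

assert timestamp_to_color(0, 0, 1000) == (255, 255, 255)    # earliest: light
assert timestamp_to_color(1000, 0, 1000) == (0, 0, 0)       # latest: dark
```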
Specifically, gesture actions are drawn in timestamp order using either an overlay mode or a color-mixing mode. In overlay mode, the color of a later gesture action covers that of an earlier one; in color-mixing mode, the two colors are blended, for example by taking the average of their color values as the final drawn color. The timestamp therefore both determines the color of each track segment and carries ordering information.
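The two drawing modes can be sketched as a per-pixel write rule; the canvas representation (a dict keyed by pixel coordinate) is an assumption made for brevity:

```python
def draw_pixel(canvas, xy, color, mode="overlay"):
    """Write a gesture color into the canvas according to the chosen mode."""
    if mode == "overlay" or xy not in canvas:
        canvas[xy] = color  # later gesture covers the earlier one
    elif mode == "mix":
        old = canvas[xy]    # blend by averaging the channel values
        canvas[xy] = tuple((a + b) // 2 for a, b in zip(old, color))

canvas = {}
draw_pixel(canvas, (3, 4), (200, 200, 200))              # early, light color
draw_pixel(canvas, (3, 4), (100, 100, 100), mode="mix")  # later, darker color
assert canvas[(3, 4)] == (150, 150, 150)                 # mixed result
```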
Besides the overlay and color-mixing modes, a thermodynamic-diagram (heat map) mode may be used. This mode ignores the order of the gesture actions and instead focuses on their distribution density and overlap counts, so the corresponding gesture track data need not contain timestamps. In this mode, the number of times each pixel is covered by a gesture action is counted on the image range matched to the designated page, and colors are assigned according to the counts: pixels covered more often receive darker colors, pixels covered less often receive lighter colors.
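The heat-map counting can be sketched as follows; normalizing the counts to a 0-255 intensity is an illustrative choice, not one fixed by the original text:

```python
import numpy as np

def heatmap(points, width, height):
    """Count how many gesture samples hit each pixel; darker = more hits."""
    counts = np.zeros((height, width), dtype=int)
    for x, y in points:
        counts[y, x] += 1
    m = counts.max()
    # Scale counts to a 0-255 intensity (0 = never hit, 255 = most hit).
    return (counts * 255 // m) if m else counts

h = heatmap([(1, 1), (1, 1), (2, 2)], 4, 4)
assert h[1, 1] == 255 and h[2, 2] == 127  # twice-hit pixel is darkest
```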
In practice, gesture images should share a uniform specification to simplify the image analysis, yet the screens of different user terminals may differ greatly, so resolution conversion is used to bring all images to the unified specification. Specifically, the first resolution (that of the user terminal screen) is obtained; the position information recorded on the terminal is then converted, according to the first resolution and a preset second resolution used for drawing gesture images, into positions adapted to the gesture image; finally, the gesture image corresponding to the designated page is drawn from the converted track data.
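A sketch of the coordinate conversion between the first (screen) and second (unified image) resolutions; simple proportional scaling is assumed:

```python
def to_image_coords(x, y, screen_res, image_res):
    """Scale a screen coordinate into the unified gesture-image resolution."""
    sw, sh = screen_res  # first resolution: user terminal screen
    iw, ih = image_res   # second resolution: unified gesture image
    return (round(x * iw / sw), round(y * ih / sh))

# A point on a 1080x2400 screen mapped into a 270x600 gesture image.
assert to_image_coords(540, 1200, (1080, 2400), (270, 600)) == (135, 300)
```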
S103: and carrying out image feature analysis on the gesture image to generate a detection result of the abnormal behavior of the user. The feature analysis referred to herein refers to analysis of image features accumulated based on a gesture trajectory of a user, and corresponds to conversion of a gesture operation performed by the user into planar image features, and mining of association with abnormal behavior from the viewpoint of image feature processing. The image features can be color distribution, graphic shape, frequency domain information of the image and the like, and the association relationship between the image features and abnormal behaviors can be established by training the machine learning model, so that the detection of the abnormal behaviors based on the image feature analysis is realized.
Specifically, the image feature analysis of the gesture image may be implemented with an image clustering model based on unsupervised learning or an image classification model based on supervised learning, with the image feature data extracted from the gesture image serving as the detection sample.
When a clustering model is used, step S103 may include: extracting image features from the gesture image to generate image feature data; treating the feature data as a detection sample and clustering it in a preset clustering space; and generating the detection result according to the distance from the detection sample to the cluster center and/or the density of sample data around the detection sample. The basic idea of clustering is to map the image features to a vector in the clustering space and judge whether the user behavior is abnormal by observing the relationship between this vector and the other vectors. Accordingly, a large number of gesture images generated from users' gesture track data samples can be clustered in advance to form the clustering space, against which each detection sample is then judged.
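The distance-to-cluster-center criterion can be sketched as below. This is a minimal stand-in, assuming a single cluster of previously collected normal feature vectors and a fixed distance threshold; the patent leaves the clustering algorithm and threshold unspecified.

```python
import numpy as np

def anomaly_by_distance(sample_vec, cluster_samples, threshold):
    """Flag a detection sample as abnormal if it lies far from the center
    of the cluster formed by normal gesture-image feature vectors."""
    center = cluster_samples.mean(axis=0)
    dist = np.linalg.norm(sample_vec - center)
    return dist > threshold

normal = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
assert not anomaly_by_distance(np.array([0.05, 0.05]), normal, 1.0)  # near center
assert anomaly_by_distance(np.array([5.0, 5.0]), normal, 1.0)        # far: abnormal
```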
When an image classification model is used, step S103 may include: extracting image features from the gesture image to generate image feature data; and classifying the feature data with the model to determine whether abnormal behavior exists and/or its category. The classification model must be trained in advance: a large number of gesture track data samples collected during normal APP use are converted into gesture images and used as training data. The trained model then classifies the gesture image of each detection sample to judge whether the user's behavior is abnormal. Furthermore, gesture track data samples corresponding to different types of behavior anomalies can be labeled as training data, so that the model not only detects whether an anomaly exists but also outputs its specific category.
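The supervised classification step can be sketched with a nearest-centroid classifier; this is a deliberately simple stand-in for whatever classifier the patent contemplates, and the class labels are illustrative only.

```python
import numpy as np

def train_centroids(features, labels):
    """One centroid per behavior class (e.g. 'normal' vs. an anomaly class)."""
    return {c: features[labels == c].mean(axis=0) for c in set(labels)}

def classify(centroids, vec):
    """Assign the class whose centroid is nearest to the feature vector."""
    return min(centroids, key=lambda c: np.linalg.norm(vec - centroids[c]))

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.2]])
y = np.array(["normal", "normal", "abnormal", "abnormal"])
model = train_centroids(X, y)
assert classify(model, np.array([0.1, 0.1])) == "normal"
assert classify(model, np.array([5.1, 4.9])) == "abnormal"
```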
With the above detection method, the collected gesture track data of a user on a designated page is converted into a gesture image corresponding to the page, and abnormal user behavior is then discovered through image feature analysis, which makes the discovery both faster and more accurate.
Example 2
Fig. 5 is a schematic structural diagram of a device for detecting abnormal user behavior according to an embodiment of the present invention. The device may be deployed on a cloud service platform: a user accesses services provided by the platform through an APP installed on a user terminal, the terminal continuously reports the user's gesture track data to the platform, and the platform performs anomaly detection on the reported data. The device may also be deployed on the user terminal side, which performs the anomaly detection on the collected gesture track data and then reports the detection result to the cloud service platform. Specifically, the device may include:
the gesture track data acquisition module 11, configured to acquire gesture track data of a user on a designated page. The gesture track data includes at least the position information of each gesture action on the designated page; it may additionally include the timestamp of each action and other auxiliary information, such as the action type (click, single-finger slide, multi-finger slide, and so on). The designated page may be a page of the APP or a page of a web browser. The position information may be stored as two-dimensional plane coordinates, comprising the start-point and end-point coordinates of the gesture action; for a click operation the two coincide. In an embodiment of the present invention, acquiring gesture track data may include collecting the user's gesture track data from entering the designated page until leaving it.
The gesture image generating module 12, configured to draw a gesture image corresponding to the designated page according to the gesture track data. The user's gesture track data is converted into a color image matched to the page; the colors and positions encode feature information such as the order of the gestures, the number of times gestures pass through certain areas, and the distribution of the gesture track within the page.
When timestamp information is available, the gesture image may be drawn as follows: for each position, select a color corresponding to its timestamp and draw it onto an image range matched to the designated page. The mapping between color and time can be preset so that, over the time range covered by the gesture track data, colors follow a predetermined trend over time; for example, the color may move from light to dark along a color spectrum, so that early gesture tracks appear in light colors and late tracks in dark colors.
Specifically, gesture actions are drawn in timestamp order using either an overlay mode or a color-mixing mode. In overlay mode, the color of a later gesture action covers that of an earlier one; in color-mixing mode, the two colors are blended, for example by taking the average of their color values as the final drawn color. The timestamp therefore both determines the color of each track segment and carries ordering information.
Besides the overlay and color-mixing modes, a thermodynamic-diagram (heat map) mode may be used. This mode ignores the order of the gesture actions and instead focuses on their distribution density and overlap counts, so the corresponding gesture track data need not contain timestamps. In this mode, the number of times each pixel is covered by a gesture action is counted on the image range matched to the designated page, and colors are assigned according to the counts: pixels covered more often receive darker colors, pixels covered less often receive lighter colors. The technical principles of these three drawing modes are illustrated in the foregoing description with reference to Fig. 3.
In practice, gesture images should share a uniform specification to simplify the image analysis, yet the screens of different user terminals may differ greatly, so resolution conversion is used to bring all images to the unified specification. Specifically, the first resolution (that of the user terminal screen) is obtained; the position information recorded on the terminal is then converted, according to the first resolution and a preset second resolution used for drawing gesture images, into positions adapted to the gesture image; finally, the gesture image corresponding to the designated page is drawn from the converted track data.
The detection result generating module 13, configured to perform image feature analysis on the gesture image and generate a detection result of abnormal user behavior. The feature analysis here operates on the image features accumulated from the user's gesture track: the user's gesture operations are converted into planar image features, and their association with abnormal behavior is mined from the perspective of image processing. The image features may include color distribution, graphic shape, frequency-domain information, and so on. By training a machine learning model, an association between these features and abnormal behaviors can be established, enabling anomaly detection based on image feature analysis.
Specifically, the image feature analysis of the gesture image may be implemented with an image clustering model based on unsupervised learning or an image classification model based on supervised learning, with the image feature data extracted from the gesture image serving as the detection sample.
When a clustering model is used, the detection processing may include: extracting image features from the gesture image to generate image feature data; treating the feature data as a detection sample and clustering it in a preset clustering space; and generating the detection result according to the distance from the detection sample to the cluster center and/or the density of sample data around the detection sample.
When an image classification model is used, the detection processing may include: extracting image features from the gesture image to generate image feature data; and classifying the feature data with the model to determine whether abnormal behavior exists and/or its category.
Detailed descriptions of the processing procedures, technical principles, and technical effects are given in the foregoing embodiments and are not repeated here.
With the above detection device, the collected gesture track data of a user on a designated page is converted into a gesture image corresponding to the page, and abnormal user behavior is then discovered through image feature analysis, which makes the discovery both faster and more accurate.
Example 3
This embodiment provides a page processing method that performs anomaly detection using the detection method of the foregoing embodiments and, when abnormal user behavior is found, executes corresponding handling. The page processing method may be applied to a server side and may include the following steps:
S201: and acquiring gesture track data of the user on the page submitted by at least one terminal. The APP on the terminal can report gesture track data to the server side continuously, so that the server can acquire gesture track data reported by the APP on a plurality of terminals, and subsequent user behavior abnormality detection is performed.
S202: and determining a detection result of the abnormal behavior of the user according to the gesture track data. Specifically, the method of obtaining the detection result based on the gesture track data may be the method described in the foregoing embodiment, where the corresponding gesture image is first generated according to the gesture track data, and then the image feature analysis is performed on the gesture image, so as to generate the detection result of the abnormal behavior of the user. Reference may be made to the detailed description of the foregoing embodiments for a detailed implementation.
S203: and executing page updating processing and/or switching page interaction modes according to the detection result. On the server side, based on gesture track data reported by a plurality of terminals, the abnormality of a plurality of users can be found, and the abnormality of individual users can also be found. For example, when the server side detects that a user behavior is abnormal due to the occurrence of page jamming at a plurality of terminals, the server can actively execute page update processing, resend new page data to all or the terminals with abnormal occurrence to update the current page, and reduce some content units occupying traffic in the resent page data, thereby reducing the page jamming. For another example, when some page functions are found to be invalid, the page functions can be recovered by diagnosing the page, determining the reason of the function invalidation and timely issuing patch files to the APP of the user terminal. For another example, the interaction mode of some pages may not be suitable for some users, for example, for some people with visual obstacles, many functional users in the pages may not be accurately identified, thus causing operation obstacle, for this case, an instruction may be sent to the user terminal, and the page interaction mode may be switched, for example, to voice instruction operation, voice content playing, or enlarging the display size of a functional area in the page.
On the other hand, the page processing method of this embodiment may also be applied to the terminal side: a processing module that identifies abnormal user behavior from gesture track data is embedded in the terminal APP, and handling is performed according to its detection result. The terminal may also complete the identification through interaction with the server. Specifically, the method may include:
s301: gesture track data of a user on a page is detected. The terminal can record gesture actions of a user in the process of using the APP through the system log, so that gesture track data are formed. Of course, the recording and use of the terminal or APP for user operations is performed with explicit authorization of the user.
S302: and determining a detection result of the abnormal behavior of the user according to the gesture track data. The terminal can complete detection of abnormal behaviors of the user through interaction with the server, specifically, the terminal reports the gesture track data to the server, and then receives detection results of the abnormal behaviors of the user, which are returned by the server and are generated based on image feature analysis of gesture images corresponding to the gesture track data. In addition, the detection of the abnormal behavior of the user can be completed locally at the terminal, specifically, the terminal generates a corresponding gesture image according to the gesture track data, and then performs image feature analysis on the gesture image to generate a detection result of the abnormal behavior of the user. Reference may be made to the detailed description of the foregoing embodiments for a detailed implementation.
S303: and according to the detection result of the abnormal behavior of the user, acquiring page update data corresponding to the detection result and/or an instruction for switching the page interaction mode, and executing page update and/or page interaction mode switching. Specifically, the mechanism for performing the corresponding processing based on the detection result is the same as the processing principle of the previous server side, except that, on the terminal side, when the web page update is found to be required, the terminal APP needs to actively initiate a request to the server, and the detection result may be simultaneously reported to trigger the server to perform the page update or issue the page patch file, and the like. In addition, for the visually impaired user, the terminal can timely detect the abnormality of the user behavior and switch the page interaction mode, for example, the operation can be switched to voice instruction operation, voice content playing or the display size of a functional area in the enlarged page, and the like.
With the above page processing method, abnormal user behavior is detected from the user's gesture track data, and the pages causing it are adjusted in time, thereby removing the user's usage barriers and improving the experience of using the APP.
Example 4
The embodiment of the present invention provides a non-contact gesture processing method, which may be applied to a device with a non-contact gesture function or to a server communicating with such a device. Through interaction with the device, the server can obtain the non-contact gesture track data collected by the device and send instructions back to it based on the processing result. Specifically, the method may include:
s401: gesture track data of the non-contact gesture of the user is obtained. The embodiment of the invention not only can be used for detecting the touch gesture operation behaviors of the touch screen, but also can be applied to detecting the non-contact gesture behaviors, for example, the non-contact gesture behaviors of a user are collected through a mobile phone, a tablet, a notebook and a camera of a desktop computer or sensing equipment such as magnetic induction, and the gesture behaviors are widely understood and can comprise limb actions except hands. For example, the smart tv performs various operations by recognizing human actions (e.g., lifting an arm, turning a body), etc.
S402: and determining a detection result of the non-contact gesture of the user according to the gesture track data. In some cases, abnormal situations of these non-contact gesture behaviors are likely to occur, for example, the user is far away from the camera, or the user acts too quickly, or there is object interference between the user and the device, so that the device cannot recognize the instruction corresponding to the non-contact gesture behaviors of the user. For the anomaly detection of the non-contact gesture, the detection method of the user anomaly behavior in the foregoing embodiment may also be used to perform detection, and the gesture image may be drawn according to the track of the non-contact gesture of the user acquired by the device within a period of time, where for the non-contact gesture, the gesture image may not correspond to the specified page, but may be drawn directly on a canvas of a preset size within a period of time. Specifically, according to one of the three modes, a gesture image can be drawn based on a non-contact gesture track.
S403: and switching the interaction mode of the equipment and/or guiding the non-contact gesture of the user according to the detection result of the non-contact gesture of the user. For example, after the detection result of the non-contact gesture abnormality occurs, the device may be switched to a voice control mode or the like. In addition, the user can be guided to execute correct gesture actions in a screen prompt or voice prompt mode.
In addition, the embodiment of the present invention can help disabled users, such as people with hand disabilities or without arms. Such users typically rely on devices with non-contact gesture functions, where device operations are triggered by detecting the user's limb movements with a sensor such as a camera. However, non-contact gesture recognition is limited by conditions such as ambient light, the distance between the user and the device camera, and whether objects block the view; it is also affected by the user's physical condition, since some disabled users cannot perform certain gesture actions. For these situations, the method provided by the embodiment of the present invention detects non-contact gesture anomalies and then helps disabled users remove the obstacle through mode switching or voice and/or visual guidance, so that the device can be used smoothly.
Example 5
The foregoing embodiments describe the flow and device structure of the method for detecting abnormal user behavior, the non-contact gesture processing method, and the page processing method. The functions of these methods and devices can be implemented by an electronic device, whose schematic structure according to an embodiment of the present invention is shown in Fig. 6. It specifically includes: a memory 110 and a processor 120.
A memory 110 for storing a program.
In addition to the programs described above, the memory 110 may also be configured to store various other data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, videos, and the like.
The memory 110 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The processor 120 is coupled to the memory 110 and configured to execute the program in the memory 110 so as to perform the operation steps of the method for detecting abnormal user behavior, the page processing method, and/or the non-contact gesture processing method described in the foregoing embodiments.
Further, the processor 120 may also include the various modules described in the foregoing embodiments to perform the detection of abnormal user behavior, the page processing, and/or the non-contact gesture processing, and the memory 110 may be used, for example, to store the data these modules require and/or the data they output.
Detailed descriptions of the processing procedures, technical principles, and technical effects are given in the foregoing embodiments and are not repeated here.
Further, as shown in the figure, the electronic device may further include: a communication component 130, a power component 140, an audio component 150, a display 160, and other components. The figure schematically shows only some of the components, which does not mean that the electronic device includes only those shown.
The communication component 130 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G/LTE, or 5G, another mobile communication network, or a combination thereof. In one exemplary embodiment, the communication component 130 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 130 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power component 140 provides power to the various components of the electronic device. The power component 140 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device.
The audio component 150 is configured to output and/or input audio signals. For example, the audio component 150 includes a microphone (MIC) configured to receive external audio signals when the electronic device is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 110 or transmitted via the communication component 130. In some embodiments, the audio component 150 further includes a speaker for outputting audio signals.
The display 160 includes a screen, which may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. A touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware instructed by program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit of the invention.

Claims (18)

1. A method for detecting abnormal behavior of a user, comprising:
acquiring gesture track data of a user on a designated page;
drawing a gesture image corresponding to the designated page according to the gesture track data; and
performing image feature analysis on the gesture image to generate a detection result of abnormal behavior of the user,
wherein the gesture track data comprises position information and a timestamp corresponding to a gesture action of the user on the designated page, and drawing the gesture image corresponding to the designated page according to the gesture track data comprises:
selecting a color corresponding to the timestamp, and drawing the gesture image according to the position information over an image range matched with the designated page, wherein a plurality of preset colors are distributed along a time variation trend over the time range covered by the gesture track data.
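As an illustrative sketch only (the patent specifies no code; the function name `draw_gesture_image`, the white background, and the discrete palette are assumptions), the color-by-timestamp drawing step of claim 1 could look like this in Python: each track point selects, from a preset palette spread across the track's time range, the color matching its timestamp.

```python
def draw_gesture_image(track, width, height, palette):
    """Render gesture track points into an RGB image (nested lists).

    track: list of (x, y, timestamp) tuples -- position info plus timestamp.
    palette: list of RGB tuples distributed over the time range covered by
    the track (earliest timestamp -> first color, latest -> last color).
    """
    image = [[(255, 255, 255)] * width for _ in range(height)]
    if not track:
        return image
    t0 = min(t for _, _, t in track)
    t1 = max(t for _, _, t in track)
    span = (t1 - t0) or 1  # avoid division by zero for a single instant
    for x, y, t in track:
        # Select the preset color corresponding to this timestamp.
        idx = int((t - t0) / span * (len(palette) - 1))
        if 0 <= x < width and 0 <= y < height:
            image[y][x] = palette[idx]
    return image
```

With a red-green-blue palette, early points come out red and late points blue, so the rendered image encodes the temporal order of the gesture that a plain trajectory plot would lose.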
2. The method according to claim 1, wherein in drawing the gesture image, drawing is performed in the order of the timestamps using an overlay mode or a color mixing mode;
in the overlay mode, the color corresponding to a gesture action drawn later covers the color corresponding to a gesture action drawn earlier; and
in the color mixing mode, the color corresponding to the gesture action drawn later and the color corresponding to the gesture action drawn earlier are mixed, and the mixed color is used as the finally drawn color.
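A minimal sketch of the two per-pixel drawing modes of claim 2, assuming RGB tuples and an equal-weight mix for the color mixing mode (the mixing weight is an assumption; the claim does not fix one):

```python
def composite(existing, new, mode="overlay", alpha=0.5):
    """Combine a newly drawn stroke color with the color already at a pixel.

    "overlay": the later color simply replaces the earlier one.
    "blend":   the later and earlier colors are mixed; the mixture is kept.
    """
    if mode == "overlay":
        return new
    # Blend mode: weighted average of the earlier and later colors.
    return tuple(round(alpha * n + (1 - alpha) * e)
                 for e, n in zip(existing, new))
```

Overlay mode preserves only the most recent pass over a pixel, while blend mode keeps a trace of repeated passes, which is why the two modes yield visibly different gesture images for back-and-forth motions.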
3. The method of claim 1, wherein the gesture track data comprises position information corresponding to gesture actions of the user on the designated page, and drawing the gesture image corresponding to the designated page according to the gesture track data comprises:
counting, over the image range matched with the designated page, the number of times each pixel is overlapped by gesture actions according to the position information corresponding to the gesture actions; and
drawing the gesture image according to colors matched to the overlap counts.
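The per-pixel overlap counting of claim 3 amounts to building a heatmap. A hedged sketch, assuming a red-intensity encoding of the overlap count (the actual count-to-color matching is left open by the claim):

```python
from collections import Counter

def overlap_heatmap(track, width, height):
    """Count how many gesture points fall on each pixel and map the count
    to a color -- here, more overlaps yield a darker red (an assumption)."""
    counts = Counter((x, y) for x, y in track
                     if 0 <= x < width and 0 <= y < height)
    peak = max(counts.values(), default=1)
    image = [[(255, 255, 255)] * width for _ in range(height)]
    for (x, y), n in counts.items():
        shade = 255 - round(255 * n / peak)  # more overlaps -> darker
        image[y][x] = (255, shade, shade)
    return image
```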
4. The method of claim 1, wherein acquiring gesture track data of the user on the designated page comprises: acquiring gesture track data of the user from entering the designated page to leaving the designated page.
5. The method of claim 1, wherein the gesture track data comprises position information corresponding to gesture actions of the user on a designated page of a user terminal screen, and drawing the gesture image corresponding to the designated page according to the gesture track data comprises:
acquiring first resolution information of the user terminal screen;
converting the position information on the user terminal into position information suited to the gesture image according to the first resolution information and preset second resolution information for drawing the gesture image; and
drawing the gesture image corresponding to the designated page according to the gesture track data containing the converted position information.
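The resolution conversion of claim 5 is a straightforward rescaling of coordinates from the terminal screen's resolution (first resolution information) to the gesture image's preset resolution (second resolution information). A sketch under that reading; the function name and rounding choice are assumptions:

```python
def convert_position(pos, screen_res, image_res):
    """Scale a touch position from the terminal screen's resolution to the
    resolution used for drawing the gesture image."""
    x, y = pos
    sx, sy = screen_res  # first resolution info (user terminal screen)
    ix, iy = image_res   # second resolution info (gesture image)
    return (round(x * ix / sx), round(y * iy / sy))
```

Normalizing every terminal's coordinates to one fixed image resolution is what lets gesture images from screens of different sizes be compared by a single downstream analysis model.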
6. The method of claim 1, wherein performing image feature analysis on the gesture image to generate a detection result of abnormal behavior of the user comprises:
extracting image features of the gesture image to generate image feature data;
taking the image feature data as a detection sample and performing clustering in a preset clustering space; and
generating the detection result of abnormal behavior of the user according to the distance from the detection sample to a cluster center in the clustering space and/or the distribution density of sample data around the detection sample.
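One plausible reading of claim 6's clustering criterion: a detection sample is flagged as abnormal when it lies far from every cluster center or has few samples within a given radius (low local density). The thresholds, the radius, and the Euclidean metric in this sketch are all assumptions:

```python
import math

def detect_anomaly(sample, centers, reference_samples,
                   dist_threshold, density_radius, min_neighbors):
    """Flag a feature vector as abnormal when it is far from every cluster
    center OR has too few reference samples nearby (low local density)."""
    nearest = min(math.dist(sample, c) for c in centers)
    neighbors = sum(1 for s in reference_samples
                    if math.dist(sample, s) <= density_radius)
    return nearest > dist_threshold or neighbors < min_neighbors
```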
7. The method of claim 1, wherein performing image feature analysis on the gesture image to generate a detection result of abnormal behavior of the user comprises:
extracting image features of the gesture image to generate image feature data; and
performing classification on the image feature data using an image classification model to generate a detection result of whether abnormal behavior exists and/or the category of the abnormal behavior.
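Claim 7 leaves the image classification model unspecified; as a stand-in only, a nearest-centroid classifier over the image feature data illustrates the input/output contract (feature vector in; abnormal flag and category out). The centroid labels are hypothetical:

```python
import math

def classify(features, class_centroids):
    """Assign feature data to the class with the nearest centroid; any class
    other than "normal" is reported as the abnormal-behavior category.
    (A stand-in for the patent's unspecified image classification model.)"""
    label = min(class_centroids,
                key=lambda c: math.dist(features, class_centroids[c]))
    return {"abnormal": label != "normal", "category": label}
```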
8. An apparatus for detecting abnormal behavior of a user, comprising:
a gesture track data acquisition module, configured to acquire gesture track data of a user on a designated page;
a gesture image generation module, configured to draw a gesture image corresponding to the designated page according to the gesture track data; and
a detection result generation module, configured to perform image feature analysis on the gesture image to generate a detection result of abnormal behavior of the user,
wherein the gesture track data comprises position information and a timestamp corresponding to a gesture action of the user on the designated page, and drawing the gesture image corresponding to the designated page according to the gesture track data comprises:
selecting a color corresponding to the timestamp, and drawing the gesture image according to the position information over an image range matched with the designated page, wherein the colors are distributed, over the time range covered by the gesture track data, according to a preset color variation trend over time.
9. The apparatus of claim 8, wherein performing image feature analysis on the gesture image to identify abnormal behavior of the user comprises:
extracting image features of the gesture image to generate image feature data;
taking the image feature data as a detection sample and performing clustering in a preset clustering space; and
identifying abnormal behavior of the user according to the distance from the detection sample to a cluster center in the clustering space and/or the distribution density of sample data around the detection sample.
10. The apparatus of claim 8, wherein acquiring gesture track data of the user on the designated page comprises: acquiring gesture track data of the user from entering the designated page to leaving the designated page.
11. An electronic device, comprising:
a memory for storing a program;
a processor for running the program stored in the memory to execute the method for detecting abnormal behavior of a user according to any one of claims 1 to 7.
12. A page processing method, comprising:
acquiring gesture track data of a user on a page submitted by at least one terminal;
determining a detection result of abnormal behavior of the user according to the gesture track data; and
executing page update processing and/or switching a page interaction mode according to the detection result.
13. The method of claim 12, wherein determining the detection result of abnormal behavior of the user according to the gesture track data comprises:
generating a corresponding gesture image according to the gesture track data; and
performing image feature analysis on the gesture image to generate the detection result of abnormal behavior of the user.
14. A page processing method, comprising:
detecting gesture track data of a user on a page;
determining a detection result of abnormal behavior of the user according to the gesture track data; and
acquiring, according to the detection result of abnormal behavior of the user, page update data and/or an instruction for switching a page interaction mode corresponding to the detection result, and executing the page update and/or the switching of the page interaction mode,
wherein determining the detection result of abnormal behavior of the user according to the gesture track data comprises:
generating a corresponding gesture image according to the gesture track data; and
performing image feature analysis on the gesture image to generate the detection result of abnormal behavior of the user, wherein the gesture track data comprises position information and a timestamp corresponding to a gesture action of the user on a designated page, and
drawing the gesture image corresponding to the designated page according to the gesture track data comprises:
selecting a color corresponding to the timestamp, and drawing the gesture image according to the position information over an image range matched with the designated page, wherein the colors are distributed, over the time range covered by the gesture track data, according to a preset color variation trend over time.
15. The method of claim 14, wherein determining the detection result of abnormal behavior of the user according to the gesture track data comprises:
reporting the gesture track data to a server, and receiving, from the server, a detection result of abnormal behavior of the user generated based on image feature analysis of the gesture image corresponding to the gesture track data.
16. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to perform the page processing method of any one of claims 12 to 15.
17. A method for processing a contactless gesture, comprising:
acquiring gesture track data of a contactless gesture of a user;
determining a detection result of the contactless gesture of the user according to the gesture track data; and
switching an interaction mode of a device and/or guiding the contactless gesture of the user according to the detection result of the contactless gesture of the user,
wherein determining the detection result of the contactless gesture of the user according to the gesture track data comprises:
generating a corresponding gesture image according to the gesture track data; and
performing image feature analysis on the gesture image to determine the detection result of the contactless gesture of the user, wherein the gesture track data comprises position information and a timestamp corresponding to a gesture action of the user on a designated page, and
drawing the gesture image corresponding to the designated page according to the gesture track data comprises:
selecting a color corresponding to the timestamp, and drawing the gesture image according to the position information over an image range matched with the designated page, wherein the colors are distributed, over the time range covered by the gesture track data, according to a preset color variation trend over time.
18. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to perform the method of processing a contactless gesture of claim 17.
CN202011271295.8A 2020-11-12 2020-11-13 Abnormal behavior detection method, page and gesture processing method, device and electronic equipment Active CN113419621B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020112658722 2020-11-12
CN202011265872 2020-11-12

Publications (2)

Publication Number Publication Date
CN113419621A CN113419621A (en) 2021-09-21
CN113419621B true CN113419621B (en) 2023-05-02

Family

ID=77711721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011271295.8A Active CN113419621B (en) 2020-11-12 2020-11-13 Abnormal behavior detection method, page and gesture processing method, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113419621B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114077529B (en) * 2022-01-19 2022-06-21 荣耀终端有限公司 Log uploading method and device, electronic equipment and computer readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105807900A (en) * 2014-12-30 2016-07-27 丰唐物联技术(深圳)有限公司 Non-contact type gesture control method and intelligent terminal
CN104679434A (en) * 2015-03-17 2015-06-03 北京合生共济投资咨询有限责任公司 Method for switching client pages
CN104933408B (en) * 2015-06-09 2019-04-05 深圳先进技术研究院 The method and system of gesture identification
CN105740823B (en) * 2016-02-01 2019-03-29 北京高科中天技术股份有限公司 Dynamic gesture track recognizing method based on depth convolutional neural networks
CN108921101A (en) * 2018-07-04 2018-11-30 百度在线网络技术(北京)有限公司 Control instruction processing method and device based on gesture recognition, and readable storage medium
CN111382599A (en) * 2018-12-27 2020-07-07 北京搜狗科技发展有限公司 Image processing method and device and electronic equipment


Similar Documents

Publication Publication Date Title
US9829984B2 (en) Motion-assisted visual language for human computer interfaces
US9870684B2 (en) Information processing apparatus, information processing method, program, and information processing system for achieving a surveillance camera system
EP2365420B1 (en) System and method for hand gesture recognition for remote control of an internet protocol TV
US20120110516A1 (en) Position aware gestures with visual feedback as input method
US11868521B2 (en) Method and device for determining gaze position of user, storage medium, and electronic apparatus
US20120229509A1 (en) System and method for user interaction
KR20190030140A (en) Method for eye-tracking and user terminal for executing the same
US11704142B2 (en) Computer application with built in training capability
CN110908513B (en) Data processing method and electronic equipment
CN113419621B (en) Abnormal behavior detection method, page and gesture processing method, device and electronic equipment
CN113076903A (en) Target behavior detection method and system, computer equipment and machine readable medium
CN115909127A (en) Training method of abnormal video recognition model, abnormal video recognition method and device
CN109992124A (en) Input method, device and machine readable media
Wang et al. Gaze-aware hand gesture recognition for intelligent construction
CN112826516A (en) Electromyographic signal processing method, device, equipment, readable storage medium and product
US20230393649A1 (en) Method and device for inputting information
CN109891459A (en) Image processing apparatus, image processing method and computer readable recording medium
CN108073291A (en) A kind of input method and device, a kind of device for input
KR20190031829A (en) Electric terminal and method for controlling the same
CN113377220B (en) Information storage method and device
CN112005296A (en) Selecting displays using machine learning
WO2022127063A1 (en) Input method and device, and device for input
CN115509371A (en) Key identification method and device for identifying keys
Mohamed Eye-gesture control of computer systems via artificial intelligence
CN114067237A (en) Video data processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant