CN113900889B - Method and system for intelligently identifying APP manual operation - Google Patents


Info

Publication number
CN113900889B
CN113900889B (granted publication of application CN202111110957.8A; published as CN113900889A)
Authority
CN
China
Prior art keywords
touch
information
obtaining
user
radius
Prior art date
Legal status
Active
Application number
CN202111110957.8A
Other languages
Chinese (zh)
Other versions
CN113900889A (en)
Inventor
杨冠军
Current Assignee
Bairong Zhixin Beijing Technology Co ltd
Original Assignee
Bairong Zhixin Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Bairong Zhixin Beijing Technology Co ltd filed Critical Bairong Zhixin Beijing Technology Co ltd
Priority to CN202111110957.8A priority Critical patent/CN113900889B/en
Publication of CN113900889A publication Critical patent/CN113900889A/en
Application granted granted Critical
Publication of CN113900889B publication Critical patent/CN113900889B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/3041 Monitoring arrangements specially adapted to the computing system or computing system component being monitored, where the component is an input/output interface
    • G06F11/3438 Recording or statistical evaluation of computer activity, e.g. of down time or of input/output operations; monitoring of user actions
    • G06F3/0414 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means, using force sensing means to determine a position

Abstract

The invention provides a method and a system for intelligently identifying APP manual operation, wherein the method comprises the following steps: obtaining a start instruction and starting a monitoring function; obtaining touchesBegan, touchesMoved and touchesEnded information of a touch event; obtaining touch radius information, touch radius tolerance information, touch pressing force and related information; obtaining a predetermined service node set; obtaining a non-human operation data set; obtaining a touch recognition model; and inputting the touchesBegan, touchesMoved and touchesEnded information, the touch radius tolerance, the touch pressing force and related information into the recognition model to obtain a first recognition result. This solves the technical problems in the prior art that identifying whether an operator is a real person degrades user experience, and that identifying key nodes and blocking non-real-person operation suffer from low accuracy and timeliness and an intelligence level in need of improvement.

Description

Method and system for intelligently identifying APP manual operation
Technical Field
The invention relates to the field of intelligent identification and detection, in particular to a method and a system for intelligent identification of APP manual operation.
Background
With the development of the mobile internet, more and more business processes can be completed directly in a mobile APP, and identifying the identity of the APP operator has always been an important part of business risk control. Some fraudsters use group-control systems to control devices in batches and then operate the APP with automated scripts. Traditional risk-control measures generally rely on SMS verification codes, image-recognition CAPTCHAs, slider CAPTCHAs and text-click CAPTCHAs. For SMS verification codes, code-receiving and CAPTCHA-solving platforms exist on the market, so this identification measure can be bypassed; with the rapid development of artificial intelligence in recent years, simple intelligent CAPTCHAs are easy to crack directly; and overly difficult, complicated CAPTCHAs adversely affect user experience.
However, in the process of implementing the technical solution of the embodiments of the application, the inventor discovered that the above technology has at least the following technical problems:
in the prior art, identifying whether an APP operator is a real person degrades user experience, and identifying key nodes and blocking non-real-person operation suffer from low accuracy and timeliness and an intelligence level in need of improvement.
Disclosure of Invention
By providing a method and system for intelligently identifying APP manual operation, the embodiments of the application solve the technical problems in the prior art that identifying whether an APP operator is a real person degrades user experience, and that key-node identification and blocking of non-real-person operation suffer from low accuracy and timeliness and an intelligence level in need of improvement. By analyzing touch information of the user during APP operation, whether the user is a real person is intelligently identified without the user perceiving it, achieving the technical effects of improving user experience while improving the accuracy and timeliness of identifying, judging and blocking non-real-person operation.
In view of the above problems, the embodiment of the application provides a method and a system for intelligently identifying APP manual operation.
In a first aspect, an embodiment of the present application provides a method for intelligently identifying APP manual operation, where the method includes: when a first APP starts to run, obtaining a first start instruction; starting a monitoring function of a first touch screen intelligent device according to the first start instruction; obtaining touchesBegan information, touchesMoved information and touchesEnded information of a first touch event based on the first touch screen intelligent device; analyzing the touchesBegan information, touchesMoved information and touchesEnded information to obtain touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information; obtaining a predetermined service node set; obtaining a non-human operation data set; training a neural network model with the non-human operation data set to obtain a touch recognition model; and when the operation node of the first APP is in the predetermined service node set, inputting the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model to obtain a first recognition result.
In another aspect, an embodiment of the present application provides a system for intelligently identifying APP manual operation, where the system includes: a first obtaining unit, configured to obtain a first start instruction when a first APP starts to run; a first execution unit, configured to start a monitoring function of a first touch screen intelligent device according to the first start instruction; a second obtaining unit, configured to obtain touchesBegan information, touchesMoved information and touchesEnded information of a first touch event based on the first touch screen intelligent device; a second execution unit, configured to analyze the touchesBegan information, touchesMoved information and touchesEnded information to obtain touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information; a third obtaining unit, configured to obtain a predetermined service node set; a fourth obtaining unit, configured to obtain a non-human operation data set; a fifth obtaining unit, configured to train a neural network model with the non-human operation data set to obtain a touch recognition model; and a sixth obtaining unit, configured to input the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model when the operation node of the first APP falls within the predetermined service node set, to obtain a first recognition result.
In a third aspect, an embodiment of the present application provides a system for intelligently identifying APP manual operation, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of any one of the methods of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
due to the adoption of the method, when the first APP starts to run, a first start instruction is obtained; a monitoring function of the first touch screen intelligent device is started according to the first start instruction; touchesBegan information, touchesMoved information and touchesEnded information of a first touch event are obtained based on the first touch screen intelligent device; touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information are obtained by analyzing the touchesBegan information, touchesMoved information and touchesEnded information; a predetermined service node set is obtained; a non-human operation data set is obtained; a neural network model is trained with the non-human operation data set to obtain a touch recognition model; and when the operation node of the first APP is in the predetermined service node set, the touchesBegan information, touchesMoved information, touchesEnded information, touch radius tolerance information, touch pressing force and touch position coordinate information are input into the touch recognition model to obtain a first recognition result.
The foregoing description is only an overview of the technical solution of the present application. In order that the technical means of the present application may be more clearly understood and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present application more readily apparent, specific embodiments of the application are set forth below.
Drawings
FIG. 1 is a schematic flow chart of a method for intelligently identifying APP manual operation according to an embodiment of the application;
fig. 2 is a schematic flow chart of information correction such as touch radius and touch radius tolerance in a method for intelligently identifying an APP manual operation according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of obtaining operation habit coefficients of a user in a method for intelligently identifying APP manual operation according to an embodiment of the present application;
fig. 4 is a schematic flow chart of obtaining a recognition result in a method for intelligently identifying APP manual operation according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of verification of recognition results of a method for intelligently recognizing APP manual operation according to an embodiment of the present application;
fig. 6 is a schematic flow chart of a method for intelligently identifying an APP manual operation to obtain a first alert instruction according to an embodiment of the present application;
fig. 7 is a schematic flow chart of a method for intelligently identifying an APP manual operation to obtain a second alert instruction according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a system for intelligent recognition of APP human operation in accordance with an embodiment of the present application;
fig. 9 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Reference numerals: first obtaining unit 11, second obtaining unit 12, first generating unit 13, third obtaining unit 14, fourth obtaining unit 15, second generating unit 16, fifth obtaining unit 17, electronic device 300, memory 301, processor 302, communication interface 303, bus architecture 304.
Detailed Description
By providing a method and system for intelligently identifying APP manual operation, the embodiments of the application solve the technical problems in the prior art that identifying whether an APP operator is a real person degrades user experience, and that key-node identification and blocking of non-real-person operation suffer from low accuracy and timeliness and an intelligence level in need of improvement. By analyzing touch information of the user during APP operation, whether the user is a real person is intelligently identified without the user perceiving it, achieving the technical effects of improving user experience while improving the accuracy and timeliness of identifying, judging and blocking non-real-person operation.
Summary of the application
With the development of the mobile internet, more and more business processes can be completed directly in a mobile APP, and identifying the identity of the APP operator has always been an important part of business risk control. Some fraudsters use group-control systems to control devices in batches and then operate the APP with automated scripts. Traditional risk-control measures generally rely on SMS verification codes, image-recognition CAPTCHAs, slider CAPTCHAs and text-click CAPTCHAs. For SMS verification codes, code-receiving and CAPTCHA-solving platforms exist on the market, so this identification measure can be bypassed; with the rapid development of artificial intelligence in recent years, simple intelligent CAPTCHAs are easy to crack directly; and overly difficult, complicated CAPTCHAs adversely affect user experience. In the prior art, identifying whether an APP operator is a real person degrades user experience, and identifying key nodes and blocking non-real-person operation suffer from low accuracy and timeliness and an intelligence level in need of improvement.
In view of the above technical problems, the technical solution provided by the application has the following overall idea:
the embodiment of the application provides a method for intelligently identifying APP manual operation, wherein the method comprises the following steps: when a first APP starts to run, obtaining a first start instruction; starting a monitoring function of a first touch screen intelligent device according to the first start instruction; obtaining touchesBegan information, touchesMoved information and touchesEnded information of a first touch event based on the first touch screen intelligent device; analyzing the touchesBegan information, touchesMoved information and touchesEnded information to obtain touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information; obtaining a predetermined service node set; obtaining a non-human operation data set; training a neural network model with the non-human operation data set to obtain a touch recognition model; and when the operation node of the first APP is in the predetermined service node set, inputting the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model to obtain a first recognition result.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an embodiment of the present application provides a method for intelligently identifying an APP manual operation, where the method is applied to a touch screen intelligent device, and the method includes:
S100: when a first APP starts to run, a first start instruction is obtained;
S200: starting a monitoring function of the first touch screen intelligent device according to the first start instruction;
S300: obtaining touchesBegan information, touchesMoved information and touchesEnded information of a first touch event based on the first touch screen intelligent device;
specifically, the first APP is any application program in any touch screen smart device, and the touch screen smart device may, but is not limited to, be: a smart phone, a tablet computer and other terminal equipment with a touch screen. After the user opens the first APP, touch information of a first touch event of clicking operation performed by the user on the first APP is captured, and touchesBegan, touchesMoved, touchesEnded, a set of methods can be used to control finger touch of the APP user. Touch information is acquired through a touch monitoring method provided by the system, the touch monitoring method cannot be perceived by a user, a single touch event is acquired, namely, the touchsBegan information of the first touch event is acquired once, the touchsMoved information is zero or more times, and the touchsEnded information is acquired once. The touchmoved information is acquired zero or more times because a part of operations need not be moved by the user, only need clicking, and a part of operations need to be long pressed and moved multiple times to be completed. All touch operations need to touch the screen and leave the screen, and the touchsBegan information and touchsEnded information are acquired once each. By starting the monitoring function of the first touch screen intelligent device, under the condition that the user does not feel, the touch information of the user is obtained, accurate data information can be provided for subsequent touch operation judgment, and the comprehensive experience sense of the user is not influenced.
S400: analyzing the touchesBegan information, touchesMoved information and touchesEnded information to obtain touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information;
S500: obtaining a predetermined service node set;
S600: obtaining a non-human operation data set;
specifically, according to the touchbegan information, touchmoved information and touchend information, resolving a touch radius (majorRadius) in the first touch event touch information, a touch radius tolerance (majorRadius) information, a touch pressing force (force) information and touch position coordinate (X, Y) information, and storing the above information for standby. Wherein the touch radius tolerance information is used to describe a variance of the touch radius information. The touch radius, the touch radius tolerance and the touch pressing force are further refined on the touch screen operation, and are also the judging basis for judging whether the user is a real person operation or not. Further, the preset service node set is obtained, for example, registration, login, ordering, payment and the like, the preset service node set covers key service nodes of the first APP, all non-human touch information of the touch screen intelligent device including touch radius, touch radius variance, touch pressing force and touch position coordinates is obtained, and the non-human operation data set is collected after arrangement and analysis, so that a foundation is laid for judging whether a user is a real person operation or not.
S700: training a neural network model with the non-human operation data set to obtain a touch recognition model;
S800: when the operation node of the first APP is in the predetermined service node set, inputting the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model to obtain a first recognition result.
Specifically, the non-human operation data set is input into a neural network model for training. A neural network is a computational model formed by a large number of interconnected neurons; the output of the network depends on the connection pattern of the network, and training makes the output information more accurate. Inputting the non-human operation data set into the neural network model for comprehensive analysis of the operation data yields the touch recognition model. When any operation such as registration, login, ordering or payment is performed in the first APP, the touchesBegan information, touchesMoved information and touchesEnded information are input into the touch recognition model for comprehensive analysis of the touch operation, so that the first recognition result is obtained, and the first touch event of the first APP is judged and identified according to the first recognition result. Training the model makes the output first recognition result more accurate and reliable.
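The patent specifies a neural network trained on the non-human operation data set; the tiny perceptron below is only a stand-in to show the train-then-predict flow on hand-made feature vectors. Every feature choice and number in it is an assumption for illustration.

```python
def train_perceptron(dataset, epochs=200, lr=0.1):
    # dataset: list of (feature_vector, label), label 1 = human, 0 = non-human.
    n = len(dataset[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in dataset:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Illustrative features: fractional parts of radius and tolerance, peak force.
# Scripted input tends to give integer radii and zero force (see S840-S880).
training_data = [
    ([0.152344, 0.283203, 0.833333], 1),  # real-person touch
    ([0.000000, 0.000000, 0.000000], 0),  # scripted touch
    ([0.377929, 0.117188, 0.666667], 1),
    ([0.000000, 0.000000, 0.000000], 0),
]
model = train_perceptron(training_data)
```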
Further, as shown in fig. 2, an embodiment of the present application includes:
S910: obtaining an operation habit coefficient of a first user, wherein the first user is a user of the touch screen intelligent device;
S920: correcting the touch radius information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information according to the operation habit coefficient to obtain first touch radius information, first touch radius tolerance information, first touch pressing force and first touch position coordinate information.
Specifically, owing to differences in personal habits, each person has unique operating habits; for example, right-handed and left-handed users operate differently. Therefore, the operation habit coefficient of the user of the touch screen intelligent device, namely the operation habit coefficient of the first user, is obtained. The acquired touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information are corrected according to the habits of the first user to obtain the first touch radius information, first touch radius tolerance information, first touch pressing force and first touch position coordinate information. This yields finer and more accurate touch information and improves the accuracy of intelligent identification.
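The patent does not fix the S920 correction formula, so the sketch below assumes a simple multiplicative adjustment by the habit coefficient; the function name is illustrative.

```python
def correct_touch_info(features, habit_coefficient):
    # S920 correction sketch: the exact formula is not given in the patent,
    # so a multiplicative adjustment by the habit coefficient is assumed.
    return {key: value * habit_coefficient for key, value in features.items()}

raw = {"touch_radius": 21.152344, "touch_radius_tolerance": 5.283203,
       "touch_force": 0.833333, "touch_x": 133.333328, "touch_y": 567.366828}
first_info = correct_touch_info(raw, habit_coefficient=1.05)
```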
Further, as shown in fig. 3, the step S910 further includes:
S911: obtaining a value range of the operation habit coefficient of the first user;
S912: randomly obtaining M operation habit coefficients from the value range of the operation habit coefficient of the first user;
S913: calculating the M operation habit coefficients with a genetic algorithm to obtain M predicted operation state curves, wherein the M predicted operation state curves correspond one-to-one with the M operation habit coefficients;
S914: obtaining an actual operation state curve of the first user;
S915: comparing the M predicted operation state curves with the actual operation state curve to obtain the operation habit coefficient of the first user, wherein the predicted operation state curve corresponding to the operation habit coefficient of the first user has the greatest similarity to the actual operation state curve.
Specifically, a genetic algorithm essentially performs a continual random search in the solution space, generating new solutions during the search and retaining the better ones; it is easy to implement and can obtain a satisfactory result in a short time. A genetic algorithm operates directly on structural objects, without the limitations of differentiation or function continuity; it has inherent implicit parallelism and good global optimization capability; and it adopts a probabilistic optimization method that can automatically acquire and guide the optimized search space and adaptively adjust the search direction without predetermined rules, so genetic algorithms are widely applied in many fields. The M operation habit coefficients, which are randomly drawn from the value range of the operation habit coefficient of the first user, are evaluated with the genetic algorithm to obtain M predicted operation state curves, in one-to-one correspondence with the M operation habit coefficients. The actual operation state curve of the first user is the recorded effect data of the first user's actual operations; the M predicted operation state curves are compared with the actual operation state curve to find the prediction with the greatest similarity, and the operation habit coefficient corresponding to that prediction is the operation habit coefficient of the first user.
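The steps S911-S915 can be sketched as a small evolutionary fit. The curve model, value range, similarity measure and mutation scheme below are all assumptions for illustration, since the patent specifies none of them.

```python
import random

def predicted_curve(coefficient, times):
    # Hypothetical mapping from a habit coefficient to an operation-state
    # curve; the patent does not specify this model.
    return [coefficient * t for t in times]

def similarity(curve_a, curve_b):
    # Negative sum of squared differences: larger means more similar (S915).
    return -sum((a - b) ** 2 for a, b in zip(curve_a, curve_b))

def fit_habit_coefficient(actual_curve, times, m=20, generations=30):
    # S911-S915 sketch: draw M coefficients from the value range, evolve them
    # by keeping and mutating the most similar half, return the best one.
    random.seed(0)
    population = [random.uniform(0.5, 1.5) for _ in range(m)]  # value range
    for _ in range(generations):
        ranked = sorted(population, reverse=True,
                        key=lambda c: similarity(predicted_curve(c, times),
                                                 actual_curve))
        elite = ranked[: m // 2]
        population = elite + [c + random.gauss(0, 0.02) for c in elite]
    return max(population,
               key=lambda c: similarity(predicted_curve(c, times), actual_curve))

times = [1.0, 2.0, 3.0, 4.0]
actual_curve = predicted_curve(1.08, times)  # synthetic "actual" record
best_coefficient = fit_habit_coefficient(actual_curve, times)
```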
Further, as shown in fig. 4, the step S800 further includes:
S810: when the operation node of the first APP is in the predetermined service node set, inputting the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model as input data;
S820: the touch recognition model is obtained through training on multiple sets of training data, each of the multiple sets of training data comprising non-human operation data and identification information marking whether the operation is human;
S830: obtaining output information of the touch recognition model, wherein the output information comprises the first recognition result.
Specifically, a neural network is a computational model formed by a large number of interconnected neurons, whose output depends on the connection pattern of the network; training makes the output information more accurate. When the operation node of the first APP is in the predetermined service node set, the touchesBegan information, touchesMoved information and touchesEnded information are input into the touch recognition model for comprehensive analysis of the touch operation, so that the output information including the first recognition result is obtained. Further, the training process is essentially supervised learning: each set of supervised data includes non-human operation data and identification information marking whether the operation is human. The touch recognition model continually corrects and adjusts itself until its output is consistent with the identification information, at which point supervised learning on that set of data ends and the next set begins. When the output information of the touch recognition model reaches a preset accuracy rate or converges, the supervised learning process ends, achieving the technical effect of improving the intelligence of the data training.
Further, as shown in fig. 5, after the first recognition result is obtained, step S800 includes:
S840: obtaining a predetermined touch radius condition rule;
S850: obtaining a predetermined touch pressing force condition rule;
S860: obtaining a predetermined touch position coordinate condition rule;
S870: judging whether the first touch event simultaneously meets the predetermined touch radius condition rule, the predetermined touch pressing force condition rule and the predetermined touch position coordinate condition rule, and obtaining a first judgment result;
S880: verifying the first recognition result according to the first judgment result.
Specifically, the predetermined touch radius condition rule is: if the operation is performed by a real person, the touch radius and touch radius variance have a precision of 6 digits after the decimal point; if every touch radius and touch radius variance is an integer, i.e. has only zeros after the decimal point, the operation is by a non-real person. The predetermined touch pressing force condition rule judges whether the touch pressing force in each piece of touch information is 0: if it is 0 every time, the operation is by a non-real person, while if the touch pressing force takes a specific value, the operation is by a real person; the touch pressing force is generally 0 in touchesBegan and touchesEnded and non-zero in touchesMoved. The predetermined touch position coordinate condition rule judges whether the coordinate values X and Y are integers each time, i.e. zero after the decimal point: if they are integers every time, the operation is by a non-real person, while if the touch position coordinate precision is 6 digits after the decimal point, the operation is by a real person. Example data to aid understanding of the above condition rules: real-person operation data: majorRadius = 21.152344; majorRadiusTolerance = 5.283203; X = 133.333328, Y = 567.366828; non-real-person operation data: majorRadius = 20.000000; majorRadiusTolerance = 5.000000; X = 68.000000, Y = 401.000000. Whether the first touch event simultaneously meets the three condition rules is judged to obtain the first judgment result, for example that the device is operated by a non-real person, and the first recognition result is verified according to the first judgment result.
The established predetermined touch radius condition rule, predetermined touch pressing force condition rule and predetermined touch position coordinate condition rule can clearly distinguish real-person operation from non-real-person operation, and make it possible to block the operation in time at key business stages.
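Taken together, the three condition rules reduce to decimal-precision and zero-value checks over the touch samples. A minimal sketch in Python (the dictionary field names, the example force value 0.333 and the 1e-9 tolerance are illustrative assumptions, not part of the patent):

```python
def looks_human(samples):
    """Heuristic version of the three condition rules above: synthesized touch
    events tend to carry integer radii, all-zero pressing force and integer
    coordinates, while real touches show ~6 decimal digits of precision."""
    def has_fraction(value):
        # non-zero digits after the decimal point (1e-9 tolerance is assumed)
        return abs(value - round(value)) > 1e-9

    radius_rule = any(has_fraction(s["major_radius"]) or has_fraction(s["radius_tolerance"])
                      for s in samples)
    force_rule = any(s["force"] != 0 for s in samples)  # non-zero somewhere mid-gesture
    coord_rule = any(has_fraction(s["x"]) or has_fraction(s["y"]) for s in samples)
    return radius_rule and force_rule and coord_rule

# the example data from the description (force values are assumed)
human_samples = [{"major_radius": 21.152344, "radius_tolerance": 5.283203,
                  "force": 0.333, "x": 133.333328, "y": 567.366828}]
bot_samples = [{"major_radius": 20.000000, "radius_tolerance": 5.000000,
                "force": 0.0, "x": 68.000000, "y": 401.000000}]
```

On the example data, `looks_human` accepts the real-person samples and rejects the integer-valued non-real-person samples.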
Further, as shown in fig. 6, verifying the first recognition result according to the first judgment result in step S880 includes:
S881: if the first judgment result indicates that the first touch event simultaneously meets the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule, determining that the first touch event is a manual operation event;
S882: judging whether the first recognition result is the manual operation event;
S883: if the first recognition result is not the manual operation event, obtaining a first reminding instruction, where the first reminding instruction is used for reminding that the first recognition result is wrong.
Specifically, if the first touch event simultaneously meets the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule, the first touch event is a manual operation event. The first recognition result is then checked against this: if the first judgment result is inconsistent with the first recognition result, the first recognition result is wrong, the first reminding instruction is generated, and a reminder is issued that the first recognition result is wrong. The touch recognition model is corrected accordingly, so as to improve the accuracy of its output.
Further, as shown in fig. 7, verifying the first recognition result according to the first judgment result in step S880 further includes:
S884: if the first judgment result indicates that the first touch event does not simultaneously meet the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule, determining that the first touch event is a non-human operation event;
S885: judging whether the first recognition result is the non-human operation event;
S886: if the first recognition result is not the non-human operation event, obtaining a second reminding instruction, where the second reminding instruction is used for reminding that the first recognition result is wrong.
Specifically, if the first touch event does not simultaneously meet the predetermined touch radius condition rule, the predetermined touch pressing force condition rule and the predetermined touch position coordinate condition rule, the first touch event is determined to be a non-human operation event, and whether the first recognition result is the non-human operation event is judged. If the first judgment result is inconsistent with the first recognition result, the first recognition result is wrong; a second reminding instruction is obtained, a reminder is issued that the first recognition result is wrong, and the first recognition result is corrected.
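Steps S881 to S886 mirror each other, so both branches of the verification reduce to a single cross-check between the rule-based judgment and the model's output. A hedged sketch (the function name and message strings are illustrative, not part of the patent):

```python
def verify_recognition(rule_says_human, recognition_result):
    """Cross-check the touch recognition model's output (steps S881-S886):
    if the rule-based judgment and the model's recognition result disagree,
    return a reminding message signalling the recognition result is wrong."""
    expected = "manual operation event" if rule_says_human else "non-human operation event"
    if recognition_result != expected:
        return "recognition result is wrong, expected: " + expected
    return None  # judgment and recognition agree, nothing to remind
```

The returned message corresponds to the first or second reminding instruction, depending on which branch detected the mismatch.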
In summary, the method and the system for intelligently identifying the APP manual operation provided by the embodiment of the application have the following technical effects:
1. Due to the adoption of the method, when the first APP starts to run, a first start instruction is obtained; a monitoring function of the first touch screen intelligent device is started according to the first start instruction; touchesBegan information, touchesMoved information and touchesEnded information of a first touch event are obtained based on the first touch screen intelligent device; touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information are obtained by parsing the touchesBegan information, touchesMoved information and touchesEnded information; a predetermined service node set is obtained; a non-human operation data set is obtained; a neural network model is trained according to the non-human operation data set to obtain a touch recognition model; and when the operation node of the first APP falls in the predetermined service node set, the touchesBegan information, touchesMoved information, touchesEnded information, touch radius tolerance information, touch pressing force and touch position coordinate information are input into the touch recognition model to obtain a first recognition result.
2. By establishing the predetermined touch radius condition rule, the predetermined touch pressing force condition rule and the predetermined touch position coordinate condition rule, and by adopting the reminding mechanism, real-person operation and non-real-person operation are intelligently distinguished, the first recognition result is corrected, and the technical effect of blocking the operation in time at key business stages is achieved.
Example two
Based on the same inventive concept as the method for intelligently identifying the APP manual operation in the foregoing embodiment, as shown in fig. 8, an embodiment of the present application provides a system for intelligently identifying the APP manual operation, where the system includes:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain a first start instruction when the first APP starts to operate;
a first execution unit 12, where the first execution unit 12 is configured to start a monitoring function of the first touch screen intelligent device according to the first start instruction;
a second obtaining unit 13, where the second obtaining unit 13 is configured to obtain touchesBegan information, touchesMoved information and touchesEnded information of a first touch event based on the first touch screen smart device;
a second execution unit 14, where the second execution unit 14 is configured to parse the touchesBegan information, touchesMoved information and touchesEnded information to obtain touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information;
A third obtaining unit 15, where the third obtaining unit 15 is configured to obtain a predetermined service node set;
a fourth obtaining unit 16, the fourth obtaining unit 16 being configured to obtain a non-human operation data set;
a fifth obtaining unit 17, where the fifth obtaining unit 17 is configured to train the neural network model according to the set of non-artificial operation data to obtain a touch recognition model;
a sixth obtaining unit 18, where the sixth obtaining unit 18 is configured to input the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model when the operation node of the first APP falls in the predetermined service node set, and obtain a first recognition result.
Further, the system includes:
a seventh obtaining unit, configured to obtain an operation habit coefficient of a first user, where the first user is a user of the touch screen intelligent device;
and the eighth obtaining unit is used for correcting the touch radius information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information according to the operation habit coefficient to obtain first touch radius information, first touch radius tolerance information, first touch pressing force and first touch position coordinate information.
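The correction performed by the eighth obtaining unit can be sketched as follows. The patent does not specify the correction formula, so the multiplicative form, the dictionary keys and the `first_` prefix below are all illustrative assumptions:

```python
def correct_by_habit(features, habit_coefficient):
    """Normalize raw touch features by the user's operation habit coefficient
    to obtain the 'first' (corrected) feature values. The multiplicative
    correction used here is an assumption; the patent leaves the formula open."""
    return {"first_" + name: value * habit_coefficient
            for name, value in features.items()}
```

For example, with a habit coefficient of 1.2 every measured feature is scaled up by 20 percent before being fed to the touch recognition model.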
Further, the system includes:
a ninth obtaining unit, configured to obtain a value threshold of an operation habit coefficient of the first user;
a tenth obtaining unit, configured to randomly obtain M operation habit coefficients from a value threshold of the operation habit coefficients of the first user;
an eleventh obtaining unit, configured to calculate the M operation habit coefficients according to a genetic algorithm, and obtain M predicted operation state curves, where the M predicted operation state curves are in one-to-one correspondence with the M operation habit coefficients;
a twelfth obtaining unit for obtaining an actual operation state curve of the first user;
and a thirteenth obtaining unit, configured to compare the M predicted operation state curves with the actual operation state curves, and obtain an operation habit coefficient of the first user, where a similarity between a predicted operation state curve corresponding to the operation habit coefficient of the first user and the actual operation state curve is the largest.
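The ninth to thirteenth obtaining units describe a search over M candidate habit coefficients, keeping the one whose predicted operation state curve is most similar to the user's actual curve. A sketch under stated assumptions: random sampling stands in for the genetic-algorithm step, the linear curve predictor is hypothetical, and negative squared error stands in for the unspecified similarity measure:

```python
import random

def predict_curve(coefficient, timestamps):
    # Hypothetical predictor: the patent does not specify the mapping from a
    # habit coefficient to an operation state curve.
    return [coefficient * t for t in timestamps]

def similarity(curve_a, curve_b):
    # Negative squared error: larger means more similar.
    return -sum((a - b) ** 2 for a, b in zip(curve_a, curve_b))

def fit_habit_coefficient(actual_curve, timestamps, low, high, m=200, seed=0):
    rng = random.Random(seed)
    candidates = [rng.uniform(low, high) for _ in range(m)]  # M candidate coefficients
    # keep the candidate whose predicted curve best matches the actual one
    return max(candidates,
               key=lambda c: similarity(predict_curve(c, timestamps), actual_curve))
```

With an actual curve generated by a true coefficient of 2.0, the fitted coefficient lands close to 2.0.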
Further, the system includes:
a third execution unit, configured to input the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model as input data when the operation node of the first APP falls in the preset service node set;
a fourteenth obtaining unit, configured to obtain the touch recognition model through training on multiple sets of training data, where each set of training data in the multiple sets includes non-human operation data and identification information marking whether the data corresponds to a human operation;
a fifteenth obtaining unit configured to obtain output information of the touch recognition model, the output information including the first recognition result.
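The training and inference described by the fourteenth and fifteenth obtaining units can be illustrated with a minimal stand-in for the neural network: a single logistic unit trained on labeled touch-feature vectors. Everything below (feature encoding, function names, hyperparameters) is an illustrative assumption, not the patent's model:

```python
import math

def train_touch_model(data, labels, epochs=500, lr=0.1):
    """Train a single logistic unit on labeled feature vectors
    (label 1 = human operation, label 0 = non-human operation)."""
    weights, bias = [0.0] * len(data[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(w * xi for w, xi in zip(weights, x)) + bias
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of "human"
            grad = p - y                    # gradient of the log-loss
            weights = [w - lr * grad * xi for w, xi in zip(weights, x)]
            bias -= lr * grad
    return weights, bias

def recognize_touch(model, features):
    """Return the first recognition result for one touch feature vector."""
    weights, bias = model
    z = sum(w * xi for w, xi in zip(weights, features)) + bias
    return "human operation" if z > 0 else "non-human operation"
```

Here a feature vector might encode, for example, whether the radius, force and coordinates carry fractional precision; after training on one human and one synthetic sample, the unit separates the two.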
Further, the system includes:
a sixteenth obtaining unit for obtaining a predetermined touch radius condition rule;
a seventeenth obtaining unit configured to obtain a predetermined touch pressing force condition rule;
an eighteenth obtaining unit configured to obtain a predetermined touch position coordinate condition rule;
a nineteenth obtaining unit, configured to determine whether the first touch event meets the predetermined touch radius condition rule, the predetermined touch pressing force condition rule, and the predetermined touch position coordinate condition rule at the same time, and obtain a first determination result;
and the fourth execution unit is used for checking the first identification result according to the first judgment result.
Further, the system includes:
the fifth execution unit is used for determining that the first touch event is a manual operation event if the first judgment result is that the first touch event simultaneously accords with the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule;
the first judging unit is used for judging whether the first identification result is the manual operation event or not;
a twentieth obtaining unit, configured to obtain a first reminding instruction if the first recognition result is not the manual operation event, where the first reminding instruction is used for reminding that the first recognition result is wrong.
Further, the system includes:
the sixth execution unit is used for determining that the first touch event is a non-human operation event if the first judgment result shows that the first touch event does not simultaneously accord with the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule;
the second judging unit is used for judging whether the first identification result is the non-human operation event or not;
a twenty-first obtaining unit, configured to obtain a second reminding instruction if the first recognition result is not the non-human operation event, where the second reminding instruction is used for reminding that the first recognition result is wrong.
Exemplary electronic device
An electronic device of an embodiment of the application is described below with reference to fig. 9.
Based on the same inventive concept as the method for intelligently identifying APP manual operation in the foregoing embodiments, the embodiment of the present application further provides a system for intelligently identifying APP manual operation, including: a processor coupled to a memory, the memory being configured to store a program that, when executed by the processor, causes the system to perform the method of any of the first aspects.
The electronic device 300 includes: a processor 302, a communication interface 303 and a memory 301. Optionally, the electronic device 300 may also include a bus architecture 304, through which the communication interface 303, the processor 302 and the memory 301 may be interconnected; the bus architecture 304 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus architecture 304 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 9, but this does not mean that there is only one bus or only one type of bus.
Processor 302 may be a CPU, microprocessor, ASIC, or one or more integrated circuits for controlling the execution of the programs of the present application.
The communication interface 303 uses any transceiver-like means for communicating with other devices or communication networks, such as ethernet, radio access network (radio access network, RAN), wireless local area network (wireless local area networks, WLAN), wired access network, etc.
The memory 301 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor through the bus architecture 304, or may be integrated with the processor.
The memory 301 is used for storing the computer-executable instructions for executing the solutions of the present application, and execution is controlled by the processor 302. The processor 302 is configured to execute the computer-executable instructions stored in the memory 301, so as to implement the method for intelligently identifying APP manual operation provided by the foregoing embodiments of the present application.
Alternatively, the computer-executable instructions in the embodiments of the present application may be referred to as application program codes, which are not particularly limited in the embodiments of the present application.
The embodiment of the application provides a method for intelligently identifying APP manual operation, wherein the method comprises the following steps: when a first APP starts to run, obtaining a first start instruction; starting a monitoring function of the first touch screen intelligent device according to the first start instruction; obtaining touchesBegan information, touchesMoved information and touchesEnded information of a first touch event based on the first touch screen intelligent device; parsing the touchesBegan information, touchesMoved information and touchesEnded information to obtain touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information; obtaining a predetermined service node set; obtaining a non-human operation data set; training a neural network model according to the non-human operation data set to obtain a touch recognition model; and when the operation node of the first APP falls in the predetermined service node set, inputting the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model to obtain a first recognition result.
Those of ordinary skill in the art will appreciate that: the ordinals "first", "second", etc. referred to in the present application are merely for convenience of description, are not intended to limit the scope of the embodiments of the present application, and do not represent a sequence. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one" means one or more; "at least two" means two or more. "At least one", "any one" or similar expressions refer to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may each be single or plural.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
The various illustrative logical blocks and circuits described in connection with the embodiments of the present application may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the general purpose processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software elements may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. In an example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may reside in a terminal. In the alternative, the processor and the storage medium may reside in different components in a terminal. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the scope of the application. Thus, it is intended that the present application cover the modifications and variations of this application provided they come within the scope of the appended claims and their equivalents.

Claims (7)

1. A method for intelligently identifying an APP manual operation, wherein the method is applied to a touch screen intelligent device, the method comprising:
when a first APP starts to run, a first starting instruction is obtained;
starting a monitoring function of the first touch screen intelligent device according to the first starting instruction;
obtaining touchesBegan information, touchesMoved information and touchesEnded information of a first touch event based on the first touch screen intelligent device;
parsing the touchesBegan information, touchesMoved information and touchesEnded information to obtain touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information;
obtaining a preset service node set;
acquiring a non-human operation data set;
training a neural network model according to the non-artificial operation data set to obtain a touch recognition model;
when the operation node of the first APP is in the predetermined service node set, inputting the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model to obtain a first recognition result;
the method further comprises the steps of:
obtaining an operation habit coefficient of a first user, wherein the first user is a user of the touch screen intelligent device;
correcting the touch radius information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information according to the operation habit coefficient to obtain first touch radius information, first touch radius tolerance information, first touch pressing force and first touch position coordinate information;
The obtaining the operation habit coefficient of the first user includes:
obtaining a value threshold of the operation habit coefficient of the first user;
randomly obtaining M operation habit coefficients from a value threshold of the operation habit coefficients of the first user;
calculating the M operation habit coefficients according to a genetic algorithm to obtain M predicted operation state curves, wherein the M predicted operation state curves are in one-to-one correspondence with the M operation habit coefficients;
obtaining an actual operation state curve of the first user;
and comparing the M predicted operation state curves with the actual operation state curve to obtain the operation habit coefficient of the first user, wherein the similarity between the predicted operation state curve corresponding to the operation habit coefficient of the first user and the actual operation state curve is the largest.
2. The method of claim 1, wherein the obtaining a first recognition result comprises:
when the operation node of the first APP is in the predetermined service node set, inputting the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model as input data;
the touch recognition model is obtained through training on multiple sets of training data, and each set of training data in the multiple sets includes non-human operation data and identification information marking whether the data corresponds to a human operation;
and obtaining output information of the touch recognition model, wherein the output information comprises the first recognition result.
3. The method of claim 1, wherein after the obtaining the first recognition result, further comprising:
obtaining a predetermined touch radius condition rule;
acquiring a preset touch pressing force condition rule;
acquiring a predetermined touch position coordinate condition rule;
judging whether the first touch event accords with the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule at the same time, and obtaining a first judging result;
and verifying the first identification result according to the first judgment result.
4. The method of claim 3, wherein the verifying the first recognition result according to the first determination result comprises:
if the first judgment result is that the first touch event accords with the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule at the same time, determining that the first touch event is a manual operation event;
Judging whether the first identification result is the manual operation event or not;
if the first identification result is not the manual operation event, a first reminding instruction is obtained, and the first reminding instruction is used for reminding that the first identification result is wrong.
5. The method of claim 3, wherein the verifying the first recognition result according to the first determination result comprises:
if the first judgment result is that the first touch event does not simultaneously accord with the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule, determining that the first touch event is a non-human operation event;
judging whether the first identification result is the non-human operation event or not;
if the first identification result is not the non-human operation event, a second reminding instruction is obtained, and the second reminding instruction is used for reminding that the first identification result is wrong.
6. A system for intelligently identifying an APP human operation, wherein the system comprises:
the first obtaining unit is used for obtaining a first starting instruction when the first APP starts to operate;
the first execution unit is used for starting the monitoring function of the first touch screen intelligent device according to the first start instruction;
the second obtaining unit is used for obtaining touchesBegan information, touchesMoved information and touchesEnded information of a first touch event based on the first touch screen intelligent device;
the second execution unit is used for parsing the touchesBegan information, touchesMoved information and touchesEnded information to obtain touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information;
a third obtaining unit, configured to obtain a predetermined service node set;
a fourth obtaining unit for obtaining a non-human operation data set;
the fifth obtaining unit is used for training the neural network model according to the non-artificial operation data set to obtain a touch recognition model;
a sixth obtaining unit, configured to input, when an operation node of the first APP falls in the predetermined service node set, the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model, to obtain a first recognition result;
The system further comprises:
a seventh obtaining unit, configured to obtain an operation habit coefficient of a first user, where the first user is a user of the touch screen intelligent device;
an eighth obtaining unit, configured to correct the touch radius information, the touch radius tolerance information, the touch pressing force, and the touch position coordinate information according to the operation habit coefficient, to obtain first touch radius information, first touch radius tolerance information, first touch pressing force, and first touch position coordinate information;
a ninth obtaining unit, configured to obtain a value threshold of an operation habit coefficient of the first user;
a tenth obtaining unit, configured to randomly obtain M operation habit coefficients from a value threshold of the operation habit coefficients of the first user;
an eleventh obtaining unit, configured to calculate the M operation habit coefficients according to a genetic algorithm, to obtain M predicted operation state curves, where the M predicted operation state curves are in one-to-one correspondence with the M operation habit coefficients;
a twelfth obtaining unit configured to obtain an actual operation state curve of the first user;
a thirteenth obtaining unit, configured to compare the M predicted operation state curves with the actual operation state curve, and obtain an operation habit coefficient of the first user, where a similarity between the predicted operation state curve corresponding to the operation habit coefficient of the first user and the actual operation state curve is the largest.
7. A system for intelligently identifying APP manual operations, comprising: a processor coupled to a memory for storing a program that, when executed by the processor, causes the system to perform the method of any one of claims 1-5.
CN202111110957.8A 2021-09-18 2021-09-18 Method and system for intelligently identifying APP manual operation Active CN113900889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111110957.8A CN113900889B (en) 2021-09-18 2021-09-18 Method and system for intelligently identifying APP manual operation

Publications (2)

Publication Number Publication Date
CN113900889A CN113900889A (en) 2022-01-07
CN113900889B true CN113900889B (en) 2023-10-24

Family

ID=79028866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111110957.8A Active CN113900889B (en) 2021-09-18 2021-09-18 Method and system for intelligently identifying APP manual operation

Country Status (1)

Country Link
CN (1) CN113900889B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004295766A (en) * 2003-03-28 2004-10-21 Sony Corp Robot apparatus and user authentication method through robot
WO2016171923A1 (en) * 2015-04-21 2016-10-27 Alibaba Group Holding Limited Method and system for identifying a human or machine
CN106503499A (en) * 2016-09-22 2017-03-15 天津大学 Smart mobile phone touch-screen input recognition method based on machine learning
CN108416198A (en) * 2018-02-06 2018-08-17 平安科技(深圳)有限公司 Man-machine identification model establishes device, method and computer readable storage medium
WO2019001558A1 (en) * 2017-06-29 2019-01-03 苏州锦佰安信息技术有限公司 Human and machine recognition method and device
WO2020037919A1 (en) * 2018-08-22 2020-02-27 平安科技(深圳)有限公司 User behavior recognition method and device employing prediction model
WO2020252932A1 (en) * 2019-06-20 2020-12-24 平安科技(深圳)有限公司 Operation behavior-based human and machine recognition method and apparatus, and computer device
CN113065109A (en) * 2021-04-22 2021-07-02 中国工商银行股份有限公司 Man-machine recognition method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150241984A1 (en) * 2014-02-24 2015-08-27 Yair ITZHAIK Methods and Devices for Natural Human Interfaces and for Man Machine and Machine to Machine Activities
CN106155298B (en) * 2015-04-21 2019-11-08 阿里巴巴集团控股有限公司 The acquisition method and device of man-machine recognition methods and device, behavioural characteristic data
US11620375B2 (en) * 2019-01-22 2023-04-04 International Business Machines Corporation Mobile behaviometrics verification models used in cross devices
US11803273B2 (en) * 2021-04-22 2023-10-31 Pixart Imaging Inc. Touch sensor, touch pad, method for identifying inadvertent touch event and computer device
US20230177724A1 (en) * 2021-12-07 2023-06-08 Adasky, Ltd. Vehicle to infrastructure extrinsic calibration system and method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004295766A (en) * 2003-03-28 2004-10-21 Sony Corp Robot apparatus and user authentication method through robot
WO2016171923A1 (en) * 2015-04-21 2016-10-27 Alibaba Group Holding Limited Method and system for identifying a human or machine
CN106503499A (en) * 2016-09-22 2017-03-15 天津大学 Smart mobile phone touch-screen input recognition method based on machine learning
WO2019001558A1 (en) * 2017-06-29 2019-01-03 苏州锦佰安信息技术有限公司 Human and machine recognition method and device
CN108416198A (en) * 2018-02-06 2018-08-17 平安科技(深圳)有限公司 Man-machine identification model establishes device, method and computer readable storage medium
WO2019153604A1 (en) * 2018-02-06 2019-08-15 平安科技(深圳)有限公司 Device and method for creating human/machine identification model, and computer readable storage medium
WO2020037919A1 (en) * 2018-08-22 2020-02-27 平安科技(深圳)有限公司 User behavior recognition method and device employing prediction model
WO2020252932A1 (en) * 2019-06-20 2020-12-24 平安科技(深圳)有限公司 Operation behavior-based human and machine recognition method and apparatus, and computer device
CN113065109A (en) * 2021-04-22 2021-07-02 中国工商银行股份有限公司 Man-machine recognition method and device

Also Published As

Publication number Publication date
CN113900889A (en) 2022-01-07

Similar Documents

Publication Publication Date Title
CN109472240B (en) Face recognition multi-model adaptive feature fusion enhancement method and device
EP3113114A1 (en) Image processing method and device
CN108460346B (en) Fingerprint identification method and device
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
CN108229588B (en) Machine learning identification method based on deep learning
CN110287775B (en) Palm image clipping method, palm image clipping device, computer equipment and storage medium
CN111401219B (en) Palm key point detection method and device
EP2544147A1 (en) Biological information management device and method
CN110414550B (en) Training method, device and system of face recognition model and computer readable medium
CN113792853B (en) Training method of character generation model, character generation method, device and equipment
CN111414868B (en) Method for determining time sequence action segment, method and device for detecting action
Ruan et al. Dynamic gesture recognition based on improved DTW algorithm
CN111340233B (en) Training method and device of machine learning model, and sample processing method and device
CN112836661A (en) Face recognition method and device, electronic equipment and storage medium
CN110741387A (en) Face recognition method and device, storage medium and electronic equipment
CN114428748B (en) Simulation test method and system for real service scene
CN113420848A (en) Neural network model training method and device and gesture recognition method and device
CN111492407B (en) System and method for map beautification
CN113900889B (en) Method and system for intelligently identifying APP manual operation
CN116168403A (en) Medical data classification model training method, classification method, device and related medium
CN111259806B (en) Face area identification method, device and storage medium
CN110705439B (en) Information processing method, device and equipment
CN112541446A (en) Biological feature library updating method and device and electronic equipment
CN113705366A (en) Personnel management system identity identification method and device and terminal equipment
CN112861689A (en) Searching method and device of coordinate recognition model based on NAS technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 floors 1-3, block a, global creative Plaza, No. 10, Furong street, Chaoyang District, Beijing

Applicant after: Bairong Zhixin (Beijing) Technology Co.,Ltd.

Address before: 100000 floors 1-3, block a, global creative Plaza, No. 10, Furong street, Chaoyang District, Beijing

Applicant before: Bairong Zhixin (Beijing) credit investigation Co.,Ltd.

GR01 Patent grant