CN113900889A - Method and system for intelligently identifying APP manual operation

Method and system for intelligently identifying APP manual operation

Info

Publication number
CN113900889A
CN113900889A (application CN202111110957.8A)
Authority
CN
China
Prior art keywords
touch
information
obtaining
radius
app
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111110957.8A
Other languages
Chinese (zh)
Other versions
CN113900889B (en)
Inventor
杨冠军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bairong Zhixin Beijing Credit Investigation Co Ltd
Original Assignee
Bairong Zhixin Beijing Credit Investigation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bairong Zhixin Beijing Credit Investigation Co Ltd filed Critical Bairong Zhixin Beijing Credit Investigation Co Ltd
Priority to CN202111110957.8A priority Critical patent/CN113900889B/en
Publication of CN113900889A publication Critical patent/CN113900889A/en
Application granted granted Critical
Publication of CN113900889B publication Critical patent/CN113900889B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3041 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is an input/output interface
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3438 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment, monitoring of user actions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0414 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a method and a system for intelligently identifying APP manual operation, where the method includes: acquiring a start instruction and starting a monitoring function; obtaining touchesBegan, touchesMoved and touchesEnded information of a touch event; deriving touch radius information, touch radius tolerance information, touch pressure and other information; obtaining a predetermined set of service nodes; obtaining a non-human operation data set; obtaining a touch recognition model; and inputting the touchesBegan, touchesMoved and touchesEnded information, the touch radius tolerance, the touch pressure and other information into the recognition model to obtain a first recognition result. This solves the technical problems in the prior art that identifying whether an operator is a real person degrades the user experience, that the accuracy and timeliness of recognizing and blocking non-human operation at key nodes are low, and that the level of intelligence needs improvement.

Description

Method and system for intelligently identifying APP manual operation
Technical Field
The invention relates to the field of intelligent identification and detection, in particular to a method and a system for intelligently identifying APP manual operation.
Background
With the development of the mobile internet, more and more business processes can be completed directly in a mobile APP by its operator, and identifying the user's identity has always been an important part of business risk control. Some fraudsters use group-control systems to drive batches of devices and then operate the APP with automated scripts. Traditional risk-control means generally rely on SMS verification codes, image-recognition captchas, slider captchas and character-click captchas, all of which can be circumvented: the SMS verification-code market has corresponding code-receiving and captcha-solving platforms that evade this means of identification, and with the rapid development of artificial intelligence in recent years simple captchas are easily cracked outright, while complex captchas are hard to solve and harm the user experience.
However, in implementing the technical solution of the invention in the embodiments of the present application, the inventors found that the above technology has at least the following technical problems:
the technical problems that user experience is influenced when an APP operator is identified to be operated by a real person or not, accuracy and timeliness of key node identification and non-real person operation blocking are not high, and intelligence level is to be improved exist in the prior art.
Disclosure of Invention
The embodiment of the application provides a method and a system for intelligently identifying APP manual operation, solving the technical problems in the prior art that identifying whether an APP operator is a real person degrades the user experience, that the accuracy and timeliness of recognizing and blocking non-human operation at key nodes are low, and that the level of intelligence needs improvement. By analyzing the touch information a user produces while operating the APP, the system intelligently recognizes whether the user is a real person, with the user perceiving nothing throughout the process, thereby improving the accuracy and timeliness of recognizing, judging and blocking non-human operation while enhancing the user experience.
In view of the foregoing problems, the embodiments of the present application provide a method and a system for intelligently identifying APP manual operation.
In a first aspect, an embodiment of the present application provides a method for intelligently identifying APP manual operation, where the method includes: when a first APP starts to run, obtaining a first start instruction; starting a monitoring function of a first touch screen smart device according to the first start instruction; obtaining touchesBegan information, touchesMoved information and touchesEnded information of a first touch event from the first touch screen smart device; deriving touch radius information, touch radius tolerance information, touch pressure and touch position coordinate information from the touchesBegan, touchesMoved and touchesEnded information; obtaining a predetermined set of service nodes; obtaining a non-human operation data set; training a neural network model on the non-human operation data set to obtain a touch recognition model; and when the running node of the first APP falls within the predetermined set of service nodes, inputting the touchesBegan, touchesMoved and touchesEnded information into the touch recognition model to obtain a first recognition result.
In another aspect, an embodiment of the present application provides a system for intelligently identifying APP manual operation, where the system includes: a first obtaining unit, configured to obtain a first start instruction when a first APP starts to run; a first execution unit, configured to start a monitoring function of a first touch screen smart device according to the first start instruction; a second obtaining unit, configured to obtain touchesBegan information, touchesMoved information and touchesEnded information of a first touch event from the first touch screen smart device; a second execution unit, configured to derive touch radius information, touch radius tolerance information, touch pressure and touch position coordinate information from the touchesBegan, touchesMoved and touchesEnded information; a third obtaining unit, configured to obtain a predetermined set of service nodes; a fourth obtaining unit, configured to obtain a non-human operation data set; a fifth obtaining unit, configured to train a neural network model on the non-human operation data set to obtain a touch recognition model; and a sixth obtaining unit, configured to, when the running node of the first APP falls within the predetermined set of service nodes, input the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius information, the touch radius tolerance information, the touch pressure and the touch position coordinate information into the touch recognition model to obtain a first recognition result.
In a third aspect, an embodiment of the present application provides a system for intelligently identifying APP manual operations, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to any one of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
Because the technical solution obtains a first start instruction when the first APP starts to run; starts the monitoring function of the first touch screen smart device accordingly; obtains touchesBegan, touchesMoved and touchesEnded information of a first touch event; derives touch radius, touch radius tolerance, touch pressure and touch position coordinate information from that data; obtains a predetermined set of service nodes and a non-human operation data set; trains a neural network model on the non-human operation data set to obtain a touch recognition model; and, when the running node of the first APP falls within the predetermined set of service nodes, inputs all of the above touch information into the touch recognition model to obtain a first recognition result, the embodiment of the application analyzes the touch information a user produces while operating the APP and intelligently recognizes whether the user is a real person, with the user perceiving nothing throughout, achieving the technical effect of improving the accuracy and timeliness of recognizing, judging and blocking non-human operation while enhancing the user experience.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that it can be implemented according to the description, and to make the above and other objects, features and advantages of the present application more readily understandable, the detailed description of the present application follows.
Drawings
Fig. 1 is a schematic flowchart of a method for intelligently identifying APP manual operation according to an embodiment of the present application;
fig. 2 is a schematic flowchart of correcting information such as the touch radius and touch radius tolerance in a method for intelligently identifying APP manual operation according to an embodiment of the present application;
fig. 3 is a schematic flowchart of obtaining a user's operation habit coefficient in a method for intelligently identifying APP manual operation according to an embodiment of the present application;
fig. 4 is a schematic flowchart of obtaining a recognition result in a method for intelligently identifying APP manual operation according to an embodiment of the present application;
fig. 5 is a schematic flowchart of checking a recognition result in a method for intelligently identifying APP manual operation according to an embodiment of the present application;
fig. 6 is a schematic flowchart of obtaining a first reminder instruction in a method for intelligently identifying APP manual operation according to an embodiment of the present application;
fig. 7 is a schematic flowchart of obtaining a second reminder instruction in a method for intelligently identifying APP manual operation according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a system for intelligently identifying APP manual operation according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a first execution unit 12, a second obtaining unit 13, a second execution unit 14, a third obtaining unit 15, a fourth obtaining unit 16, a fifth obtaining unit 17, a sixth obtaining unit 18, an electronic device 300, a memory 301, a processor 302, a communication interface 303, and a bus architecture 304.
Detailed Description
The embodiment of the application provides a method and a system for intelligently identifying APP manual operation, solving the technical problems in the prior art that identifying whether an APP operator is a real person degrades the user experience, that the accuracy and timeliness of recognizing and blocking non-human operation at key nodes are low, and that the level of intelligence needs improvement. By analyzing the touch information a user produces while operating the APP, the system intelligently recognizes whether the user is a real person, with the user perceiving nothing throughout the process, thereby improving the accuracy and timeliness of recognizing, judging and blocking non-human operation while enhancing the user experience.
Summary of the application
With the development of the mobile internet, more and more business processes can be completed directly in a mobile APP by its operator, and identifying the user's identity has always been an important part of business risk control. Some fraudsters use group-control systems to drive batches of devices and then operate the APP with automated scripts. Traditional risk-control means generally rely on SMS verification codes, image-recognition captchas, slider captchas and character-click captchas, all of which can be circumvented: the SMS verification-code market has corresponding code-receiving and captcha-solving platforms that evade this means of identification, and with the rapid development of artificial intelligence in recent years simple captchas are easily cracked outright, while complex captchas are hard to solve and harm the user experience. In the prior art, identifying whether an APP operator is a real person degrades the user experience, the accuracy and timeliness of recognizing and blocking non-human operation at key nodes are low, and the level of intelligence needs improvement.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a method for intelligently identifying APP manual operation, wherein the method comprises the following steps: when a first APP starts to run, obtaining a first starting instruction; according to the first starting instruction, starting a monitoring function of the first touch screen intelligent device; obtaining touch Began information, touch moved information and touch Ended information of a first touch event based on the first touch screen intelligent device; analyzing and obtaining touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information according to the touchBegan information, touchMoved information and touchEnded information; obtaining a predetermined service node set; obtaining a non-human operation data set; training a neural network model according to the non-artificial operation data set to obtain a touch recognition model; when the operation node of the first APP is located in the preset service node set, inputting the touchbesgan information, touchmoved information and touchEnded information into the touch recognition model to obtain a first recognition result.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides a method for intelligently identifying APP manual operation, where the method is applied to a touch screen smart device, and the method includes:
S100: when a first APP starts to run, obtaining a first start instruction;
S200: starting a monitoring function of the first touch screen smart device according to the first start instruction;
S300: obtaining touchesBegan information, touchesMoved information and touchesEnded information of a first touch event from the first touch screen smart device;
specifically, the first APP is any application program in any touch screen smart device, and the touch screen smart device may be, but is not limited to: terminal equipment with a touch screen, such as a smart phone and a tablet computer. After the user opens the first APP, touch information of a first touch event of a click operation performed on the first APP by the user is captured, and the method of touchbands, touchmoved and touchEnded can be used for controlling finger touch of the APP user. The touch information is acquired through a touch monitoring method provided by the system, the touch monitoring method cannot be sensed by a user, and a single touch event, namely the touchs beta information of the first touch event, touchmoved information is acquired for zero times or more, and touchend information is acquired for one time. The touchmoved information is obtained zero times or more because a part of operations do not need to be moved by a user and only need to be clicked, and a part of operations need to be pressed for a long time and moved for multiple times. And all touch operations need to touch the screen and leave the screen, and touch Began information and touch Ended information are acquired once respectively. By starting the monitoring function of the first touch screen intelligent device, the touch information of the user is obtained under the condition that the user does not sense, accurate data information can be provided for subsequent touch operation judgment, and the comprehensive experience of the user is not influenced.
S400: deriving touch radius information, touch radius tolerance information, touch pressure and touch position coordinate information from the touchesBegan, touchesMoved and touchesEnded information;
S500: obtaining a predetermined set of service nodes;
S600: obtaining a non-human operation data set;
specifically, according to the touchbesgan information, touchhersaved information and touchesEnded information, a touch radius (majordradius), touch radius tolerance (majordradius tolerance) information, touch pressure degree (force) information and touch position coordinate (X, Y) information in the first touch event touch information are analyzed, and the information is stored for later use. Wherein the touch radius tolerance information is used to describe a variance of the touch radius information. The touch radius, the touch radius tolerance and the touch pressing force degree are used for further refining the touch screen operation and are also used as a basis for subsequently judging whether the user is operated by a real person. Further, the preset service node set is obtained, for example, registration, login, ordering, payment and the like, the preset service node set covers the key service nodes of the first APP, all non-artificial touch information of the touch screen intelligent device including touch radius, touch radius variance, touch pressing strength and touch position coordinates are obtained, the non-artificial operation data set is collected after sorting and analysis, and a foundation is laid for subsequently judging whether the user operates for a real person.
S700: training a neural network model on the non-human operation data set to obtain a touch recognition model;
S800: when the running node of the first APP falls within the predetermined set of service nodes, inputting the touchesBegan, touchesMoved and touchesEnded information into the touch recognition model to obtain a first recognition result.
Specifically, the non-human operation data set is fed into a neural network model for training. A neural network is a computational model formed by a large number of interconnected neurons; the output of the network follows from the logic of its connection pattern, and training makes the output more accurate. Feeding the non-human operation data set into the neural network model for a comprehensive analysis of the operation data yields the touch recognition model. When the first APP reaches an operation such as registration, login, ordering or payment, the touchesBegan, touchesMoved and touchesEnded information, together with the touch radius information, touch radius tolerance information, touch pressure and touch position coordinate information, is input into the touch recognition model for a comprehensive analysis of the touch operation, producing the first recognition result, which is used to judge and identify the first touch event of the first APP. Training the model makes the output first recognition result more accurate and reliable.
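The patent does not disclose the network architecture, so the sketch below shows only the gating logic around a trained model: recognition runs when the current node is one of the key service nodes. The TouchRecognitionModel protocol and the node names are assumptions, and TouchSample is the hypothetical type from the earlier sketch.

```swift
// Hypothetical interface to the trained touch recognition model.
protocol TouchRecognitionModel {
    /// Returns true when the event's samples look like a real-person operation.
    func predict(_ samples: [TouchSample]) -> Bool
}

// Assumed key business nodes; the patent names registration, login,
// ordering and payment as examples.
let predeterminedServiceNodes: Set<String> = ["register", "login", "order", "pay"]

/// Returns the first recognition result, or nil when the current node is not
/// a key node and no check is required.
func recognize(node: String,
               samples: [TouchSample],
               model: TouchRecognitionModel) -> Bool? {
    guard predeterminedServiceNodes.contains(node) else { return nil }
    return model.predict(samples)
}
```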
Further, as shown in fig. 2, the method of the embodiment of the present application further includes:
S910: obtaining an operation habit coefficient of a first user, where the first user is a user of the touch screen smart device;
S920: correcting the touch radius information, touch radius tolerance information, touch pressure and touch position coordinate information according to the operation habit coefficient to obtain first touch radius information, first touch radius tolerance information, first touch pressure and first touch position coordinate information.
Specifically, owing to differences in personal habits, each person operates in a unique way; for example, right-handed and left-handed users operate differently. Therefore, the operation habit coefficient of the device's user, i.e. the operation habit coefficient of the first user, is obtained. The collected touch radius information, touch radius tolerance information, touch pressure and touch position coordinate information are corrected according to the first user's habits to obtain the first touch radius information, first touch radius tolerance information, first touch pressure and first touch position coordinate information. This yields more detailed and accurate touch information and improves the accuracy of the intelligent recognition.
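The patent does not fix the correction formula, so the following sketch simply applies the habit coefficient as a multiplicative factor to the collected features; the formula itself is an assumption made for illustration.

```swift
// A minimal sketch of step S920 under the assumption that the operation habit
// coefficient k scales the raw measurements; the real correction may differ.
func corrected(_ s: TouchSample, habitCoefficient k: CGFloat) -> TouchSample {
    TouchSample(radius: s.radius * k,            // first touch radius
                radiusTolerance: s.radiusTolerance * k,
                force: s.force * k,              // first touch pressure
                x: s.x,                          // coordinates kept as-is here,
                y: s.y)                          // though they may also be corrected
}
```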
Further, as shown in fig. 3, step S910, obtaining the operation habit coefficient of the first user, further includes:
S911: obtaining the value range of the first user's operation habit coefficient;
S912: randomly drawing M operation habit coefficients from the value range of the first user's operation habit coefficient;
S913: computing the M operation habit coefficients with a genetic algorithm to obtain M predicted operating-state curves, where the M predicted operating-state curves correspond one-to-one to the M operation habit coefficients;
S914: obtaining the actual operating-state curve of the first user;
S915: comparing the M predicted operating-state curves with the actual operating-state curve to obtain the operation habit coefficient of the first user, where the predicted operating-state curve corresponding to the first user's operation habit coefficient has the greatest similarity to the actual operating-state curve.
Specifically, the essence of a genetic algorithm is to search a solution space randomly and continuously, generating new solutions during the search and retaining the better ones; it is easy to implement and can produce a satisfactory result in a short time. A genetic algorithm operates directly on structured objects, has no requirements of differentiability or function continuity, possesses inherent implicit parallelism and good global optimization capability, and, using a probabilistic optimization method, can automatically acquire and guide the optimized search space and adaptively adjust the search direction without predetermined rules; it is therefore widely applied in many fields. The M operation habit coefficients, drawn randomly from the value range of the first user's operation habit coefficient, are computed with a genetic algorithm to obtain M predicted operating-state curves in one-to-one correspondence with the M coefficients. The actual operating-state curve of the first user is the recorded effect data of the first user's actual operation. Comparing the M predicted operating-state curves with the actual operating-state curve yields the prediction with the greatest similarity, and the operation habit coefficient corresponding to that prediction is the first user's operation habit coefficient.
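The sketch below captures only the selection logic of steps S911 to S915: draw M candidate coefficients from the allowed range, derive a predicted curve for each, and keep the candidate whose curve best matches the recorded one. The predictCurve and similarity closures are stand-ins for the genetic-algorithm evaluation and the curve-comparison metric, neither of which the patent specifies.

```swift
// A minimal sketch of S911-S915. predictCurve abstracts the genetic-algorithm
// evaluation of a coefficient; similarity abstracts the comparison between
// two operating-state curves (higher means more similar).
func bestHabitCoefficient(range: ClosedRange<Double>,
                          m: Int,
                          actualCurve: [Double],
                          predictCurve: (Double) -> [Double],
                          similarity: ([Double], [Double]) -> Double) -> Double {
    precondition(m > 0, "at least one candidate coefficient is required")
    let candidates = (0..<m).map { _ in Double.random(in: range) }   // S912
    // S915: keep the coefficient whose predicted curve is most similar.
    return candidates.max { a, b in
        similarity(predictCurve(a), actualCurve) < similarity(predictCurve(b), actualCurve)
    }!
}
```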
Further, as shown in fig. 4, step S800, obtaining the first recognition result, further includes:
S810: when the running node of the first APP falls within the predetermined set of service nodes, inputting the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius information, the touch radius tolerance information, the touch pressure and the touch position coordinate information into the touch recognition model as input data;
S820: where the touch recognition model is trained with multiple groups of training data, each group of training data including non-human operation data and identification information marking whether the data is a human operation;
S830: obtaining output information of the touch recognition model, where the output information includes the first recognition result.
Specifically, a neural network is a computational model formed by a large number of interconnected neurons, whose output follows from the logic of its connection pattern and becomes more accurate through training. When the running node of the first APP falls within the predetermined set of service nodes, the touchesBegan, touchesMoved and touchesEnded information, the touch radius information, the touch radius tolerance information, the touch pressure and the touch position coordinate information are input into the touch recognition model for a comprehensive analysis of the touch operation, yielding output information that includes the first recognition result. Further, the training process is essentially supervised learning: each group of supervised data includes non-human operation data and identification information marking whether it is a human operation, and the touch recognition model continuously corrects and adjusts itself until its output matches the identification information, at which point supervised learning on that group ends and the next group begins. When the output of the touch recognition model reaches a preset accuracy or converges, the supervised-learning process ends, achieving the technical effect of a more intelligent data-training procedure.
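As a sketch of this supervised loop (not the patent's actual training procedure), the code below iterates over labelled groups until the model's outputs reach a preset accuracy. The TrainableTouchModel protocol and its update method are hypothetical placeholders for whatever learning framework is actually used, and TouchSample is the hypothetical type from the earlier sketch.

```swift
// Hypothetical labelled training group: operation data plus its mark.
struct LabelledGroup {
    let samples: [TouchSample]
    let isHuman: Bool            // identification information (the label)
}

// Hypothetical trainable model interface.
protocol TrainableTouchModel {
    mutating func update(with group: LabelledGroup)   // one self-correction step
    func predict(_ samples: [TouchSample]) -> Bool
}

/// Trains until the preset accuracy is reached (the convergence condition
/// described above). A real implementation would also cap the iterations.
func train<M: TrainableTouchModel>(model: inout M,
                                   data: [LabelledGroup],
                                   targetAccuracy: Double) {
    var accuracy = 0.0
    while accuracy < targetAccuracy {
        for group in data {
            model.update(with: group)
        }
        let correct = data.filter { model.predict($0.samples) == $0.isHuman }
        accuracy = Double(correct.count) / Double(data.count)
    }
}
```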
Further, as shown in fig. 5, after step S800 obtains the first recognition result, the method includes:
S840: obtaining a predetermined touch radius condition rule;
S850: obtaining a predetermined touch pressure condition rule;
S860: obtaining a predetermined touch position coordinate condition rule;
S870: judging whether the first touch event simultaneously satisfies the predetermined touch radius condition rule, the predetermined touch pressure condition rule and the predetermined touch position coordinate condition rule, to obtain a first judgment result;
S880: checking the first recognition result against the first judgment result.
Specifically, the predetermined touch radius condition rule is: if the user is a real person, the touch radius and touch radius variance are precise to six decimal places; if both are integers every time, i.e. zero after the decimal point, the user is not a real person. The predetermined touch pressure condition rule is to judge whether the pressure in each piece of touch information is 0; if it is always 0, the user is not a real person, while a real person's touch pressure has concrete values; typically the force is 0 in touchesBegan and touchesEnded and non-zero in touchesMoved. The predetermined touch position coordinate condition rule is to judge whether the X and Y coordinates are integers every time, i.e. zero after the decimal point; if they are, the user is not a real person, whereas coordinates precise to six decimal places indicate a real person. For ease of understanding, the condition rules are illustrated with the following data. Real-person operation data: majorRadius = 21.152344; majorRadiusTolerance = 5.283203; X = 133.333328, Y = 567.366828. Non-human operation data: majorRadius = 20.000000; majorRadiusTolerance = 5.000000; X = 68.000000, Y = 401.000000. Whether the first touch event satisfies all three condition rules at once yields the first judgment result, e.g. that the device is operated by a non-real person, and the first recognition result is checked against this first judgment result. The established predetermined touch radius, touch pressure and touch position coordinate condition rules clearly distinguish real-person operation from non-real-person operation and enable timely blocking of operations at key business stages.
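To make the three rules concrete, the sketch below encodes them as the whole-number heuristics illustrated by the sample data above; reducing the decimal-precision test to a check for a fractional part is an interpretation rather than the patent's exact wording, and the non-zero force value in the first example is assumed.

```swift
// True when a value has digits after the decimal point (e.g. 21.152344),
// false for whole numbers such as 20.000000.
func hasFractionalPart(_ v: Double) -> Bool {
    v.truncatingRemainder(dividingBy: 1) != 0
}

// A minimal sketch of the three predetermined condition rules.
func satisfiesConditionRules(radius: Double, radiusTolerance: Double,
                             movedForce: Double, x: Double, y: Double) -> Bool {
    let radiusRule = hasFractionalPart(radius) && hasFractionalPart(radiusTolerance)
    let pressureRule = movedForce != 0           // force sampled during touchesMoved
    let coordinateRule = hasFractionalPart(x) && hasFractionalPart(y)
    return radiusRule && pressureRule && coordinateRule
}

// With the sample data quoted above (0.5 is an assumed non-zero force):
// satisfiesConditionRules(radius: 21.152344, radiusTolerance: 5.283203,
//                         movedForce: 0.5, x: 133.333328, y: 567.366828) // true  -> human
// satisfiesConditionRules(radius: 20.000000, radiusTolerance: 5.000000,
//                         movedForce: 0, x: 68.000000, y: 401.000000)    // false -> scripted
```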
Further, as shown in fig. 6, step S880, checking the first recognition result against the first judgment result, includes:
S881: if the first judgment result indicates that the first touch event simultaneously satisfies the predetermined touch radius condition rule, the predetermined touch pressure condition rule and the predetermined touch position coordinate condition rule, determining that the first touch event is a human operation event;
S882: judging whether the first recognition result is the human operation event;
S883: if the first recognition result is not the human operation event, obtaining a first reminder instruction, where the first reminder instruction is used to indicate that the first recognition result is wrong.
Specifically, if the first touch event simultaneously satisfies the predetermined touch radius condition rule, the predetermined touch pressure condition rule and the predetermined touch position coordinate condition rule, the first touch event is a human operation event. The first recognition result is then checked: if the first judgment result is inconsistent with the first recognition result, the first recognition result is wrong, and the first reminder instruction is generated to indicate the error. The touch recognition model is thereby corrected, improving the accuracy of its output.
Further, as shown in fig. 7, step S880, checking the first recognition result against the first judgment result, further includes:
S884: if the first judgment result indicates that the first touch event does not simultaneously satisfy the predetermined touch radius condition rule, the predetermined touch pressure condition rule and the predetermined touch position coordinate condition rule, determining that the first touch event is a non-human operation event;
S885: judging whether the first recognition result is the non-human operation event;
S886: if the first recognition result is not the non-human operation event, obtaining a second reminder instruction, where the second reminder instruction is used to indicate that the first recognition result is wrong.
Specifically, if the first touch event does not simultaneously satisfy the predetermined touch radius condition rule, the predetermined touch pressure condition rule and the predetermined touch position coordinate condition rule, the first touch event is determined to be a non-human operation event, and whether the first recognition result is the non-human operation event is judged. If the first judgment result is inconsistent with the first recognition result, the first recognition result is wrong; the second reminder instruction is obtained to indicate the error, and the first recognition result is corrected, so that this correction mechanism makes the touch recognition scheme's judgments more accurate.
To sum up, the method and the system for intelligently identifying APP manual operation provided by the embodiments of the application have the following technical effects:
1. Because the technical solution obtains a first start instruction when the first APP starts to run; starts the monitoring function of the first touch screen smart device accordingly; obtains touchesBegan, touchesMoved and touchesEnded information of a first touch event; derives touch radius, touch radius tolerance, touch pressure and touch position coordinate information from that data; obtains a predetermined set of service nodes and a non-human operation data set; trains a neural network model on the non-human operation data set to obtain a touch recognition model; and, when the running node of the first APP falls within the predetermined set of service nodes, inputs all of the above touch information into the touch recognition model to obtain a first recognition result, the embodiment of the application analyzes the touch information a user produces while operating the APP and intelligently recognizes whether the user is a real person, with the user perceiving nothing throughout, achieving the technical effect of improving the accuracy and timeliness of recognizing, judging and blocking non-human operation while enhancing the user experience.
2. Because the predetermined touch radius condition rule, the predetermined touch pressure condition rule and the predetermined touch position coordinate condition rule are established and a reminder mechanism is adopted, real-person and non-real-person operation are judged intelligently and the first recognition result is corrected, achieving the technical effect of timely blocking operations at key business stages.
Example two
Based on the same inventive concept as the method for intelligently identifying APP manual operation in the foregoing embodiment, as shown in fig. 8, an embodiment of the present application provides a system for intelligently identifying APP manual operation, where the system includes:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain a first start instruction when a first APP starts to run;
a first execution unit 12, where the first execution unit 12 is configured to start a monitoring function of a first touch screen smart device according to the first start instruction;
a second obtaining unit 13, where the second obtaining unit 13 is configured to obtain touchesBegan information, touchesMoved information and touchesEnded information of a first touch event from the first touch screen smart device;
a second execution unit 14, where the second execution unit 14 is configured to derive touch radius information, touch radius tolerance information, touch pressure and touch position coordinate information from the touchesBegan, touchesMoved and touchesEnded information;
a third obtaining unit 15, where the third obtaining unit 15 is configured to obtain a predetermined set of service nodes;
a fourth obtaining unit 16, where the fourth obtaining unit 16 is configured to obtain a non-human operation data set;
a fifth obtaining unit 17, where the fifth obtaining unit 17 is configured to train a neural network model on the non-human operation data set to obtain a touch recognition model;
a sixth obtaining unit 18, where the sixth obtaining unit 18 is configured to, when the running node of the first APP falls within the predetermined set of service nodes, input the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius information, the touch radius tolerance information, the touch pressure and the touch position coordinate information into the touch recognition model to obtain a first recognition result.
Further, the system comprises:
a seventh obtaining unit, configured to obtain an operation habit coefficient of a first user, where the first user is a user of the touch screen smart device;
an eighth obtaining unit, configured to correct the touch radius information, touch radius tolerance information, touch pressure and touch position coordinate information according to the operation habit coefficient to obtain first touch radius information, first touch radius tolerance information, first touch pressure and first touch position coordinate information.
Further, the system comprises:
a ninth obtaining unit, configured to obtain the value range of the first user's operation habit coefficient;
a tenth obtaining unit, configured to randomly draw M operation habit coefficients from the value range of the first user's operation habit coefficient;
an eleventh obtaining unit, configured to compute the M operation habit coefficients with a genetic algorithm to obtain M predicted operating-state curves, where the M predicted operating-state curves correspond one-to-one to the M operation habit coefficients;
a twelfth obtaining unit, configured to obtain the actual operating-state curve of the first user;
a thirteenth obtaining unit, configured to compare the M predicted operating-state curves with the actual operating-state curve to obtain the operation habit coefficient of the first user, where the predicted operating-state curve corresponding to the first user's operation habit coefficient has the greatest similarity to the actual operating-state curve.
Further, the system comprises:
a third execution unit, configured to, when the running node of the first APP falls within the predetermined set of service nodes, input the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius information, the touch radius tolerance information, the touch pressure and the touch position coordinate information into the touch recognition model as input data;
a fourteenth obtaining unit, configured to obtain the touch recognition model by training with multiple groups of training data, each group of training data including non-human operation data and identification information marking whether the data is a human operation;
a fifteenth obtaining unit, configured to obtain output information of the touch recognition model, the output information including the first recognition result.
Further, the system comprises:
a sixteenth obtaining unit, configured to obtain a predetermined touch radius condition rule;
a seventeenth obtaining unit, configured to obtain a predetermined touch pressure condition rule;
an eighteenth obtaining unit, configured to obtain a predetermined touch position coordinate condition rule;
a nineteenth obtaining unit, configured to judge whether the first touch event simultaneously satisfies the predetermined touch radius condition rule, the predetermined touch pressure condition rule and the predetermined touch position coordinate condition rule, to obtain a first judgment result;
a fourth execution unit, configured to check the first recognition result against the first judgment result.
Further, the system comprises:
a fifth execution unit, configured to determine that the first touch event is a human operation event if the first judgment result indicates that the first touch event simultaneously satisfies the predetermined touch radius condition rule, the predetermined touch pressure condition rule and the predetermined touch position coordinate condition rule;
a first judging unit, configured to judge whether the first recognition result is the human operation event;
a twentieth obtaining unit, configured to obtain a first reminder instruction if the first recognition result is not the human operation event, where the first reminder instruction is used to indicate that the first recognition result is wrong.
Further, the system comprises:
a sixth execution unit, configured to determine that the first touch event is a non-human operation event if the first judgment result indicates that the first touch event does not simultaneously satisfy the predetermined touch radius condition rule, the predetermined touch pressure condition rule and the predetermined touch position coordinate condition rule;
a second judging unit, configured to judge whether the first recognition result is the non-human operation event;
a twenty-first obtaining unit, configured to obtain a second reminder instruction if the first recognition result is not the non-human operation event, where the second reminder instruction is used to indicate that the first recognition result is wrong.
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to fig. 9.
Based on the same inventive concept as the method for intelligently identifying APP manual operation in the foregoing embodiment, an embodiment of the present application further provides a system for intelligently identifying APP manual operation, including: a processor coupled to a memory, the memory storing a program that, when executed by the processor, causes the system to perform the method of any one of the first aspect.
The electronic device 300 includes: a processor 302, a communication interface 303 and a memory 301. Optionally, the electronic device 300 may also include a bus architecture 304, through which the communication interface 303, the processor 302 and the memory 301 may be connected to each other. The bus architecture 304 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 9, but this does not mean there is only one bus or one type of bus.
Processor 302 may be a CPU, microprocessor, ASIC, or one or more integrated circuits for controlling the execution of programs in accordance with the teachings of the present application.
The communication interface 303 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), a wired access network, and the like.
The memory 301 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor through the bus architecture 304, or it may be integral to the processor.
The memory 301 is used for storing computer-executable instructions for executing the solution of the present application, and execution is controlled by the processor 302. The processor 302 is configured to execute the computer-executable instructions stored in the memory 301, thereby implementing the method for intelligently identifying APP manual operation provided by the above embodiments of the present application.
Optionally, the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
The embodiment of the application provides a method for intelligently identifying APP manual operation, where the method includes: when a first APP starts to run, obtaining a first start instruction; starting a monitoring function of a first touch screen smart device according to the first start instruction; obtaining touchesBegan information, touchesMoved information and touchesEnded information of a first touch event from the first touch screen smart device; deriving touch radius information, touch radius tolerance information, touch pressure and touch position coordinate information from the touchesBegan, touchesMoved and touchesEnded information; obtaining a predetermined set of service nodes; obtaining a non-human operation data set; training a neural network model on the non-human operation data set to obtain a touch recognition model; and when the running node of the first APP falls within the predetermined set of service nodes, inputting the touchesBegan, touchesMoved and touchesEnded information into the touch recognition model to obtain a first recognition result.
Those of ordinary skill in the art will understand that the ordinals "first", "second", etc. mentioned in this application are only used for convenience of description and do not limit the scope of the embodiments of this application, nor indicate an order of precedence. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean that A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it. "At least one" means one or more; "at least two" means two or more. "At least one", "any one" or similar expressions refer to any combination of the items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b and c may itself be single or multiple.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, from one website, computer, server or data center to another via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., solid state disk (SSD)), or the like.
The various illustrative logical units and circuits described in this application may be implemented or performed with a general-purpose processor, a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium; alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be disposed in a terminal; alternatively, the processor and the storage medium may reside in different components within the terminal. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations.

Claims (9)

1. A method for intelligently identifying APP manual operation, applied to a touch screen smart device, the method comprising:
when a first APP starts to run, obtaining a first starting instruction;
according to the first starting instruction, starting a monitoring function of the first touch screen intelligent device;
obtaining touch Began information, touch moved information and touch Ended information of a first touch event based on the first touch screen intelligent device;
analyzing and obtaining touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information according to the touchBegan information, touchMoved information and touchEnded information;
obtaining a predetermined service node set;
obtaining a non-manual operation data set;
training a neural network model with the non-manual operation data set to obtain a touch recognition model;
when the running node of the first APP is located in the predetermined service node set, inputting the touchesBegan information, touchesMoved information and touchesEnded information into the touch recognition model to obtain a first recognition result.
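To make the monitoring and analysis steps of claim 1 concrete, the following is a minimal sketch assuming an iOS implementation, where touchesBegan/touchesMoved/touchesEnded are the UIKit touch callbacks and UITouch exposes the touch radius (majorRadius), radius tolerance (majorRadiusTolerance), pressing force (force) and position coordinates; the TouchSample and MonitoredView names are illustrative, not from the patent.

```swift
import UIKit

// One recorded observation of a touch; fields mirror the quantities
// analyzed in claim 1.
struct TouchSample {
    let phase: UITouch.Phase     // began / moved / ended
    let radius: CGFloat          // touch radius information
    let radiusTolerance: CGFloat // touch radius tolerance information
    let force: CGFloat           // touch pressing force
    let location: CGPoint        // touch position coordinate information
    let timestamp: TimeInterval
}

// A view whose touch callbacks act as the "monitoring function" of claim 1.
final class MonitoredView: UIView {
    private(set) var samples: [TouchSample] = []

    private func record(_ touches: Set<UITouch>) {
        for touch in touches {
            samples.append(TouchSample(phase: touch.phase,
                                       radius: touch.majorRadius,
                                       radiusTolerance: touch.majorRadiusTolerance,
                                       force: touch.force,
                                       location: touch.location(in: self),
                                       timestamp: touch.timestamp))
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        record(touches)
        super.touchesBegan(touches, with: event)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        record(touches)
        super.touchesMoved(touches, with: event)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        record(touches)
        super.touchesEnded(touches, with: event)
    }
}
```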
2. The method of claim 1, wherein the method further comprises:
obtaining an operation habit coefficient of a first user, wherein the first user is a user of the touch screen intelligent device;
and correcting the touch radius information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information according to the operation habit coefficient, so as to obtain first touch radius information, first touch radius tolerance information, first touch pressing force and first touch position coordinate information.
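As a hedged sketch of this correction, the snippet below reuses TouchSample from the claim-1 sketch; treating the operation habit coefficient as a single scalar multiplier is an assumption made for illustration, since the claim does not fix the correction formula.

```swift
import CoreGraphics

// Corrected ("first") touch quantities of claim 2.
struct CorrectedTouch {
    let radius: CGFloat          // first touch radius information
    let radiusTolerance: CGFloat // first touch radius tolerance information
    let force: CGFloat           // first touch pressing force
    let location: CGPoint        // first touch position coordinate information
}

// Assumed correction rule: scale every analyzed quantity by the
// user's operation habit coefficient k.
func correct(_ s: TouchSample, habitCoefficient k: CGFloat) -> CorrectedTouch {
    CorrectedTouch(radius: s.radius * k,
                   radiusTolerance: s.radiusTolerance * k,
                   force: s.force * k,
                   location: CGPoint(x: s.location.x * k, y: s.location.y * k))
}
```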
3. The method of claim 2, wherein the obtaining an operation habit coefficient of the first user comprises:
obtaining a value range of the operation habit coefficient of the first user;
randomly obtaining M operation habit coefficients from the value range of the operation habit coefficient of the first user;
calculating the M operation habit coefficients according to a genetic algorithm to obtain M predicted operation state curves, wherein the M predicted operation state curves correspond to the M operation habit coefficients one to one;
obtaining an actual operating state curve of the first user;
and comparing the M predicted operation state curves with the actual operation state curve to obtain the operation habit coefficient of the first user, wherein the similarity between the predicted operation state curve corresponding to the operation habit coefficient of the first user and the actual operation state curve is the highest among the M predicted operation state curves.
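The following sketch illustrates claim 3 under stated assumptions: an operation state curve is modeled as an array of Double samples, similarity as negative mean squared error, and predictCurve stands in for the genetic-algorithm evaluation, whose internals the claim does not specify.

```swift
import Foundation

typealias Curve = [Double]

// Similarity between two curves as negative mean squared error, so a
// higher value means the curves are more alike.
func similarity(_ a: Curve, _ b: Curve) -> Double {
    let n = min(a.count, b.count)
    guard n > 0 else { return -.infinity }
    let mse = zip(a.prefix(n), b.prefix(n))
        .map { pair in (pair.0 - pair.1) * (pair.0 - pair.1) }
        .reduce(0, +) / Double(n)
    return -mse
}

// Randomly draw M candidate coefficients from the value range and keep
// the one whose predicted curve best matches the actual curve.
func estimateHabitCoefficient(range: ClosedRange<Double>,
                              m: Int,
                              actual: Curve,
                              predictCurve: (Double) -> Curve) -> Double {
    let candidates = (0..<m).map { _ in Double.random(in: range) }
    return candidates.max { lhs, rhs in
        similarity(predictCurve(lhs), actual) < similarity(predictCurve(rhs), actual)
    } ?? range.lowerBound
}
```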
4. The method of claim 1, wherein the obtaining a first recognition result comprises:
when the running node of the first APP is located in the predetermined service node set, inputting the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model as input data;
the touch recognition model is obtained through training with multiple groups of training data, and each group of training data in the multiple groups comprises non-manual operation data and identification information marking whether the data corresponds to a manual operation;
and obtaining output information of the touch recognition model, wherein the output information comprises the first recognition result.
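Claim 4 fixes the shape of the training data (feature data plus a manual/non-manual label) but not the network itself. As a deliberately tiny stand-in, the sketch below trains a logistic-regression scorer by gradient descent; the patent's model is a neural network whose architecture is not disclosed, so this is illustrative only.

```swift
import Foundation

// One group of training data: feature values plus the identification
// information marking whether it corresponds to a manual operation.
struct TrainingRecord {
    let features: [Double] // e.g. radius, radius tolerance, force, x, y
    let isManual: Bool
}

struct TouchRecognitionModel {
    var weights: [Double]
    var bias = 0.0

    // Probability that the touch event is a manual operation.
    func score(_ x: [Double]) -> Double {
        let z = zip(weights, x).reduce(0.0) { $0 + $1.0 * $1.1 } + bias
        return 1.0 / (1.0 + exp(-z))
    }

    // Plain stochastic gradient descent on the logistic loss.
    mutating func train(on data: [TrainingRecord], epochs: Int, rate: Double) {
        for _ in 0..<epochs {
            for record in data {
                let error = score(record.features) - (record.isManual ? 1.0 : 0.0)
                for i in weights.indices {
                    weights[i] -= rate * error * record.features[i]
                }
                bias -= rate * error
            }
        }
    }
}
```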
5. The method of claim 1, wherein after obtaining the first recognition result, further comprising:
obtaining a preset touch radius condition rule;
obtaining a preset touch pressing force condition rule;
obtaining a preset touch position coordinate condition rule;
judging whether the first touch event simultaneously meets the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule to obtain a first judgment result;
and checking the first identification result according to the first judgment result.
6. The method of claim 5, wherein said verifying the first recognition result according to the first judgment result comprises:
if the first judgment result indicates that the first touch event simultaneously meets the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule, determining that the first touch event is a manual operation event;
judging whether the first recognition result indicates the manual operation event;
and if the first recognition result does not indicate the manual operation event, obtaining a first reminding instruction, wherein the first reminding instruction is used for reminding that the first recognition result is wrong.
7. The method of claim 5, wherein said verifying the first recognition result according to the first judgment result comprises:
if the first judgment result indicates that the first touch event does not simultaneously meet the preset touch radius condition rule, the preset touch pressing force condition rule and the preset touch position coordinate condition rule, determining that the first touch event is a non-manual operation event;
judging whether the first recognition result indicates the non-manual operation event;
and if the first recognition result does not indicate the non-manual operation event, obtaining a second reminding instruction, wherein the second reminding instruction is used for reminding that the first recognition result is wrong.
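Claims 5 to 7 describe a rule-based cross-check on the model output. A minimal sketch follows, reusing CorrectedTouch from the claim-2 sketch; every threshold below is an invented placeholder, since the patent only states that the condition rules are preset.

```swift
import CoreGraphics

// Preset condition rules of claim 5; the concrete values are assumptions.
struct TouchRules {
    var radiusRange: ClosedRange<CGFloat> = 2...15    // assumed, in points
    var forceRange: ClosedRange<CGFloat> = 0.05...6.0 // assumed
    var screenBounds = CGRect(x: 0, y: 0, width: 390, height: 844) // assumed

    // The touch counts as a manual operation event only if it meets all
    // three condition rules simultaneously (claim 6).
    func isManual(_ t: CorrectedTouch) -> Bool {
        radiusRange.contains(t.radius)
            && forceRange.contains(t.force)
            && screenBounds.contains(t.location)
    }
}

// Verifies the model's recognition result against the rule-based judgment
// and returns a reminder message when they disagree (claims 6 and 7).
func verify(modelSaysManual: Bool, touch: CorrectedTouch, rules: TouchRules) -> String? {
    let rulesSayManual = rules.isManual(touch)
    guard rulesSayManual != modelSaysManual else { return nil }
    return "Recognition result may be wrong: rules indicate a "
        + (rulesSayManual ? "manual" : "non-manual") + " operation event"
}
```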
8. A system for intelligently identifying manual operation of an APP, wherein the system comprises:
a first obtaining unit, configured to obtain a first start instruction when a first APP starts to run;
a first execution unit, configured to start a monitoring function of the first touch screen intelligent device according to the first starting instruction;
a second obtaining unit, configured to obtain touchesBegan information, touchesMoved information and touchesEnded information of a first touch event based on the first touch screen intelligent device;
a second execution unit, configured to analyze the touchesBegan information, touchesMoved information and touchesEnded information to obtain touch radius information, touch radius tolerance information, touch pressing force and touch position coordinate information;
a third obtaining unit, configured to obtain a predetermined service node set;
a fourth obtaining unit, configured to obtain a non-manual operation data set;
a fifth obtaining unit, configured to train a neural network model with the non-manual operation data set, so as to obtain a touch recognition model;
a sixth obtaining unit, configured to, when the running node of the first APP is located in the predetermined service node set, input the touchesBegan information, the touchesMoved information, the touchesEnded information, the touch radius tolerance information, the touch pressing force and the touch position coordinate information into the touch recognition model to obtain a first recognition result.
9. A system for intelligently identifying manual operation of an APP, comprising: a processor coupled to a memory, the memory storing a program that, when executed by the processor, causes the system to perform the method of any one of claims 1 to 7.
CN202111110957.8A 2021-09-18 2021-09-18 Method and system for intelligently identifying APP manual operation Active CN113900889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111110957.8A CN113900889B (en) 2021-09-18 2021-09-18 Method and system for intelligently identifying APP manual operation

Publications (2)

Publication Number Publication Date
CN113900889A (en) 2022-01-07
CN113900889B (en) 2023-10-24

Family

ID=79028866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111110957.8A Active CN113900889B (en) 2021-09-18 2021-09-18 Method and system for intelligently identifying APP manual operation

Country Status (1)

Country Link
CN (1) CN113900889B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004295766A (en) * 2003-03-28 2004-10-21 Sony Corp Robot apparatus and user authentication method through robot
US20150241984A1 (en) * 2014-02-24 2015-08-27 Yair ITZHAIK Methods and Devices for Natural Human Interfaces and for Man Machine and Machine to Machine Activities
US20160315948A1 (en) * 2015-04-21 2016-10-27 Alibaba Group Holding Limited Method and system for identifying a human or machine
WO2016171923A1 (en) * 2015-04-21 2016-10-27 Alibaba Group Holding Limited Method and system for identifying a human or machine
CN106503499A (en) * 2016-09-22 2017-03-15 天津大学 Smart mobile phone touch-screen input recognition method based on machine learning
WO2019001558A1 (en) * 2017-06-29 2019-01-03 苏州锦佰安信息技术有限公司 Human and machine recognition method and device
CN108416198A (en) * 2018-02-06 2018-08-17 平安科技(深圳)有限公司 Man-machine identification model establishes device, method and computer readable storage medium
WO2019153604A1 (en) * 2018-02-06 2019-08-15 平安科技(深圳)有限公司 Device and method for creating human/machine identification model, and computer readable storage medium
WO2020037919A1 (en) * 2018-08-22 2020-02-27 平安科技(深圳)有限公司 User behavior recognition method and device employing prediction model
US20200233952A1 (en) * 2019-01-22 2020-07-23 International Business Machines Corporation Mobile behaviometrics verification models used in cross devices
WO2020252932A1 (en) * 2019-06-20 2020-12-24 平安科技(深圳)有限公司 Operation behavior-based human and machine recognition method and apparatus, and computer device
CN113065109A (en) * 2021-04-22 2021-07-02 中国工商银行股份有限公司 Man-machine recognition method and device
US20220342530A1 (en) * 2021-04-22 2022-10-27 Pixart Imaging Inc. Touch sensor, touch pad, method for identifying inadvertent touch event and computer device
US20230177724A1 (en) * 2021-12-07 2023-06-08 Adasky, Ltd. Vehicle to infrastructure extrinsic calibration system and method

Similar Documents

Publication Publication Date Title
CN109472213B (en) Palm print recognition method and device, computer equipment and storage medium
CN109472240B (en) Face recognition multi-model adaptive feature fusion enhancement method and device
CN110728323B (en) Target type user identification method and device, electronic equipment and storage medium
JP5454672B2 (en) Biological information processing apparatus and method
CN108460346B (en) Fingerprint identification method and device
CN111401219B (en) Palm key point detection method and device
US11062120B2 (en) High speed reference point independent database filtering for fingerprint identification
KR102038237B1 (en) Credit score model training method, credit score calculation method, apparatus and server
CN109035021B (en) Method, device and equipment for monitoring transaction index
CN111340233B (en) Training method and device of machine learning model, and sample processing method and device
CN113792853B (en) Training method of character generation model, character generation method, device and equipment
CN115082920A (en) Deep learning model training method, image processing method and device
CN111797320A (en) Data processing method, device, equipment and storage medium
CN112214402A (en) Code verification algorithm selection method and device and storage medium
CN114428748B (en) Simulation test method and system for real service scene
CN114203285B (en) Big data analysis method applied to smart medical treatment and smart medical treatment server
CN113986561B (en) Artificial intelligence task processing method and device, electronic equipment and readable storage medium
CN114581249A (en) Financial product recommendation method and system based on investment risk bearing capacity assessment
CN113900889B (en) Method and system for intelligently identifying APP manual operation
CN116343300A (en) Face feature labeling method, device, terminal and medium
CN111259806B (en) Face area identification method, device and storage medium
CN109933579B (en) Local K neighbor missing value interpolation system and method
CN114203312A (en) Digital medical service analysis method and server combined with big data intelligent medical treatment
JP7206605B2 (en) Information processing equipment
CN111368792A (en) Characteristic point mark injection molding type training method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 floors 1-3, block a, global creative Plaza, No. 10, Furong street, Chaoyang District, Beijing

Applicant after: Bairong Zhixin (Beijing) Technology Co.,Ltd.

Address before: 100000 floors 1-3, block a, global creative Plaza, No. 10, Furong street, Chaoyang District, Beijing

Applicant before: Bairong Zhixin (Beijing) credit investigation Co.,Ltd.

GR01 Patent grant