WO2019001558A1 - Method and device for human-machine recognition - Google Patents

Method and device for human-machine recognition

Info

Publication number
WO2019001558A1
WO2019001558A1 (PCT/CN2018/093553, CN2018093553W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
client
behavior
behavior data
operator
Prior art date
Application number
PCT/CN2018/093553
Other languages
English (en)
French (fr)
Inventor
冯继强
Original Assignee
苏州锦佰安信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201710517649.4A (published as CN107294981B)
Priority claimed from CN201710517666.8A (published as CN107330311A)
Application filed by 苏州锦佰安信息技术有限公司
Publication of WO2019001558A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40 Network security protocols
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication

Definitions

  • the invention relates to the field of human-computer recognition, and in particular to a method and a device for human-machine recognition.
  • the existing human-computer recognition methods mainly include technologies such as IP limitation, device fingerprint, browser fingerprint, verification code, graphic verification, and mobile phone short message.
  • Restricting clients by IP address is a network-resource limitation and now has little effect: an attacker with a large pool of IP addresses can easily bypass it.
  • Device fingerprinting is a physical-resource limitation; a malicious attacker with a large number of physical devices can still easily bypass the client-side limit. Browser-fingerprint restrictions rely on the request headers, which in general can be modified and disguised to bypass the restriction on the client.
  • the present invention provides a method and apparatus for human-machine recognition for implementing simple and effective human-computer recognition.
  • the embodiment of the present invention proposes the following specific embodiments:
  • One embodiment of the present application provides a method for human-machine recognition implemented on a server, the server including at least one processor, a memory, and a communication platform connected to a network. The method comprises: acquiring behavior data, sent by a client, of the client under the operation of an operator, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; using a behavior data judgment model to determine whether the behavior data matches human behavior data; and determining, based on the judgment result, whether the operator of the client is a person or a machine.
  • the behavior data includes at least one of the following: rotation data of the client, force data, orientation data, screen operation data, and input device operation data.
  • the behavioral data determination model is trained based on behavioral sample data of a human operating client and/or behavioral sample data of a machine operating client.
  • the method further comprises initiating a predetermined defense process based on a result of determining that the operator of the client is a person or a machine.
  • In some embodiments, if the behavior data matches the human behavior data, it is determined that the operator of the client is a person; if the behavior data does not match the human behavior data, it is determined that the operator of the client is a machine. A minimal sketch of this server-side decision flow is given below.
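The following is a minimal, hypothetical Python sketch of the server-side flow described above. The names `judge_model`, `MATCH_THRESHOLD`, and `OperatorType` are illustrative assumptions rather than anything specified by the application; the judgment model is treated as an opaque scorer.

```python
from enum import Enum

MATCH_THRESHOLD = 0.8  # assumed cut-off for "matches human behavior data"

class OperatorType(Enum):
    PERSON = "person"
    MACHINE = "machine"

def recognize_operator(behavior_data, judge_model) -> OperatorType:
    """Server-side human-machine recognition flow sketched from the embodiment.

    behavior_data: sensor data sent by the client (rotation, force,
        orientation, screen / input-device operations, ...).
    judge_model:   a trained behavior data judgment model assumed to expose a
        score(data) -> float method giving the match with human behavior.
    """
    human_match = judge_model.score(behavior_data)
    if human_match >= MATCH_THRESHOLD:
        return OperatorType.PERSON   # behavior data matches human behavior data
    return OperatorType.MACHINE      # otherwise treat the operator as a machine
```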
  • One embodiment of the present application provides a human-machine recognition system, including: an obtaining module configured to acquire behavior data, sent by a client, of the client under the operation of an operator, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client, the obtaining module being further configured to acquire a behavior data judgment model; a judgment module configured to use the behavior data judgment model to determine whether the behavior data matches human behavior data; and an identification module configured to determine, according to the judgment result, whether the operator of the client is a person or a machine.
  • the behavior data includes at least one of: rotation data, force data, orientation data, screen operation data, or input device operation data of the client.
  • the behavioral data determination model is trained based on behavioral sample data of a human operating client and/or behavioral sample data of a machine operating client.
  • the system further includes a defense module for initiating a predetermined defense process based on a result of determining that the operator of the client is a person or a machine.
  • In some embodiments, if the behavior data matches the human behavior data, the identification module determines that the operator of the client is a person; if the behavior data does not match the human behavior data, the identification module determines that the operator of the client is a machine.
  • An embodiment of the present application provides a device for human-machine recognition, including a processor configured to: acquire behavior data, sent by a client, of the client under the operation of an operator, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; acquire a behavior data judgment model; use the behavior data judgment model to determine the degree of matching between the behavior data and human behavior data and/or machine behavior data; and determine, based on that degree of matching, whether the operator of the client is a person or a machine.
  • One embodiment of the present application provides a computer readable storage medium storing computer instructions; when a computer reads the computer instructions in the storage medium, the computer performs a method for human-machine recognition, the method including: acquiring behavior data, sent by a client, of the client under the operation of an operator, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; acquiring a behavior data judgment model; using the behavior data judgment model to determine the degree of matching between the behavior data and human behavior data and/or machine behavior data; and determining, based on that degree of matching, whether the operator of the client is a person or a machine.
  • One embodiment of the present application provides a method for human-machine recognition implemented on a client, the client including at least one processor, a memory, and a communication platform connected to a network. The method comprises: acquiring behavior data of the client under the operation of an operator and sending it to a server, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; and receiving information from the server and, according to the server's judgment, based on the behavior data, of whether the operator is a person or a machine, starting or not starting a preset defense process.
  • the behavior data includes at least one of the following: rotation data of the client, force data, orientation data, screen operation data, and input device operation data.
  • One embodiment of the present application provides a human-machine recognition system, including: an obtaining module configured to acquire behavior data of a client under the operation of an operator and send it to a server, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; a receiving module configured to receive information from the server; and an identification module configured to start or not start a preset defense process according to the server's judgment, based on the behavior data, of whether the operator is a person or a machine.
  • the behavior data includes at least one of: rotation data, force data, orientation data, screen operation data, or input device operation data of the client.
  • One embodiment of the present application provides a client for human-machine recognition, including a processor configured to: acquire behavior data of the client under the operation of an operator and send it to a server, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; and receive information from the server and, according to the server's judgment, based on the behavior data, of whether the operator is a person or a machine, start or not start a preset defense process.
  • the behavior data includes at least one of: rotation data, force data, orientation data, screen operation data, or input device operation data of the client.
  • One embodiment of the present application provides a computer readable storage medium storing computer instructions; when a computer reads the computer instructions in the storage medium, the computer performs a method for human-machine recognition, the method including: acquiring behavior data of a client under the operation of an operator and sending it to a server, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; and receiving information from the server and, according to the server's judgment, based on the behavior data, of whether the operator is a person or a machine, starting or not starting a preset defense process. A hedged client-side sketch of this flow follows.
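To illustrate the client-side embodiment, here is a minimal Python sketch. The endpoint URL, the `sensors` object, the response field `operator`, and the `start_defense` callback are all hypothetical placeholders; the application does not prescribe a transport protocol or message format.

```python
import json
import requests  # assumed transport; the application does not prescribe one

SERVER_URL = "https://example.com/api/human-machine-check"  # hypothetical endpoint

def collect_behavior_data(sensors) -> dict:
    """Collect sensor data reflecting the operator's behavior on the client."""
    return {
        "rotation": sensors.read_gyroscope(),       # angular velocity, X/Y/Z
        "force": sensors.read_accelerometer(),      # acceleration, X/Y/Z
        "orientation": sensors.read_magnetometer(), # magnetic field, X/Y/Z
    }

def report_and_defend(sensors, start_defense) -> None:
    """Client-side flow: send behavior data, then act on the server's judgment."""
    payload = collect_behavior_data(sensors)
    resp = requests.post(SERVER_URL, data=json.dumps(payload), timeout=5)
    verdict = resp.json().get("operator")            # assumed field: "person" or "machine"
    if verdict == "machine":
        start_defense()  # e.g. re-authentication or a controlled shutdown
```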
  • the embodiment of the invention also provides a method for human-computer recognition, including:
  • acquiring behavior data of the client in a running state, wherein the behavior data includes rotation data of the client, force data of the client, and orientation data of the client;
  • if the behavior data matches the behavior characteristics of the corresponding person in a preset behavior analysis model, determining that the operator of the client is a person;
  • if the behavior data does not match the human behavior data of the preset behavior data judgment model, determining that the operator of the client is a machine.
  • the method further includes:
  • when it is determined that the operator of the client is a machine, a preset defense process is initiated on the client.
  • the method further includes:
  • when it is determined that the operator of the client is a person, specimen data is generated from the behavior data and the corresponding analysis result and stored in a human behavior database.
  • the method further includes: the behavior analysis model is trained using specimen data in the human analysis database whose quantity exceeds a preset threshold.
  • the rotation data is obtained by gyroscope monitoring
  • the force data is obtained by the accelerometer
  • the orientation data is monitored by the magnetometer positioning device.
  • the embodiment of the invention further provides a device for human-computer recognition, comprising:
  • An acquiring module configured to obtain behavior data of the client in an operating state, where the behavior data includes rotation data of the client, force data of the client, and orientation data of the client;
  • An identification module configured to: when the behavior data matches the behavior characteristics of the corresponding person in a preset behavior analysis model, determine that the operator of the client is a person; and when the behavior data does not match the human behavior data of the preset behavior data judgment model, determine that the operator of the client is a machine.
  • the device further includes:
  • the defense module is configured to initiate a preset defense process to the client when it is determined that the operator of the client is a machine.
  • the device further includes:
  • a storage module configured to generate the specimen data and store the specimen data in the human behavior database when the operator of the client is determined to be a human.
  • the behavior analysis model is trained using specimen data in the human analysis database whose quantity exceeds a preset threshold.
  • the rotation data is obtained by gyroscope monitoring
  • the force data is obtained by the accelerometer
  • the orientation data is monitored by the magnetometer positioning device.
  • the embodiment of the present invention further provides a computer readable storage medium, where the storage medium stores computer instructions.
  • When a computer reads the computer instructions in the storage medium, the computer executes the human-machine recognition method, the method including: acquiring behavior data of the client in a running state, wherein the behavior data includes at least one of the following: rotation data of the client, force data of the client, and orientation data of the client;
  • if the behavior data matches the behavior characteristics of the corresponding person in a preset behavior analysis model, the operator of the client is determined to be a person; if the behavior data does not match the human behavior data of the preset behavior data judgment model, the operator of the client is determined to be a machine.
  • In summary, the embodiments of the present invention disclose a method and device for human-machine recognition, the method including: acquiring behavior data of a client in a running state, wherein the behavior data includes rotation data of the client, force data of the client, and orientation data of the client; if the behavior data matches the behavior characteristics of the corresponding person in a preset behavior analysis model, determining that the operator of the client is a person; if the behavior data does not match the human behavior data of the preset behavior data judgment model, determining that the operator of the client is a machine.
  • In this way, the client's behavior data at runtime, that is, whether the client's rotation data, force data, and orientation data at runtime match the behavior characteristics of the corresponding person in the preset behavior analysis model, is used to determine whether the operator of the client is a person or a machine. Identifying a person or a machine based on the behavior characteristics of a person operating the client makes the recognition simple, accurate, and fast.
  • FIG. 1 is a schematic flowchart diagram of a method for human-machine recognition according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart diagram of a method for human-machine recognition according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a device for human-machine recognition according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a device for human-machine recognition according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a device for human-machine recognition according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an application scenario of a human-machine recognition system according to an embodiment of the present invention.
  • In the following, the term "comprising" or "including" as used in the various embodiments of the present disclosure indicates the existence of the disclosed functions, operations, or elements, and does not limit the addition of one or more functions, operations, or elements.
  • Furthermore, as used in the various embodiments of the present disclosure, the terms "comprising", "having", and their cognates are intended only to denote particular features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be understood as excluding the existence of, or the possibility of adding, one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
  • In the various embodiments of the present disclosure, the expression "or" or "at least one of A or/and B" includes any or all combinations of the words listed together.
  • the expression “A or B” or “at least one of A or / and B” may include A, may include B, or may include both A and B.
  • Expressions used in various embodiments of the present disclosure may modify various constituent elements in various embodiments, but the corresponding constituent elements may not be limited.
  • the above statements do not limit the order and/or importance of the elements.
  • the above statements are only used for the purpose of distinguishing one element from another.
  • the first user device and the second user device indicate different user devices, although both are user devices.
  • a first element could be termed a second element, and a second element could be termed a first element, without departing from the scope of the various embodiments of the present disclosure.
  • It should be noted that if one constituent element is described as being "connected" to another constituent element, the first constituent element may be directly connected to the second constituent element, or a third constituent element may be "connected" between the first and second constituent elements.
  • Conversely, when a constituent element is "directly connected" to another constituent element, it is understood that there is no third constituent element between the first constituent element and the second constituent element.
  • the term "user” as used in various embodiments of the present disclosure may indicate a person using an electronic device or a device using an electronic device (eg, an artificial intelligence electronic device, a machine).
  • FIG. 6 is a schematic diagram of an application scenario of a human-machine recognition system (or a human-machine recognition device) according to some embodiments of the present application.
  • The human-machine recognition system 600 can be an online service platform for Internet services.
  • For example, the human-machine recognition system 600 can be applied to any combination of one or more of a game platform, a shopping platform, an instant messaging platform, a trading platform, an entertainment platform, an education platform, and the like.
  • In some embodiments, the human-machine recognition system can identify whether the operator of a client is a human or a machine.
  • Some of the technical goals that can be achieved include limiting (and/or discovering, combating) unsafe behaviors such as machine cheating, machine violations, and machine cracking.
  • As shown in FIG. 6, the human-machine recognition system 600 can include a server 610, a network 620, a client 630, and a database 640.
  • the server 610 can include a processing device 612.
  • server 610 can be used to process information and/or data related to human recognition.
  • Server 610 can be a standalone server or group of servers.
  • the server group can be centralized or distributed (eg, server 610 can be a distributed system).
  • The server 610 can be local or remote in some embodiments.
  • In some embodiments, server 610 can access information and/or data stored in client 630 and/or database 640 over network 620.
  • In some embodiments, server 610 can connect directly to client 630 and/or database 640 to access the information and/or data stored therein.
  • server 610 can execute on a cloud platform.
  • the cloud platform may include one of a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, and the like, or any combination thereof.
  • server 610 can include processing device 612.
  • the processing device 612 can process data and/or information related to human recognition to implement one or more of the functions described in this application. For example, processing device 612 can identify whether the operator is a human or a machine by using behavioral data to determine model processing information and/or data.
  • In some embodiments, processing device 612 can include one or more sub-processing devices (e.g., a single-core processing device or a multi-core processing device).
  • Merely by way of example, processing device 612 may comprise any combination of one or more of a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, and the like.
  • the server 610 can also be one or more components of the client 630, the server 610 can communicate with the client 630 in the same program, or the server 610 can communicate with the client 630 between different programs.
  • server 610 can be implemented on a computing device having one or more modules as described in Figures 3-5 of the present application.
  • Network 620 can facilitate the exchange of data and/or information.
  • One or more components of the human-machine recognition system 600 (e.g., server 610, client 630, and database 640) can exchange information and/or data via network 620.
  • the server 610 can obtain behavior data of the user operation client from the client 630 through the network 620.
  • network 620 can be any type of wired or wireless network.
  • For example, network 620 can include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), and the like, or any combination thereof.
  • network 620 can include one or more network access points.
  • For example, network 620 can include wired or wireless network access points, such as base stations and/or Internet exchange points 620-1, 620-2, ..., through which one or more components of the human-machine recognition system 600 can connect to network 620 to exchange data and/or information.
  • In some embodiments, the client may be a mobile client (i.e., a mobile terminal) or a fixed terminal, such as a mobile phone 630-1, a tablet 630-2, a notebook computer 630-3, an in-vehicle device 630-4, a desktop computer, a built-in computer, and the like.
  • the client may also include a wearable device, a virtual reality device, and/or an augmented reality device, etc., or any combination thereof.
  • the wearable device can include a smart bracelet, smart footwear, smart glasses, smart helmet, smart watch, smart wear, smart backpack, smart accessory, and the like, or any combination thereof.
  • the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, virtual reality eyewear, augmented reality helmet, augmented reality glasses, an augmented reality eye mask, and the like, or any combination thereof.
  • In some embodiments, the virtual reality devices and/or augmented reality devices may include Google Glass™, RiftCon™, Fragments™, Gear VR™, and the like.
  • the client can also be integrated with the server, or the client can be one or more components of the server.
  • the client may be any device having one or more sensors that may be used to obtain behavior data for the client, and the application does not limit the form of the client.
  • Database 640 can store data and/or instructions. In some embodiments, database 640 can store material obtained from client 630. In some embodiments, database 640 can store information and/or instructions for execution or use by server 610 to perform the example methods described herein. For example, database 640 can store behavior data associated with the client's operations from client 630. In some embodiments, database 640 can store data and/or instructions that server 610 uses to execute or use to perform the exemplary methods described herein. For example, database 640 can store instructions for using the behavioral data to determine model processing behavior data to identify that the operator of the behavioral data is a person or machine, which instructions can be executed by processing device 612.
  • database 640 can include any combination of one or more of mass storage, removable storage, volatile read and write memory (eg, random access memory RAM), read only memory (ROM), and the like.
  • database 640 can be implemented on a cloud platform.
  • the cloud platform may include any combination of one or more of a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, and the like.
  • In some embodiments, database 640 can be connected to network 620 to communicate with one or more components of the human-machine recognition system 600 (e.g., server 610, client 630, etc.). One or more components of the human-machine recognition system 600 can access data or instructions stored in database 640 over network 620. In some embodiments, database 640 can connect to or communicate directly with one or more components (e.g., server 610, client 630, etc.) of the human-machine recognition system 600. In some embodiments, database 640 can be part of server 610. In some embodiments, one or more components (e.g., server 610, client 630, etc.) of the human-machine recognition system 600 may have permission to access database 640.
  • Embodiment 1 of the present invention discloses a method for human-machine recognition, as shown in FIG. 1 and FIG. 2.
  • The human-machine recognition method can be implemented by the human-machine recognition system 600.
  • The human-machine recognition method may include:
  • Step 101: Obtain behavior data of the client in a running state; specifically, obtain behavior data, sent by the client, of the client under the operation of an operator, where the behavior data may be sensor data reflecting at least one operation behavior of the operator on the client.
  • the behavior data can be used to reflect the behavior of the user (such as a person or a machine) when the client is used.
  • In some embodiments, the behavior data may include any combination of one or more of rotation data of the client, force data of the client, orientation data of the client, screen operation data of the client, input device operation data of the client, image sensing data of the client, magnetic field sensing data of the client, infrared sensing data of the client, and the like.
  • the behavior data can be embodied by one or more sensor data of the client. The one or more sensor data may reflect one or more operational actions of the operator to the client.
  • In some embodiments, the sensor may include any combination of one or more of a camera, an acoustic sensor, a temperature sensor, a humidity sensor, a position sensor, a pressure sensor, a load cell, a flow sensor, a level sensor, a distance sensor, a speed sensor, an acceleration sensor, a torque sensor, a water sensor, an illuminance sensor, a thermal sensor, a photosensitive sensor, a gas sensor, a force sensor, a magnetic sensor, a radiation-sensitive sensor, a color sensor, a taste sensor, a resistive sensor, a capacitive sensor, an inductive sensor, a piezoelectric sensor, an electromagnetic sensor, a magnetoresistive sensor, a photoelectric sensor, a piezoresistive sensor, a pyroelectric sensor, a nuclear radiation sensor, a semiconductor sensor, and the like.
  • the sensor may acquire any combination of one or more of the target object's video, audio, image, temperature, humidity, material, location, detection data, electromagnetic data, biometric data, gravity data, behavioral data, and the like.
  • the behavior data may include acceleration sensor data, gyroscope data, magnetometer data, screen sensor data, mouse operation data, touchpad operation data, touch screen operation data, keyboard operation data, light sensor data, distance sensors Any combination of one or more of data, temperature sensor data, screen pressure sensor data, infrared sensor data, camera data, spectral sensor data, and the like.
  • the client's rotation data can be used to reflect the client's rotational behavior.
  • the rotation data may include three components, for example, an angular velocity component on the X, Y, and Z axes in a three-dimensional space, which may be monitored by a gyroscope, an angular velocity sensor, or the like.
  • the force data of the client can be used to reflect the force behavior of the client.
  • the force data may include three components, such as acceleration components on the X, Y, and Z axes in a three-dimensional space.
  • the force data can be monitored by accelerometers, gravity sensors, inertial sensors, and the like.
  • In some embodiments, the orientation data of the client can be used to reflect the orientation of the client (e.g., whether the client is facing up or down, left or right, forward or backward, etc.).
  • the orientation data may include three components, such as magnetic components on the X, Y, and Z axes in a three-dimensional space.
  • the orientation data can be monitored by a positioning device such as a magnetometer or a position sensor.
  • the screen operation data of the client can be used to reflect the pressing, sliding, etc. of the user (eg, a person or machine) on the client screen.
  • the screen operation data can be monitored by a screen pressure sensor.
  • the input device operation data of the client may include operation data of an input device such as a mouse, a touch pad, a keyboard, or the like.
  • the mouse operation data may include a moving frequency of the mouse, a moving range, a pressing force, a pressing speed, a pressing position, and the like.
  • the touch panel operation data may include the frequency of use of the touch panel, the operation range, the operation strength, the operation speed, the contact area with the hand, and the like.
  • the keyboard operation data may include typing habits, error rate, pressing force, use speed, and the like.
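As a concrete illustration of the kinds of behavior data listed above, the following Python sketch defines a simple record that combines three-axis rotation, force, and orientation samples with screen and input-device events. The field names and the sampling helper are assumptions for illustration only; the application does not prescribe any particular data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]  # X, Y, Z components

@dataclass
class BehaviorSample:
    rotation: Vec3      # angular velocity, e.g. from a gyroscope
    force: Vec3         # acceleration, e.g. from an accelerometer
    orientation: Vec3   # magnetic field components, e.g. from a magnetometer
    timestamp: float

@dataclass
class BehaviorData:
    samples: List[BehaviorSample] = field(default_factory=list)
    screen_events: List[dict] = field(default_factory=list)  # presses, swipes, pressure
    input_events: List[dict] = field(default_factory=list)   # mouse / touchpad / keyboard

    def add_sample(self, rotation: Vec3, force: Vec3,
                   orientation: Vec3, timestamp: float) -> None:
        """Append one synchronized reading of the three motion sensors."""
        self.samples.append(BehaviorSample(rotation, force, orientation, timestamp))
```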
  • In some embodiments, the process of obtaining behavior data may cover the entire period during which the client is in a running state. For example, behavior data of the client may be obtained from the start of use (such as power-on or wake-up) to the end of use (such as shutdown or sleep).
  • the process of obtaining behavioral data can be a process between two operations (start operation, end operation) of the client.
  • For example, the start operation may be beginning to enter a password on a login page, and the end operation may be finishing the password input.
  • As another example, the start operation may be the start of a password-retrieval operation, and the end operation may be the end of that operation.
  • the process from the start of the operation of the client to the end of the operation may be a collection of operations after the user enters the page.
  • the behavioral data may be encrypted as it is transmitted to further improve the accuracy of the identification, ensuring that subsequent recognition processes are performed based on accurate tamper-free information.
  • Algorithms for encrypting and decrypting behavior data may include digest algorithms (eg, MD5, SHA1, etc.), hash algorithms (eg, SM3, etc.), symmetric encryption algorithms (eg, AES, DES, IDEA, SSF33, SM1, SM4, SM7) And so on, any combination of one or more of asymmetric encryption algorithms (eg, SM2, SM9, RSA, etc.).
  • In some embodiments, the client may encrypt the behavior data and transmit the encrypted behavior data to the server; after receiving the encrypted behavior data, the server may decrypt it and process the decrypted behavior data (see the sketch below).
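For illustration, here is a minimal sketch of symmetric encryption of a serialized behavior-data payload using AES-GCM from the Python `cryptography` package. The application lists AES only as one of several possible algorithms; the key handling, nonce handling, and JSON serialization shown here are assumptions of this sketch, not requirements of the application.

```python
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_behavior_data(behavior: dict, key: bytes) -> tuple:
    """Encrypt a behavior-data payload on the client before transmission."""
    nonce = os.urandom(12)                        # 96-bit nonce, unique per message
    plaintext = json.dumps(behavior).encode("utf-8")
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce, ciphertext

def decrypt_behavior_data(nonce: bytes, ciphertext: bytes, key: bytes) -> dict:
    """Decrypt and parse the payload on the server; raises if it was tampered with."""
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    return json.loads(plaintext)

# Example usage (key distribution is out of scope for this sketch):
key = AESGCM.generate_key(bit_length=256)
nonce, ct = encrypt_behavior_data({"rotation": [0.1, 0.0, 0.2]}, key)
restored = decrypt_behavior_data(nonce, ct, key)
```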
  • The execution device of step 101 may be a server (such as server 610), which obtains the behavior data sent by the client (such as client 630) and processes the data.
  • The specific process can be as follows:
  • Step 102 If the behavior data matches the behavior characteristics of the corresponding person in the preset behavior data judgment model, determine that the operator of the client is a person;
  • Specifically, the server may perform the judgment through a machine learning end, where the machine learning end includes a human behavior database.
  • the human behavior database may include a human-operated client, such as a user's behavioral characteristics of operating the mobile phone, such as the magnitude of the movement, the strength of the hand-held device, the frequency of the movement, and the like.
  • the machine learning side may also include a machine behavior database in which the behavioral characteristics of the machine operation client are included.
  • In some embodiments, the machine operation includes a combination of one or more of program-controlled machine operations, mechanical machine operations, or other non-human operations. The behavior data can therefore be used to effectively identify whether a person or a machine is operating.
  • the machine learning end can include a behavioral data determination model (including but not limited to a Convolutional Neural Network (CNN), a Feature Pyramid Network (FPN), etc.).
  • the raw behavior data may first be pre-processed (specifically, the pre-processing may be performed at the client and/or server).
  • the original behavior data can be converted to frequency domain features by Fourier transform; for example, the original behavior data can be denoised (eg, culling extreme data, etc.).
  • raw behavioral data (or pre-processed raw behavioral data) may be entered into the behavioral data determination model.
  • the behavioral data judgment model can extract the feature vector from the input data (such as the pre-processed original behavior data), and judge based on the extracted feature vector (such as classification).
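The preprocessing and judgment steps described above might look roughly like the following NumPy sketch. It clips extreme readings, converts raw time-series sensor data to frequency-domain features via a Fourier transform, and feeds the feature vector to a previously trained classifier. The specific feature layout and the use of a generic scikit-learn-style `model.predict` interface are assumptions of this sketch; the application only requires "a behavior data judgment model" (e.g., a CNN or FPN).

```python
import numpy as np

def preprocess(raw: np.ndarray) -> np.ndarray:
    """Convert raw behavior data to frequency-domain features and denoise.

    raw: array of shape (n_samples, n_channels), e.g. gyroscope, accelerometer
         and magnetometer axes stacked as columns.
    """
    # Clip extreme readings (simple denoising by culling outliers).
    lo, hi = np.percentile(raw, [1, 99], axis=0)
    clipped = np.clip(raw, lo, hi)
    # Magnitude spectrum per channel via the Fourier transform.
    spectrum = np.abs(np.fft.rfft(clipped, axis=0))
    return spectrum.flatten()

def judge(behavior_matrix: np.ndarray, model) -> str:
    """Classify one behavior-data recording as 'person' or 'machine'."""
    features = preprocess(behavior_matrix).reshape(1, -1)
    return model.predict(features)[0]  # model is any fitted sklearn-style classifier
```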
  • the behavioral data determination model can be stored at the client.
  • the classification result can be a machine operation or a human operation to determine whether the user of the client is a machine or a person.
  • the classification results may include machine operations or human operations in different poses.
  • human operations in different poses may include human operations while walking, human operations while standing, human operations while lying down, human operations when sitting down, and the like. Therefore, it is not only possible to determine whether the user is a machine or a person, but also to judge the posture of the operation when the user is a person.
  • the behavioral data determination model can be trained using human behavior client data in the human behavior database (in some embodiments, also including client behavior data when the machine operates the client).
  • In some embodiments, after the behavior data of the client when a human operates the client and the behavior data of the client when a machine operates the client are acquired, the behavior data can be labeled.
  • When it is determined that a human operated the client, the corresponding behavior data can be labeled as a human operation; when it is determined that a machine operated the client, the corresponding behavior data can be labeled as a machine operation.
  • In some embodiments, the corresponding behavior data can be labeled as human operations in different postures, such as human operation while walking, human operation while standing, human operation while lying down, human operation while sitting, and so on.
  • the number of marked behavior data may be greater than or equal to a preset value, such as 50, 100, 500, etc., to ensure the training effect of the model.
  • the tagged behavioral data can be divided into a training dataset and a test dataset.
  • In some embodiments, the original behavior data in the training data set may be pre-processed (for example, converted into frequency-domain features by Fourier transform), and the pre-processed training data may then be input into the initial behavior data judgment model for training.
  • the test data set can be used to test the behavior data judgment model.
  • the parameters of the behavior data judgment model can be adjusted according to the test result, so as to finally obtain the behavior data judgment model as above.
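A hedged sketch of this labeling, splitting, training, and testing loop is shown below using scikit-learn. A random-forest classifier stands in for the behavior data judgment model purely for brevity; the application itself mentions neural networks such as CNNs or FPNs, and the label names and split ratio here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_judgment_model(feature_vectors: np.ndarray, labels: np.ndarray):
    """Train and evaluate a behavior data judgment model from labeled samples.

    feature_vectors: preprocessed behavior data, one row per recording.
    labels: e.g. "human_walking", "human_sitting", "machine", ... per row.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        feature_vectors, labels, test_size=0.2, stratify=labels, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Use the held-out test set to check the model and guide parameter tuning.
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"test accuracy: {accuracy:.3f}")
    return model
```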
  • In some embodiments, the behavior data of the client when a human operates the client may include behavior data of the same and/or different people operating the client in the same and/or different postures using the same and/or different clients, for example, data from Zhang San operating an Apple phone while walking and data from Li Si operating a Samsung phone while sitting. By collecting behavior data of different people using different clients in different postures, the training data can be effectively enriched, so that the resulting behavior data judgment model is applicable to more application scenarios, thereby improving the accuracy of human-machine recognition.
  • In some embodiments, the behavior data of the client when a machine operates the client may include behavior data of the client under software- and/or program-driven automatic clicks and inputs, and/or under mechanically driven automatic clicks, inputs, moves, and the like.
  • software and/or program driven automatic clicks, input machine operations may include automatic game play, automatic typing, automatic clicks, automatic downloads, automatic verification, automatic password entry, and the like.
  • Mechanically driven automatic click, input, and move operations can include machine operations that use devices that automatically click on the client's screen (such as scrolling devices, rocking devices, etc.) or devices that move the client (such as a rocking device on which the client can be placed or hung), thereby causing changes in the client's behavior data.
  • the method further includes:
  • When it is determined that the operator of the client is a person, specimen data may be generated from the behavior data and the corresponding analysis result and stored in the human behavior database.
  • In other words, the analysis result (that the operator is a person) and the corresponding behavior data may be turned into sample data and stored in the human behavior database; this sample data may be used for training the behavior data judgment model or for predicting behavior data.
  • Step 103 If the behavior data does not match the behavior data of the person of the preset behavior data determination model, determine that the operator of the client is a machine.
  • In some embodiments, whether the operator of the client is a machine can also be determined by judging whether the behavior data matches the machine behavior data in the preset behavior data judgment model. In some embodiments, it is also possible to determine whether the operator of the client is a person or a machine by comprehensively considering the degree of matching between the behavior data and both the human behavior data and the machine behavior data in the preset behavior data judgment model.
  • Specifically, the behavior data judgment model can be used to determine the degree of matching of the behavior data with the human behavior data and/or the machine behavior data, and based on that degree of matching the operator of the client is determined to be a person or a machine.
  • In some embodiments, if the behavior data matches the human behavior data (for example, the matching degree is greater than a preset threshold such as 70% or 80%), it can be determined that the operator of the client is a person. If the behavior data matches the machine behavior data (for example, the matching degree is greater than a preset threshold such as 70% or 80%), it can be determined that the operator of the client is a machine. In some embodiments, if the degree of matching between the behavior data and the human behavior data is greater than the degree of matching between the behavior data and the machine behavior data, it can be determined that the operator of the client is a person; if the degree of matching with the machine behavior data is greater than that with the human behavior data, it can be determined that the operator of the client is a machine. A minimal sketch of these decision rules follows.
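The following small Python function sketches both decision rules described above (threshold-based and comparative). The threshold value and the assumption that the model returns separate human and machine match scores are illustrative, not specified by the application.

```python
def decide_operator(human_score: float, machine_score: float,
                    threshold: float = 0.7) -> str:
    """Decide person vs. machine from the model's matching degrees.

    human_score / machine_score: matching degree of the behavior data with
    human behavior data and machine behavior data, both in [0, 1].
    """
    # Rule 1: a matching degree above the preset threshold decides directly.
    if human_score > threshold and human_score >= machine_score:
        return "person"
    if machine_score > threshold and machine_score > human_score:
        return "machine"
    # Rule 2: otherwise compare the two matching degrees against each other.
    return "person" if human_score > machine_score else "machine"
```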
  • In some embodiments, reverse data (i.e., machine behavior data) may include behavior data of the client under software- and/or program-driven automatic clicks and inputs, and/or under mechanically driven automatic clicks, inputs, moves, and the like.
  • Software and/or program-driven automatic click, input machine operations may include automatic game play, automatic typing, automatic click, automatic download and other machine operations.
  • Mechanically driven automatic click, input, and move operations can include machine operations that use devices that automatically click on the client's screen (such as scrolling devices, rocking devices, etc.) or devices that move the client (such as a rocking device on which the client can be placed or hung), thereby causing changes in the client's behavior data.
  • the reverse data can be obtained by simulating the above machine operations.
  • the method further includes:
  • when it is determined that the operator of the client is a machine, a preset defense process is initiated on the client.
  • the behavior data does not match the behavior data of the person of the preset behavior data judgment model (or when the operator is determined to be a machine by other methods disclosed in the present application), it can be confirmed that the machine is operating.
  • Since a machine operation is a risky operation, the defense process can be initiated, for example, re-authentication (such as fingerprint identification, password input, face recognition, etc.), or controlling the client to shut down, and so on.
  • In some embodiments, an anti-false-positive mechanism can be added. For example, to prevent a person from being mistakenly judged to be a machine, the judgment time can be extended and the judgment repeated multiple times (the result is confirmed only if the multiple judgment results are consistent), as sketched below. In some embodiments, when it is confirmed that a misjudgment has occurred, for example when a machine operation was judged to be a human operation, the corresponding behavior data may be labeled as a machine operation to increase the sample size of machine operations, and the behavior data judgment model may be retrained (or additionally trained).
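Here is a minimal sketch of the repeated-judgment idea, assuming a `judge_once` callable that re-collects behavior data and returns "person" or "machine"; the repeat count and the rule of requiring unanimous agreement are assumptions of this sketch.

```python
def judge_with_confirmation(judge_once, repeats: int = 3) -> str:
    """Repeat the human-machine judgment to reduce false positives.

    judge_once: callable that gathers fresh behavior data and returns
                "person" or "machine".
    Only a unanimous "machine" verdict is treated as confirmed; otherwise
    the operator is given the benefit of the doubt and treated as a person.
    """
    verdicts = [judge_once() for _ in range(repeats)]
    return "machine" if all(v == "machine" for v in verdicts) else "person"
```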
  • In some embodiments, the original behavior data (or the pre-processed behavior data) may also be pre-judged, for example by checking the fluctuation/deviation range of the data; when the original behavior data is not within a preset range, it can be directly judged to be a machine operation, or its degree of matching with the machine behavior data can be increased. A small sketch of such a range check follows.
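This pre-judgment step could be implemented as a simple range check like the one below; the bounds used here are placeholder values, since the application does not specify numeric ranges.

```python
import numpy as np

# Hypothetical plausible ranges for hand-held operation (placeholder values).
ROTATION_MAX = 20.0   # rad/s, per gyroscope axis
FORCE_MAX = 40.0      # m/s^2, per accelerometer axis

def pre_judge(rotation: np.ndarray, force: np.ndarray) -> bool:
    """Return True if the raw data is outside the preset range (likely a machine).

    A perfectly static client (no rotation and no force variation at all) or
    implausibly large readings both fall outside the expected human range.
    """
    no_motion = np.allclose(rotation, 0.0) and np.std(force, axis=0).max() < 1e-6
    too_large = np.abs(rotation).max() > ROTATION_MAX or np.abs(force).max() > FORCE_MAX
    return no_motion or too_large
```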
  • In some embodiments, the human-machine recognition method may further include: the behavior data judgment model is trained using specimen data in a human analysis database (also referred to as a human behavior database) whose quantity exceeds a preset threshold (in some embodiments, it may also be trained using the behavior data of the client when a machine operates the client).
  • Specifically, the behavior data judgment model can be trained using big data: the more specimen data there is, the more accurate the behavior data judgment model becomes, so specimen data can be continuously collected and the model continuously trained.
  • In summary, the method for human-machine recognition includes: obtaining behavior data of a client in a running state, wherein the behavior data includes rotation data of the client, force data of the client, and orientation data of the client; if the behavior data matches the behavior characteristics of the corresponding person in a preset behavior analysis model (also referred to as the behavior data judgment model), determining that the operator of the client is a person; if the behavior data does not match the human behavior data of the preset behavior data judgment model, determining that the operator of the client is a machine. In the normal case, when a person operates a client such as a mobile terminal, changes in position, pressure, and so on occur; if the client is operated by a machine, there is either no position and pressure data, or the position and pressure data differ greatly from human data. People and machines can thus be distinguished from a new perspective, in a way that is simple, fast, and accurate.
  • Embodiment 2 of the present invention also discloses a device for human-computer recognition. As shown in Figure 3, it includes:
  • The obtaining module 201 is configured to obtain the behavior data of the client in a running state, wherein the behavior data may include any combination of one or more of rotation data of the client, force data of the client, orientation data of the client, screen operation data of the client, input device operation data of the client, image sensing data of the client, magnetic field sensing data of the client, infrared sensing data of the client, and the like. Specifically, the behavior data can be embodied by one or more kinds of sensor data of the client.
  • the behavior data may include acceleration sensor data, gyroscope data, magnetometer data, screen sensor data, mouse operation data, touchpad operation data, touch screen operation data, keyboard operation data, light sensor data, distance sensor data, temperature sensor Any combination of one or more of data, screen pressure sensor data, infrared sensor data, camera data, spectral sensor data, and the like.
  • The identification module 202 is configured to: when the behavior data matches the behavior characteristics of the corresponding person in a preset behavior data judgment model, determine that the operator of the client is a person; and when the behavior data does not match the human behavior data of the preset behavior data judgment model, determine that the operator of the client is a machine.
  • In some embodiments, a judgment module may further be included, configured to use the behavior data judgment model to determine the degree of matching between the behavior data and human behavior data and/or machine behavior data.
  • the device further includes:
  • the defense module 203 is configured to start a preset defense process on the client when it is determined that the operator of the client is a machine.
  • When the behavior data does not match the human behavior data of the preset behavior data judgment model, it can be confirmed that a machine is operating. Since a machine operation is a risky operation, the operation on the client can be judged to be risky in this case, so the defense process needs to be initiated, such as re-authentication by other means or controlling the client to shut down.
  • the device further includes:
  • the storage module 204 is configured to generate the specimen data and store the specimen data in the human behavior database when the operator of the client is determined to be a human.
  • the storage module 204 can also be used to store a human behavior database, a machine behavior database, and/or a behavioral data determination model.
  • In some embodiments, the behavior data judgment model (also referred to as a behavior analysis model) may be trained using specimen data in the human analysis database whose quantity exceeds a preset threshold.
  • the client's rotation data can be used to reflect the client's rotation behavior.
  • the rotation data may include three components, for example, an angular velocity component on the X, Y, and Z axes in a three-dimensional space, which may be monitored by a gyroscope, an angular velocity sensor, or the like.
  • the force data of the client can be used to reflect the force behavior of the client.
  • the force data may include three components, such as accelerations on the X, Y, and Z axes in a three-dimensional space.
  • the force data can be monitored by accelerometers, gravity sensors, inertial sensors, and the like.
  • the client's orientation data can be used to reflect the client's orientation.
  • the orientation data may include three components, such as magnetic components on the X, Y, and Z axes in a three-dimensional space.
  • the orientation data can be monitored by a positioning device such as a magnetometer or a position sensor.
  • the screen operation data of the client can be used to reflect the pressing, sliding, etc. of the user (eg, a person or machine) on the client screen.
  • the screen operation data can be monitored by a screen pressure sensor.
  • For example, through the screen pressure sensor it is possible to obtain the user's input speed, the time intervals between key presses during input, the duration of presses, the pressing force, and the like, as well as sliding input (such as nine-square-grid sliding input).
  • the systems (e.g., devices) and their modules illustrated in Figures 3-5 can be implemented in a variety of ways.
  • the system and its modules can be implemented in hardware, software, or a combination of software and hardware.
  • the hardware portion can be implemented using dedicated logic; the software portion can be stored in memory and executed by an appropriate instruction execution system, such as a microprocessor or dedicated design hardware.
  • For example, the software portion may be provided as processor control code, for example on a carrier medium such as a magnetic disk, CD, or DVD-ROM, on programmable memory such as read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier.
  • the system of the present application and its modules can be implemented not only with hardware such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, and the like. It can also be implemented by, for example, software executed by various types of processors, or by a combination of the above-described hardware circuits and software (for example, firmware).
  • It should be noted that the above description of the human-machine recognition system and its modules is merely for convenience of description and does not limit the present application to the scope of the illustrated embodiments. It will be understood that, after understanding the principle of the system, those skilled in the art may arbitrarily combine the various modules, or connect them to other subsystems, without departing from this principle.
  • For example, the obtaining module 201, the identification module 202, the defense module 203, and the storage module 204 may be different modules in one system, or a single module may implement the functions of two or more of the above modules.
  • the above modules can be flexibly matched and combined as needed, and are not limited to several specific embodiments in the drawings of the specification.
  • In summary, the embodiments of the present invention disclose a method and device for human-machine recognition, the method including: acquiring behavior data of a client in a running state, wherein the behavior data includes rotation data of the client, force data of the client, and orientation data of the client; if the behavior data matches the behavior characteristics of the corresponding person in a preset behavior analysis model, determining that the operator of the client is a person; if the behavior data does not match the human behavior data of the preset behavior data judgment model, determining that the operator of the client is a machine.
  • In this way, the client's behavior data at runtime, that is, whether the client's rotation data, force data, and orientation data at runtime match the behavior characteristics of the corresponding person in the preset behavior analysis model, is used to determine whether the operator of the client is a person or a machine. Identifying a person or a machine based on the behavior characteristics of a person operating the client makes the recognition simple, accurate, and fast.
  • modules in the apparatus in the implementation scenario may be distributed in the apparatus for implementing the scenario according to the implementation scenario description, or may be correspondingly changed in one or more devices different from the implementation scenario.
  • the modules of the above implementation scenarios may be combined into one module, or may be further split into multiple sub-modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present application discloses a method and device for human-machine recognition, the method including: obtaining behavior data of a client in a running state; if the behavior data matches the behavior characteristics of the corresponding person in a preset behavior analysis model, determining that the operator of the client is a person; if the behavior data does not match the human behavior data of a preset behavior data judgment model, determining that the operator of the client is a machine. In this way, the client's behavior data at runtime, that is, whether the client's rotation data, force data, and orientation data at runtime match the behavior characteristics of the corresponding person in the preset behavior analysis model, is used to determine whether the operation on the client is performed by a person or a machine. Identifying a person or a machine based on the behavior characteristics of a person operating the client makes the recognition simple, accurate, and fast.

Description

Method and device for human-machine recognition
CROSS-REFERENCE
This application claims priority to Chinese application No. CN201710517666.8, filed on June 29, 2017, and Chinese application No. CN201710517649.4, filed on June 29, 2017. The contents of the above applications are incorporated herein by reference.
TECHNICAL FIELD
The present invention relates to the field of human-machine recognition, and in particular to a method and device for human-machine recognition.
BACKGROUND
With the rapid development of the Internet, the early, traditional game plug-in (cheating) industry, that is, the use of machines, has by now penetrated deep into the field of Internet risk-control and business security. At present, most business software on the market has not widely used, or has not used at all, this kind of anti-machine technology based on human-machine recognition, which has led to security problems in the related businesses.
Existing human-machine recognition methods mainly include techniques such as IP restriction, device fingerprinting, browser fingerprinting, verification codes, graphic verification, and mobile phone short messages.
However, restricting clients by IP address is a network-resource limitation and now has little effect: if an attacker has a large pool of IP addresses, the restriction can be bypassed fairly easily. Device fingerprinting is a physical-resource limitation: if a malicious attacker has a large number of physical devices, the client-side limit can still be bypassed fairly easily. Browser-fingerprint restriction is a request-header restriction, and in general the headers can be modified and disguised to bypass the restriction on the client.
As for recognition methods such as verification-code recognition and graphic verification, most of them can now be cracked quickly; mobile phone short messages involve a rather cumbersome process and relatively high cost.
None of the existing recognition methods can perform human-machine recognition simply and effectively, so a simple and effective human-machine recognition method is currently needed.
SUMMARY OF THE INVENTION
In view of the defects in the prior art, the present invention provides a method and device for human-machine recognition, for implementing simple and effective human-machine recognition.
Specifically, the embodiments of the present invention provide the following specific embodiments:
One embodiment of the present application provides a method for human-machine recognition implemented on a server, the server including at least one processor, a memory, and a communication platform connected to a network, the method comprising: acquiring behavior data, sent by a client, of the client under the operation of an operator, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; using a behavior data judgment model to determine whether the behavior data matches human behavior data; and determining, based on the judgment result, whether the operator of the client is a person or a machine.
In some embodiments, the behavior data includes at least one of the following: rotation data, force data, orientation data, screen operation data, and input device operation data of the client.
In some embodiments, the behavior data judgment model is trained based on behavior sample data of humans operating clients and/or behavior sample data of machines operating clients.
In some embodiments, the method further includes starting a preset defense process based on the result of determining whether the operator of the client is a person or a machine.
In some embodiments, if the behavior data matches the human behavior data, it is determined that the operator of the client is a person; if the behavior data does not match the human behavior data, it is determined that the operator of the client is a machine.
One embodiment of the present application provides a human-machine recognition system, including: an obtaining module configured to acquire behavior data, sent by a client, of the client under the operation of an operator, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client, the obtaining module being further configured to acquire a behavior data judgment model; a judgment module configured to use the behavior data judgment model to determine whether the behavior data matches human behavior data; and an identification module configured to determine, according to the judgment result, whether the operator of the client is a person or a machine.
In some embodiments, the behavior data includes at least one of the following: rotation data, force data, orientation data, screen operation data, or input device operation data of the client.
In some embodiments, the behavior data judgment model is trained based on behavior sample data of humans operating clients and/or behavior sample data of machines operating clients.
In some embodiments, the system further includes a defense module configured to start a preset defense process based on the result of determining whether the operator of the client is a person or a machine.
In some embodiments, if the behavior data matches the human behavior data, the identification module determines that the operator of the client is a person; if the behavior data does not match the human behavior data, the identification module determines that the operator of the client is a machine.
One embodiment of the present application provides a device for human-machine recognition, including a processor configured to: acquire behavior data, sent by a client, of the client under the operation of an operator, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; acquire a behavior data judgment model; use the behavior data judgment model to determine the degree of matching between the behavior data and human behavior data and/or machine behavior data; and determine, based on that degree of matching, whether the operator of the client is a person or a machine.
One embodiment of the present application provides a computer-readable storage medium storing computer instructions; when a computer reads the computer instructions in the storage medium, the computer performs a method for human-machine recognition, the method including: acquiring behavior data, sent by a client, of the client under the operation of an operator, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; acquiring a behavior data judgment model; using the behavior data judgment model to determine the degree of matching between the behavior data and human behavior data and/or machine behavior data; and determining, based on that degree of matching, whether the operator of the client is a person or a machine.
One embodiment of the present application provides a method for human-machine recognition implemented on a client, the client including at least one processor, a memory, and a communication platform connected to a network, the method comprising: acquiring behavior data of the client under the operation of an operator and sending it to a server, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; and receiving information from the server and, according to the server's judgment, based on the behavior data, of whether the operator is a person or a machine, starting or not starting a preset defense process.
In some embodiments, the behavior data includes at least one of the following: rotation data, force data, orientation data, screen operation data, and input device operation data of the client.
One embodiment of the present application provides a human-machine recognition system, including: an obtaining module configured to acquire behavior data of a client under the operation of an operator and send it to a server, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; a receiving module configured to receive information from the server; and an identification module configured to start or not start a preset defense process according to the server's judgment, based on the behavior data, of whether the operator is a person or a machine.
In some embodiments, the behavior data includes at least one of the following: rotation data, force data, orientation data, screen operation data, or input device operation data of the client.
One embodiment of the present application provides a client for human-machine recognition, including a processor configured to: acquire behavior data of the client under the operation of an operator and send it to a server, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; and receive information from the server and, according to the server's judgment, based on the behavior data, of whether the operator is a person or a machine, start or not start a preset defense process.
In some embodiments, the behavior data includes at least one of the following: rotation data, force data, orientation data, screen operation data, or input device operation data of the client.
One embodiment of the present application provides a computer-readable storage medium storing computer instructions; when a computer reads the computer instructions in the storage medium, the computer performs a method for human-machine recognition, the method including: acquiring behavior data of a client under the operation of an operator and sending it to a server, the behavior data being sensor data reflecting at least one operation behavior of the operator on the client; and receiving information from the server and, according to the server's judgment, based on the behavior data, of whether the operator is a person or a machine, starting or not starting a preset defense process.
本发明实施例还提出了一种人机识别的方法,包括:
获取客户端在运行状态下的行为数据;其中,所述行为数据包括所述客户端的旋转数据、所述客户端的受力数据、所述客户端的方位数据;
若所述行为数据与预设的行为分析模型中对应人的行为特征匹配,则确定所述客户端的操作者是人;
若所述行为数据与预设的行为数据判断模型的人的行为数据不匹配,则确定所述客户端的操作者是机器。
在一个具体的实施例中,该方法还包括:
当确定所述客户端的操作者是机器时,对所述客户端启动预设的防御流程。
在一个具体的实施例中,该方法还包括:
当确定所述客户端的操作者是人时,将所述行为数据以及对应的分析结果生成标本数据并存储在人类行为数据库中。
在一个具体的实施例中,还包括:所述行为分析模型是通过所述人类分析数据库中数量超过预设阈值的标本数据来训练得到的。
在一个具体的实施例中，所述旋转数据是通过陀螺仪监测得到的，所述受力数据是通过加速度计监测得到的，所述方位数据是通过磁力计等定位设备监测得到的。
本发明实施例还提出了一种人机识别的设备,包括:
获取模块,用于获取客户端在运行状态下的行为数据;其中,所述行为数据包括所述客户端的旋转数据、所述客户端的受力数据、所述客户端的方位数据;
识别模块，用于当所述行为数据与预设的行为分析模型中对应人的行为特征匹配时，确定所述客户端的操作者是人；以及当所述行为数据与预设的行为数据判断模型的人的行为数据不匹配时，确定所述客户端的操作者是机器。
在一个具体的实施例中,该设备还包括:
防御模块,用于当确定所述客户端的操作者是机器时,对所述客户端启动预设的防御流程。
在一个具体的实施例中,该设备还包括:
存储模块,用于当确定所述客户端的操作者是人时,将所述行为数据以及对应的分析结果生成标本数据并存储在人类行为数据库中。
在一个具体的实施例中,
所述行为分析模型是通过所述人类分析数据库中数量超过预设阈值的标本数据来训练得到的。
在一个具体的实施例中，所述旋转数据是通过陀螺仪监测得到的，所述受力数据是通过加速度计监测得到的，所述方位数据是通过磁力计等定位设备监测得到的。
本发明实施例还提出了一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行所述人机识别方法,所述方法包括:获取客户端在运行状态下的行为数据;其中,所述行为数据包括以下数据中的至少一种:所述客户端的旋转数据、所述客户端的受力数据、所述客户端的方位数据;若所述行为数据与预设的行为分析模型中对应人的行为特征匹配,则确定所述客户端的操作者是人;若所述行为数据与预设的行为数据判断模型的人的行为数据不匹配,则确定所述客户端的操作者是机器。
以此，本发明实施例公开了一种人机识别的方法和设备，其中该方法包括：获取客户端在运行状态下的行为数据；其中，所述行为数据包括所述客户端的旋转数据、所述客户端的受力数据、所述客户端的方位数据；若所述行为数据与预设的行为分析模型中对应人的行为特征匹配，则确定所述客户端的操作者是人；若所述行为数据与预设的行为数据判断模型的人的行为数据不匹配，则确定所述客户端的操作者是机器。以此通过客户端在运行时的行为数据，也即通过客户端在运行时的旋转数据、受力数据、方位数据与预设的行为分析模型中对应人的行为特征是否匹配来确定客户端的操作者是人还是机器，以此基于人操作客户端的行为特征对操作者是人还是机器进行识别，识别简单、准确、快捷。
附图说明
为了更清楚地说明本发明实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,应当理解,以下附图仅示出了本发明的某些实施例,因此不应被看作是对范围的限定,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他相关的附图。
图1为本发明实施例提出的一种人机识别的方法的流程示意图;
图2为本发明实施例提出的一种人机识别的方法的流程示意图;
图3为本发明实施例提出的一种人机识别的设备的结构示意图;
图4为本发明实施例提出的一种人机识别的设备的结构示意图;
图5为本发明实施例提出的一种人机识别的设备的结构示意图;
图6为本发明实施例提出的一种人机识别系统的应用场景示意图。
具体实施方式
在下文中,将更全面地描述本公开的各种实施例。本公开可具有各种实施例,并且可在其中做出调整和改变。然而,应理解:不存在将本公开的各种实施例限于在此公开的特定实施例的意图,而是应将本公开理解为涵盖落入本公开的各种实施例的精神和范围内的所有调整、等同物和/或可选方案。
在下文中,可在本公开的各种实施例中使用的术语“包括”或“可包括”指示所公开的功能、操作或元件的存在,并且不限制一个或更多个功能、操作或元件的增加。此外,如在本公开的各种实施例中所使用,术语“包括”、“具有”及其同源词仅意在表示特定特征、数字、步骤、操作、元件、组件或前述项的组合,并且不应被理解为首先排除一个或更多个其它特征、数字、步骤、操作、元件、组件或前述项的组合的存在或增加一个或更多个特征、数字、步骤、操作、元件、组件或前述项的组合的可能性。
在本公开的各种实施例中,表述“或”或“A或/和B中的至少一个”包括同时列出的文字的任何组合或所有组合。例如,表述“A或B”或“A或/和B中的至少一个”可包括A、可包括B或可包括A和B二者。
在本公开的各种实施例中使用的表述(诸如“第一”、“第二”等)可修饰在各种实施例中的各种组成元件,不过可不限制相应组成元件。例如,以上表述并不限制元件的顺序和/或重要性。以上表述仅用于将一个元件与其它元件区别开的目的。例如,第一用户装置和第二用户装置指示不同用户装置,尽管二者都是用户装置。例如,在不脱离本公开的各种实施例的范围的情况下,第一元件可被称为第二元件,同样地,第二元件也可被称 为第一元件。
应注意到:如果描述将一个组成元件“连接”到另一组成元件,则可将第一组成元件直接连接到第二组成元件,并且可在第一组成元件和第二组成元件之间“连接”第三组成元件。相反地,当将一个组成元件“直接连接”到另一组成元件时,可理解为在第一组成元件和第二组成元件之间不存在第三组成元件。
在本公开的各种实施例中使用的术语“用户”可指示使用电子装置的人或使用电子装置的装置(例如,人工智能电子装置、机器)。
在本公开的各种实施例中使用的术语仅用于描述特定实施例的目的并且并非意在限制本公开的各种实施例。如在此所使用,单数形式意在也包括复数形式,除非上下文清楚地另有指示。除非另有限定,否则在这里使用的所有术语(包括技术术语和科学术语)具有与本公开的各种实施例所属领域普通技术人员通常理解的含义相同的含义。所述术语(诸如在一般使用的词典中限定的术语)将被解释为具有与在相关技术领域中的语境含义相同的含义并且将不被解释为具有理想化的含义或过于正式的含义,除非在本公开的各种实施例中被清楚地限定。
图6所示为根据本申请一些实施例所示的人机识别系统(或人机识别装置)的应用场景示意图。该人机识别系统600可以是用于互联网服务的线上服务平台。例如,该人机识别系统600可以应用于游戏平台、购物平台、即时通讯平台、交易平台、娱乐平台、教育平台等一种或多种的任意组合。在一些实施例中,人机识别系统可以识别出客户端的操作者是人还是机器。其可以实现的部分技术目的包括限制(和/或发现、打击)机器 作弊、机器违规操作、机器破解等不安全行为。如图6所示,该人机识别系统600可以包含服务器610、网络620、客户端630以及数据库640。该服务器610可包含处理设备612。
在一些实施例中,服务器610可以用于处理与人机识别相关的信息和/或数据。服务器610可以是独立的服务器或者服务器组。该服务器组可以是集中式的或者分布式的(如:服务器610可以是分布系统)。在一些实施例中该服务器610可以是区域的或者远程的。例如,服务器610可通过网络620访问存储于客户端630和/或数据库640的信息和/或资料。在一些实施例中,服务器610可直接与客户端630和/或数据库640连接以访问存储于其中的信息和/或资料。在一些实施例中,服务器610可在云平台上执行。例如,该云平台可包括私有云、公共云、混合云、社区云、分散式云、内部云等中的一种或其任意组合。
在一些实施例中,服务器610可包含处理设备612。该处理设备612可处理与人机识别有关的数据和/或信息以实现一个或多个本申请中描述的功能。例如处理设备612可以通过使用行为数据判断模型处理信息和/或数据来识别操作者是人还是机器。在一些实施例中,处理设备612可包含一个或多个子处理设备(如:单芯处理设备或多核多芯处理设备)。仅仅作为范例,处理设备612可包含中央处理器(CPU)、专用集成电路(ASIC)、专用指令处理器(ASIP)、图形处理器(GPU)、物理处理器(PPU)、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、可编辑逻辑电路(PLD)、控制器、微控制器单元、精简指令集电脑(RISC)、微处理器等一种或以上任意组合。在一些实施例中,该服务器610还可以是客户端630的一个或 多个组件,服务器610可以与客户端630在同一程序内部通信,或者服务器610可以与客户端630在不同程序间通信。在一些实施例中,服务器610可以在本申请的图3-5中描述的具有一个或多个模块的计算设备上实施。
网络620可促进数据和/或信息的交换。在一些实施例中,人机识别系统600中的一个或多个组件(如:服务器610、客户端630和数据库640)可通过网络620发送数据和/或信息给人机识别系统600中的其他组件。例如,服务器610可以通过网络620从客户端630中获取用户操作客户端的行为数据。在一些实施例中,网络620可以是任意类型的有线或无线网络。例如,网络620可包括缆线网络、有线网络、光纤网络、电信网络、内部网络、网际网络、区域网络(LAN)、广域网络(WAN)、无线区域网络(WLAN)、都会区域网络(MAN)、公共电话交换网络(PSTN)、蓝牙网络、ZigBee网络、近场通讯(NFC)网络等或以上任意组合。在一些实施例中,网络620可包括一个或多个网络进出点。例如,网络620可包含有线或无线网络进出点,如基站和/或网际网络交换点620-1、620-2、...,通过这些进出点,人机识别系统600的一个或多个组件可连接到网络620上以交换数据和/或信息。
在一些实施例中,客户端可以为可移动的客户端(也即移动终端)或者固定的终端,例如手机630-1、平板电脑630-2、笔记本电脑630-3、车载装置630-4,以及台式电脑、内置电脑等等。在一些实施例中,客户端还可以包括可穿戴设备、虚拟现实设备和/或增强现实设备等,或其任意组合。在一些实施例中,可穿戴设备可以包括智能手镯、智能鞋袜、智能眼镜、智能头盔、智能手表、智能穿着、智能背包、智能附件等或其任意组 合。在一些实施例中,虚拟现实设备和/或增强实境设备可包括虚拟现实头盔、虚拟现实眼镜、虚拟现实眼罩、增强实境头盔、增强实境眼镜、增强实境眼罩等或其任意组合。例如,虚拟现实装置和/或增强现实装置可以包括Google Glass TM,RiftCon TM,Fragments TM,Gear VR TM等。在一些实施例中,客户端还可以与服务器集成为一体,或者客户端为服务器的一个或多个组件。在一些替代性实施例中,客户端可以是具有一个或多个传感器的任意设备,该一个或多个传感器可以用于获取客户端的行为数据,本申请对客户端的形式不做限制。
数据库640可存储资料和/或指令。在一些实施例中,数据库640可存储从客户端630获取的资料。在一些实施例中,数据库640可存储供服务器610执行或使用的信息和/或指令,以执行本申请中描述的示例性方法。例如,数据库640可以存储从客户端630获取与客户端的操作相关的行为数据。在一些实施例中,数据库640可以储存服务器610用来执行或使用以完成本申请中描述的示例性方法的数据及/或指令。例如,数据库640可以存储用于使用行为数据判断模型处理行为数据来识别行为数据的操作者是人或机器的指令,该指令可以由处理设备612执行。在一些实施例中,数据库640可包括大容量存储器、可移动存储器、挥发性读写存储器(例如随机存取存储器RAM)、只读存储器(ROM)等一种或以上任意组合。在一些实施例中,数据库640可在云平台上实现。例如,该云平台可包括私有云、公共云、混合云、社区云、分散式云、内部云等一种或以上任意组合。
在一些实施例中,数据库640可与网络620连接以与人机识别系统 600的一个或多个部件(如,服务器610、客户端630等)通讯。人机识别系统600的一个或多个组件可通过网络620访问存储于数据库640中的资料或指令。在一些实施例中,数据库640可直接与人机识别系统600中的一个或多个组件(如,服务器610、客户端630等)连接或通讯。在一些实施例中,数据库640可以是服务器610的一部分。在一些实施例中,人机识别系统600中的一个或多个组件(如,服务器610、客户端630等)可具有访问数据库640的权限。
实施例1
本发明实施例1公开了一种人机识别的方法,如图1以及图2所示。在一些实施例中,该人机识别方法可以由人机识别系统600实现。
如图1所示,人机识别方法可以包括:
步骤101、获取客户端在运行状态下的行为数据;具体的,获取客户端发送的所述客户端在操作者操作下的行为数据,所述行为数据可以为反映所述操作者对所述客户端的至少一个操作行为的传感器数据。
其中,行为数据可以用于反映客户端在被使用时使用者(如人或机器等)的行为。在一些实施例中,行为数据可以包括客户端的旋转数据、客户端的受力数据、客户端的方位数据、客户端的屏幕操作数据、客户端的输入设备操作数据、客户端的图像感知数据、客户端的磁场感知数据、客户端的红外感知数据等一种或多种的任意组合。具体的,该行为数据可以由客户端的一个或多个传感器数据体现。该一个或多个传感器数据可以反映操作者对客户端的一个或多个操作行为。在一些实施例中,传感器可以包括摄像头、声音传感器、温度传感器、湿度传感器、位置传感器、压 力传感器、称重传感器、流量传感器、液位传感器、距离传感器、速度传感器、加速度传感器、力矩传感器、水浸传感器、照度传感器、热敏传感器、光敏传感器、气敏传感器、力敏传感器、磁敏传感器、湿敏传感器、声敏传感器、放射线敏感传感器、色敏传感器、味敏传感器、电阻式传感器、电容式传感器、电感式传感器、压电式传感器、电磁式传感器、磁阻式传感器、光电式传感器、压阻式传感器、热电式传感器、核辐射式传感器、半导体式传感器等一种或多种的任意组合。在一些实施例中,传感器可以获取目标对象的视频、音频、图像、温度、湿度、材质、位置、检测数据、电磁数据、生物数据、重力数据、行为数据等一种或多种的任意组合。在一些实施例中,行为数据可以包括加速度传感器数据、陀螺仪数据、磁力计数据、屏幕传感器数据、鼠标操作数据、触控板操作数据、触摸屏操作数据、键盘操作数据、光传感器数据、距离传感器数据、温度传感器数据、屏幕压力传感器数据、红外传感器数据、摄像头数据、光谱传感器数据等一种或多种的任意组合。
在一些实施例中,客户端的旋转数据可以用于反映客户端的旋转行为。具体的,该旋转数据可以包括三个分量,例如可以为三维空间中的X、Y、Z轴上的角速度分量,其可以通过陀螺仪、角速度传感器等监测得到。在一些实施例中,客户端的受力数据可以用于反映客户端的受力行为。具体的,该受力数据可以包括三个分量,如三维空间中X、Y、Z轴上的加速度分量。该受力数据可以通过加速度计、重力传感器、惯性传感器等监测得到。在一些实施例中,客户端的方位数据可以用于反映客户端的方位情况(如客户端的朝向如上下、左右、前后等)。具体的,方位数据可以包括 三个分量,例如三维空间中X、Y、Z轴上的磁分量。方位数据可以通过磁力计、位置传感器等定位设备监测得到。在一些实施例中,客户端的屏幕操作数据可以用于反映使用者(如人或机器)在客户端屏幕上的按压、滑动等操作。具体的,屏幕操作数据可以通过屏幕压力传感器监测得到。例如,使用者使用屏幕键盘输入时,可以通过屏幕压力传感器获取使用者的输入速度、输入时按压键盘之间的时间差、按压时间长度、按压力度等。当进行滑动输入(例如九宫格滑动输入)时,可以获取滑过每个点(如每个格)的时间、滑过每两个点之间的时间间隔、滑动压力等。在一些实施例中,客户端的输入设备操作数据可以包括鼠标、触控板、键盘等输入设备的操作数据。例如,鼠标操作数据可以包括鼠标的移动频率、移动幅度、按压力度、按压速度、按压位置等。又例如,触控板操作数据可以包括触控板的使用频率、操作幅度、操作力度、操作速度、与手的接触面积等等。再例如,键盘操作数据可以包括打字习惯、出错率、按压力度、使用速度等等。
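例如，下面的代码草图把陀螺仪、加速度计、磁力计各自的三轴分量以及屏幕操作数据整理为一条行为数据记录；其中的类名与字段划分仅为示例性假设，并非本申请限定的数据格式。

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class BehaviorRecord:
    """一条示意性的行为数据记录（字段划分仅为举例）。"""
    rotation: List[Vec3] = field(default_factory=list)       # 陀螺仪：X、Y、Z 轴角速度分量
    force: List[Vec3] = field(default_factory=list)          # 加速度计：X、Y、Z 轴加速度分量
    orientation: List[Vec3] = field(default_factory=list)    # 磁力计：X、Y、Z 轴磁分量
    screen_events: List[Dict] = field(default_factory=list)  # 屏幕操作：按压位置、压力、时长等

record = BehaviorRecord()
record.rotation.append((0.01, -0.02, 0.00))    # 一次陀螺仪采样
record.force.append((0.05, 9.78, 0.12))        # 一次加速度计采样（含重力分量）
record.orientation.append((23.1, -5.4, 40.2))  # 一次磁力计采样
record.screen_events.append({"x": 120, "y": 860, "pressure": 0.4, "duration_ms": 85})
```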
对于目前的许多移动终端,例如手机、平板电脑、可穿戴设备、虚拟现实设备和/或增强现实设备等,由于上述的一种或多种传感器(如陀螺仪、加速度计、磁力计、屏幕压力传感器等)已经内置于移动终端中,因此可以直接获取到用于反映行为数据的传感器数据,而不需要额外的增加设备来专门获取行为数据,从而可以节约使用成本。
在一些实施例中,获取行为数据的过程可以包括客户端在运行状态下的整个过程。例如,可以获取客户端从开始使用(如开机、唤醒等)到结束使用(如关机、睡眠等)整个过程的行为数据。在一些实施例中,获 取行为数据的过程可以为客户端某两个操作(开始操作、结束操作)之间的过程。例如,在一些实施例中,开始操作可以是在登录页面开始输入密码,结束操作可以是输入密码结束。又例如,在一些实施例中,开始操作可以是开始找回密码的操作,结束操作可以是找回密码操作结束。在一些实施例中,从开始操作客户端到结束操作的过程可以是用户进入页面后的一系列操作的集合。
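下面给出客户端侧采集窗口的一个示意性草图：在“开始操作”与“结束操作”两个事件之间缓存传感器采样，窗口结束后一并上报。类名与回调方式均为示例性假设。

```python
class BehaviorCollector:
    """示意：在开始操作与结束操作之间缓存行为数据。"""

    def __init__(self):
        self.samples = []
        self.active = False

    def on_start(self):
        # 例如：用户在登录页开始输入密码
        self.samples.clear()
        self.active = True

    def on_sensor_sample(self, sample):
        # 由传感器回调周期性调用，sample 为一次传感器读数
        if self.active:
            self.samples.append(sample)

    def on_stop(self):
        # 例如：密码输入结束；返回本次操作窗口内的行为数据，供上报服务器
        self.active = False
        return list(self.samples)
```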
在一些实施例中,行为数据在进行传输时可以进行加密,以进一步提高识别的准确性,保证后续的识别过程是基于准确的未经篡改的信息来进行的。对行为数据进行加解密的算法可以包括摘要算法(例如,MD5、SHA1等)、哈希算法(例如,SM3等)、对称加密算法(例如,AES、DES、IDEA、SSF33、SM1、SM4、SM7等)、非对称加密算法(例如,SM2、SM9、RSA等)等一种或多种的任意组合。在一些实施例中,在客户端获取到行为数据之后,可以对该行为数据进行加密,并将加密后的行为数据传送给服务器,而服务器在接收到加密的行为数据之后,可以先进行解密再对解密后的行为数据进行处理。
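以对称加密为例，下面的草图使用 Python 的 cryptography 库（Fernet，底层基于 AES）演示行为数据在传输前加密、服务器端解密的大致流程；密钥的协商与分发等细节从略，仅为示意，并非对具体加解密算法的限定。

```python
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # 示意：实际部署中密钥需通过安全信道协商或分发
cipher = Fernet(key)

# 客户端：对行为数据加密后再上报
behavior_data = {"rotation": [[0.01, -0.02, 0.0]], "force": [[0.05, 9.78, 0.12]]}
token = cipher.encrypt(json.dumps(behavior_data).encode("utf-8"))

# 服务器端：先解密，再交给行为数据判断模型处理
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
assert restored == behavior_data
```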
具体的,步骤101的执行设备可以为服务器(如服务器610),在此情况下,客户端(如客户端630)在监测到自身的行为数据之后,将其上报给服务器,后续服务器会对行为数据进行处理,具体的过程可以如下:
步骤102、若行为数据与预设的行为数据判断模型中对应人的行为特征匹配,则确定客户端的操作者是人;
具体的，仍以上述为例来进行说明，服务器中可以部署机器学习端，机器学习端中包含有人类行为数据库。在该人类行为数据库中可以包含人操作客户端（例如用户操作手机）的行为特征，例如移动的幅度、手握手机的力度、移动的频率等等。在一些实施例中，机器学习端中还可以包含机器行为数据库，在该机器行为数据库中，包含了机器操作客户端的行为特征。该机器操作包含了程序控制的机器操作、机械式的机器操作，或其他非人类操作的一种或多种的组合。因此可以有效地对行为数据进行识别，识别出到底是人还是机器在进行操作。
在一些实施例中,机器学习端可以包含行为数据判断模型(包括但不限于卷积神经网络(CNN)、特征金字塔网络(FPN)等)。在一些实施例中,首先可以对原始行为数据进行预处理(具体的,预处理可以在客户端和/或服务器进行)。例如,可以通过傅里叶变换将原始行为数据转化为频域特征;又例如,可以对原始行为数据进行去噪处理(如剔除极端数据等)。在一些实施例中,可以将原始行为数据(或预处理后的原始行为数据)输入至行为数据判断模型中。行为数据判断模型可以从所输入的数据(如预处理后的原始行为数据)中提取特征向量,并基于提取出的特征向量进行判断(如进行分类)。在一些实施例中,行为数据判断模型可以存储在客户端。在一些实施例中,分类结果可以为机器操作或人类操作,以判断客户端的使用者是机器还是人。在一些替代性实施例中,分类结果可以包括机器操作或不同姿态的人类操作。例如,不同姿态的人类操作可以包括走路时的人类操作,站立时的人类操作,卧躺时的人类操作,坐下时的人类操作等等。从而不仅可以判断出使用者是机器还是人,而且当使用者是人时还可以判断出操作的姿态。
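作为预处理的一个简化示意（假设输入为等间隔采样的单轴传感器时域信号，函数名与 n_bins 取值均为举例），可以用傅里叶变换得到定长的频域特征，再送入行为数据判断模型：

```python
import numpy as np

def to_frequency_features(signal, n_bins=16):
    """示意：把一段时域传感器信号转成定长的频域特征向量。"""
    spectrum = np.abs(np.fft.rfft(np.asarray(signal, dtype=float)))
    if len(spectrum) >= n_bins:          # 截断或补零到固定长度，便于输入分类模型
        return spectrum[:n_bins]
    return np.pad(spectrum, (0, n_bins - len(spectrum)))

# 用法示意：对陀螺仪 X 轴的一段采样做频域变换，可再与其他轴的特征拼接后输入模型
gyro_x = [0.01, 0.03, -0.02, 0.05, 0.00, -0.01, 0.02, 0.04]
features = to_frequency_features(gyro_x)
```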
在一些实施例中,行为数据判断模型可以使用人类行为数据库中的 人类操作客户端时客户端的行为数据(在一些实施例中,也可以包括机器操作客户端时客户端的行为数据)进行训练。具体地,在获取人类操作客户端时客户端的行为数据和机器操作客户端时客户端的行为数据后,可以对行为数据进行标记。例如,确定为人类操作客户端时,可以将对应的行为数据标记为人类操作;确定为机器操作客户端时,可以将对应的行为数据标记为机器操作。在一些实施例中,确定为人类在不同姿态下操作客户端时,可以将对应的行为数据标记为不同姿态下的人类操作,例如,走路时的人类操作,站立时的人类操作,卧躺时的人类操作,坐下时的人类操作等等。在一些实施例中,带标记的行为数据的数量可以大于或等于预设数值,如50条、100条、500条等,以保证模型的训练效果。在一些实施例中,可以将带标记的行为数据分为训练数据集和测试数据集。具体的,可以将该训练数据集中的原始行为数据进行预处理(例如通过傅里叶变换将训练数据集中的原始行为数据转化为频域特征),再将预处理后的训练数据集中的原始行为数据输入至初始行为数据判断模型中进行训练。测试数据集可以用于测试行为数据判断模型,具体的,可以根据测试结果对行为数据判断模型的参数进行调整,以最终得到如上的行为数据判断模型。
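训练过程的一个最小化示意如下（以 scikit-learn 作为示例工具；特征维度、样本数、标注方式与模型参数均为假设，实际可替换为卷积神经网络等模型）：

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X：每行为一条（预处理后的）行为数据特征向量；y：1 表示人类操作，0 表示机器操作（示例标注）
X = np.random.rand(200, 32)
y = np.random.randint(0, 2, size=200)

# 划分训练数据集与测试数据集
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("测试集准确率：", model.score(X_test, y_test))  # 可依据测试结果调整模型参数后重新训练
```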
在一些实施例中,人类操作客户端时客户端的行为数据可以包含相同和/或不同人使用相同和/或不同客户端在相同和/或不同姿势下操作客户端的行为数据。例如,张三走路时操作苹果手机的数据和李四坐着时操作三星手机的数据。通过采集不同的人使用不同的客户端在不同姿势下操作的行为数据,可以有效的丰富训练数据,以使所获得的行为数据判断模型适用于更多的应用场景,从而提高人机识别的准确率。
在一些实施例中,机器操作客户端的行为可以包括软件和/或程序驱动下的自动点击、输入和/或机械驱动下的自动点击、输入、移动等机器操作的客户端的行为。例如,软件和/或程序驱动下的自动点击、输入的机器操作可以包括自动玩游戏、自动打字、自动点击、自动下载、自动验证、自动密码输入等机器操作。机械驱动下的自动点击、输入、移动的机器操作可以包括使用能够自动点击客户端屏幕的设备(如滚动装置、摇摆装置等),可以移动客户端的设备(如可以放置或悬挂客户端的摇摆装置等)等使客户端的行为数据发生变化的机器操作。
具体的,在一个实际的实施例中,该方法还包括:
当确定客户端的操作者是人时,可以将行为数据以及对应的分析结果生成标本数据并存储在人类行为数据库中。
为了进一步提高准确性,在确定操作手机的是人之后,可以将此次识别的分析结果(是人在操作)以及对应的行为数据生成标本数据并存储在人类行为数据库中,该标本数据可以用于行为数据判断模型的训练,或者用于对行为数据的预判。
至于若是识别结果为机器操作时,则可以进行下述操作:
步骤103、若所述行为数据与预设的行为数据判断模型的人的行为数据不匹配,则确定所述客户端的操作者是机器。
在一些实施例中,也可以通过判断行为数据与预设的行为数据判断模型中机器的行为数据是否匹配来确定客户端的操作者是否为机器。在一些实施例中,还可以通过综合判断行为数据与预设的行为数据判断模型中人的行为数据、机器的行为数据的匹配程度综合确定客户端的操作者是人 还是机器。例如可以利用行为数据判断模型判断行为数据与人类行为数据和/或机器行为数据的匹配程度;并基于行为数据与人类行为数据和/或机器行为数据的匹配程度判断所述客户端的操作者是人或机器。具体的,若行为数据与人类行为数据相匹配(例如,匹配程度大于预设阈值如70%、80%等),则可以判断客户端的操作者是人。若行为数据与机器行为数据相匹配(例如,匹配程度大于预设阈值如70%、80%等),则可以判断客户端的操作者是机器。在一些实施例中,若行为数据与人类行为数据的匹配程度大于行为数据与机器行为数据的匹配程度,则可以判断客户端的操作者是人。若行为数据与机器行为数据的匹配程度大于行为数据与人类行为数据的匹配程度,则可以判断客户端的操作者是机器。
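上述综合判断逻辑可以用如下草图示意，其中 human_score、machine_score 分别表示行为数据与人类行为数据、机器行为数据的匹配程度，阈值 0.7 仅为举例：

```python
def decide(human_score, machine_score, threshold=0.7):
    """示意：基于与人类/机器行为数据的匹配程度综合判断操作者是人还是机器。"""
    if human_score >= threshold and human_score >= machine_score:
        return "human"
    if machine_score >= threshold and machine_score > human_score:
        return "machine"
    # 两者都不显著时，以匹配程度较高者为准；也可在此触发更严格的二次验证
    return "human" if human_score >= machine_score else "machine"
```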
在一些实施例中,也可以基于识别为机器的行为数据生成反向数据(即机器行为数据),并存储在机器行为数据库中作为反面的数据存在,以此可以更好的进行正反对比,提高识别的精确度。例如,反面数据可以包括软件和/或程序驱动下的自动点击、输入和/或机械驱动下的自动点击、输入、移动等机器操作的客户端的行为数据。软件和/或程序驱动下的自动点击、输入的机器操作可以包括自动玩游戏、自动打字、自动点击、自动下载等机器操作。机械驱动下的自动点击、输入、移动的机器操作可以包括使用可以自动点击客户端屏幕的设备(如滚动装置、摇摆装置等),可以移动客户端的设备(如可以放置或悬挂客户端的摇摆装置等)等,使客户端的行为数据发生变化的机器操作。在一些实施例中,反面数据可以通过模拟上述机器操作来获取。
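反面数据也可以通过程序模拟机器操作来生成。下面是一个粗略的示意（数值与生成方式均为假设）：软件自动点击、设备静置时，旋转与方位读数往往近乎恒定，受力数据仅剩重力分量，与人手持设备时的自然波动明显不同。

```python
import numpy as np

def synth_machine_samples(n=100):
    """示意：模拟“软件自动点击、设备静置”场景下的传感器读数，作为机器行为的反面样本。"""
    gyro = np.zeros((n, 3))                          # 设备静置：角速度几乎为零
    accel = np.tile([0.0, 9.81, 0.0], (n, 1))        # 受力数据仅有恒定重力分量
    accel += np.random.normal(0, 1e-3, accel.shape)  # 加入极小的传感器噪声
    mag = np.tile([25.0, -6.0, 40.0], (n, 1))        # 方位恒定不变
    return np.hstack([gyro, accel, mag])             # 每行即一条机器行为样本
```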
在一个具体的实施例中,还包括:
当确定客户端的操作者是机器时,对客户端启动预设的防御流程。
具体的，当行为数据与预设的行为数据判断模型的人的行为数据不匹配时（或是通过本申请披露的其他方法判定操作者是机器时），则可以确认是机器在操作。由于机器操作为具有风险的操作，在此情况下，可以确定客户端的运行是有风险的，因此需要进行防御，可以启动防御流程，例如重新进行其他方式的认证（如指纹识别、密码输入、脸部识别等），或者控制客户端关机等等。
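防御流程的触发可以示意如下；其中的二次认证、锁定等处理仅为示例性占位，实际接口取决于具体业务系统。

```python
class ClientSession:
    """示例性的客户端会话占位。"""

    def require_secondary_auth(self):
        print("触发二次认证：指纹 / 密码 / 人脸识别等")

    def lock(self):
        print("限制会话、强制下线或控制客户端关机")

def on_identification_result(result: str, client: ClientSession):
    """示意：根据识别结果启动或不启动预设的防御流程。"""
    if result == "machine":
        client.require_secondary_auth()
        client.lock()
    # result == "human" 时正常放行，不启动防御流程
```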
在一些实施例中,为防止发生误判,以提升用户体验,可以增加防误判机制。例如,为防止将人误判为机器,可以延长判断时间、多次重复执行判断(如多次判断结果均一致时再确认判断)等。在一些实施例中,当确认发生误判后,例如当把机器操作判断为人类操作时,可以将对应行为数据标记为机器操作,以增大机器操作的样本量,并可以对行为数据判断模型进行重新(或增强)训练。在一些实施例中,为了进一步减少误判和/或提升判断效率,还可以对原始行为数据(或预处理后的行为数据)进行预判,例如,判断数据的波动/偏离幅度范围,当原始行为数据不在预设范围内时,可以将该原始行为数据直接判断为机器操作,或者增加其与机器行为数据的匹配度。
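对原始行为数据的预判可以是一个简单的幅度范围检查，例如（范围数值仅为假设）：

```python
def prejudge(samples, low=-50.0, high=50.0):
    """示意：原始行为数据超出预设幅度范围时，直接倾向判为机器操作。"""
    flat = [value for sample in samples for value in sample]
    if not flat or min(flat) < low or max(flat) > high:
        return "machine-suspected"  # 也可以改为提高其与机器行为数据的匹配度
    return "pass"                   # 在范围内则交由行为数据判断模型进一步判断
```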
在一个具体的实施例中，人机识别方法还可以包括：行为数据判断模型是通过人类分析数据库（亦可称为人类行为数据库）中数量超过预设阈值的标本数据（在一些实施例中，也可以包括机器操作客户端时客户端的行为数据）来训练得到的。具体的，该行为数据判断模型可以是通过大数据来训练得到的，具体的标本数据的数量越多，训练得到的行为数据判断模型越准确，以此可以不断获取到标本数据，并不断进行训练。
以此，本发明实施例1公开了一种人机识别的方法，包括：获取客户端在运行状态下的行为数据；其中，行为数据包括客户端的旋转数据、客户端的受力数据、客户端的方位数据；若行为数据与预设的行为分析模型（亦可称为行为数据判断模型）中对应人的行为特征匹配，则确定客户端的操作者是人；若行为数据与预设的行为数据判断模型的人的行为数据不匹配，则确定客户端的操作者是机器。这是基于：正常状态下，人操作客户端（例如移动终端）时，会产生位置、压力等方面的变化；而若是通过机器来对客户端进行操作，则或者不会产生位置以及压力上的数据，或者位置以及压力上的数据与人为操作的数据截然不同。以此从一个全新的角度对人与机器进行识别，简单、快捷、准确且不易被规避。
实施例2
为了对本发明进行进一步的说明,本发明实施例2还公开了一种人机识别的设备。如图3所示,包括:
获取模块201,用于获取客户端在运行状态下的行为数据;其中,该行为数据可以包括客户端的旋转数据、客户端的受力数据、客户端的方位数据、客户端的屏幕操作数据、客户端的输入设备操作数据、客户端的图像感知数据、客户端的磁场感知数据、客户端的红外感知数据等一种或多种的任意组合。具体的,该行为数据可以由客户端的一个或多个传感器数据体现。例如,行为数据可以包括加速度传感器数据、陀螺仪数据、磁力计数据、屏幕传感器数据、鼠标操作数据、触控板操作数据、触摸屏操作数据、键盘操作数据、光传感器数据、距离传感器数据、温度传感器数 据、屏幕压力传感器数据、红外传感器数据、摄像头数据、光谱传感器数据等一种或多种的任意组合。
识别模块202，用于当所述行为数据与预设的行为数据判断模型中对应人的行为特征匹配时，确定所述客户端的操作者是人；以及当所述行为数据与预设的行为数据判断模型的人的行为数据不匹配时，确定所述客户端的操作者是机器。
在一些实施例中,还可以包括判断模块,用于利用行为数据判断模型判断行为数据与人类行为数据和/或机器行为数据的匹配程度。
在一个具体的实施例中,如图4所示,该设备还包括:
防御模块203,用于当确定所述客户端的操作者是机器时,对所述客户端启动预设的防御流程。
具体的，当行为数据与预设的行为数据判断模型的人的行为数据不匹配时，则可以确认是机器在操作。由于机器操作为具有风险的操作，在此情况下，可以确定客户端的运行是有风险的，因此需要进行防御，可以启动防御流程，例如重新进行其他方式的认证，或者控制所述客户端关机等等。
在一个具体的实施例中,如图5所示,该设备还包括:
存储模块204,用于当确定所述客户端的操作者是人时,将所述行为数据以及对应的分析结果生成标本数据并存储在人类行为数据库中。所述存储模块204还可以用于存储人类行为数据库、机器行为数据库和/或行为数据判断模型。
在一个具体的实施例中，所述行为数据判断模型（或称作行为分析模型）可以是通过所述人类分析数据库中数量超过预设阈值的标本数据来训练得到的。
在一个具体的实施例中,客户端的旋转数据可以用于反映客户端的旋转行为。具体的,该旋转数据可以包括三个分量,例如可以为三维空间中的X、Y、Z轴上的角速度分量,其可以通过陀螺仪、角速度传感器等监测得到。在一些实施例中,客户端的受力数据可以用于反映客户端的受力行为。具体的,该受力数据可以包括三个分量,如三维空间中X、Y、Z轴上的加速度。该受力数据可以通过加速度计、重力传感器、惯性传感器等监测得到。在一些实施例中,客户端的方位数据可以用于反映客户端的方位情况。具体的,方位数据可以包括三个分量,例如三维空间中X、Y、Z轴上的磁分量。方位数据可以通过磁力计、位置传感器等定位设备监测得到。在一些实施例中,客户端的屏幕操作数据可以用于反映使用者(如人或机器)在客户端屏幕上的按压、滑动等操作。具体的,屏幕操作数据可以通过屏幕压力传感器监测得到。例如,使用者使用屏幕键盘输入时,可以通过屏幕压力传感器获取使用者的输入速度、输入时按压键盘之间的时间差、按压时间长度、按压力度等。当进行滑动输入(例如九宫格滑动输入)时,可以获取滑过每个点(如每个格)的时间、滑过每两个点之间的时间间隔、滑动压力等。
应当理解,图3-5所示的系统(如设备)及其模块可以利用各种方式来实现。例如,在一些实施例中,系统及其模块可以通过硬件、软件或者软件和硬件的结合来实现。其中,硬件部分可以利用专用逻辑来实现;软件部分则可以存储在存储器中,由适当的指令执行系统,例如微处理器 或者专用设计硬件来执行。本领域技术人员可以理解上述的方法和系统可以使用计算机可执行指令和/或包含在处理器控制代码中来实现,例如在诸如磁盘、CD或DVD-ROM的载体介质、诸如只读存储器(固件)的可编程的存储器或者诸如光学或电子信号载体的数据载体上提供了这样的代码。本申请的系统及其模块不仅可以有诸如超大规模集成电路或门阵列、诸如逻辑芯片、晶体管等的半导体、或者诸如现场可编程门阵列、可编程逻辑设备等的可编程硬件设备的硬件电路实现,也可以用例如由各种类型的处理器所执行的软件实现,还可以由上述硬件电路和软件的结合(例如,固件)来实现。
需要注意的是,以上对于认证系统及其模块的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可能在不背离这一原理的情况下,对各个模块进行任意组合,或者构成子系统与其他模块连接。例如,在一些实施例中,获取模块201、识别模块202、防御模块203、存储模块204可以是一个系统中的不同模块,也可以是一个模块实现上述的两个或两个以上模块的功能。以上的各模块可以根据需要进行灵活的搭配与组合,并不限于说明书附图中的几个具体的实施例。
以此，本发明实施例公开了一种人机识别的方法和设备，其中该方法包括：获取客户端在运行状态下的行为数据；其中，所述行为数据包括所述客户端的旋转数据、所述客户端的受力数据、所述客户端的方位数据；若所述行为数据与预设的行为分析模型中对应人的行为特征匹配，则确定所述客户端的操作者是人；若所述行为数据与预设的行为数据判断模型的人的行为数据不匹配，则确定所述客户端的操作者是机器。以此通过客户端在运行时的行为数据，也即通过客户端在运行时的旋转数据、受力数据、方位数据与预设的行为分析模型中对应人的行为特征是否匹配来确定客户端的操作者是人还是机器，以此基于人操作客户端的行为特征对操作者是人还是机器进行识别，识别简单、准确、快捷。
本领域技术人员可以理解附图只是一个优选实施场景的示意图,附图中的模块或流程并不一定是实施本发明所必须的。
本领域技术人员可以理解实施场景中的装置中的模块可以按照实施场景描述进行分布于实施场景的装置中,也可以进行相应变化位于不同于本实施场景的一个或多个装置中。上述实施场景的模块可以合并为一个模块,也可以进一步拆分成多个子模块。
上述本发明实施例序号仅仅用于描述，不代表实施场景的优劣。
以上公开的仅为本发明的几个具体实施场景,但是,本发明并非局限于此,任何本领域的技术人员能思之的变化都应落入本发明的保护范围。

Claims (30)

  1. 一种在服务器上实现的人机识别的方法,所述服务器包括至少一个处理器、存储器和连接到网络的通信平台,所述方法包括:
    获取客户端发送的所述客户端在操作者操作下的行为数据,所述行为数据为反映所述操作者对所述客户端的至少一个操作行为的传感器数据;
    利用行为数据判断模型判断所述行为数据与人类行为数据是否匹配;
    基于判断结果确定所述客户端的操作者是人或机器。
  2. 根据权利要求1所述的方法,其特征在于,所述行为数据包括以下数据中的至少一个:所述客户端的旋转数据、受力数据或方位数据。
  3. 根据权利要求1所述的方法,其特征在于,所述行为数据判断模型是基于人操作客户端的行为样本数据训练得到的。
  4. 根据权利要求1所述的方法,其特征在于,所述方法还包括基于判断所述客户端的操作者是人或机器的结果启动预设的防御流程。
  5. 根据权利要求1所述的方法,其特征在于,
    若所述行为数据与所述人类行为数据相匹配,则判断所述客户端的操作者是人;
    若所述行为数据与所述人类行为数据不匹配,则判断所述客户端的操作者是机器。
  6. 一种人机识别的系统,其特征在于,包括:
    获取模块,用于获取客户端发送的所述客户端在操作者操作下的行为数据,所述行为数据为反映所述操作者对所述客户端的至少一个操作行为的传感器数据;
    判断模块,用于利用行为数据判断模型判断所述行为数据与人类行为数据是否匹配;
    识别模块,用于基于判断结果确定所述客户端的操作者是人或机器。
  7. 根据权利要求6所述的系统,其特征在于,所述行为数据包括以下数据中的至少一个:所述客户端的旋转数据、受力数据或方位数据。
  8. 根据权利要求6所述的系统,其特征在于,所述行为数据判断模型是基于人操作客户端的行为样本数据训练得到的。
  9. 根据权利要求6所述的系统,其特征在于,所述系统还包括防御模块,用于基于判断所述客户端的操作者是人或机器的结果启动预设的防御流程。
  10. 根据权利要求6所述的系统,其特征在于,
    若所述行为数据与所述人类行为数据相匹配,则所述识别模块判断所述客户端的操作者是人;
    若所述行为数据与所述人类行为数据不匹配,则所述识别模块判断所述客户端的操作者是机器。
  11. 一种人机识别的装置,其特征在于,包括处理器,所述处理器被配置为:
    获取客户端发送的所述客户端在操作者操作下的行为数据,所述行为数据为反映所述操作者对所述客户端的至少一个操作行为的传感器数据;
    利用行为数据判断模型判断所述行为数据与人类行为数据是否匹配;
    基于判断结果确定所述客户端的操作者是人或机器。
  12. 一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行人机识别的方法,所述方法包括:
    获取客户端发送的所述客户端在操作者操作下的行为数据,所述行为数据为反映所述操作者对所述客户端的至少一个操作行为的传感器数据;
    利用行为数据判断模型判断所述行为数据与人类行为数据是否匹配;
    基于判断结果确定所述客户端的操作者是人或机器。
  13. 一种在客户端上实现的人机识别的方法,所述客户端包括至少一个处理器、存储器和连接到网络的通信平台,所述方法包括:
    获取客户端在操作者操作下的行为数据并发送给服务器,所述行为数据为反映所述操作者对所述客户端的至少一个操作行为的传感器数据;
    接收服务器信息,根据所述服务器基于所述行为数据确定所述操作者是人或机器的判断结果启动或不启动预设的防御流程。
  14. 根据权利要求13所述的方法,其特征在于,所述行为数据包括以下数据中的至少一个:所述客户端的旋转数据、受力数据或方位数据。
  15. 一种人机识别的系统,其特征在于,包括:
    获取模块,用于获取客户端在操作者操作下的行为数据并发送给服务器,所述行为数据为反映所述操作者对所述客户端的至少一个操作行为的传感器数据;
    接收模块,用于接收服务器的信息;
    识别模块,用于根据所述服务器基于所述行为数据确定所述操作者是人或机器的判断结果启动或不启动预设的防御流程。
  16. 根据权利要求15所述的系统,其特征在于,所述行为数据包括以下数据中的至少一个:所述客户端的旋转数据、受力数据或方位数据。
  17. 一种人机识别的客户端,其特征在于,包括处理器,所述处理器被配置为:
    获取客户端在操作者操作下的行为数据并发送给服务器,所述行为数据为反映所述操作者对所述客户端的至少一个操作行为的传感器数据;
    接收服务器信息，根据所述服务器基于所述行为数据确定所述操作者是人或机器的判断结果启动或不启动预设的防御流程。
  18. 根据权利要求17所述的客户端,其特征在于,所述行为数据包括以下数据中的至少一个:所述客户端的旋转数据、受力数据或方位数据。
  19. 一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行人机识别的方法,所述方法包括:
    获取客户端在操作者操作下的行为数据并发送给服务器,所述行为数据为反映所述操作者对所述客户端的至少一个操作行为的传感器数据;
    接收服务器信息,根据所述服务器基于所述行为数据确定所述操作者是人或机器的判断结果启动或不启动预设的防御流程。
  20. 一种人机识别的方法,其特征在于,包括:
    获取客户端在运行状态下的行为数据;其中,所述行为数据包括以下数据中的至少一种:所述客户端的旋转数据、所述客户端的受力数据、所述客户端的方位数据;
    若所述行为数据与预设的行为分析模型中对应人的行为特征匹配,则确定所述客户端的操作者是人;
    若所述行为数据与预设的行为数据判断模型的人的行为数据不匹配,则确定所述客户端的操作者是机器。
  21. 根据权利要求20所述的方法,其特征在于,还包括:
    当确定所述客户端的操作者是机器时,对所述客户端启动预设的防御流程。
  22. 根据权利要求20所述的方法,其特征在于,还包括:
    当确定所述客户端的操作者是人时,将所述行为数据以及对应的分析结果生成标本数据并存储在人类行为数据库中。
  23. 根据权利要求20所述的方法,其特征在于,还包括:所述行为分析模型是通过人类分析数据库中数量超过预设阈值的标本数据来训练得到的。
  24. 根据权利要求20所述的方法，其特征在于，所述旋转数据是通过陀螺仪监测得到的，所述受力数据是通过加速度计监测得到的，所述方位数据是通过磁力计定位设备监测得到的。
  25. 一种人机识别的设备,其特征在于,包括:
    获取模块,用于获取客户端在运行状态下的行为数据;其中,所述行为数据包括以下数据中的至少一种:所述客户端的旋转数据、所述客户端的受力数据、所述客户端的方位数据;
    识别模块,用于当所述行为数据与预设的行为分析模型中对应人的行为特征匹配时,确定所述客户端的操作者是人;以及当所述行为数据与预设的行为数据判断模型的人的行为数据不匹配时,确定所述客户端的操作者是机器。
  26. 根据权利要求25所述的设备,其特征在于,还包括:
    防御模块,用于当确定所述客户端的操作者是机器时,对所述客户端启动预设的防御流程。
  27. 根据权利要求25所述的设备,其特征在于,还包括:
    存储模块,用于当确定所述客户端的操作者是人时,将所述行为数据以及对应的分析结果生成标本数据并存储在人类行为数据库中。
  28. 根据权利要求25所述的设备,其特征在于,
    所述行为分析模型是通过人类分析数据库中数量超过预设阈值的标本数据来训练得到的。
  29. 根据权利要求25所述的设备，其特征在于，所述旋转数据是通过陀螺仪监测得到的，所述受力数据是通过加速度计监测得到的，所述方位数据是通过磁力计定位设备监测得到的。
  30. 一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行人机识别的方法,所述方法包括:
    获取客户端在运行状态下的行为数据;其中,所述行为数据包括以下数据中的至少一种:所述客户端的旋转数据、所述客户端的受力数据、所述客户端的方位数据;
    若所述行为数据与预设的行为分析模型中对应人的行为特征匹配,则确定所述客户端的操作者是人;
    若所述行为数据与预设的行为数据判断模型的人的行为数据不匹配,则确定所述客户端的操作者是机器。
PCT/CN2018/093553 2017-06-29 2018-06-29 一种人机识别的方法和设备 WO2019001558A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710517649.4A CN107294981B (zh) 2017-06-29 2017-06-29 一种认证的方法和设备
CN201710517666.8A CN107330311A (zh) 2017-06-29 2017-06-29 一种人机识别的方法和设备
CN201710517649.4 2017-06-29
CN201710517666.8 2017-06-29

Publications (1)

Publication Number Publication Date
WO2019001558A1 true WO2019001558A1 (zh) 2019-01-03

Family

ID=64741153

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2018/093553 WO2019001558A1 (zh) 2017-06-29 2018-06-29 一种人机识别的方法和设备
PCT/CN2018/093618 WO2019001566A1 (zh) 2017-06-29 2018-06-29 一种认证的方法和设备

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/093618 WO2019001566A1 (zh) 2017-06-29 2018-06-29 一种认证的方法和设备

Country Status (1)

Country Link
WO (2) WO2019001558A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723348A (zh) * 2019-03-18 2020-09-29 腾讯科技(深圳)有限公司 人机识别方法、装置、设备及存储介质
CN112580596A (zh) * 2020-12-30 2021-03-30 网易(杭州)网络有限公司 一种数据处理的方法和装置
CN113900889A (zh) * 2021-09-18 2022-01-07 百融至信(北京)征信有限公司 一种智能识别app人为操作的方法及系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11899765B2 (en) 2019-12-23 2024-02-13 Dts Inc. Dual-factor identification system and method with adaptive enrollment
CN111241518B (zh) * 2020-01-03 2023-03-24 北京字节跳动网络技术有限公司 用户验证方法、装置、设备和介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530543A (zh) * 2013-10-30 2014-01-22 无锡赛思汇智科技有限公司 一种基于行为特征的用户识别方法及系统
US20140108653A1 (en) * 2012-09-25 2014-04-17 Huawei Technologies Co., Ltd. Man-Machine Interaction Data Processing Method and Apparatus
CN106155298A (zh) * 2015-04-21 2016-11-23 阿里巴巴集团控股有限公司 人机识别方法及装置、行为特征数据的采集方法及装置
CN106487747A (zh) * 2015-08-26 2017-03-08 阿里巴巴集团控股有限公司 用户识别方法、系统、装置及处理方法、装置
CN107294981A (zh) * 2017-06-29 2017-10-24 苏州锦佰安信息技术有限公司 一种认证的方法和设备
CN107330311A (zh) * 2017-06-29 2017-11-07 苏州锦佰安信息技术有限公司 一种人机识别的方法和设备
CN107491991A (zh) * 2017-08-15 2017-12-19 上海精数信息科技有限公司 基于晃动的人机识别方法及应用其的广告投放方法和系统

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104703180A (zh) * 2013-12-09 2015-06-10 江良洲 基于移动互联网智能终端的一种隐形多重认证方法
CN105827406A (zh) * 2015-01-05 2016-08-03 腾讯科技(深圳)有限公司 一种身份验证方法、装置和系统
CN104778387B (zh) * 2015-04-23 2017-12-08 西安交通大学 基于人机交互行为的跨平台身份认证系统及方法
CN105049421A (zh) * 2015-06-24 2015-11-11 百度在线网络技术(北京)有限公司 基于用户使用行为特征的认证方法、服务器、终端及系统
US10289819B2 (en) * 2015-08-12 2019-05-14 Kryptowire LLC Active authentication of users
CN106790129A (zh) * 2016-12-27 2017-05-31 中国银联股份有限公司 一种身份认证的方法及装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140108653A1 (en) * 2012-09-25 2014-04-17 Huawei Technologies Co., Ltd. Man-Machine Interaction Data Processing Method and Apparatus
CN103530543A (zh) * 2013-10-30 2014-01-22 无锡赛思汇智科技有限公司 一种基于行为特征的用户识别方法及系统
CN106155298A (zh) * 2015-04-21 2016-11-23 阿里巴巴集团控股有限公司 人机识别方法及装置、行为特征数据的采集方法及装置
CN106487747A (zh) * 2015-08-26 2017-03-08 阿里巴巴集团控股有限公司 用户识别方法、系统、装置及处理方法、装置
CN107294981A (zh) * 2017-06-29 2017-10-24 苏州锦佰安信息技术有限公司 一种认证的方法和设备
CN107330311A (zh) * 2017-06-29 2017-11-07 苏州锦佰安信息技术有限公司 一种人机识别的方法和设备
CN107491991A (zh) * 2017-08-15 2017-12-19 上海精数信息科技有限公司 基于晃动的人机识别方法及应用其的广告投放方法和系统

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723348A (zh) * 2019-03-18 2020-09-29 腾讯科技(深圳)有限公司 人机识别方法、装置、设备及存储介质
CN112580596A (zh) * 2020-12-30 2021-03-30 网易(杭州)网络有限公司 一种数据处理的方法和装置
CN112580596B (zh) * 2020-12-30 2024-02-27 杭州网易智企科技有限公司 一种数据处理的方法和装置
CN113900889A (zh) * 2021-09-18 2022-01-07 百融至信(北京)征信有限公司 一种智能识别app人为操作的方法及系统
CN113900889B (zh) * 2021-09-18 2023-10-24 百融至信(北京)科技有限公司 一种智能识别app人为操作的方法及系统

Also Published As

Publication number Publication date
WO2019001566A1 (zh) 2019-01-03

Similar Documents

Publication Publication Date Title
WO2019001558A1 (zh) 一种人机识别的方法和设备
US12032668B2 (en) Identifying and authenticating users based on passive factors determined from sensor data
US10885306B2 (en) Living body detection method, system and non-transitory computer-readable recording medium
US11256793B2 (en) Method and device for identity authentication
US9813908B2 (en) Dynamic unlock mechanisms for mobile devices
EP3849130A1 (en) Method and system for biometric verification
Li et al. Unobservable re-authentication for smartphones.
US10257229B1 (en) Systems and methods for verifying users based on user motion
CN114144781A (zh) 身份验证和管理系统
KR102320723B1 (ko) 사용자를 인증하는 방법 및 시스템
US9436930B2 (en) Method and apparatus for recognizing image content
US9596087B2 (en) Token authentication for touch sensitive display devices
US9686274B2 (en) Informed implicit enrollment and identification
CN104298910A (zh) 便携式电子装置及互动式人脸登入方法
TWI793418B (zh) 圖像處理方法和系統
KR101798890B1 (ko) 일상 생활 습관의 행동 요소를 활용한 웨어러블 디바이스 사용자 인증 방법 및 그 시스템
US11500977B2 (en) User authentication in a three-dimensional (3D) alternative reality software application
US20140196156A1 (en) Capturing and manipulating content using biometric data
US20240037995A1 (en) Detecting wrapped attacks on face recognition
US12081543B2 (en) System and method for user authentication for information security
WO2016183891A1 (zh) 一种信息处理方法、电子设备及计算机存储介质
CN113518061B (zh) 人脸识别中的数据传输方法、设备、装置、系统及介质
US20220335248A1 (en) Systems for authenticating user permissions and methods of use thereof
US10986087B2 (en) Motion based authentication
US20210209217A1 (en) Method and system for authentication using mobile device id based two factor authentication

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18823890

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18823890

Country of ref document: EP

Kind code of ref document: A1