WO2024093797A1 - Human-machine identification method, apparatus, device and computer-readable storage medium - Google Patents

Human-machine identification method, apparatus, device and computer-readable storage medium

Info

Publication number
WO2024093797A1
Authority
WO
WIPO (PCT)
Prior art keywords
trajectory
dynamic real-time
real person
human
Prior art date
Application number
PCT/CN2023/126875
Other languages
English (en)
French (fr)
Inventor
龙超
卢兴沄
张炜
Original Assignee
中移(杭州)信息技术有限公司
中国移动通信集团有限公司
Priority date
Filing date
Publication date
Application filed by 中移(杭州)信息技术有限公司 and 中国移动通信集团有限公司
Publication of WO2024093797A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication

Definitions

  • the present application relates to the technical field of human-machine verification, and in particular to a human-machine identification method, device, equipment and computer-readable storage medium.
  • SMS or email verification codes: these are transmitted in plain text with fixed content, simple and easy to identify, but offer low security. Another approach requires users to input, click, select, or calculate additional verification information presented in pictures.
  • The verification information generated by this technique is fixed and easy to crack, making it difficult to ensure the accuracy of human-machine identification. A further approach collects features of the user's mouse or touch behavior data and performs human-machine identification with a model trained on a free knowledge base.
  • This technique also relies on generating fixed verification information, and machine learning can completely simulate a real person's behavior, thereby reducing the accuracy of human-machine identification.
  • the embodiments of the present application hope to provide a human-machine identification method that can ensure the security of data and improve the accuracy of human-machine identification.
  • the present application provides a method for human-machine identification, including:
  • the first trajectory is matched with the dynamic real-time trajectory to obtain a matching result
  • a recognition result is determined based on the matching result, and the recognition result is used to characterize whether the first trajectory is triggered by a real person's operation.
  • the present application provides a human-machine identification device, including:
  • the first generation module is used to generate a dynamic real-time trajectory when it is determined that human-machine identification is to be performed;
  • the output module is used to output the dynamic real-time trajectory;
  • a first matching module is used to match the first trajectory with the dynamic real-time trajectory to obtain a matching result if a first trajectory generated based on the dynamic real-time trajectory is detected;
  • the first determination module is used to determine, if the generation end condition of the dynamic real-time trajectory is detected, a recognition result based on the matching result, where the recognition result is used to indicate whether the first trajectory is triggered by a real person's operation.
  • the present application provides a human-machine identification device, including:
  • a memory used for storing executable human-machine identification instructions
  • the processor is used to implement the human-machine identification method provided in the embodiment of the present application when executing the executable human-machine identification instructions stored in the memory.
  • An embodiment of the present application provides a computer-readable storage medium storing computer-executable human-machine identification instructions, where the computer-executable human-machine identification instructions are configured to implement the human-machine identification method provided by the embodiments of the present application when executed.
  • the embodiment of the present application provides a method, device, equipment and computer-readable storage medium for human-machine identification.
  • First, a dynamic real-time trajectory is generated; then, the dynamic real-time trajectory is output, and if a first trajectory generated based on the dynamic real-time trajectory is detected, the first trajectory is matched with the dynamic real-time trajectory to obtain a matching result; finally, if the generation end condition of the dynamic real-time trajectory is detected, a recognition result is determined based on the matching result, and the recognition result is used to characterize whether the first trajectory is triggered by a real person's operation.
  • Because the generation of the dynamic real-time trajectory is not fixed and cannot be predicted or learned, the security of the data can be guaranteed; and because, when the generation end condition of the dynamic real-time trajectory is detected, the recognition result is determined from the matching result between the detected first trajectory and the dynamic real-time trajectory, the accuracy of human-machine identification can be improved.
  • FIG1 is a schematic diagram of a flow chart of a human-machine identification method provided in an embodiment of the present application
  • FIG2 is a schematic diagram of a flow chart of another method for human-machine identification provided in an embodiment of the present application.
  • FIG3 is a schematic diagram of a trajectory shape provided in an embodiment of the present application.
  • FIG4 is a schematic diagram of a flow chart of a human-machine identification method for dynamically generating trajectories provided in an embodiment of the present application;
  • FIG5 is a schematic diagram of a flow chart of a human-machine verification method based on dynamic real-time trajectory provided in an embodiment of the present application;
  • FIG6 is a schematic diagram of the structure of a human-machine identification device provided in an embodiment of the present application.
  • FIG7 is a schematic diagram of the composition structure of human-machine identification equipment provided in an embodiment of the present application.
  • The terms "first" and "second" involved are merely used to distinguish similar objects and do not represent a specific ordering of the objects. It can be understood that "first" and "second" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
  • the human-machine identification methods currently available in the industry mainly include the following:
  • SMS and email verification code methods: In practice, users in different countries have different preferences for SMS and email verification. American users are more accustomed to email, while users in China prefer SMS. Users in many other countries use email more commonly, and email verification is frequent when registering on foreign or global websites such as Adobe, LinkedIn, Twitter, and Facebook. However, SMS and email verification codes are transmitted in plain text with fixed content; they are simple, easy to identify, and easily intercepted and captured, which can lead to leaks, so their security is low.
  • Image information verification methods: For example, displaying text information through images and asking users to enter the correct text in sequence, prompting users to select certain text, dragging images to complete puzzles, or obtaining the correct calculation result from an expression shown in an image. In these schemes the information is fixed: once the verification information is generated, it does not change. With the development of modern artificial intelligence, machine learning capability has been greatly enhanced, making such fixed information easier to identify and crack, thereby disrupting normal service order. Even where websites raise the difficulty of cracking by making the images harder to recognize, current deep-learning models have surpassed ordinary people in image recognition, so identifying humans and machines through image information verification is no longer very effective.
  • Mouse and touch drag trajectory matching methods: By collecting features of the user's mouse and touch behavior data, a model is trained on a free knowledge base to determine whether the current operation comes from a real person or a machine. This method also generates fixed verification information, and machine learning can completely simulate the behavior of real people, so it cannot guarantee the accuracy of human-machine recognition.
  • an embodiment of the present application provides a human-machine identification method, which can ensure the security of data and improve the accuracy of human-machine identification.
  • FIG1 is a flow chart of a human-machine identification method provided by the embodiment of the present application. The method includes the following steps:
  • the human-machine identification device can be a server device or a terminal device.
  • If the human-machine identification device is a server, the instruction to perform human-machine identification can be sent to it by the terminal of the user communicating with the server. If the human-machine identification device is a terminal, the terminal can detect, according to the user's operation, that a human-machine identification verification process is needed.
  • the human-machine identification device can generate a dynamic real-time trajectory using a pre-set dynamic real-time trajectory generation method, or the human-machine identification device can generate a dynamic real-time trajectory according to a random generation method.
  • the dynamic real-time trajectory is output for dynamic verification.
  • When the human-machine identification device is a server device, the server device outputs the dynamic real-time trajectory to the terminal for display on the terminal.
  • When the human-machine identification device is a terminal, the terminal outputs the dynamic real-time trajectory to its display interface so that the user can perform the corresponding operations.
  • the server device receives the first trajectory sent by the terminal, and the first trajectory can be generated by the user operating the dynamic real-time trajectory displayed on the terminal.
  • the terminal displays a dynamic real-time trajectory, and the user controls the mouse to draw the corresponding trajectory according to the dynamic real-time trajectory to obtain the first trajectory, or draws according to the dynamic real-time trajectory through touch operation with a finger on the display interface of the terminal to obtain the first trajectory.
  • the human-machine identification device matches the first trajectory with the dynamic real-time trajectory to obtain a matching result.
  • The final recognition result is determined according to the matching result obtained in the previous step; that is, it is determined what kind of target object generated the first trajectory for the dynamic real-time trajectory.
  • the target object can be a real person or a machine.
  • First, a dynamic real-time trajectory is generated; then, the dynamic real-time trajectory is output, and if a first trajectory generated based on the dynamic real-time trajectory is detected, the first trajectory is matched with the dynamic real-time trajectory to obtain a matching result; finally, if the generation end condition of the dynamic real-time trajectory is detected, a recognition result is determined based on the matching result, and the recognition result is used to characterize whether the first trajectory is triggered by a real person's operation.
  • Because the generation of the dynamic real-time trajectory is not fixed and cannot be predicted or learned, the security of the data can be guaranteed; and because, when the generation end condition of the dynamic real-time trajectory is detected, the recognition result is determined from the matching result between the detected first trajectory and the dynamic real-time trajectory, the accuracy of human-machine identification can be improved.
  • a flow chart of a human-machine identification method provided in an embodiment of the present application is provided, and the method includes the following steps:
  • the time to achieve human-machine identification can be when the user enters the human-machine identification interface and sends a human-machine verification request to the server through the terminal, or when the server actively sends a human-machine verification instruction to the terminal in a scene where human-machine identification is required.
  • the preset trajectory set can be a plurality of trajectories pre-stored in the database, and the trajectory length of each trajectory is less than the preset trajectory length threshold. Some trajectories in the trajectory set can also be the smallest trajectories that cannot be divided, such as a trajectory facing right horizontally and a trajectory facing downward vertically.
  • the simple one-way principle is the principle for determining the shortest path between the starting point and the end point.
  • a dynamic real-time trajectory can be generated based on the trajectory set, a simple one-way principle and a Bezier curve algorithm.
  • a trajectory can be randomly selected from the trajectory set, and the selected trajectory can be processed in combination with the Bezier curve algorithm.
  • a dynamic real-time trajectory with a simple path is generated based on the processed trajectory in a preset direction. Since the dynamic real-time trajectory is based on a trajectory randomly selected from the trajectory set and generated in combination with the Bezier curve generation algorithm, the dynamic real-time trajectory is not fixed and cannot be predicted.
  • When the human-machine recognition device is a server, after obtaining the dynamic real-time trajectory it can send the dynamic real-time trajectory to the terminal, and after receiving it, the terminal can display the dynamic real-time trajectory in its own display area.
  • After sending the dynamic real-time trajectory to the terminal, or while sending it, the server can send a dynamic real-time trajectory simulation instruction to the terminal to instruct the target object to simulate the dynamic real-time trajectory; the target object can be a real person or a machine.
  • a first trajectory can be generated in the terminal, and the first trajectory is the trajectory generated after the target object simulates the dynamic real-time trajectory.
  • the first trajectory can be sent to the server so that the server obtains the first trajectory corresponding to the dynamic real-time trajectory.
  • The corresponding trajectory lengths of the first trajectory and the dynamic real-time trajectory in the same direction (such as the vertical or horizontal direction), and their corresponding generation rates at the same position, etc., can be obtained, and then the respective error values of the corresponding trajectory lengths, generation rates, etc., can be determined.
  • the respective error values are compared with the corresponding preset error values. If the respective error values are less than the corresponding preset error values, it can be determined that the first trajectory matches the dynamic real-time trajectory; if at least one error value is greater than or equal to the corresponding preset error value, it can be determined that the first trajectory does not match the dynamic real-time trajectory.
  • the recognition result is used to indicate whether the first trajectory is triggered by a human operation.
  • the generation end condition of the dynamic real-time trajectory may be that the trajectory length of the dynamic real-time trajectory is greater than or equal to a preset trajectory length threshold, and the generation time of the dynamic real-time trajectory is greater than or equal to a preset time threshold.
  • the recognition result is used to characterize whether the first trajectory sent by the terminal is triggered by a real person's operation. Exemplarily, if a dynamic real-time trajectory is generated based on a trajectory set, a simple one-way principle, and a Bezier curve algorithm, the timing starts from the starting point of the dynamic real-time trajectory and ends at the end point of the dynamic real-time trajectory.
  • If the total time length is 1 second, the trajectory length of the dynamic real-time trajectory is 1.2 cm, the preset trajectory length threshold is 1 cm, and the preset time length threshold is 1 second, it can be determined that the generation end condition of the dynamic real-time trajectory is reached; that is, there is no need to continue generating the dynamic real-time trajectory.
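The generation end condition described here can be sketched as a simple check. Below is a minimal illustration using the example values from the text; the constant and function names are assumptions for this sketch, not part of the patent:

```python
# Illustrative check of the generation end condition: generation of the
# dynamic real-time trajectory ends once BOTH the accumulated trajectory
# length and the accumulated generation time reach their preset thresholds.
# The constant names and units (cm, seconds) are assumptions for this sketch.

LENGTH_THRESHOLD_CM = 1.0  # preset trajectory length threshold
TIME_THRESHOLD_S = 1.0     # preset time length threshold

def generation_ended(total_length_cm: float, total_time_s: float) -> bool:
    """Return True when there is no need to keep generating the trajectory."""
    return (total_length_cm >= LENGTH_THRESHOLD_CM
            and total_time_s >= TIME_THRESHOLD_S)

# The example above: 1.2 cm of trajectory generated over 1 second.
print(generation_ended(1.2, 1.0))  # True: both thresholds are reached
```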
  • the recognition result is used to characterize whether the first trajectory is triggered by a real person's operation. If the matching result is that the first trajectory matches the dynamic real-time trajectory, it can be determined that the first trajectory is triggered by a real person's operation. If the matching result is that the first trajectory and the dynamic real-time trajectory do not match, it can be determined that the first trajectory is triggered by a machine operation.
  • A preset trajectory set is obtained, and a dynamic real-time trajectory is generated based on the trajectory set, the simple one-way principle and the Bezier curve algorithm; then, the dynamic real-time trajectory is output, and when a first trajectory generated based on the dynamic real-time trajectory is detected, the first trajectory is matched with the dynamic real-time trajectory to obtain a matching result; finally, when it is determined that the generation end condition of the dynamic real-time trajectory is reached, the recognition result is determined based on the matching result, and the recognition result is used to characterize whether the first trajectory sent by the terminal is triggered by a real person's operation.
  • After matching the first trajectory with the dynamic real-time trajectory to obtain a matching result, that is, after step S204, the following steps S301 to S304 may also be performed; each step is described below.
  • a new dynamic real-time trajectory can be generated with the end point of the currently generated dynamic real-time trajectory as the starting point.
  • a trajectory can also be randomly selected from the trajectory set and generated based on a simple one-way principle and a Bezier curve algorithm.
  • the new dynamic real-time trajectory is output and processed.
  • When the human-machine recognition device is a server device, the new dynamic real-time trajectory can be sent to the terminal, and after the terminal receives the new dynamic real-time trajectory, it can be displayed in the terminal's corresponding display area.
  • When the human-machine recognition device is a terminal, the new dynamic real-time trajectory is directly output to the display area corresponding to the terminal for display.
  • a second trajectory is generated, and the second trajectory is a trajectory generated after the target object simulates the new dynamic real-time trajectory.
  • the terminal can send the second trajectory to the server so that the server obtains the second trajectory corresponding to the new dynamic real-time trajectory.
  • the new dynamic real-time trajectory generated with the end point of the dynamic real-time trajectory as the starting point may be the same as or different from the dynamic real-time trajectory.
  • the second trajectory can be matched with the new dynamic real-time trajectory.
  • the matching method of the first trajectory and the dynamic real-time trajectory is similar to the matching method of the second trajectory and the new dynamic real-time trajectory.
  • For example, the respective trajectory lengths of the second trajectory and the new dynamic real-time trajectory in the same direction, their corresponding generation rates at the same position, etc., can be obtained, and whether the second trajectory matches the new dynamic real-time trajectory is determined according to the respective error values of the trajectory lengths, generation rates, etc.
  • the new dynamic real-time trajectory may include one or more segments.
  • the second trajectory corresponding to the new dynamic real-time trajectory when the generated new dynamic real-time trajectory includes multiple segments, the second trajectory corresponding to the new dynamic real-time trajectory also includes multiple segments, that is, each segment of the new dynamic real-time trajectory corresponds to a segment of the second trajectory, and the number of segments of the new dynamic real-time trajectory is the same as the number of segments of the second trajectory.
  • Each segment of the new dynamic real-time trajectory corresponds to a matching result
  • the recognition result of the first trajectory and the second trajectory sent by the terminal can be determined according to each matching result, wherein each matching result includes the matching result of the first trajectory and the dynamic real-time trajectory, and the matching result of each new dynamic real-time trajectory and the corresponding second trajectory.
  • step S304 can also be implemented by the following steps S3041 to S3045, and each step is described below.
  • the generated dynamic real-time trajectory includes a dynamic real-time trajectory and a new dynamic real-time trajectory
  • the total trajectory length value of the dynamic real-time trajectory and the new dynamic real-time trajectory may be the sum of the trajectory length of the dynamic real-time trajectory and the trajectory length of the new dynamic real-time trajectory
  • the total generation time of the dynamic real-time trajectory and the new dynamic real-time trajectory may be the sum of the generation time of the dynamic real-time trajectory and the generation time of the new dynamic real-time trajectory.
  • The generated dynamic real-time trajectory includes a dynamic real-time trajectory and multiple new dynamic real-time trajectories, wherein the starting point of the first new dynamic real-time trajectory is the end point of the dynamic real-time trajectory, and the multiple new dynamic real-time trajectories are connected in sequence.
  • the total trajectory length value of the dynamic real-time trajectory and the new dynamic real-time trajectory can be the sum of the trajectory length of the dynamic real-time trajectory and the trajectory lengths of the multiple new dynamic real-time trajectories
  • the total generation time of the dynamic real-time trajectory and the new dynamic real-time trajectory can be the sum of the generation time of the dynamic real-time trajectory and the generation time of the multiple new dynamic real-time trajectories.
  • the preset length threshold may be a preset trajectory length threshold
  • the preset time threshold may be a preset time length threshold.
  • the human-machine recognition result can be effectively determined based on the matching result of the dynamic real-time trajectory and the trajectory sent by the terminal.
  • the total trajectory length value reaching the preset length threshold value may be that the total trajectory length value is greater than or equal to the preset length threshold value
  • the total generation time reaching the preset time threshold value may be that the total generation time is greater than or equal to the preset time threshold value.
  • For example, if three of five matching results indicate a successful match, the determined proportion value is 60%.
  • the preset ratio value may be a pre-set ratio value, such as 100%, 80%, etc. If it is determined that the ratio value is greater than or equal to the preset ratio value, step S3044 is executed; otherwise, step S3045 is executed.
  • S3044 Determine that the recognition result is that the first trajectory and the second trajectory are triggered by a real person's operation.
  • the ratio value is 85% and the preset ratio value is 80%, that is, the ratio value is greater than the preset ratio value, it can be determined that the first trajectory and the second trajectory sent by the terminal are triggered by real-person operation. In addition, if the ratio value is equal to the preset ratio value, it can also be determined that the first trajectory and the second trajectory sent are triggered by real-person operation.
  • S3045 Determine that the recognition result is that the first trajectory and the second trajectory are triggered by machine operation.
  • the ratio value is 60% and the preset ratio value is 80%, that is, the ratio value is less than the preset ratio value, it can be determined that the first trajectory and the second trajectory sent by the terminal are triggered by machine operation.
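The ratio-based decision of steps S3043 to S3045 can be sketched as follows. This is a hedged illustration; the 80% preset ratio value comes from the example above rather than being mandated by the method, and the function name is an assumption:

```python
# Illustrative sketch of the ratio-based recognition decision: compute the
# proportion of successful per-segment matching results and compare it with
# a preset ratio value.

PRESET_RATIO = 0.80  # preset ratio value from the example above

def recognize(match_results):
    """Classify the operator from a list of per-segment match booleans."""
    ratio = sum(match_results) / len(match_results)
    # Ratio >= preset ratio: trajectories triggered by a real person (S3044);
    # otherwise: trajectories triggered by a machine (S3045).
    return "real person" if ratio >= PRESET_RATIO else "machine"

print(recognize([True, True, True, True, False]))   # 80% -> "real person"
print(recognize([True, True, True, False, False]))  # 60% -> "machine"
```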
  • The step of "generating a dynamic real-time trajectory based on a trajectory set, a simple one-way principle and a Bezier curve algorithm" in step S202 can be implemented through the following steps S401 to S404; each step is described below.
  • the current starting point may be the starting point for generating a dynamic real-time trajectory, and the current starting point may be any position in the area for displaying the dynamic real-time trajectory in the display interface of the terminal.
  • the current starting point may be located at the upper left corner, lower left corner, etc. in the rectangular area.
  • the preset movement direction may be a preset direction for generating a dynamic real-time trajectory, and the preset direction may be any extension direction starting from the current starting point, for example, horizontally to the right, vertically downward, etc.
  • S402 Determine a first target trajectory from the trajectory set based on a simple one-way principle, a current starting point and a preset movement direction.
  • a trajectory that meets the preset direction of motion and satisfies the simple unidirectional principle can be determined from the trajectory set, that is, the first target trajectory.
  • the starting point of the first target trajectory is the same as the current starting point
  • the extension direction of the first target trajectory is the same as the preset direction of motion
  • the first target trajectory is the trajectory with the simplest path among all trajectories that meet the preset direction of motion.
  • the trajectories in the trajectory set that meet the preset direction of motion include three trajectories a, b, and c, the paths of trajectories a and c are complex, and the path of trajectory b is relatively simple relative to the paths of trajectories a and c, then trajectory b can be used as the first target trajectory.
  • S403 Determine the midpoint of the first target trajectory, and obtain the start point and end point of the first target trajectory.
  • the starting point and the end point of the first target trajectory can be obtained, and the midpoint of the first target trajectory can be determined.
  • the line connecting the starting point and the end point of the first target trajectory can be first determined, and the midpoint on the line and the line segment passing through the midpoint and perpendicular to the line are determined, and the intersection of the line segment perpendicular to the line and the first target trajectory is determined as the midpoint of the first target trajectory.
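The midpoint construction just described can be sketched over sampled points. Choosing the sampled point closest to the chord's perpendicular bisector is an assumed discrete stand-in for the exact intersection; the function name is illustrative:

```python
# Illustrative sketch: connect the start and end points of the target
# trajectory (the chord), and take as the trajectory "midpoint" the point
# where the perpendicular bisector of that chord crosses the trajectory.
# Here we pick the sampled point whose projection onto the chord direction,
# measured from the chord midpoint, is closest to zero.

def trajectory_midpoint(points):
    (x0, y0), (x1, y1) = points[0], points[-1]
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2  # midpoint of the chord
    dx, dy = x1 - x0, y1 - y0              # chord direction

    def chord_projection(p):
        # Zero exactly on the perpendicular bisector of the chord.
        return (p[0] - cx) * dx + (p[1] - cy) * dy

    return min(points, key=lambda p: abs(chord_projection(p)))

arc = [(0, 0), (1, 1), (2, 1.4), (3, 1), (4, 0)]
print(trajectory_midpoint(arc))  # (2, 1.4): the point above the chord midpoint
```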
  • S404 Move the midpoint to a preset position point, and generate a dynamic real-time trajectory based on the starting point, the preset position point and the end point.
  • the midpoint of the first target trajectory can be moved to a preset position point, and the preset position point can be located at any position above or below the midpoint.
  • the path formed by the starting point of the first target trajectory, the preset position point, and the end point of the first target trajectory can be determined as a dynamic real-time trajectory.
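Steps S401 to S404 can be sketched end to end with a quadratic Bezier curve, using the straight chord from the starting point as an assumed first target trajectory and a random vertical displacement as the preset position point; all names, ranges, and sample counts here are illustrative, not the patent's exact implementation:

```python
# Hedged sketch of steps S401-S404: displace the midpoint of the target
# trajectory to a preset position point, then generate the dynamic real-time
# trajectory through starting point -> preset position point -> end point
# as a quadratic Bezier curve (the preset position point acts as the
# control point).

import random

def quadratic_bezier(p0, p1, p2, steps=20):
    """Sample a quadratic Bezier curve with control point p1."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        pts.append((x, y))
    return pts

def generate_dynamic_trajectory(start, end):
    """Move the chord midpoint to a random preset position point (S404)."""
    mid_x, mid_y = (start[0] + end[0]) / 2, (start[1] + end[1]) / 2
    preset_point = (mid_x, mid_y + random.uniform(-30, 30))
    return quadratic_bezier(start, preset_point, end)

trajectory = generate_dynamic_trajectory((0, 0), (100, 0))
print(len(trajectory))                # 21 sampled points
print(trajectory[0], trajectory[-1])  # the endpoints are preserved
```

Because the preset position point is randomized on every invocation, the generated trajectory differs each time, which is what makes it hard to predict or learn.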
  • In some embodiments, the following steps S501 to S502 may also be executed.
  • S501 Determine a second target trajectory from a trajectory set based on a simple one-way principle, a current starting point and a preset movement direction.
  • the starting point of the second target trajectory is the same as the current starting point.
  • The second target trajectory and the first target trajectory may be the same or different. For example, if only one target trajectory meeting the simple unidirectional principle and the preset motion direction is determined from the trajectory set, then the first target trajectory and the second target trajectory are the same; if multiple such target trajectories are determined from the trajectory set, the first target trajectory and the second target trajectory may be the same or different.
  • S502 Determine the second target trajectory as a dynamic real-time trajectory.
  • The second target trajectory may be directly used as the dynamic real-time trajectory without performing transformation processing on the second target trajectory.
  • Matching the first trajectory with the dynamic real-time trajectory in step S204 to obtain a matching result can also be achieved through the following steps S2041 to S2045; each step is described below.
  • the first trajectory data may include a generation rate, a trajectory length, etc. of the first trajectory;
  • the second trajectory data may also include a generation rate, a trajectory length, etc. of the dynamic real-time trajectory;
  • the generation rate may indicate the speed of the target object in the process of simulating the dynamic real-time trajectory;
  • the trajectory length may indicate the length corresponding to the press-drag trajectory of the target object in the process of simulating the dynamic real-time trajectory.
  • S2042 Determine an error value between the first trajectory data and the second trajectory data.
  • trajectory data of the same type may be compared and calculated to obtain error values corresponding to trajectory data of different types. For example, if the trajectory data includes a generation rate and a trajectory length, the determined error value may include an error value between the generation rates corresponding to the first trajectory and the dynamic real-time trajectory, respectively, and an error value between the trajectory lengths corresponding to the first trajectory and the dynamic real-time trajectory, respectively.
  • step S2044 when the error value is less than a preset threshold, step S2044 may be executed; otherwise, step S2045 may be executed.
  • the error values obtained may also include multiple types. Accordingly, each type of error value corresponds to a preset threshold. For example, if the error value includes an error value corresponding to a generation rate and an error value corresponding to a trajectory length, the preset threshold may include a preset generation rate threshold and a preset trajectory length threshold. When determining whether the error value is less than the preset threshold, the error value corresponding to the generation rate is compared with the preset generation rate threshold, and the error value corresponding to the trajectory length is compared with the preset trajectory length threshold.
  • the error value being less than the preset threshold may mean that all types of error values are less than their corresponding preset thresholds; if at least one type of error value is greater than or equal to its corresponding preset threshold, it can be determined that the error value is greater than or equal to the preset threshold.
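The per-type error check of steps S2041 to S2045 can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the field names (`generation_rate`, `trajectory_length`) and the threshold values are assumptions.

```python
# Hypothetical sketch of steps S2041-S2045: compare the first trajectory's
# data with the dynamic real-time trajectory's data type by type; the match
# succeeds only if every per-type error is below its preset threshold.
def match_trajectories(first_data, realtime_data, thresholds):
    for key, threshold in thresholds.items():
        error = abs(first_data[key] - realtime_data[key])
        if error >= threshold:
            return False  # at least one error type fails, so the match fails
    return True

# Illustrative data: generation rate and trajectory length of each trajectory.
first = {"generation_rate": 0.92, "trajectory_length": 310.0}
realtime = {"generation_rate": 1.00, "trajectory_length": 300.0}
thresholds = {"generation_rate": 0.2, "trajectory_length": 25.0}
matched = match_trajectories(first, realtime, thresholds)
```

Because every per-type error here is under its threshold, the match succeeds; raising any single error above its threshold fails the whole match, mirroring the "all types below threshold" rule above.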
  • S2045 Determine that the matching result is that the first trajectory and the dynamic real-time trajectory fail to match.
  • a successful match indicates that the first trajectory generated after the target object simulates the dynamic real-time trajectory and the dynamic real-time trajectory have a high degree of similarity, for example, the degree of similarity is greater than or equal to a preset similarity threshold;
  • a failed match indicates that the first trajectory generated after the target object simulates the dynamic real-time trajectory and the dynamic real-time trajectory have a low degree of similarity, for example, the degree of similarity is less than a preset similarity threshold.
  • determining to perform human-machine identification in step S201 can also be implemented through the following steps S601 to S603, and each step is described below.
  • S601 Acquire a reference trajectory and a pre-collected historical trajectory.
  • the reference trajectory may be a pre-stored trajectory generated by a real person's operation, and there may be multiple reference trajectories, which are pre-stored in the database.
  • the historical trajectory may be a trajectory generated by the target object in the terminal by mouse click, movement, touch drag, etc. before entering the human-machine verification interface. After collecting the historical trajectory, the terminal may send the historical trajectory to the server.
  • S602 Acquire historical hardware environment data and current hardware environment data of the target device.
  • the target device is a device that generates the first trajectory.
  • the historical hardware data of the terminal may be the hardware environment data of the terminal pre-stored when the trajectory sent by the terminal is determined to be a trajectory triggered by a real person's operation;
  • the current hardware environment data may be the hardware environment data of the terminal acquired when the human-machine recognition is currently being performed.
  • the hardware environment data may be data of a mobile device, browser, touch screen, etc., such as mouse events, cookie logs, page windows, IP addresses, Mac addresses, network environments, comprehensive access frequencies, geographic locations, historical records, etc.
  • the historical trajectory and the reference trajectory satisfy the matching condition that the historical trajectory and the reference trajectory are the same, or the similarity between the historical trajectory and the reference trajectory is greater than a preset similarity threshold; the current hardware environment data and the historical hardware environment data satisfy the matching condition that all types of hardware environment data and the corresponding historical hardware environment data are the same, or partially the same. In some embodiments, if it is determined that the historical trajectory and the reference trajectory satisfy the matching condition, and the current hardware environment data and the historical hardware environment data also satisfy the matching condition, then it can be determined that the human-machine recognition result is a real person operation.
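The pre-check described above can be sketched as below. Everything here is illustrative: the point-wise similarity measure, the `need_identification` helper, and the rule that a single matching hardware field counts as "partially the same" are assumptions, since the patent leaves these details open.

```python
# Hypothetical sketch of the pre-check in steps S601-S603: full human-machine
# identification is skipped only when the historical trajectory matches a
# reference trajectory AND the hardware environment data match.
def trajectory_similarity(a, b):
    # Toy point-wise similarity, purely illustrative.
    matches = sum(1 for p, q in zip(a, b) if p == q)
    return matches / max(len(a), len(b))

def need_identification(historical, reference, current_env, historical_env,
                        sim_threshold=0.9):
    traj_ok = (historical == reference
               or trajectory_similarity(historical, reference) > sim_threshold)
    # Hardware condition: fields all (or, as assumed here, partially) the same.
    env_ok = any(current_env.get(k) == v for k, v in historical_env.items())
    # Both conditions satisfied -> treat as a real person, no identification.
    return not (traj_ok and env_ok)
```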
  • if the historical trajectory and the reference trajectory do not meet the matching conditions; or the current hardware environment data and the historical hardware environment data do not meet the matching conditions; or neither pair meets the matching conditions, it can be determined to perform human-machine identification.
  • FIG. 4 shows a flow chart of a human-machine identification method with dynamically generated trajectories provided in an embodiment of the present application.
  • the human-machine identification method for dynamically generating trajectories provided in an embodiment of the present application can be implemented through the following steps S701 to S706 , and each step is described below.
  • the pre-collected trajectory features may be unconscious, irregular, and purposeless sliding operations of the behavior object collected by the client device (corresponding to the aforementioned terminal) during the initialization of the human-machine recognition system, including mouse trajectories and touch drag trajectories (corresponding to the pre-collected historical trajectories in other embodiments), etc.
  • the behavior object may be a real person or a machine, etc.
  • the hardware device environment data can be the hardware environment data of the client device (mobile device, browser, touch screen, etc.) obtained by the server during the human-machine recognition process (corresponding to the current hardware environment data in other embodiments), such as mouse events, cookies, page windows, IP addresses, Mac addresses, network environments, comprehensive access frequencies, geographic locations, etc.
  • the pre-collected trajectory features can be matched with the trajectory features when it was determined to be a real person operation in the previous human-machine recognition process, and the currently obtained hardware device environment data can be compared with the hardware device environment data when it was determined to be a real person operation in the previous human-machine recognition process.
  • the preliminary judgment result can be determined to be non-real person operation.
  • a travel behavior set (corresponding to a trajectory set in other embodiments) is obtained, and a dynamic real-time trajectory is generated based on the travel behavior set, a simple one-way principle, and a Bezier curve generation algorithm.
  • the travel behavior set may be multiple shorter paths pre-stored in the database.
  • the meta-behavior may be an indivisible behavior (minimum behavior), such as rightward in the horizontal direction and downward in the vertical direction.
  • when generating a dynamic real-time trajectory, the starting point and the ending point can be placed on opposite sides of a square area; for example, if the starting point is the upper left corner, the ending point is the lower right corner; if the starting point is on the left side, the ending point is on the right side, etc.
  • a quadratic Bezier curve generation algorithm may be used. Each time, a travel behavior is randomly selected from the behavior set based on the current starting point to satisfy the simple one-way principle. Then, the midpoint between the starting point and the end point of the travel behavior is taken as the control point. The control point is moved to a random position point to generate the Bezier curve path for the current stage, and the Bezier curve path is used as a dynamic real-time trajectory.
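The stage generation described above can be sketched with a quadratic Bezier curve: take the midpoint between the selected travel behavior's start and end as the control point, move it to a random position, and sample the curve. The `jitter` range and the sampling step count below are illustrative assumptions.

```python
import random

def quadratic_bezier(p0, p1, p2, steps=20):
    # Sample a quadratic Bezier curve with start p0, control p1, end p2.
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

def generate_stage_path(start, end, jitter=30.0, rng=random):
    # Move the midpoint of start->end to a random nearby control point,
    # then emit the Bezier curve path for the current stage.
    mid = ((start[0] + end[0]) / 2, (start[1] + end[1]) / 2)
    control = (mid[0] + rng.uniform(-jitter, jitter),
               mid[1] + rng.uniform(-jitter, jitter))
    return quadratic_bezier(start, control, end)

path = generate_stage_path((0.0, 0.0), (100.0, 40.0))
```

The random control point is what makes the generated path unpredictable while the endpoints still respect the chosen travel behavior.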
  • S703 Obtain a first trajectory generated after the behavior object simulates the dynamic real-time trajectory, and match the dynamic real-time trajectory with the first trajectory to obtain a matching result.
  • the trajectory generated after the behavior object performs a follow-up operation on the dynamic real-time trajectory, i.e., the first trajectory
  • the following simulation data corresponding to the first trajectory may be collected, such as reaction rate, reaction interval, pressing and dragging trajectory, speed, etc.
  • the dynamic real-time trajectory may be matched with the first trajectory to obtain a matching result.
  • the tracking simulation data of the first trajectory and the trajectory data corresponding to the dynamic real-time trajectory can be input into a pre-trained neural network model for similarity analysis and processing to obtain a matching result between the dynamic real-time trajectory and the first trajectory.
  • the first trajectory that successfully matches the dynamic real-time trajectory can also be added to the validation set as validation data to verify the pre-trained neural network model; it can also be added to the training set as training data to train the pre-trained neural network model again, so as to ensure that the neural network model obtains more accurate matching results when performing similarity analysis on other dynamic real-time trajectories and the simulated trajectories of the corresponding behavior objects.
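The patent leaves the pre-trained neural network model unspecified; as a hedged stand-in, each trajectory can be reduced to a small feature vector and scored with cosine similarity. The chosen features (total length and mean per-step speed) are assumptions for illustration only.

```python
import math

def feature_vector(points):
    # Illustrative trajectory features: total length and mean per-step speed.
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    speed = length / max(len(points) - 1, 1)
    return [length, speed]

def cosine_similarity(u, v):
    # Standard cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

In the patent's design, a learned model replaces this hand-rolled score, and successfully matched pairs are fed back as training and validation data.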
  • the trajectory parameters of the dynamic real-time trajectory may include the generation time of the dynamic real-time trajectory, the trajectory length, etc., and accordingly, the judgment target value may include a preset time length threshold, a preset trajectory length threshold, etc.
  • the generation time of the generated dynamic real-time trajectory is less than the preset time length threshold, and/or the trajectory length of the dynamic real-time trajectory is less than the preset trajectory length threshold, it can be determined that the trajectory parameters of the dynamic real-time trajectory do not meet the judgment target value (corresponding to "determining that the generation end condition of the dynamic real-time trajectory is not met" in other embodiments), at this time, it is necessary to continue to generate a new dynamic real-time trajectory, and the new dynamic real-time trajectory takes the end point of the dynamic real-time trajectory as the starting point.
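The chaining rule above (each new trajectory starts at the previous end point, and generation continues until the thresholds are reached) can be sketched as follows; the `generate_stage` callback and the stub stage generator are hypothetical placeholders.

```python
def build_reference_trajectory(generate_stage, time_threshold, length_threshold,
                               start=(0.0, 0.0)):
    # Keep generating stages, each starting at the previous stage's end point,
    # until BOTH the total generation time and the total trajectory length
    # reach their preset thresholds (the end condition described above).
    stages, total_time, total_length = [], 0.0, 0.0
    while total_time < time_threshold or total_length < length_threshold:
        points, duration, length = generate_stage(start)
        stages.append(points)
        total_time += duration
        total_length += length
        start = points[-1]  # new stage begins at the previous end point
    return stages, total_time, total_length

# Stub stage generator for demonstration only: a 10-unit rightward segment
# that takes 1 second to produce.
def _stub_stage(start):
    end = (start[0] + 10.0, start[1])
    return [start, end], 1.0, 10.0

stages, total_time, total_length = build_reference_trajectory(_stub_stage, 3.0, 25.0)
```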
  • the generation method of the new dynamic real-time trajectory is similar to the generation method of the dynamic real-time trajectory, which will not be repeated here.
  • the human-machine recognition result can be determined based on the matching result of the dynamic real-time trajectory and the first trajectory. If the dynamic real-time trajectory matches the first trajectory, it can be determined that the first trajectory is triggered by a real person's operation; otherwise, it can be determined that the first trajectory is generated by a machine operation.
  • a trajectory generated by the behavior object after simulation based on the new dynamic real-time trajectory, i.e., a second trajectory
  • the second trajectory also needs to be matched with the new dynamic real-time trajectory to obtain a corresponding matching result.
  • the generated new dynamic real-time trajectory may include at least one, and the dynamic real-time trajectory and each new dynamic real-time trajectory are connected in sequence to form a reference dynamic real-time trajectory.
  • the trajectory parameters of the reference dynamic real-time trajectory may include the time spent on generating the reference dynamic real-time trajectory, that is, the sum of the generation time of the dynamic real-time trajectory and the generation time of each new dynamic real-time trajectory.
  • the trajectory parameters of the reference dynamic real-time trajectory may also include the trajectory length of the reference trajectory, that is, the sum of the trajectory length of the dynamic real-time trajectory and the trajectory length of each new dynamic real-time trajectory.
  • the human-machine recognition result can be determined based on the matching result of the dynamic real-time trajectory and the first trajectory, and the matching result of each new dynamic real-time trajectory and the corresponding second trajectory.
  • the proportion of successful matching results among all matching results can be determined. If the ratio value is greater than or equal to the preset ratio threshold, it can be determined that the first trajectory and each second trajectory are triggered by a real person's operation, that is, the behavior object is a real-person user, and the server determines that the client verification is successful; if the ratio value is less than the preset ratio threshold, it can be determined that the first trajectory and each second trajectory are triggered by a machine operation, that is, the behavior object is a machine, and the server determines that the client verification is unsuccessful. In some embodiments, if it is determined that the client verification is successful, the server can control the client to log in successfully, receive various data submitted by the client, etc.
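The ratio-based decision can be sketched in a few lines; the return labels and the treatment of an empty result list are assumptions for illustration.

```python
def recognition_result(match_results, ratio_threshold):
    # match_results: one boolean per trajectory segment (the first trajectory
    # and each second trajectory). The behavior object is judged a real person
    # only if the share of successful matches reaches the preset ratio.
    if not match_results:
        return "machine"
    ratio = sum(match_results) / len(match_results)
    return "real_person" if ratio >= ratio_threshold else "machine"
```

With a threshold of 1.0 (as in the 100% example later in the text), a single failed segment is enough to classify the behavior object as a machine.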
  • the embodiment of the present application utilizes pre-collected trajectory features and hardware device environment data, and combines the real-time randomly generated dynamic real-time trajectory with the corresponding simulated trajectory for matching and comparison to perform human-machine identification. Since the dynamic real-time trajectory is generated in real time, it cannot be predicted and learned, and it is difficult for automated tools and robots to simulate, so that the human-machine identification result determined based on the matching result of the dynamic real-time trajectory and the simulated trajectory will be more accurate.
  • the embodiment of the present application further provides a flow chart of a human-machine verification method based on dynamic real-time trajectory.
  • the execution process of the method is described below using FIG. 5 as an example.
  • Step S1 preliminarily determine whether it is a real person operation based on the obtained trajectory characteristics and hardware environment data.
  • the preliminary determination of whether it is a real person operation based on the obtained trajectory characteristics and hardware environment data can be considered as a preprocessing operation before generating a dynamic real-time trajectory.
  • step S2 if the preliminary determination is that it is a real person operation, step S2 is executed; if the preliminary determination is that it is not a real person operation, step S3 is executed.
  • Step S2 determine that the human-machine verification result is passed.
  • Step S3 continuously generating dynamic real-time trajectories and obtaining simulated trajectories of the behavior objects.
  • the simulated trajectory is a trajectory generated after the behavior object performs a follow-up sliding operation based on the dynamic real-time trajectory.
  • the dynamic real-time trajectory will continue to be generated, and each time a dynamic real-time trajectory is generated, a corresponding simulated trajectory will be obtained, and then the next dynamic real-time trajectory will be generated, until the total generation time corresponding to each segment of the dynamic real-time trajectory reaches the preset time length threshold, and the total trajectory length corresponding to each segment of the dynamic real-time trajectory reaches the preset trajectory length threshold, and the generation of the dynamic real-time trajectory is stopped.
  • Step S4 comparing and matching the dynamic real-time trajectory with the corresponding simulated trajectory to determine whether the match is successful.
  • the dynamic real-time trajectory may only include one segment, and when performing comparison matching, the segment of the dynamic real-time trajectory is matched with the corresponding simulation trajectory, so that a matching result can be obtained. In some embodiments, if it is determined that the match is successful, step S5 is executed; otherwise, step S6 is executed.
  • Step S5 determining that the human-machine verification result is a real person operation.
  • the dynamic real-time trajectory may include multiple segments.
  • each segment of the dynamic real-time trajectory is matched with its corresponding simulated trajectory to obtain multiple matching results.
  • whether the human-machine verification result is a real person operation can be determined based on the relationship between the ratio of successful matching results in multiple matching results and a preset ratio value. For example, if the preset ratio value is 100%, only if multiple matching results are successful matches can the human-machine verification result be determined to be a real person operation, that is, the client verification is determined to be passed.
  • Step S6 determining that the human-machine verification result is machine operation.
  • trajectory features such as mouse and touch trajectory
  • the human-machine verification result is a real person operation
  • a secondary recognition is performed by matching the dynamic real-time trajectory with the corresponding simulated trajectory, thereby improving the accuracy of human-machine recognition.
  • the dynamic real-time trajectory cannot be predicted and learned, it is difficult for current artificial intelligence technology to identify and crack the dynamic real-time trajectory, thereby ensuring the security of the data and further improving the accuracy of human-machine recognition.
  • FIG. 6 is a schematic diagram of the composition structure of a human-machine identification device provided in an embodiment of the present application.
  • the human-machine identification device 800 includes:
  • the first generating module 801 is used to generate a dynamic real-time trajectory if it is determined to perform human-machine identification;
  • the output module 802 is used to output the dynamic real-time trajectory;
  • a first matching module 803 is used to match the first trajectory with the dynamic real-time trajectory to obtain a matching result if a first trajectory generated based on the dynamic real-time trajectory is detected;
  • the first determination module 804 is used to determine a recognition result based on the matching result if the generation end condition of the dynamic real-time trajectory is detected, and the recognition result is used to indicate whether the generation of the first trajectory is triggered by a real person operation.
  • the first generating module 801 includes:
  • the first acquisition submodule is used to acquire a preset trajectory set if it is determined to perform human-machine identification
  • the first generation submodule is used to generate a dynamic real-time trajectory based on a trajectory set, a simple one-way principle and a Bezier curve algorithm; wherein the simple one-way principle is a principle for determining the shortest path between a starting point and an end point.
  • the human-machine identification device 800 further includes:
  • a second generation module is used to continue to generate a new dynamic real-time trajectory with the end point of the dynamic real-time trajectory as the starting point if the generation end condition of the dynamic real-time trajectory is not detected;
  • the output module is also used to output new dynamic real-time trajectories
  • a second matching module is used to match the second trajectory with the new dynamic real-time trajectory to obtain a matching result if a second trajectory generated based on the new dynamic real-time trajectory is detected;
  • the second determination module is used to determine the recognition result based on each matching result when it is determined that the generation end condition of the dynamic real-time trajectory is met.
  • the second determining module includes:
  • the second acquisition submodule is used to acquire the total trajectory length value of the dynamic real-time trajectory and the new dynamic real-time trajectory, and the total generation time of the dynamic real-time trajectory and the new dynamic real-time trajectory;
  • a first determination submodule is used to determine, when the total trajectory length value is greater than or equal to a preset length threshold, and the total generation time is greater than or equal to a preset time threshold, a ratio value of the matching result of the first trajectory and the dynamic real-time trajectory being successfully matched, and the second trajectory and the new dynamic real-time trajectory being successfully matched among the matching results;
  • the second determination submodule is used to determine, if the ratio value is greater than or equal to a preset ratio value, that the recognition result is that the first trajectory and the second trajectory are generated by triggering of a real person's operation;
  • the third determination submodule is used to determine, if the ratio value is less than the preset ratio value, that the recognition result is that the first trajectory and the second trajectory are generated by triggering of a machine operation.
  • the first generating module 801 includes:
  • the fourth determination submodule is used to determine the current starting point and the preset movement direction
  • a fifth determination submodule configured to determine a first target trajectory from the trajectory set based on a simple one-way principle, a current starting point and a preset movement direction;
  • a third acquisition submodule is used to determine the midpoint of the first target trajectory and to acquire the starting point and the end point of the first target trajectory;
  • the second generation submodule is used to move the midpoint to a preset position point, and generate a dynamic real-time trajectory based on the starting point, the preset position point and the end point.
  • the first generating module 801 further includes:
  • the sixth determination submodule is used to determine a second target trajectory from the trajectory set based on the simple unidirectional principle, the current starting point and the preset movement direction, wherein the starting point of the second target trajectory is the same as the current starting point; and determine the second target trajectory as a dynamic real-time trajectory.
  • the first matching module 803 includes:
  • a fourth acquisition submodule used to acquire first trajectory data of the first trajectory and second trajectory data of the dynamic real-time trajectory
  • a seventh determination submodule configured to determine an error value between the first trajectory data and the second trajectory data
  • an eighth determination submodule configured to determine, when the error value is less than a preset threshold, that the matching result is that the first trajectory and the dynamic real-time trajectory are successfully matched;
  • the ninth determination submodule is used to determine, when the error value is greater than or equal to a preset threshold, that the matching result is that the first trajectory and the dynamic real-time trajectory fail to match.
  • the human-machine identification device 800 further includes:
  • a first acquisition module is used to acquire a reference trajectory and a pre-collected historical trajectory, where the reference trajectory is a pre-stored trajectory triggered by a real person's operation;
  • a second acquisition module used to acquire historical hardware environment data and current hardware environment data of a target device, where the target device is a device that generates the first trajectory;
  • the third determination module is used to determine to perform human-machine identification when the historical trajectory and the reference trajectory do not meet the matching condition, and/or the current hardware environment data and the historical hardware environment data do not meet the matching condition.
  • if the above-mentioned human-machine identification method is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
  • the technical solution of the embodiments of the present application, or the part that contributes to the related art, can essentially be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which can be a personal computer, server, or network device, etc.) to execute all or part of the methods described in each embodiment of the present application.
  • the aforementioned storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • the present invention is not limited to any specific combination of hardware and software.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the human-machine identification method provided in the above embodiments is implemented.
  • FIG. 7 is a schematic diagram of the composition structure of a human-machine identification device provided in an embodiment of the present application.
  • the human-machine identification device 900 includes: a memory 901, a processor 902, a communication interface 903, and a communication bus 904.
  • the memory 901 is used to store executable human-machine identification instructions
  • the processor 902 is used to execute the executable human-machine identification instructions stored in the memory to implement the human-machine identification method provided in the above embodiment.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a logical function division.
  • the coupling, direct coupling, or communication connection between the components shown or discussed can be through some interfaces, and the indirect coupling or communication connection of the devices or units can be electrical, mechanical or other forms.
  • all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be a separate unit, or two or more units may be integrated into one unit; the above-mentioned integrated units may be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the integrated unit of the present application can also be stored in a computer-readable storage medium.
  • the technical solution of the embodiments of the present application, or the part that contributes to the prior art, can essentially be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a computer device to execute all or part of the methods described in each embodiment of the present application.
  • the aforementioned storage medium includes: various media that can store program codes, such as mobile storage devices, ROMs, magnetic disks, or optical disks.
  • the embodiments of the present application provide a method, device, equipment and computer-readable storage medium for human-machine identification.
  • the method includes: if it is determined to perform human-machine identification, a dynamic real-time trajectory is generated; the dynamic real-time trajectory is output; if a first trajectory generated based on the dynamic real-time trajectory is detected, the first trajectory is matched with the dynamic real-time trajectory to obtain a matching result; if the generation end condition of the dynamic real-time trajectory is detected, the recognition result is determined based on the matching result.
  • the generation of the dynamic real-time trajectory is not fixed and cannot be predicted and learned, the security of the data can be guaranteed, so that when it is determined that the generation end condition of the dynamic real-time trajectory is met, the recognition result is determined based on the matching result of the first trajectory and the dynamic real-time trajectory, which can improve the accuracy of human-machine identification.

Abstract

An embodiment of the present application discloses a human-machine identification method, the method including: if the timing for human-machine identification is determined, generating a dynamic real-time trajectory; outputting the dynamic real-time trajectory; if a first trajectory generated based on the dynamic real-time trajectory is detected, matching the first trajectory with the dynamic real-time trajectory to obtain a matching result; if a generation end condition of the dynamic real-time trajectory is detected, determining a recognition result based on the matching result, where the recognition result is used to indicate whether the first trajectory is generated by triggering of a real person's operation. Embodiments of the present application further disclose a human-machine identification device, equipment, and computer-readable storage medium.

Description

A human-machine identification method, device, equipment and computer-readable storage medium
Cross-reference to related applications
This application is filed on the basis of the Chinese patent application with application No. 202211365931.2, filed on October 31, 2022, and claims priority to that Chinese patent application, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the technical field of human-machine verification, and in particular to a human-machine identification method, device, equipment and computer-readable storage medium.
Background
With the rapid development of information technology and Internet technology, there are more and more network services, and the problem of human-machine identification has become increasingly prominent. Human-machine identification technology can defend against the illegal request behaviors of the vast majority of automated tools and robots.
In the related art, commonly used human-machine identification techniques include: the SMS or email verification code method, which uses plain-text transmission with fixed content that is simple and easy to identify but has low security; the method of requiring users to input, click, select, or calculate on pictures carrying additional verification information, in which the generated verification information is fixed and easy to crack, making it difficult to guarantee the accuracy of human-machine identification; or the method of collecting data features of users' mouse or touch behaviors and training a model on a free knowledge base to perform human-machine identification, which is also implemented by generating fixed verification information, while machine learning can fully simulate the behavior of a real person, thereby reducing the accuracy of human-machine identification.
Summary
To solve the above technical problems, embodiments of the present application are expected to provide a human-machine identification method that can ensure the security of data and improve the accuracy of human-machine identification.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a human-machine identification method, including:
if it is determined to perform human-machine identification, generating a dynamic real-time trajectory;
outputting the dynamic real-time trajectory;
if a first trajectory generated based on the dynamic real-time trajectory is detected, matching the first trajectory with the dynamic real-time trajectory to obtain a matching result;
if a generation end condition of the dynamic real-time trajectory is detected, determining a recognition result based on the matching result, where the recognition result is used to indicate whether the first trajectory is generated by triggering of a real person's operation.
An embodiment of the present application provides a human-machine identification device, including:
a first generation module, configured to generate a dynamic real-time trajectory if the timing for human-machine identification is determined;
an output module, configured to output the dynamic real-time trajectory;
a first matching module, configured to match the first trajectory with the dynamic real-time trajectory to obtain a matching result if a first trajectory generated based on the dynamic real-time trajectory is detected;
a first determination module, configured to determine a recognition result based on the matching result if a generation end condition of the dynamic real-time trajectory is detected, where the recognition result is used to indicate whether the first trajectory is generated by triggering of a real person's operation.
An embodiment of the present application provides a human-machine identification equipment, including:
a memory, configured to store executable human-machine identification instructions;
a processor, configured to implement the human-machine identification method provided in the embodiments of the present application when executing the executable human-machine identification instructions stored in the memory.
An embodiment of the present application provides a computer-readable storage medium, in which computer-executable human-machine identification instructions are stored, and the computer-executable human-machine identification instructions are configured to execute the human-machine identification method provided in the embodiments of the present application.
Embodiments of the present application provide a human-machine identification method, device, equipment and computer-readable storage medium. First, if it is determined to perform human-machine identification, a dynamic real-time trajectory is generated; then, the dynamic real-time trajectory is output, and if a first trajectory generated based on the dynamic real-time trajectory is detected, the first trajectory is matched with the dynamic real-time trajectory to obtain a matching result; finally, if a generation end condition of the dynamic real-time trajectory is detected, a recognition result is determined based on the matching result, and the recognition result is used to indicate whether the first trajectory is generated by triggering of a real person's operation. In this way, since the generated dynamic real-time trajectory is not fixed and cannot be predicted or learned, the security of the data can be guaranteed, so that when the generation end condition of the dynamic real-time trajectory is detected, the recognition result is determined based on the matching result between the detected first trajectory generated based on the dynamic real-time trajectory and the dynamic real-time trajectory, which can improve the accuracy of human-machine identification.
Brief description of the drawings
FIG. 1 is a schematic flow chart of a human-machine identification method provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of another human-machine identification method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a trajectory shape provided in an embodiment of the present application;
FIG. 4 is a schematic flow chart of a human-machine identification method with dynamically generated trajectories provided in an embodiment of the present application;
FIG. 5 is a schematic flow chart of a human-machine verification method based on dynamic real-time trajectories provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of the composition structure of a human-machine identification device provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of the composition structure of a human-machine identification equipment provided in an embodiment of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
To make the objectives, technical solutions and advantages of the present application clearer, the present application will be further described in detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present application, and all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
In the following description, "some embodiments/other embodiments" describes subsets of all possible embodiments, but it can be understood that "some embodiments/other embodiments" may be the same subset or different subsets of all possible embodiments, and they may be combined with each other without conflict.
In the following description, the terms "first/second" are only used to distinguish similar objects and do not represent a specific ordering of the objects. It can be understood that "first/second" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
Unless otherwise defined, all technical and scientific terms used in the present application have the same meanings as commonly understood by those skilled in the technical field to which the present application belongs. The terms used in the present application are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.
In the related art, existing human-machine identification methods in the industry mainly include the following:
1. SMS and email verification codes. In practice, users in different countries have different preferences for SMS and email verification: users in the United States are accustomed to using email, while users in China prefer SMS; in addition, users in other countries around the world use email more commonly, and foreign or global websites such as Adobe, LinkedIn, Twitter and Facebook mostly use email verification for registration. However, SMS and email verification codes are transmitted in plain text with fixed content, are simple and easy to identify, can easily be intercepted and captured, are prone to leakage, and have low security.
2. Picture information verification. For example, displaying text information through a picture and asking the user to input the correct text in sequence; prompting the user to select certain text; dragging a picture to complete a jigsaw puzzle; or inputting the correct calculation result according to a textual expression in a picture. In this method the information is fixed and does not change once the verification information is generated. With the development of modern artificial intelligence technology, machine learning capabilities have been greatly enhanced, and it is increasingly easy to identify and crack such fixed information, so as to disrupt the normal order of services. Even though some websites increase the difficulty of cracking by increasing the recognition difficulty of their verification codes, current algorithm models based on deep learning have already surpassed ordinary people in image recognition; therefore, identifying humans and machines through picture information verification is no longer very effective.
3. Mouse and touch drag trajectory matching. By collecting data features of users' mouse and touch behaviors and training a model on a free knowledge base, it is judged whether the current operation is performed by a real person or a machine. This method also generates fixed verification information, and machine learning can fully simulate the behavior of a real person, so the accuracy of human-machine identification cannot be guaranteed.
Based on the problems in the related art, an embodiment of the present application provides a human-machine identification method, which can ensure the security of data and improve the accuracy of human-machine identification.
下面,将说明本申请实施例提供的人机识别方法,如图1所示为本申请实施例提供的一种人机识别方法的流程示意图,该方法包括以下步骤:
S101、若确定进行人机识别,生成动态实时轨迹。
在一些实施例中，人机识别设备可以是服务器设备，也可以是终端设备。在人机识别设备是服务器设备时，进行人机识别的确定可以是由与服务器通信的用户终端发送至人机识别设备的；在人机识别设备是终端时，可以是终端根据用户的操作检测到需要进行人机识别的验证过程。人机识别设备可以采用预先设置的动态实时轨迹生成方法生成动态实时轨迹，也可以根据随机的生成方法生成动态实时轨迹。
S102、输出动态实时轨迹。
在一些实施例中，将动态实时轨迹进行输出处理，以便进行动态验证。在人机识别设备为服务器设备时，服务器设备将动态实时轨迹输出至终端，以在终端上进行显示。在人机识别设备为终端时，终端将动态实时轨迹输出至终端的显示界面，以便用户进行相应的验证操作。
S103、若检测到基于动态实时轨迹生成的第一轨迹,将第一轨迹和动态实时轨迹进行匹配,获得匹配结果。
在本申请实施例中,在人机识别设备为服务器设备时,服务器设备接收终端发送的第一轨迹,第一轨迹可以是用户对终端上显示的动态实时轨迹进行操作生成的。在人机识别设备为终端时,终端显示的动态实时轨迹,用户控制鼠标按照动态实时轨迹描画对应的轨迹,得到第一轨迹,或者在终端的显示界面上用手指通过触控操作按照动态实时轨迹进行描画,得到第一轨迹。人机识别设备在检测到第一轨迹后,对第一轨迹和动态实时轨迹进行匹配处理,得到匹配结果。
S104、若检测到动态实时轨迹的生成结束条件,基于匹配结果确定识别结果,识别结果用于表征第一轨迹是否由真人操作触发生成。
在一些实施例中,在检测到动态实时轨迹的生成结束条件时,根据前一步骤确定得到的匹配结果,来确定最终的识别结果,即确定针对动态实时轨迹生成的第一轨迹的目标对象是谁,目标对象可以是真人,也可以是机器。
在本申请实施例中,首先,若确定进行人机识别,生成动态实时轨迹;然后,输出动态实时轨迹,若检测到基于动态实时轨迹生成的第一轨迹,将第一轨迹和动态实时轨迹进行匹配,获得匹配结果;最后,若检测到动态实时轨迹的生成结束条件,基于匹配结果确定识别结果,识别结果用于表征第一轨迹是否由真人操作触发生成。如此,由于生成动态实时轨迹是不固定的,无法进行预判和学习,可以保证数据的安全性,使得在检测到动态实时轨迹的生成结束条件时,基于检测到的基于动态实时轨迹生成的第一轨迹和动态实时轨迹的匹配结果确定识别结果,能够提高人机识别的准确性。
在一些实施例中,如图2所示,为本申请实施例提供的一种人机识别方法的流程示意图,该方法包括以下步骤:
S201、若确定进行人机识别,获取预设的轨迹集合。
在一些实施例中,达到人机识别的时机可以是用户进入到人机识别界面,通过终端向服务器发送人机验证请求,也可以是在需要进行人机识别的场景,服务器主动向终端发送人机验证指令时。预设的轨迹集合可以是预先存储在数据库中的多个轨迹,每个轨迹的轨迹长度均小于预设的轨迹长度阈值,轨迹集合中的部分轨迹也可以为不能再分的最小轨迹,例如水平方向朝右的轨迹,垂直方向朝下的轨迹。
S202、基于轨迹集合、简单单向原则和贝塞尔曲线算法,生成动态实时轨迹。
其中,简单单向原则为确定起点和终点之间最短路径的原则。
在一些实施例中,在获取了预设的轨迹集合之后,便可以基于轨迹集合、简单单向原则和贝塞尔曲线算法生成动态实时轨迹,在实际中,可以随机地从轨迹集合中选择轨迹,并结合贝塞尔曲线算法对选择出的轨迹进行处理,之后按照预设的方向基于处理后的轨迹生成路径简单的动态实时轨迹,由于动态实时轨迹是基于从轨迹集合中随机选择出的轨迹,并结合贝塞尔曲线生成算法生成的,因此,动态实时轨迹不是固定的,且无法预判。
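作为说明，下面给出一个极简的Python示意（其中的轨迹集合、方向表示等均为假设的示例数据，并非本申请的具体实现），演示"从轨迹集合中随机选择符合预设运动方向的轨迹并从当前起点生成一段轨迹"的基本思路：

```python
import random

# 假设的预设轨迹集合：每个方向下存放若干候选轨迹，
# 每条轨迹用单位位移向量序列表示
TRAJECTORY_SET = {
    "right": [[(1, 0), (1, 0), (1, 0)]],  # 水平方向朝右的候选轨迹
    "down": [[(0, 1), (0, 1), (0, 1)]],   # 垂直方向朝下的候选轨迹
}

def generate_dynamic_trajectory(start, direction):
    """从轨迹集合中随机选取一条符合预设运动方向的轨迹，
    并以当前起点为出发点生成一段动态实时轨迹（点的列表）。"""
    steps = random.choice(TRAJECTORY_SET[direction])
    points = [start]
    x, y = start
    for dx, dy in steps:
        x, y = x + dx, y + dy
        points.append((x, y))
    return points
```

由于轨迹是随机选取并实时生成的，每次人机识别得到的路径都可能不同，难以被预判。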
S203、输出动态实时轨迹。
在一些实施例中，在人机识别设备是服务器时，获得了动态实时轨迹之后，便可以将该动态实时轨迹发送给终端，终端在接收到该动态实时轨迹后，可以在自身的显示区域显示该动态实时轨迹。此外，服务器在将该动态实时轨迹发送至终端之后，或在将该动态实时轨迹发送至终端的同时，可以向终端发送动态实时轨迹模拟指令，以指示目标对象对该动态实时轨迹进行模拟，目标对象可以是真人或机器。
S204、若检测到基于动态实时轨迹生成的第一轨迹,将第一轨迹和动态实时轨迹进行匹配,获得匹配结果。
在一些实施例中,通过目标对象基于动态实时轨迹进行跟随滑动操作,可以在终端生成第一轨迹,第一轨迹为目标对象对动态实时轨迹进行模拟之后生成的轨迹。在获得了第一轨迹之后,可以将该第一轨迹发送至服务器,以使得服务器获得动态实时轨迹对应的第一轨迹。将第一轨迹和动态实时轨迹进行匹配时,可以获取第一轨迹和动态实时轨迹在同一方向(如垂直方向或水平方向)上各自对应的轨迹长度,在同一位置处各自对应的生成速率等,之后确定第一轨迹和动态实时轨迹对应轨迹长度、生成速率等的各个误差值,根据各个误差值确定二者是否匹配,例如将各个误差值和对应的预设误差值进行比较,若各个误差值均小于对应的预设误差值,则可以确定第一轨迹和动态实时轨迹匹配;若有至少一个误差值大于或等于对应的预设误差值,则可以确定第一轨迹和动态实时轨迹不匹配。
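上述"按同类数据逐项计算误差并与对应预设误差值比较"的匹配过程，可用如下Python示意（误差类型与阈值均为假设值）：

```python
def match_trajectories(first_traj, dynamic_traj, thresholds):
    """逐项计算第一轨迹与动态实时轨迹在轨迹长度、生成速率等
    维度上的误差值；仅当各误差值均小于对应的预设误差值时，
    判定两条轨迹匹配成功。"""
    errors = {key: abs(first_traj[key] - dynamic_traj[key]) for key in thresholds}
    return all(errors[key] < thresholds[key] for key in thresholds)
```

例如，match_trajectories({"length": 1.15, "speed": 2.8}, {"length": 1.2, "speed": 3.0}, {"length": 0.1, "speed": 0.5}) 返回 True，表示匹配成功；只要有一类误差达到或超过阈值即判定不匹配。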
S205、若检测到动态实时轨迹的生成结束条件,基于匹配结果确定识别结果。
识别结果用于表征第一轨迹是否由真人操作触发生成。
在一些实施例中，动态实时轨迹的生成结束条件可以是动态实时轨迹的轨迹长度大于或等于预设轨迹长度阈值，且动态实时轨迹的生成时间大于或等于预设时间阈值。识别结果用于表征终端发送的第一轨迹是否由真人操作触发生成。示例性地，若基于轨迹集合、简单单向原则和贝塞尔曲线算法生成了一段动态实时轨迹，从该动态实时轨迹的起点开始计时，到该动态实时轨迹的终点结束计时，共花费时间长度为1秒，该动态实时轨迹的轨迹长度为1.2厘米，预设轨迹长度阈值为1厘米，预设时间长度阈值为1秒，则可以确定达到动态实时轨迹生成结束条件，即无需再继续生成动态实时轨迹。
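该生成结束条件的判断逻辑可示意如下（阈值取上文示例中的1厘米与1秒，仅为假设值）：

```python
def generation_finished(total_length, total_time,
                        length_threshold=1.0, time_threshold=1.0):
    """当动态实时轨迹的总轨迹长度与总生成时间均达到
    对应预设阈值时，判定达到生成结束条件。"""
    return total_length >= length_threshold and total_time >= time_threshold
```

按上文示例，轨迹长度为1.2厘米、耗时1秒时，generation_finished(1.2, 1.0) 返回 True，即无需再继续生成动态实时轨迹。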
在一些实施例中,识别结果用于表征第一轨迹是否由真人操作触发生成,若匹配结果为第一轨迹和动态实时轨迹匹配,则可以确定第一轨迹由真人操作触发生成; 若匹配结果为第一轨迹和动态实时轨迹不匹配,则可以确定第一轨迹由机器操作触发生成。
在本申请实施例中,首先,在确定达到人机识别时机时,获取预设的轨迹集合,基于轨迹集合、简单单向原则和贝塞尔曲线算法生成动态实时轨迹;然后,输出动态实时轨迹;并在检测到基于动态实时轨迹生成的第一轨迹的情况下,将第一轨迹和动态实时轨迹进行匹配,获得匹配结果;最后,当确定达到动态实时轨迹的生成结束条件时,基于匹配结果确定识别结果,识别结果用于表征终端发送的第一轨迹是否为真人操作。如此,由于基于轨迹集合、简单单向原则和贝塞尔曲线算法生成动态实时轨迹是不固定的,无法进行预判和学习,可以保证数据的安全性,使得在确定达到动态实时轨迹的生成结束条件时,基于第一轨迹和动态实时轨迹的匹配结果确定识别结果,能够提高人机识别的准确性。
在本申请的一些实施例中,在将第一轨迹和动态实时轨迹进行匹配,获得匹配结果,即步骤S204之后,还可以执行下述步骤S301至步骤S304,以下对各个步骤进行说明。
S301、若未检测到动态实时轨迹的生成结束条件,以动态实时轨迹的终点为起点继续生成新的动态实时轨迹。
在一些实施例中,若未达到动态实时轨迹的生成结束条件,即当前生成的动态实时轨迹的轨迹长度未达到预设轨迹长度阈值,或当前生成的动态实时轨迹的生成时间未达到时间长度阈值,则可以以当前生成的动态实时轨迹的终点为起点继续生成新的动态实时轨迹。在生成新的动态实时轨迹时,也可以从轨迹集合中随机选择轨迹,并基于简单单向原则和贝塞尔曲线算法来生成。
S302、输出新的动态实时轨迹。
在一些实施例中,在人机识别设备生成了新的动态实时轨迹之后,将新的动态实时轨迹进行输出处理。例如在人机识别设备为服务器设备时,可以将新的动态实时轨迹发送至终端,终端在接收到新的动态实时轨迹之后,可以在自身对应的显示区域中显示。或者,在人机识别设备为终端时,直接输出至终端对应的显示区域中进行显示。
S303、若检测到基于新的动态实时轨迹生成的第二轨迹,将第二轨迹和新的动态实时轨迹进行匹配,获得匹配结果。
在一些实施例中，在目标对象基于该新的动态实时轨迹进行跟随滑动操作后，生成第二轨迹，第二轨迹为目标对象对新的动态实时轨迹进行模拟之后生成的轨迹。在生成了第二轨迹之后，终端可以将该第二轨迹发送至服务器，以使得服务器获得新的动态实时轨迹对应的第二轨迹。以动态实时轨迹的终点为起点生成的新的动态实时轨迹和动态实时轨迹可以相同，也可以不同。在获得了新的动态实时轨迹之后，可以将第二轨迹和新的动态实时轨迹进行匹配。在实现时，第一轨迹和动态实时轨迹的匹配方式，和第二轨迹和新的动态实时轨迹的匹配方式类似，例如可以获取第二轨迹和新的动态实时轨迹在同一方向上各自的轨迹长度，在同一位置处各自对应的生成速率等，根据轨迹长度、生成速率等的各个误差值，确定第二轨迹是否与新的动态实时轨迹匹配。
S304、若检测到动态实时轨迹的生成结束条件,基于各个匹配结果确定识别结果。
在一些实施例中，新的动态实时轨迹可能包括一段或多段，在实际中，在以动态实时轨迹的终点为起点生成了一段新的动态实时轨迹之后，可以确定是否达到动态实时轨迹的生成结束条件，若确定并未达到动态实时轨迹的生成结束条件时，可以以该新的动态实时轨迹的终点为起点继续生成其他新的动态实时轨迹，直至确定达到动态实时轨迹的生成结束条件，结束新的动态实时轨迹的生成；若确定达到动态实时轨迹的生成结束条件，则可以结束动态实时轨迹的生成，基于第一轨迹和动态实时轨迹的匹配结果，以及第二轨迹和新的动态实时轨迹的匹配结果，确定终端发送的第一轨迹和第二轨迹是否由真人操作触发生成。
在一些实施例中,当生成的新的动态实时轨迹包括多段时,新的动态实时轨迹对应的第二轨迹也包括多段,即每段新的动态实时轨迹对应一段第二轨迹,新的动态实时轨迹的段数和第二轨迹的段数相同。每一段新的动态实时轨迹对应一个匹配结果,根据各个匹配结果可以确定针对终端发送的第一轨迹和第二轨迹的识别结果,其中,各个匹配结果包括第一轨迹和动态实时轨迹的匹配结果,以及各个新的动态实时轨迹和对应的第二轨迹的匹配结果。
可以理解的是，在本申请实施例中，在确定未达到动态实时轨迹的生成结束条件时，以动态实时轨迹的终点为起点继续生成新的动态实时轨迹，动态实时轨迹和新的动态实时轨迹均不会被识别和破解，保证了动态实时轨迹的安全性；之后在确定达到动态实时轨迹的生成结束条件时，根据新的动态实时轨迹和对应第二轨迹的匹配结果，以及动态实时轨迹和第一轨迹的匹配结果，可以准确地确定出人机识别结果。
在本申请的一些实施例中,步骤S304中的“若检测到动态实时轨迹的生成结束条件,基于各个匹配结果确定识别结果”还可以通过下述步骤S3041至步骤S3045实现,以下对各个步骤进行说明。
S3041、获取动态实时轨迹和新的动态实时轨迹的总轨迹长度值,以及动态实时轨迹和新的动态实时轨迹的总生成时间。
在一些实施例中,若确定达到动态实时轨迹的生成结束条件时,已经生成的动态实时轨迹包括一段动态实时轨迹和一段新的动态实时轨迹,则动态实时轨迹和新的动态实时轨迹的总轨迹长度值可以是动态实时轨迹的轨迹长度和新的动态实时轨迹的轨迹长度之和,动态实时轨迹和新的动态实时轨迹的总生成时间可以是动态实时轨迹的生成时间和新的动态实时轨迹的生成时间之和。
在另一些实施例中，若确定达到动态实时轨迹的生成结束条件时，已经生成的动态实时轨迹包括一段动态实时轨迹和多段新的动态实时轨迹，其中，第一段新的动态实时轨迹的起点为动态实时轨迹的终点，多段新的动态实时轨迹依次首尾相连，则动态实时轨迹和新的动态实时轨迹的总轨迹长度值可以是动态实时轨迹的轨迹长度和多段新的动态实时轨迹的轨迹长度之和，动态实时轨迹和新的动态实时轨迹的总生成时间可以是动态实时轨迹的生成时间和多段新的动态实时轨迹的生成时间之和。
S3042、当总轨迹长度值大于或等于预设长度阈值,且总生成时间大于或等于预设时间阈值时,确定各个匹配结果中匹配结果为第一轨迹和动态实时轨迹匹配成功,以及第二轨迹和新的动态实时轨迹匹配成功的比例值。
需要说明的是,预设长度阈值可以是预先设定的轨迹长度阈值,预设时间阈值可以是预先设定的时间长度阈值。在一些实施例中,当动态实时轨迹的总轨迹长度达到预设长度阈值,且动态实时轨迹的总生成时间达到预设时间阈值时,则可以根据动态实时轨迹和终端发送的轨迹的匹配结果,有效地确定出人机识别结果。
在一些实施例中，总轨迹长度值达到预设长度阈值可以是总轨迹长度值大于或等于预设长度阈值，总生成时间达到预设时间阈值可以是总生成时间大于或等于预设时间阈值。在一些实施例中，若总轨迹长度值大于或等于预设长度阈值，且总生成时间大于或等于预设时间阈值，则可以确定达到动态实时轨迹生成的结束条件，此时可以确定第一轨迹和动态实时轨迹匹配成功，以及第二轨迹和新的动态实时轨迹匹配成功在所有的匹配结果中所占的比例值。示例性地，若匹配结果包括10个，其中1个匹配结果为第一轨迹和动态实时轨迹匹配不成功，6个第二轨迹和对应的新的动态实时轨迹匹配成功，3个第二轨迹和对应的新的动态实时轨迹匹配不成功，则确定出的比例值为60%。
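匹配成功比例值的计算可示意为（匹配结果以布尔值列表表示，为假设的数据形式）：

```python
def success_ratio(match_results):
    """计算各个匹配结果中匹配成功（True）所占的比例值。"""
    if not match_results:
        return 0.0
    return sum(1 for result in match_results if result) / len(match_results)
```

对应上文示例：10个匹配结果中6个成功，success_ratio([False] + [True] * 6 + [False] * 3) 返回 0.6，即比例值为60%，之后再与预设比例值进行比较。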
S3043、确定比例值是否大于或等于预设比例值。
在一些实施例中,预设比例值可以是预先设定的比例值,可以是100%,80%等,如果确定比例值大于或等于预设比例值,则执行步骤S3044;反之,则执行步骤S3045。
S3044、确定识别结果为第一轨迹和第二轨迹由真人操作触发生成。
在一些实施例中，例如，若比例值为85%，预设比例值为80%，即比例值大于预设比例值，则可以确定终端发送的第一轨迹和第二轨迹由真人操作触发生成。此外，若比例值等于预设比例值，也可以确定终端发送的第一轨迹和第二轨迹由真人操作触发生成。
S3045、确定识别结果为第一轨迹和第二轨迹由机器操作触发生成。
在一些实施例中,例如,若比例值为60%,预设比例值为80%,即比例值小于预设比例值,则可以确定终端发送的第一轨迹和第二轨迹由机器操作触发生成。
在本申请的一些实施例中,步骤S202中的“基于轨迹集合、简单单向原则和贝塞尔曲线算法生成动态实时轨迹”可以通过下述步骤S401至步骤S404实现,以下对各个步骤进行说明。
S401、确定当前起点和预设运动方向。
需要说明的是,当前起点可以是生成动态实时轨迹的起点,当前起点可以是在终端的显示界面中,用于显示动态实时轨迹的区域中的任意位置。示例性地,若用于显示动态实时轨迹的区域呈现为矩形形状,则当前起点可以位于该矩形区域中的左上角、左下角等位置。预设运动方向可以是预先设定的动态实时轨迹的生成方向,预设方向可以是以当前起点为出发点的任意延伸方向,例如,水平方向朝右、垂直方向朝下等。
S402、基于简单单向原则、当前起点和预设运动方向,从轨迹集合中确定出第一目标轨迹。
在一些实施例中,在确定了当前起点和预设运动方向之后,可以从轨迹集合中确定出符合该预设运动方向,且满足简单单向原则的轨迹,即第一目标轨迹。在实际中,第一目标轨迹的起点与当前起点相同,第一目标轨迹的延伸方向与预设运动方向相同,且第一目标轨迹为所有符合预设运动方向的轨迹中,路径最为简单的轨迹。示例性地,如图3所示,若预设运动方向为A点至B点所在的方向,轨迹集合中符合该预设运动方向的轨迹包括a、b、c三条,轨迹a、c的路径复杂,而轨迹b的路径相对于轨迹a、c的路径来说相对简单,则可以将轨迹b作为第一目标轨迹。
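"从符合预设运动方向的候选轨迹中选出路径最简单者"这一步可示意如下（候选轨迹以(名称, 路径长度)二元组表示，数据为假设）：

```python
def select_first_target(candidates):
    """在符合预设运动方向的候选轨迹中，选择路径长度最短
    （即路径最简单）的一条作为第一目标轨迹。"""
    return min(candidates, key=lambda traj: traj[1])
```

例如对图3中的三条轨迹，select_first_target([("a", 5.2), ("b", 3.1), ("c", 4.8)]) 返回 ("b", 3.1)，即选出路径相对简单的轨迹b。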
S403、确定第一目标轨迹的中点,并获取第一目标轨迹的起点和终点。
在一些实施例中,在获得了第一目标轨迹之后,可以获得第一目标轨迹的起点和终点,并确定第一目标轨迹的中点。在确定第一目标轨迹的中点时,可以先确定第一目标轨迹的起点和终点的连线,并确定该连线上的中点,以及经过该中点与该连线垂直的线段,将与该连线垂直的线段与第一目标轨迹的交点确定为第一目标轨迹的中点。
S404、将中点移动至预设位置点,基于起点、预设位置点和终点,生成动态实时轨迹。
在一些实施例中,在确定了第一目标轨迹的中点之后,可以将第一目标轨迹的中点移动至预设位置点,预设位置点可以位于中点的上方、下方等的任意位置。在将第一目标轨迹的中点移动至预设位置点之后,可以将第一目标轨迹的起点、预设位置点和第一目标轨迹的终点构成的路径确定为动态实时轨迹。
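将中点移动至预设位置点后，以起点、预设位置点（作为控制点）和终点生成路径，与二次贝塞尔曲线的构造一致，可示意如下（采样点数为假设值）：

```python
def quadratic_bezier(start, control, end, num_points=20):
    """以起点、控制点（移动后的预设位置点）和终点为参数，
    生成二次贝塞尔曲线上的采样点序列，作为动态实时轨迹。"""
    points = []
    for i in range(num_points + 1):
        t = i / num_points
        x = (1 - t) ** 2 * start[0] + 2 * (1 - t) * t * control[0] + t ** 2 * end[0]
        y = (1 - t) ** 2 * start[1] + 2 * (1 - t) * t * control[1] + t ** 2 * end[1]
        points.append((x, y))
    return points
```

曲线从起点出发、终止于终点，并被控制点"拉弯"，因此将中点移动到不同的预设位置点即可得到形状各异的轨迹。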
在本申请的另一些实施例中,在确定当前起点和预设运动方向,即执行完步骤S401之后,还可以执行下述步骤S501至步骤S502。
S501、基于简单单向原则、当前起点和预设运动方向,从轨迹集合中确定出第二目标轨迹。
在一些实施例中，第二目标轨迹的起点和当前起点相同。第二目标轨迹和第一目标轨迹可以相同，也可以不同，示例性地，若从轨迹集合中确定出的符合简单单向原则和预设运动方向的目标轨迹仅包括一条，则第一目标轨迹和第二目标轨迹相同；若从轨迹集合中确定出的符合简单单向原则和预设运动方向的目标轨迹包括多条，则第一目标轨迹和第二目标轨迹可以相同，也可以不同。
S502、将第二目标轨迹确定为动态实时轨迹。
在一些实施例中，在从轨迹集合中确定出第二目标轨迹之后，可以直接将该第二目标轨迹作为动态实时轨迹，而不需要对该第二目标轨迹进行变换处理。
在本申请的一些实施例中,步骤S204中的将第一轨迹和动态实时轨迹进行匹配,获得匹配结果还可以通过下述步骤S2041至步骤S2045来实现,以下对各个步骤进行说明。
S2041、获取第一轨迹的第一轨迹数据,以及动态实时轨迹的第二轨迹数据。
在一些实施例中,第一轨迹数据可以包括第一轨迹的生成速率、轨迹长度等,第二轨迹数据也可以包括动态实时轨迹的生成速率、轨迹长度等,其中生成速率可以表示目标对象在对动态实时轨迹进行模拟过程中的速度快慢,轨迹长度可以表示目标对象在对动态实时轨迹进行模拟过程中的按压拖拽轨迹对应的长度。
S2042、确定第一轨迹数据和第二轨迹数据之间的误差值。
在一些实施例中,由于第一轨迹数据和第二轨迹数据均包括多种类型,在确定第一轨迹数据和第二轨迹数据之间的误差值时,可以将同一类型的轨迹数据进行对比计算,获得不同类型的轨迹数据对应的误差值,例如若轨迹数据包括生成速率和轨迹长度,则确定出的误差值可以包括第一轨迹和动态实时轨迹各自对应的生成速率之间的误差值,以及第一轨迹和动态实时轨迹各自对应的轨迹长度之间的误差值。
S2043、确定误差值是否小于预设阈值。
在一些实施例中,当误差值小于预设阈值,则可以执行步骤S2044;反之,则执行步骤S2045。
在一些实施例中，由于轨迹数据可能包括多种类型，因此获得的误差值也可能包括多种，相应地，各类误差值均对应有预设阈值，例如若误差值包括生成速率对应的误差值和轨迹长度对应的误差值，则预设阈值可以包括预设的生成速率阈值和预设的轨迹长度阈值，在确定误差值是否小于预设阈值时，将生成速率对应的误差值与预设的生成速率阈值进行比较，将轨迹长度对应的误差值与预设的轨迹长度阈值进行比较。
在一些实施例中,若误差值包括多种,误差值小于预设阈值可以是各类误差值均小于对应的预设阈值;若存在至少一类误差值大于或等于对应的预设阈值,则可以确定误差值大于或等于预设阈值。
S2044、确定匹配结果为第一轨迹和动态实时轨迹匹配成功。
S2045、确定匹配结果为第一轨迹和动态实时轨迹匹配失败。
在一些实施例中，匹配成功表示目标对象对动态实时轨迹进行模拟后生成的第一轨迹和动态实时轨迹的相似程度高，例如相似程度大于或等于预设的相似程度阈值；匹配失败表示目标对象对动态实时轨迹进行模拟后生成的第一轨迹和动态实时轨迹的相似程度低，例如相似程度小于预设的相似程度阈值。
在本申请的一些实施例中,步骤S201中的“确定进行人机识别”还可以通过下述步骤S601至步骤S603来实现,以下对各个步骤进行说明。
S601、获取参考轨迹和预先采集的历史轨迹。
在一些实施例中,参考轨迹可以为预先存储的由真人操作触发生成的轨迹,参考轨迹可以包括多个,预先存储在数据库中。历史轨迹可以是在进入人机验证界面之前,目标对象通过鼠标点击、移动或触控拖拽等方式在终端生成的轨迹,终端在采集到历史轨迹之后,可以将历史轨迹发送至服务器。
S602、获取目标设备的历史硬件环境数据和当前硬件环境数据。
其中,目标设备为生成第一轨迹的设备。
在一些实施例中，终端的历史硬件环境数据可以为在之前确定终端发送的轨迹为真人操作触发的轨迹时，预先存储的终端的硬件环境数据；当前硬件环境数据可以为当前进行人机识别时，获取的终端的硬件环境数据。硬件环境数据可以为移动设备、浏览器、触控屏等的数据，例如鼠标事件，cookie日志、页面窗口、IP地址、Mac地址、网络环境、综合访问频率、地理位置、历史记录等。
S603、当历史轨迹和参考轨迹不满足匹配条件,和/或当前硬件环境数据和历史硬件环境数据不满足匹配条件时,确定进行人机识别。
在一些实施例中，历史轨迹和参考轨迹满足匹配条件可以是历史轨迹和参考轨迹相同，或者历史轨迹和参考轨迹之间的相似程度大于预设的相似程度阈值；当前硬件环境数据和历史硬件环境数据满足匹配条件可以是各类硬件环境数据和对应的历史硬件环境数据均相同，或者部分相同。在一些实施例中，若确定历史轨迹和参考轨迹满足匹配条件，且当前硬件环境数据和历史硬件环境数据也满足匹配条件，则可以确定人机识别结果为真人操作。
在另一些实施例中,若历史轨迹和参考轨迹不满足匹配条件;或当前硬件环境数据和历史硬件环境数据不满足匹配条件;或者历史轨迹和参考轨迹不满足匹配条件,且当前硬件环境数据和历史硬件环境数据也不满足匹配条件,则可以确定进行人机识别。
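上述初步判断逻辑可以概括为：仅当历史轨迹与参考轨迹、当前硬件环境数据与历史硬件环境数据均满足匹配条件时，才可跳过人机识别，示意如下（两个布尔输入代表上游的匹配判断结果，为假设的接口形式）：

```python
def need_human_machine_check(history_matches_reference, env_matches_history):
    """当历史轨迹和参考轨迹不满足匹配条件，和/或当前硬件环境数据
    和历史硬件环境数据不满足匹配条件时，确定进行人机识别。"""
    return not (history_matches_reference and env_matches_history)
```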
下面,对本申请实施例在实际应用场景中的实现过程进行介绍。
在一些实施例中,如图4所示,为本申请实施例提供的一种动态生成轨迹的人机识别方法的流程示意图,本申请实施例提供的动态生成轨迹的人机识别方法可以通过下述步骤S701至步骤S706来实现,以下对各个步骤进行说明。
S701、获取预先采集的轨迹特征,以及硬件设备环境数据,基于轨迹特征和硬件设备环境数据初步判断是否为真人操作。
在一些实施例中，预先采集的轨迹特征可以是客户端设备（与前述终端对应）在初始化人机识别系统期间采集的行为对象无意识、无规律、无目的的滑动操作，包括鼠标轨迹和触控拖拽轨迹（对应其他实施例中的预先采集的历史轨迹）等，该行为对象可能是真人，也可能是机器等，客户端在获得了预先采集的轨迹特征之后，可以将该轨迹特征发送至服务端。硬件设备环境数据可以是服务端在本次进行人机识别的过程中获取的客户端设备（移动设备，浏览器，触控屏等）的硬件环境数据（对应其他实施例中的当前硬件环境数据），比如鼠标事件，cookie，页面窗口，IP地址，Mac地址，网络环境，综合访问频率，地理位置等。
在基于轨迹特征和硬件设备环境数据初步判断是否为真人操作时,可以将预先采集的轨迹特征和之前人机识别过程中确定为真人操作时的轨迹特征进行匹配,同时将当前获得的硬件设备环境数据和之前人机识别过程中确定为真人操作时的硬件设备环境数据进行对比,若预先采集的轨迹特征和之前人机识别过程中确定为真人操作时的轨迹特征不匹配,和/或当前获得的硬件设备环境数据和之前人机识别过程中确定为真人操作时的硬件设备环境数据不同,则可以确定初步判断结果为非真人操作。
S702、若初步判断并非为真人操作,获取行程行为集合(对应其他实施例中的轨迹集合),基于行程行为集合、简单单向原则和贝塞尔曲线生成算法生成动态实时轨迹。
需要说明的是，行程行为集合可以为预先存储于数据库中的多条路径较短的轨迹。在一些实施例中，在生成动态实时轨迹之前，需要确定动态实时轨迹的起点，以及该起点对应的元行为，元行为可以为不能再分的行为（最小行为），例如水平方向朝右的，垂直方向朝下的。
在一些实施例中,在生成动态实时轨迹时,可以使开始点和结束点处于一块正方形区域的两侧,例如起点为左上角,结束点就在右下角;起点为左侧,结束点就在右侧等。在实现时,确定动态实时轨迹的起点对应多个方向的元行为,基于行程行为集合中的行程行为,控制下一步路径生成的起点和方向,并且满足简单单向原则。
在另一些实施例中,可以利用二次贝塞尔曲线生成算法,每次根据当前起点,从行为集中随机取一个行程行为,满足简单单向原则,然后取行程行为起点和终点之间的中点为控制点,将控制点移动到一个随机位置点,即生成当前阶段贝塞尔曲线路径,将该贝塞尔曲线路径作为动态实时轨迹。
S703、获取行为对象模拟动态实时轨迹后生成的第一轨迹,将动态实时轨迹与第一轨迹进行匹配,获得匹配结果。
在一些实施例中,在生成了动态实时轨迹之后,可以采集行为对象对动态实时轨迹执行跟随操作后生成的轨迹,即第一轨迹。在实际中,可以采集第一轨迹对应的跟随模拟数据,例如反应速率,反应间隔,按压拖拽轨迹,速度急缓等。在获得了第一轨迹之后,可以将动态实时轨迹和第一轨迹进行匹配,获得匹配结果。
在实现时，可以将第一轨迹的跟随模拟数据和动态实时轨迹对应的轨迹数据输入预先训练好的神经网络模型中进行相似度分析处理，获得动态实时轨迹与第一轨迹的匹配结果。在一些实施例中，与动态实时轨迹匹配成功的第一轨迹也可以作为验证集数据加入验证集中，以对预先训练好的神经网络模型进行验证；与动态实时轨迹匹配成功的第一轨迹也可以作为训练集中的数据加入训练集数据中，以对预先训练好的神经网络模型再次进行训练，保证神经网络模型对其他动态实时轨迹和对应行为对象的模拟轨迹进行相似度分析处理后，获得准确率更高的匹配结果。
S704、当动态实时轨迹的轨迹参数不满足判断目标值时,以动态实时轨迹的终点为起点继续生成新的动态实时轨迹。
在一些实施例中,动态实时轨迹的轨迹参数可以包括动态实时轨迹的生成时间、轨迹长度等,相应地,判断目标值可以包括预设时间长度阈值、预设轨迹长度阈值等。当生成的动态实时轨迹的生成时间小于预设时间长度阈值时,和/或动态实时轨迹的轨迹长度小于预设轨迹长度阈值时,可以确定动态实时轨迹的轨迹参数不满足判断目标值(对应其他实施例中的“确定未达到动态实时轨迹的生成结束条件”),此时,需要继续生成新的动态实时轨迹,新的动态实时轨迹以动态实时轨迹的终点为起点,新的动态实时轨迹的生成方法与动态实时轨迹的生成方法类似,此处不再赘述。
在另一些实施例中,当动态实时轨迹的轨迹参数满足判断目标值时,则可以根据动态实时轨迹与第一轨迹的匹配结果确定人机识别结果。若动态实时轨迹与第一轨迹匹配,则可以确定第一轨迹为真人操作触发产生;反之,则可以确定第一轨迹为机器操作产生。
S705、获取行为对象模拟新的动态实时轨迹后生成的第二轨迹,将新的动态实时轨迹与第二轨迹进行匹配,获得匹配结果。
在一些实施例中,在生成了新的动态实时轨迹之后,可以获取行为对象基于新的动态实时轨迹模拟之后生成的轨迹,即第二轨迹,之后,同样需要将第二轨迹和新的动态实时轨迹进行匹配,获得对应的匹配结果。
S706、当动态实时轨迹和新的动态实时轨迹所连接形成的参考动态实时轨迹的轨迹参数满足判断目标值时,基于各个匹配结果获得人机识别结果。
在一些实施例中，在生成了动态实时轨迹之后，生成的新的动态实时轨迹可能包括至少一条，动态实时轨迹和各条新的动态实时轨迹依次首尾相连可以构成参考动态实时轨迹。参考动态实时轨迹的轨迹参数可以包括生成参考动态实时轨迹所花费的时间，即动态实时轨迹的生成时间和各条新的动态实时轨迹的生成时间总和，参考动态实时轨迹的轨迹参数还可以包括参考动态实时轨迹的轨迹长度，即动态实时轨迹的轨迹长度和各条新的动态实时轨迹的轨迹长度之和。当参考动态实时轨迹的生成时间大于或等于预设时间长度阈值，且参考动态实时轨迹的轨迹长度大于或等于预设轨迹长度阈值，则可以基于动态实时轨迹和第一轨迹的匹配结果，以及各个新的动态实时轨迹与对应的第二轨迹的匹配结果确定人机识别结果。
在一些实施例中，可以确定全部匹配结果中，匹配结果为成功的比例值，若该比例值大于或等于预设比例阈值，则可以确定第一轨迹和各个第二轨迹由真人操作触发生成，即行为对象为真人用户，服务端确定客户端验证通过；若该比例值小于预设比例阈值，则可以确定第一轨迹和各个第二轨迹由机器操作触发生成，即行为对象为机器，服务端确定客户端验证不通过。在一些实施例中，在确定客户端验证通过后，服务端可以控制客户端登录成功、接收客户端提交的各种数据等。
可以理解的是,本申请实施例利用预先采集的轨迹特征,以及硬件设备环境数据,并结合实时随机生成的动态实时轨迹和对应的模拟轨迹进行匹配对比的方法来进行人机识别,由于动态实时轨迹是实时生成的,无法进行预判和学习,自动化工具和机器人等难以进行模拟,从而使得基于动态实时轨迹和模拟轨迹的匹配结果确定的人机识别结果将更加准确。
在一些实施例中,如图5所示,本申请实施例还提供一种基于动态实时轨迹的人机校验方法的流程示意图,以下以图5为例对该方法的执行过程进行说明。
步骤S1,根据获得的轨迹特征、硬件环境数据初步判断是否为真人操作。
在一些实施例中,根据获得的轨迹特征、硬件环境数据初步判断是否为真人操作可以认为是在生成动态实时轨迹之前的预处理操作。在一些实施例中,若初步判断确定为真人操作,则执行步骤S2;若初步判断确定并非为真人操作,则执行步骤S3。
步骤S2,确定人机校验结果通过。
步骤S3,持续生成动态实时轨迹,并获取行为对象的模拟轨迹。
在一些实施例中,模拟轨迹为行为对象基于动态实时轨迹执行跟随滑动操作后生成的轨迹。在实际中,若动态实时轨迹对应的生成时间未达到预设时间长度阈值,和/或动态实时轨迹的轨迹长度未达到预设长度阈值,则会持续生成动态实时轨迹,且每生成一段动态实时轨迹,将会获得对应的模拟轨迹,之后再生成下一段动态实时轨迹,直至各段动态实时轨迹对应的总生成时间达到预设时间长度阈值,且各段动态实时轨迹对应的总轨迹长度达到预设轨迹长度阈值,停止动态实时轨迹的生成。
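"持续生成动态实时轨迹直至满足结束条件"的流程可示意如下（每段轨迹的长度与生成耗时由上游提供，这里用假设的迭代器代替）：

```python
def generate_until_thresholds(segment_source, length_threshold, time_threshold):
    """循环从 segment_source 中取出轨迹段（以 (长度, 耗时) 表示），
    累计总轨迹长度与总生成时间，直至两者均达到预设阈值后停止生成。"""
    total_length = 0.0
    total_time = 0.0
    segments = []
    for length, duration in segment_source:
        segments.append((length, duration))
        total_length += length
        total_time += duration
        if total_length >= length_threshold and total_time >= time_threshold:
            break
    return segments, total_length, total_time
```

每生成一段轨迹即可获取对应的模拟轨迹并进行匹配，结束条件满足后停止生成下一段轨迹。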
步骤S4,将动态实时轨迹与对应的模拟轨迹进行对比匹配,确定是否匹配成功。
在一些实施例中，动态实时轨迹可能仅包括一段，在进行对比匹配时，将该段动态实时轨迹与对应的模拟轨迹进行匹配，从而可以获得匹配结果。在一些实施例中，若确定匹配成功，则执行步骤S5；反之，则执行步骤S6。
步骤S5,确定人机校验结果为真人操作。
在一些实施例中，动态实时轨迹可能包括多段，在进行对比匹配时，将各段动态实时轨迹与自身对应的模拟轨迹进行匹配，从而获得多个匹配结果，在实际中，可以根据多个匹配结果中匹配结果为匹配成功的比例值与预设比例值的大小关系确定人机校验结果是否为真人操作，例如若预设比例值为100%，则只有多个匹配结果均为匹配成功，才能确定人机校验结果为真人操作，即确定客户端验证通过。
步骤S6,确定人机校验结果为机器操作。
可以理解的是，在本申请实施例中，通过采集鼠标、触控轨迹等轨迹特征，结合实际硬件设备环境数据，初步判断是否为真人操作，若初步判断为真人操作，则人机校验结果为真人操作；当初步确定行为对象是机器人时，通过动态实时轨迹和对应模拟轨迹匹配的方式进行二次识别，提高了人机识别的准确率。同时，由于动态实时轨迹无法进行预判和学习，目前人工智能技术很难对该动态实时轨迹进行识别和破解，从而保证了数据的安全性，更进一步地提高了人机识别的准确性。
本申请还提供一种人机识别装置,图6为本申请实施例提供的一种人机识别装置的组成结构示意图,如图6所示,人机识别装置800包括:
第一生成模块801,用于若确定进行人机识别,生成动态实时轨迹;
输出模块802,用于输出动态实时轨迹;
第一匹配模块803,用于若检测到基于动态实时轨迹生成的第一轨迹,将第一轨迹和动态实时轨迹进行匹配,获得匹配结果;
第一确定模块804,用于若检测到动态实时轨迹的生成结束条件,基于匹配结果确定识别结果,识别结果用于表征第一轨迹是否由真人操作触发生成。
在一些实施例中,第一生成模块801包括:
第一获取子模块,用于若确定进行人机识别,获取预设的轨迹集合;
第一生成子模块,用于基于轨迹集合、简单单向原则和贝塞尔曲线算法,生成动态实时轨迹;其中,简单单向原则为确定起点和终点之间最短路径的原则。
在一些实施例中,人机识别装置800还包括:
第二生成模块,用于若未检测到动态实时轨迹的生成结束条件,以动态实时轨迹的终点为起点继续生成新的动态实时轨迹;
输出模块,还用于输出新的动态实时轨迹;
第二匹配模块,用于若检测到基于新的动态实时轨迹生成的第二轨迹,将第二轨迹和新的动态实时轨迹进行匹配,获得匹配结果;
第二确定模块,用于当确定达到动态实时轨迹的生成结束条件时,基于各个匹配结果确定识别结果。
在一些实施例中,第二确定模块包括:
第二获取子模块,用于获取动态实时轨迹和新的动态实时轨迹的总轨迹长度值,以及动态实时轨迹和新的动态实时轨迹的总生成时间;
第一确定子模块,用于当总轨迹长度值大于或等于预设长度阈值,且总生成时间大于或等于预设时间阈值时,确定各个匹配结果中匹配结果为第一轨迹和动态实时轨迹匹配成功,以及第二轨迹和新的动态实时轨迹匹配成功的比例值;
第二确定子模块,如果比例值大于或等于预设比例值,确定识别结果为第一轨迹和第二轨迹由真人操作触发生成;
第三确定子模块,如果比例值小于预设比例值,确定识别结果为第一轨迹和第二轨迹由机器操作触发生成。
在一些实施例中,第一生成模块801包括:
第四确定子模块,用于确定当前起点和预设运动方向;
第五确定子模块,用于基于简单单向原则、当前起点和预设运动方向,从轨迹集合中确定出第一目标轨迹;
第三获取子模块,用于确定第一目标轨迹的中点,并获取第一目标轨迹的起点和终点;
第二生成子模块,用于将中点移动至预设位置点,基于起点、预设位置点和终点,生成动态实时轨迹。
在一些实施例中,第一生成模块801还包括:
第六确定子模块,用于基于简单单向原则、当前起点和预设运动方向,从轨迹集合中确定出第二目标轨迹,第二目标轨迹的起点和当前起点相同;将第二目标轨迹确定为动态实时轨迹。
在一些实施例中,第一匹配模块803包括:
第四获取子模块,用于获取第一轨迹的第一轨迹数据,以及动态实时轨迹的第二轨迹数据;
第七确定子模块,用于确定第一轨迹数据和第二轨迹数据之间的误差值;
第八确定子模块,用于当误差值小于预设阈值时,确定匹配结果为第一轨迹和动态实时轨迹匹配成功;
第九确定子模块,当误差值大于或等于预设阈值时,确定匹配结果为第一轨迹和动态实时轨迹匹配失败。
在一些实施例中,人机识别装置800还包括:
第一获取模块,用于获取参考轨迹和预先采集的历史轨迹,参考轨迹为预先存储的由真人操作触发生成的轨迹;
第二获取模块,用于获取目标设备的历史硬件环境数据和当前硬件环境数据,目标设备为生成第一轨迹的设备;
第三确定模块,用于当历史轨迹和参考轨迹不满足匹配条件,和/或当前硬件环境数据和历史硬件环境数据不满足匹配条件时,确定进行人机识别。
需要说明的是,本申请实施例人机识别装置的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果,因此不做赘述。对于本装置实施例中未披露的技术细节,请参照本申请方法实施例的描述而理解。
需要说明的是，本申请实施例中，如果以软件功能模块的形式实现上述的人机识别方法，并作为独立的产品销售或使用时，也可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机、服务器、或者网络设备等）执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括：U盘、移动硬盘、只读存储器（Read Only Memory，ROM）、磁碟或者光盘等各种可以存储程序代码的介质。这样，本申请实施例不限制于任何特定的硬件和软件结合。
相应地,本申请实施例提供一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述实施例中提供的人机识别方法。
本申请实施例还提供一种人机识别设备,图7为本申请实施例提供的一种人机识别设备的组成结构示意图,如图7所示,所述人机识别设备900包括:存储器901、处理器902、通信接口903和通信总线904。其中,存储器901,用于存储可执行人机识别指令;处理器902,用于执行存储器中存储的可执行人机识别指令时,以实现以上述实施例提供的人机识别方法。
以上人机识别设备和存储介质实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本申请人机识别设备和存储介质实施例中未披露的技术细节,请参照本申请方法实施例的描述而理解。
应理解,说明书通篇中提到的“一些实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在整个说明书各处出现的“在一些实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的，作为单元显示的部件可以是、或也可以不是物理单元；既可以位于一个地方，也可以分布到多个网络单元上；可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本申请上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一个产品执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
本申请实施例提供一种人机识别方法、装置、设备及计算机可读存储介质。该方法包括:若确定进行人机识别,生成动态实时轨迹;输出动态实时轨迹;若检测到基于动态实时轨迹生成的第一轨迹,将第一轨迹和动态实时轨迹进行匹配,获得匹配结果;若检测到动态实时轨迹的生成结束条件,基于匹配结果确定识别结果。如此,由于生成动态实时轨迹是不固定的,无法进行预判和学习,可以保证数据的安全性,使得在确定达到动态实时轨迹的生成结束条件的情况下,基于第一轨迹和动态实时轨迹的匹配结果来确定识别结果,能够提高人机识别的准确性。

Claims (11)

  1. 一种人机识别方法,包括:
    若确定进行人机识别,生成动态实时轨迹;
    输出所述动态实时轨迹;
    若检测到基于所述动态实时轨迹生成的第一轨迹,将所述第一轨迹和所述动态实时轨迹进行匹配,获得匹配结果;
    若检测到动态实时轨迹的生成结束条件,基于所述匹配结果确定识别结果,所述识别结果用于表征所述第一轨迹是否由真人操作触发生成。
  2. 根据权利要求1所述的方法,其中,所述若确定进行人机识别,生成动态实时轨迹,包括:
    若确定进行人机识别,获取预设的轨迹集合;
    基于所述轨迹集合、简单单向原则和贝塞尔曲线算法,生成所述动态实时轨迹;其中,所述简单单向原则为确定起点和终点之间最短路径的原则。
  3. 根据权利要求1所述的方法,其中,所述方法还包括:
    若未检测到动态实时轨迹的生成结束条件,以所述动态实时轨迹的终点为起点继续生成新的动态实时轨迹;
    输出所述新的动态实时轨迹;
    若检测到基于所述新的动态实时轨迹生成的第二轨迹,将所述第二轨迹和所述新的动态实时轨迹进行匹配,获得匹配结果;
    若检测到动态实时轨迹的生成结束条件,基于各个匹配结果确定识别结果。
  4. 根据权利要求3所述的方法,其中,所述若检测到动态实时轨迹的生成结束条件,基于各个匹配结果确定识别结果,包括:
    获取所述动态实时轨迹和所述新的动态实时轨迹的总轨迹长度值,以及所述动态实时轨迹和所述新的动态实时轨迹的总生成时间;
    当所述总轨迹长度值大于或等于预设长度阈值,且所述总生成时间大于或等于预设时间阈值时,确定所述各个匹配结果中匹配结果为第一轨迹和动态实时轨迹匹配成功,以及第二轨迹和新的动态实时轨迹匹配成功的比例值;
    如果所述比例值大于或等于预设比例值,确定所述识别结果为所述第一轨迹和所述第二轨迹由真人操作触发生成;
    如果所述比例值小于所述预设比例值,确定所述识别结果为所述第一轨迹和所述第二轨迹由机器操作触发生成。
  5. 根据权利要求2所述的方法,其中,所述基于所述轨迹集合、简单单向原则和贝塞尔曲线算法生成动态实时轨迹,包括:
    确定当前起点和预设运动方向;
    基于所述简单单向原则、所述当前起点和所述预设运动方向,从所述轨迹集合中确定出第一目标轨迹;
    确定所述第一目标轨迹的中点,并获取所述第一目标轨迹的起点和终点;
    将所述中点移动至预设位置点,基于所述起点、预设位置点和所述终点,生成动态实时轨迹。
  6. 根据权利要求5所述的方法,其中,还包括:
    基于所述简单单向原则、所述当前起点和所述预设运动方向,从所述轨迹集合中确定出第二目标轨迹,所述第二目标轨迹的起点和所述当前起点相同;
    将所述第二目标轨迹确定为所述动态实时轨迹。
  7. 根据权利要求1所述的方法,其中,所述将所述第一轨迹和所述动态实时轨迹进行匹配,获得匹配结果,包括:
    获取所述第一轨迹的第一轨迹数据,以及所述动态实时轨迹的第二轨迹数据;
    确定所述第一轨迹数据和所述第二轨迹数据之间的误差值;
    当所述误差值小于预设阈值时,确定所述匹配结果为所述第一轨迹和所述动态实时轨迹匹配成功;
    当所述误差值大于或等于所述预设阈值时,确定所述匹配结果为所述第一轨迹和所述动态实时轨迹匹配失败。
  8. 根据权利要求1至7任一项所述的方法,其中,所述方法还包括:
    获取参考轨迹和预先采集的历史轨迹,所述参考轨迹为预先存储的由真人操作触发生成的轨迹;
    获取目标设备的历史硬件环境数据和当前硬件环境数据,所述目标设备为生成第一轨迹的设备;
    当所述历史轨迹和所述参考轨迹不满足匹配条件,和/或所述当前硬件环境数据和所述历史硬件环境数据不满足匹配条件时,确定进行人机识别。
  9. 一种人机识别装置,包括:
    第一生成模块,用于若确定进行人机识别,生成动态实时轨迹;
    输出模块,用于输出所述动态实时轨迹;
    第一匹配模块,用于若检测到基于所述动态实时轨迹生成的第一轨迹,将所述第一轨迹和所述动态实时轨迹进行匹配,获得匹配结果;
    第一确定模块,用于若检测到动态实时轨迹的生成结束条件,基于所述匹配结果确定识别结果,所述识别结果用于表征所述第一轨迹是否由真人操作触发生成。
  10. 一种人机识别设备,包括:
    存储器,用于存储可执行人机识别指令;
    处理器,用于执行所述存储器中存储的可执行人机识别指令时,实现权利要求1至8中任一项所述的方法。
  11. 一种计算机可读存储介质,存储有可执行人机识别指令,用于引起处理器执行时,实现如权利要求1至8中任一项所述的方法。
PCT/CN2023/126875 2022-10-31 2023-10-26 一种人机识别方法、装置、设备和计算机可读存储介质 WO2024093797A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211365931.2A CN118013479A (zh) 2022-10-31 2022-10-31 一种人机识别方法、装置、设备和计算机可读存储介质
CN202211365931.2 2022-10-31

Publications (1)

Publication Number Publication Date
WO2024093797A1 true WO2024093797A1 (zh) 2024-05-10

Family

ID=90929686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/126875 WO2024093797A1 (zh) 2022-10-31 2023-10-26 一种人机识别方法、装置、设备和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN118013479A (zh)
WO (1) WO2024093797A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975823A (zh) * 2016-05-05 2016-09-28 百度在线网络技术(北京)有限公司 用于区分人机的验证方法及装置
CN106815515A (zh) * 2016-12-12 2017-06-09 微梦创科网络科技(中国)有限公司 一种基于轨迹验证的验证码实现方法及装置
CN107766705A (zh) * 2016-08-23 2018-03-06 中国移动通信有限公司研究院 验证信息处理方法、客户端及验证平台
US20190034074A1 (en) * 2017-07-28 2019-01-31 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for recognizing a screen-off gesture, and storage medium and terminal thereof
CN109388934A (zh) * 2018-09-10 2019-02-26 平安科技(深圳)有限公司 信息验证方法、装置、计算机设备及存储介质
CN109933970A (zh) * 2017-12-15 2019-06-25 深圳市腾讯计算机系统有限公司 一种图形验证码检测方法、装置及存储介质
CN110532755A (zh) * 2019-08-09 2019-12-03 北京三快在线科技有限公司 一种计算机实现的风险识别的方法及装置


Also Published As

Publication number Publication date
CN118013479A (zh) 2024-05-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23884729

Country of ref document: EP

Kind code of ref document: A1