CN109684803A - Man-machine verification method based on gesture sliding - Google Patents

Man-machine verification method based on gesture sliding

Info

Publication number
CN109684803A
CN109684803A (application CN201811557562.0A)
Authority
CN
China
Prior art keywords
submodule
user
gesture
man-machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811557562.0A
Other languages
Chinese (zh)
Other versions
CN109684803B (en)
Inventor
高海昌
裴歌
罗赛男
常国沁
程诺
郑涵
张阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2018-12-19
Publication date: 2019-04-26
Application filed by Xidian University
Priority to CN201811557562.0A
Publication of CN109684803A
Application granted
Publication of CN109684803B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G06F21/44: Program or device authentication
    • G06F21/445: Program or device authentication by mutual authentication, e.g. between devices or programs
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107: Static hand or arm
    • G06F2221/00: Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21: Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements
    • G06F2221/2133: Verifying human interaction, e.g. Captcha

Abstract

The invention proposes a man-machine verification method based on gesture sliding, which effectively improves the security of man-machine verification while preserving user friendliness. The steps are: 1. construct a gesture data set; 2. improve the target detection network YOLO V3; 3. train the improved target detection network YOLO V3; 4. generate the user verification interface of the gesture-sliding verification code; 5. judge whether the mark of the hand center detected in real time has moved to the starting point of the target trajectory; 6. judge whether the mark of the hand center detected in real time has moved to the end point of the target trajectory; 7. perform man-machine verification on the user. By combining gesture-based and sliding-based man-machine verification, the invention reduces the risk of malicious attacks on the Internet and can be used to verify users in network scenarios such as login and registration.

Description

Man-machine verification method based on gesture sliding
Technical field
The invention belongs to the technical field of security protection and relates to a man-machine verification method, in particular to a man-machine verification method based on gesture sliding, which can be used to verify users in network scenarios such as login and registration.
Background art
Man-machine verification is a fully automatic Turing test that distinguishes computers from humans, commonly known as a verification code (CAPTCHA). As a simple and convenient defense mechanism in the field of computer security, verification codes are widely used in many applications to protect the Internet from malicious attacks, and they are the most important means of distinguishing humans from machines in network applications. Current verification codes fall into the following basic classes: text verification codes, graphical verification codes, and audio-video verification codes.
With the rapid development of computer technology, text and image verification codes are easily cracked with very high accuracy by computer vision and deep learning techniques. Novel verification-code forms built on these basic classes have therefore been proposed, and the gesture verification code is one of them; it mainly constructs a verification scheme from the differences between hand postures. For example, the patent application with publication number CN105718776A, entitled "A three-dimensional gesture verification method and system", discloses a three-dimensional gesture verification method that verifies a user by matching the gesture action sent to the client against the gesture action returned from the client, solving the problem that conventional verification codes are hard to recognize accurately by eye yet easily recognized by automated programs. Its defect is that the set of gesture actions that can be sent to the client is limited, and client behavior can be recorded and replayed, which reduces its security.
The sliding verification code is another novel form; it mainly uses mouse movement to generate a sliding trace whose legitimacy is then judged. For example, the patent application with publication number CN106991315A, entitled "Verification method and system of gesture verifying", discloses an identity authentication method based on sliding-trace features: a reference trajectory containing randomness and a preset minimum matching degree are generated, matching is judged from the similar trace drawn by the user, and the target trajectory is blended into the background picture using similar colors so that traditional image-processing methods fail, which improves the security of man-machine verification. However, because of the powerful localization ability of current target detection networks, the target trajectory is easily located, and automated testing tools can simply simulate mouse actions to draw a sliding trace, so human sliding behavior is easily imitated and the security of the method is reduced; moreover, the difference between simulated operations and the operations of legitimate users is not fully exploited. How to further improve the security of verification codes while preserving friendliness therefore remains an urgent problem in this field.
A target detection network is a deep neural network that detects specific targets by deep learning; such networks mainly divide into two classes, candidate-box-based and regression-based. The YOLO series, the representative regression-based work, casts the detection task as an end-to-end regression problem and earns its name by processing a picture only once to obtain both target positions and classification results, combining real-time speed with accuracy. YOLO V1, the first generation of the series, achieved the leap of localizing targets by regression and greatly increased detection speed, but its detection of closely spaced objects and small objects is unsatisfactory. YOLO V2 improves detection accuracy and reduces training difficulty by adding BN layers, using multi-scale training and raising the input image resolution, but the model is unstable. By contrast, YOLO V3, the latest version of the series, uses DarkNet-53 as its feature extraction network and doubles the speed while keeping accuracy unchanged.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by proposing a man-machine verification method based on gesture sliding that effectively improves the security of man-machine verification while preserving user friendliness, thereby reducing the risk of malicious attacks on the Internet.
To achieve the above object, the technical solution adopted by the invention includes the following steps:
(1) Construct the gesture data set:
(1a) Save the frames of the multi-frame gesture video shot by a camera into the JPEGImages folder as pictures and name every picture; take more than half of the pictures as the training sample set, part of the remaining pictures as the validation sample set, and the rest as the test sample set; write the names of the three sample sets into the train.txt, val.txt and test.txt files under the ImageSets/Main folder, and write the names of the training and validation sample sets into the trainval.txt file under the same folder;
(1b) Standardize the annotation information obtained from the bounding-box annotation of the gesture contained in every picture in the JPEGImages folder to obtain square bounding boxes, and save the results in xml format in the Annotations folder, the name of each xml file being identical to the name of the picture its annotation belongs to; then, from the Annotations folder, choose the xml files whose names match the picture names in train.txt as the annotation set of the training samples, those matching val.txt as the annotation set of the validation samples, and those matching test.txt as the annotation set of the test samples;
(1c) Take the training sample set with its annotation set as the training set, the validation sample set with its annotation set as the validation set, and the test sample set with its annotation set as the test set, and merge the training, validation and test sets into the gesture data set;
(2) Improve the target detection network YOLO V3:
Reconstruct the feature extraction module of the target detection network YOLO V3 from Conv_cRelu, Mixed_5b_dilation_module and InvertedResidual submodules, and at the same time delete the output layer of the target prediction module of YOLO V3 that fuses the shallowest features, obtaining the improved target detection network YOLO V3;
(3) Train the improved target detection network YOLO V3:
(3a) Initialize the training parameters of the improved target detection network YOLO V3 with random numbers;
(3b) Randomly select a fixed number of pictures from the training set of the gesture data set as a batch and perform data augmentation on them; input the resulting training set containing diversified samples into the improved target detection network YOLO V3 and carry out N iterations in total, N >= 10000, obtaining the trained improved target detection network YOLO V3;
(4) Generate the user verification interface of the gesture-sliding verification code:
(4a) Delimit the validation region of the verification code on the screen of the user terminal, feed the content shot by the camera of the user terminal back into the validation region in real time, and continuously detect the fed-back content with the trained improved target detection network YOLO V3; whenever a hand is present in the fed-back content, mark the center of the hand;
(4b) Mark two or more random points in the validation region, fix the position of every random point, and connect the random points in turn with straight lines to form the target trajectory; the content fed back in the validation region, the mark of the hand center and the target trajectory form the user verification interface of the gesture-sliding verification code;
(5) Judge whether the mark of the hand center detected in real time has moved to the starting point of the target trajectory:
For a period T, continuously judge whether the mark of the hand center detected in real time has moved to the starting point of the target trajectory; if so, continuously record the coordinate points of the user's hand center and execute step (6); otherwise execute step (4); T >= 30 s;
(6) Judge whether the mark of the hand center detected in real time has moved to the end point of the target trajectory:
For a period T, continuously judge whether the mark of the hand center detected in real time has moved to the end point of the target trajectory; if so, stop recording the coordinate points of the user's hand center, take the recorded content as the user's gesture sliding information, and execute step (7); otherwise execute step (4); T >= 30 s;
(7) Perform man-machine verification on the user:
(7a) Perform preliminary man-machine verification on the user through the abnormal points:
Judge whether the distance from each coordinate point in the user's gesture sliding information to the target trajectory is less than S; if so, the point is a normal point, otherwise it is an abnormal point. Compute the center coordinate of all abnormal points and the average distance from each abnormal point to that center; if the average is less than M, conclude that the abnormal points were caused by a slip of a legitimate user, remove them from the record of hand-center coordinate points and execute step (7b); otherwise judge the user to be a machine; here S >= 100, M >= 2;
(7b) Perform final man-machine verification on the user through the target trajectory:
Let D be the maximum distance from the normal points to the target trajectory and judge whether D <= 20; if so, the user is legitimate. Otherwise judge whether the proportion P of normal points lying within αD of the target trajectory satisfies P >= β; if so, judge the user to be human, otherwise judge the user to be a machine; here α >= 0.5, β >= 0.5.
Compared with the prior art, the invention has the following advantages:
The invention combines the gesture sliding process with a target detection network: the network detects the position of the hand during gesture sliding to obtain the user's sliding trace, and abnormal-point filtering followed by sliding-trace analysis compares the match between the user's sliding trace and the target trajectory, giving highly accurate man-machine classification. Because the target trajectory obtained each time is random, malicious attacks that replay previously retained videos of legitimate user operations are avoided, and a machine can hardly imitate a user's gesture sliding behavior in front of the camera, so the security of man-machine verification is greatly improved. Moreover, the user only needs a simple gesture motion to complete verification, so the operation is simpler and the user experience is assured.
Brief description of the drawings
Fig. 1 is the implementation flow chart of the invention;
Fig. 2 is a schematic diagram of a gesture verification code of the invention with an inflection point;
Fig. 3 is a schematic diagram of a gesture verification code of the invention without an inflection point;
Fig. 4 is a schematic diagram of the verification process of the invention.
Specific embodiment
The invention is described in further detail below in conjunction with the drawings and a specific embodiment.
Referring to Fig. 1, a man-machine verification method based on gesture sliding includes the following steps:
Step 1) Construct the gesture data set:
Step 1a) From the multi-frame gesture video shot by the camera, extract one frame every 15 frames and save it as a picture in the JPEGImages folder, naming every picture; the video resolution is 1920 × 1080 and 80000 pictures are retained in the JPEGImages folder. Take 70000 pictures as the training sample set, 5000 pictures as the validation sample set and 5000 pictures as the test sample set; write the names of the three sample sets into the train.txt, val.txt and test.txt files under the ImageSets/Main folder, and write the names of the training and validation sample sets into the trainval.txt file under the same folder, as in the sketch below;
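The following is a minimal Python sketch of step 1a, assuming OpenCV (cv2) is available for frame grabbing. The video path, the zero-padded naming scheme and the exact split boundaries are illustrative assumptions; the every-15th-frame rule, the folder layout and the 70000/5000/5000 split follow the embodiment.

```python
import os
import cv2  # OpenCV, assumed available for reading the gesture video

def build_dataset(video_path, root="VOCdevkit"):
    """Keep every 15th frame of the 1920x1080 gesture video as a JPEG
    and write the VOC-style split files (train/val/test/trainval)."""
    img_dir = os.path.join(root, "JPEGImages")
    main_dir = os.path.join(root, "ImageSets", "Main")
    os.makedirs(img_dir, exist_ok=True)
    os.makedirs(main_dir, exist_ok=True)

    cap, names, idx = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % 15 == 0:  # extract one frame every 15 frames
            name = f"{idx:08d}"  # naming scheme is an assumption
            cv2.imwrite(os.path.join(img_dir, name + ".jpg"), frame)
            names.append(name)
        idx += 1
    cap.release()

    # 70000 / 5000 / 5000 split of the 80000 retained pictures
    train, val, test = names[:70000], names[70000:75000], names[75000:80000]
    for fname, subset in [("train.txt", train), ("val.txt", val),
                          ("test.txt", test), ("trainval.txt", train + val)]:
        with open(os.path.join(main_dir, fname), "w") as f:
            f.write("\n".join(subset))
```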
Step 1b) Annotate the hand contained in every picture in the JPEGImages folder:
Step 1b1) Annotate the position coordinates (x1, y1, x2, y2) of the hand, where x1 and y1 are the abscissa and ordinate of the upper-left corner of the rectangle enclosing the hand in the image, and x2 and y2 are the abscissa and ordinate of its lower-right corner;
Step 1b2) Standardize the annotated information to obtain square bounding boxes, which reduces the training burden and the training difficulty. Several standardization methods exist, such as adjusting the length of the long side of the rectangular bounding box to that of the short side, or scaling uniformly to a random length; in this embodiment the length of the short side of the rectangular bounding box is adjusted to the length of the long side, which guarantees that the standardized box still covers the whole hand (see the sketch after step 1b3);
Step 1b3) Save the results in xml format in the Annotations folder, the name of each xml file being identical to the name of the picture its annotation belongs to; then, from the Annotations folder, choose the xml files whose names match the picture names in train.txt as the annotation set of the training samples, those matching val.txt as the annotation set of the validation samples, and those matching test.txt as the annotation set of the test samples;
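A minimal sketch of the squaring rule of step 1b2. The embodiment only states that the short side is stretched to the length of the long side; keeping the box center fixed while doing so is an assumption.

```python
def square_box(x1, y1, x2, y2):
    """Stretch the short side of the box (x1, y1, x2, y2) to the length
    of the long side, keeping the box center fixed (an assumption)."""
    side = max(x2 - x1, y2 - y1)
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return (cx - side / 2.0, cy - side / 2.0,
            cx + side / 2.0, cy + side / 2.0)
```

The squared coordinates would then be written into the xml annotation file in place of the original rectangle.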
Step 1c) Take the training sample set with its annotation set as the training set, the validation sample set with its annotation set as the validation set, and the test sample set with its annotation set as the test set, and merge the training, validation and test sets into the gesture data set.
Step 2) Improve the target detection network YOLO V3:
In this embodiment the target detection network YOLO V3 is improved so that it can accurately locate the hand center within 50 milliseconds, which makes it applicable to the real-time detection stage of the man-machine verification process of the invention.
Step 2a) Reconstruct the feature extraction module of the target detection network YOLO V3 from Conv_cRelu, Mixed_5b_dilation_module and InvertedResidual submodules, connected in the following order (each submodule listed with its number of feature maps): Conv_cRelu (kernel size 7, stride 4, 10 feature maps) → Conv_cRelu (kernel size 3, stride 2, 12 feature maps) → Mixed_5b_dilation_module (32 feature maps) → InvertedResidual (32 feature maps) → Mixed_5b_dilation_module (48 feature maps) → InvertedResidual (48 feature maps) → Mixed_5b_dilation_module (60 feature maps) → InvertedResidual (60 feature maps). The three submodule types are defined as follows; PyTorch sketches of them are given after step 2a3:
Step 2a1) Conv_cRelu submodule: feeds the convolution result of the submodule's input and the negative of that convolution result into two Relu nonlinear activation functions separately, and splices the outputs of the two Relu functions into the output of the submodule;
Step 2a2) Mixed_5b_dilation_module submodule: contains four branches. One branch applies a convolution with kernel size 1 and stride 1 to the submodule's input; one branch applies two serial convolutions with kernel sizes 1 and 3 and strides 1 and 1; one branch applies three serial convolutions with kernel sizes 1, 3 and 3 and strides 1, 1 and 1; and one branch applies average pooling with window size 3 and stride 1 followed by a convolution with kernel size 1 and stride 1. The outputs of the four branches are spliced into the output of the submodule;
Step 2a3) InvertedResidual submodule: contains three operation groups. The first applies to the submodule's input a convolution with kernel size 1, stride 1 and edge-extension (padding) parameter 0, followed by batch normalization and a Relu nonlinear mapping; the second applies to the result of the first group a convolution with kernel size 3, stride 2 and padding parameter 1, followed by batch normalization and a Relu nonlinear mapping; the third applies to the result of the second group a convolution with kernel size 1, stride 1 and padding parameter 0, followed by batch normalization.
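Below is a minimal PyTorch sketch of the three submodules as described in steps 2a1) to 2a3). The per-branch channel widths, the paddings that preserve spatial size, and the expansion width of the InvertedResidual block are assumptions, since the embodiment only gives per-stage feature-map counts; note also that the text describes plain convolutions, so no dilation or depthwise convolution is used here despite the module names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvCRelu(nn.Module):
    """Step 2a1): conv, then feed y and -y into two ReLUs and splice."""
    def __init__(self, in_ch, out_ch, kernel_size, stride):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                              padding=kernel_size // 2)

    def forward(self, x):
        y = self.conv(x)
        return torch.cat([F.relu(y), F.relu(-y)], dim=1)  # doubles channels

class Mixed5bDilationModule(nn.Module):
    """Step 2a2): four Inception-style branches, outputs spliced."""
    def __init__(self, in_ch, branch_ch):  # branch_ch is an assumption
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, 1, 1)
        self.b2 = nn.Sequential(  # 1x1 then 3x3, strides 1 and 1
            nn.Conv2d(in_ch, branch_ch, 1, 1),
            nn.Conv2d(branch_ch, branch_ch, 3, 1, padding=1))
        self.b3 = nn.Sequential(  # 1x1 then 3x3 then 3x3
            nn.Conv2d(in_ch, branch_ch, 1, 1),
            nn.Conv2d(branch_ch, branch_ch, 3, 1, padding=1),
            nn.Conv2d(branch_ch, branch_ch, 3, 1, padding=1))
        self.b4 = nn.Sequential(  # 3x3 average pooling then 1x1 conv
            nn.AvgPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, branch_ch, 1, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x),
                          self.b3(x), self.b4(x)], dim=1)

class InvertedResidual(nn.Module):
    """Step 2a3): 1x1 + BN + ReLU, 3x3 stride-2 + BN + ReLU, 1x1 + BN."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, 1, 0),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, 2, 1),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, 1, 0),
            nn.BatchNorm2d(out_ch))

    def forward(self, x):
        return self.block(x)
```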
Step 2b) Delete the output layer of the target prediction module of YOLO V3 that fuses the shallowest features, obtaining the improved target detection network YOLO V3.
Step 3) Train the improved target detection network YOLO V3:
Step 3a) Initialize the training parameters of the improved target detection network YOLO V3 with random numbers;
Step 3b) Randomly select 64 pictures from the training set of the gesture data set as a batch and perform data augmentation, i.e. subject each picture in the batch to equal-proportion random cropping, random hue, brightness and saturation adjustment and random equal-proportion padding, and scale it to the 448 × 448 input size required by the network (a sketch is given below). Input the resulting training set containing diversified samples into the improved target detection network YOLO V3 and carry out 546875 iterations in total. The loss function may combine any classification loss with any regression loss; to improve detection accuracy, this embodiment combines a cross-entropy loss with a sum-of-squared-errors loss. This yields the trained improved target detection network YOLO V3.
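A minimal sketch of the step 3b augmentation, assuming OpenCV and NumPy; the jitter ranges and the crop scale are illustrative assumptions, and the matching geometric transform of the bounding boxes is omitted.

```python
import random
import cv2
import numpy as np

def augment(img, size=448):
    """Equal-proportion random crop, random hue/saturation/brightness
    jitter, equal-proportion padding to a square, resize to 448x448."""
    h, w = img.shape[:2]
    s = random.uniform(0.8, 1.0)               # crop scale is an assumption
    ch, cw = int(h * s), int(w * s)
    y0, x0 = random.randint(0, h - ch), random.randint(0, w - cw)
    img = img[y0:y0 + ch, x0:x0 + cw]

    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] = (hsv[..., 0] + random.uniform(-10, 10)) % 180  # hue
    hsv[..., 1] *= random.uniform(0.7, 1.3)                      # saturation
    hsv[..., 2] *= random.uniform(0.7, 1.3)                      # brightness
    img = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8),
                       cv2.COLOR_HSV2BGR)

    side = max(img.shape[:2])                  # pad to a square without
    pad = np.zeros((side, side, 3), np.uint8)  # changing the aspect ratio
    pad[:img.shape[0], :img.shape[1]] = img
    return cv2.resize(pad, (size, size))
```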
Step 4) Generate the user verification interface of the gesture-sliding verification code:
Step 4a) Delimit the validation region of the verification code on the screen of the user terminal and feed the content shot by the camera of the user terminal back into the validation region in real time; then continuously detect the fed-back content with the trained improved target detection network YOLO V3. Whenever a hand is present in the fed-back content, mark the center of the hand with a yellow dot of radius 10 pixels; the mark moves as the position of the user's hand changes;
Step 4b) Mark two or more random points in the validation region, fix the position of every random point, and connect the random points in turn with straight lines to form the target trajectory; the content fed back in the validation region, the mark of the hand center and the target trajectory form the user verification interface of the gesture-sliding verification code (see the sketch below). A gesture verification code with an inflection point is shown schematically in Fig. 2, where marks 1, 2 and 3 represent the starting point, end point and inflection point of the trajectory; one without an inflection point is shown in Fig. 3, where marks 1 and 2 represent the starting point and end point; the verification process is shown in Fig. 4, where marks 1, 2, 3 and 4 represent the starting point, end point and inflection point of the trajectory and the mark of the hand center.
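A minimal sketch of step 4b, assuming OpenCV. The number of random points and the margin keeping them inside the region are illustrative choices (the embodiment only requires two or more fixed random points joined by straight lines), and the hand-center marker of step 4a is drawn the same way with cv2.circle.

```python
import random
import cv2
import numpy as np

def make_target_trajectory(width, height, n_points=3, margin=50):
    """Pick n fixed random points in the validation region and join
    consecutive points with straight lines to form the target trajectory."""
    pts = [(random.randint(margin, width - margin),
            random.randint(margin, height - margin))
           for _ in range(n_points)]
    overlay = np.zeros((height, width, 3), np.uint8)
    for a, b in zip(pts, pts[1:]):
        cv2.line(overlay, a, b, (255, 255, 255), 2)
    return pts, overlay  # overlay is blended onto the live camera feedback

def mark_hand_center(frame, center):
    """Step 4a marker: a yellow dot of radius 10 pixels (BGR order)."""
    cv2.circle(frame, center, 10, (0, 255, 255), -1)
```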
Step 5) Judge whether the mark of the hand center detected in real time has moved to the starting point of the target trajectory:
For 30 s, continuously judge whether the mark of the hand center detected in real time has moved to the starting point of the target trajectory; if so, continuously record the coordinate points of the user's hand center and execute step 6); otherwise execute step 4);
Step 6) Judge whether the mark of the hand center detected in real time has moved to the end point of the target trajectory:
For 30 s, continuously judge whether the mark of the hand center detected in real time has moved to the end point of the target trajectory; if so, stop recording the coordinate points of the user's hand center, take the recorded content as the user's gesture sliding information, and execute step 7); otherwise execute step 4). A combined sketch of steps 5) and 6) is given below.
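A combined sketch of steps 5) and 6), assuming a hypothetical callable detect_hand() that runs the trained improved YOLO V3 on the current camera frame and returns the hand center (x, y) or None; the hit-tolerance radius is an assumption, while the 30 s timeout follows the embodiment.

```python
import time

def record_gesture(detect_hand, start_pt, end_pt, radius=15, timeout=30.0):
    """Wait (up to 30 s) for the hand-center mark to reach the starting
    point, then record its coordinates until it reaches the end point.
    Returns the recorded trace (the user's gesture sliding information),
    or None on timeout, in which case the interface is regenerated
    (back to step 4)."""
    def near(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2

    deadline = time.time() + timeout       # step 5: reach the start point
    while time.time() < deadline:
        c = detect_hand()
        if c is not None and near(c, start_pt):
            break
    else:
        return None

    trace = [c]
    deadline = time.time() + timeout       # step 6: reach the end point
    while time.time() < deadline:
        c = detect_hand()
        if c is None:
            continue
        trace.append(c)                    # record the hand-center points
        if near(c, end_pt):
            return trace
    return None
```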
Step 7) Perform man-machine verification on the user:
Step 7a) Perform preliminary man-machine verification on the user through the abnormal points:
Judge whether the distance from each recorded hand-center coordinate point to the target trajectory is less than 100 pixels; if so, the point is a normal point, otherwise it is an abnormal point. Compute the center coordinate of all abnormal points and the average distance from each abnormal point to that center; if the average is less than 2 pixels, conclude that the abnormal points were caused by a slip of a legitimate user, remove them from the record of hand-center coordinate points and execute step 7b); otherwise judge the user to be a machine (a sketch is given below).
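A minimal sketch of step 7a, assuming NumPy and measuring the distance to the target trajectory as the distance to the nearest of its line segments; S = 100 and M = 2 pixels follow the embodiment. The helper dist_to_polyline is reused by the step 7b sketch further below.

```python
import numpy as np

def dist_to_polyline(p, pts):
    """Distance from point p to the polyline through the random points."""
    p, best = np.asarray(p, float), np.inf
    for a, b in zip(pts, pts[1:]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        t = np.clip(np.dot(p - a, b - a) /
                    max(np.dot(b - a, b - a), 1e-9), 0.0, 1.0)
        best = min(best, float(np.linalg.norm(p - (a + t * (b - a)))))
    return best

def filter_abnormal(trace, pts, S=100, M=2):
    """Split the trace into normal/abnormal points; drop the abnormal
    ones as a legitimate slip if they cluster within M pixels of their
    own center, otherwise judge the user a machine.
    Returns (normal_points, passed)."""
    d = [dist_to_polyline(p, pts) for p in trace]
    normal = [p for p, x in zip(trace, d) if x < S]
    abnormal = [p for p, x in zip(trace, d) if x >= S]
    if not abnormal:
        return normal, True
    center = np.mean(np.asarray(abnormal, float), axis=0)
    mean_d = float(np.mean([np.linalg.norm(np.asarray(p, float) - center)
                            for p in abnormal]))
    return normal, mean_d < M
```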
Step 7b) Perform final man-machine verification on the user through the target trajectory:
Let D be the maximum distance from the normal points to the target trajectory and judge whether D <= 20; if so, the user is legitimate. Otherwise judge whether the proportion P of normal points lying within αD of the target trajectory satisfies P >= β; if so, judge the user to be human, otherwise judge the user to be a machine. Only α >= 0.5 and β >= 0.5 need hold; to balance security and user friendliness, this embodiment uses α = 0.8 and β = 0.7 (a sketch is given below).
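A minimal sketch of step 7b, reusing dist_to_polyline from the step 7a sketch; α = 0.8 and β = 0.7 are the embodiment's values, and the 20-pixel interpretation of the threshold on D is an assumption.

```python
def verify_trace(normal_points, pts, alpha=0.8, beta=0.7):
    """Final verification: accept outright if the largest normal-point
    distance D to the target trajectory satisfies D <= 20; otherwise
    accept only if a fraction P >= beta of the normal points lies
    within alpha * D of the trajectory. Returns True for 'human'."""
    if not normal_points:
        return False  # nothing was recorded near the trajectory
    d = [dist_to_polyline(p, pts) for p in normal_points]
    D = max(d)
    if D <= 20:
        return True
    P = sum(1 for x in d if x <= alpha * D) / len(d)
    return P >= beta
```

A full check would chain the two stages: normal, ok = filter_abnormal(trace, pts), then human = ok and verify_trace(normal, pts).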

Claims (4)

1. A man-machine verification method based on gesture sliding, characterized by comprising the following steps:
(1) Construct the gesture data set:
(1a) Save the frames of the multi-frame gesture video shot by a camera into the JPEGImages folder as pictures and name every picture; take more than half of the pictures as the training sample set, part of the remaining pictures as the validation sample set, and the rest as the test sample set; write the names of the three sample sets into the train.txt, val.txt and test.txt files under the ImageSets/Main folder, and write the names of the training and validation sample sets into the trainval.txt file under the same folder;
(1b) Standardize the annotation information obtained from the bounding-box annotation of the gesture contained in every picture in the JPEGImages folder to obtain square bounding boxes, and save the results in xml format in the Annotations folder, the name of each xml file being identical to the name of the picture its annotation belongs to; then, from the Annotations folder, choose the xml files whose names match the picture names in train.txt as the annotation set of the training samples, those matching val.txt as the annotation set of the validation samples, and those matching test.txt as the annotation set of the test samples;
(1c) Take the training sample set with its annotation set as the training set, the validation sample set with its annotation set as the validation set, and the test sample set with its annotation set as the test set, and merge the training, validation and test sets into the gesture data set;
(2) Improve the target detection network YOLO V3:
Reconstruct the feature extraction module of the target detection network YOLO V3 from Conv_cRelu, Mixed_5b_dilation_module and InvertedResidual submodules, and at the same time delete the output layer of the target prediction module of YOLO V3 that fuses the shallowest features, obtaining the improved target detection network YOLO V3;
(3) Train the improved target detection network YOLO V3:
(3a) Initialize the training parameters of the improved target detection network YOLO V3 with random numbers;
(3b) Randomly select a fixed number of pictures from the training set of the gesture data set as a batch and perform data augmentation on them; input the resulting training set containing diversified samples into the improved target detection network YOLO V3 and carry out N iterations in total, N >= 10000, obtaining the trained improved target detection network YOLO V3;
(4) Generate the user verification interface of the gesture-sliding verification code:
(4a) Delimit the validation region of the verification code on the screen of the user terminal, feed the content shot by the camera of the user terminal back into the validation region in real time, and continuously detect the fed-back content with the trained improved target detection network YOLO V3; whenever a hand is present in the fed-back content, mark the center of the hand;
(4b) Mark two or more random points in the validation region, fix the position of every random point, and connect the random points in turn with straight lines to form the target trajectory; the content fed back in the validation region, the mark of the hand center and the target trajectory form the user verification interface of the gesture-sliding verification code;
(5) Judge whether the mark of the hand center detected in real time has moved to the starting point of the target trajectory:
For a period T, continuously judge whether the mark of the hand center detected in real time has moved to the starting point of the target trajectory; if so, continuously record the coordinate points of the user's hand center and execute step (6); otherwise execute step (4); T >= 30 s;
(6) Judge whether the mark of the hand center detected in real time has moved to the end point of the target trajectory:
For a period T, continuously judge whether the mark of the hand center detected in real time has moved to the end point of the target trajectory; if so, stop recording the coordinate points of the user's hand center, take the recorded content as the user's gesture sliding information, and execute step (7); otherwise execute step (4); T >= 30 s;
(7) Perform man-machine verification on the user:
(7a) Perform preliminary man-machine verification on the user through the abnormal points:
Judge whether the distance from each coordinate point in the user's gesture sliding information to the target trajectory is less than S; if so, the point is a normal point, otherwise it is an abnormal point; compute the center coordinate of all abnormal points and the average distance from each abnormal point to that center; if the average is less than M, conclude that the abnormal points were caused by a slip of a legitimate user, remove them from the record of hand-center coordinate points and execute step (7b); otherwise judge the user to be a machine; S >= 100, M >= 2;
(7b) Perform final man-machine verification on the user through the target trajectory:
Let D be the maximum distance from the normal points to the target trajectory and judge whether D <= 20; if so, the user is legitimate; otherwise judge whether the proportion P of normal points lying within αD of the target trajectory satisfies P >= β; if so, judge the user to be human, otherwise judge the user to be a machine; α >= 0.5, β >= 0.5.
2. The man-machine verification method based on gesture sliding according to claim 1, characterized in that the standardization, described in step (1b), of the annotation information obtained from the bounding-box annotation of the gesture contained in every picture in the JPEGImages folder is performed as follows:
Adjust the length of the short side of the rectangular bounding box in the annotation information to the length of its long side, obtaining a square bounding box.
3. The man-machine verification method based on gesture sliding according to claim 1, characterized in that the reconstruction of the feature extraction module described in step (2) is realized as the following chain (each submodule listed with its number of feature maps):
Conv_cRelu (kernel size 7, stride 4, 10 feature maps) → Conv_cRelu (kernel size 3, stride 2, 12 feature maps) → Mixed_5b_dilation_module (32 feature maps) → InvertedResidual (32 feature maps) → Mixed_5b_dilation_module (48 feature maps) → InvertedResidual (48 feature maps) → Mixed_5b_dilation_module (60 feature maps) → InvertedResidual (60 feature maps), in which:
Conv_cRelu submodule: feeds the convolution result of the submodule's input and the negative of that convolution result into two Relu nonlinear activation functions separately, and splices the outputs of the two Relu functions into the output of the submodule;
Mixed_5b_dilation_module submodule: contains four branches: one branch applies a convolution with kernel size 1 and stride 1 to the submodule's input; one branch applies two serial convolutions with kernel sizes 1 and 3 and strides 1 and 1; one branch applies three serial convolutions with kernel sizes 1, 3 and 3 and strides 1, 1 and 1; one branch applies average pooling with window size 3 and stride 1 followed by a convolution with kernel size 1 and stride 1; the outputs of the four branches are spliced into the output of the submodule;
InvertedResidual submodule: contains three operation groups: the first applies to the submodule's input a convolution with kernel size 1, stride 1 and edge-extension (padding) parameter 0, followed by batch normalization and a Relu nonlinear mapping; the second applies to the result of the first group a convolution with kernel size 3, stride 2 and padding parameter 1, followed by batch normalization and a Relu nonlinear mapping; the third applies to the result of the second group a convolution with kernel size 1, stride 1 and padding parameter 0, followed by batch normalization.
4. The man-machine verification method based on gesture sliding according to claim 1, characterized in that the data augmentation, described in step (3b), of the fixed number of pictures randomly selected from the training set of the gesture data set as a batch is realized as follows:
Each picture in the batch undergoes equal-proportion random cropping, random hue, brightness and saturation adjustment and random equal-proportion padding, and is scaled to the input size required by the network, obtaining the training set containing diversified samples.
CN201811557562.0A 2018-12-19 2018-12-19 Man-machine verification method based on gesture sliding Active CN109684803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811557562.0A CN109684803B (en) 2018-12-19 2018-12-19 Man-machine verification method based on gesture sliding


Publications (2)

Publication Number Publication Date
CN109684803A 2019-04-26
CN109684803B 2021-04-20

Family

ID=66186364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811557562.0A Active CN109684803B (en) 2018-12-19 2018-12-19 Man-machine verification method based on gesture sliding

Country Status (1)

Country Link
CN (1) CN109684803B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180032137A1 (en) * 2016-07-26 2018-02-01 Toyota Motor Engineering & Manufacturing North America, Inc. Human machine interface with haptic response based on phased array lidar
CN107423760A (en) * 2017-07-21 2017-12-01 西安电子科技大学 Based on pre-segmentation and the deep learning object detection method returned

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN QINMIN (范钦民): "SSD Object Detection Based on Multi-Layer Feature Fusion", China Master's Theses Full-Text Database, Information Science and Technology *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070074A (en) * 2019-05-07 2019-07-30 安徽工业大学 A method of building pedestrian detection model
CN110807183A (en) * 2019-10-12 2020-02-18 广州多益网络股份有限公司 Sliding verification code man-machine behavior identification method of multi-dimensional feature system
CN110868327A (en) * 2019-11-28 2020-03-06 武汉极意网络科技有限公司 Behavior verification control method, behavior verification control device, behavior verification control equipment and storage medium
CN113450573A (en) * 2020-03-25 2021-09-28 重庆翼动科技有限公司 Traffic monitoring method and traffic monitoring system based on unmanned aerial vehicle image recognition
CN111382723A (en) * 2020-03-30 2020-07-07 北京云住养科技有限公司 Method, device and system for identifying help
CN111738056A (en) * 2020-04-27 2020-10-02 浙江万里学院 Heavy truck blind area target detection method based on improved YOLO v3
CN111738056B (en) * 2020-04-27 2023-11-03 浙江万里学院 Heavy truck blind area target detection method based on improved YOLO v3
CN111860160A (en) * 2020-06-16 2020-10-30 北京华电天仁电力控制技术有限公司 Method for detecting wearing of mask indoors
CN111860160B (en) * 2020-06-16 2023-12-12 国能信控互联技术有限公司 Method for detecting wearing of mask indoors
CN112016077A (en) * 2020-07-14 2020-12-01 北京淇瑀信息科技有限公司 Page information acquisition method and device based on sliding track simulation and electronic equipment
CN112016077B (en) * 2020-07-14 2024-03-12 北京淇瑀信息科技有限公司 Page information acquisition method and device based on sliding track simulation and electronic equipment
CN112380508B (en) * 2020-11-16 2022-10-21 西安电子科技大学 Man-machine verification method based on common knowledge
WO2022099685A1 (en) * 2020-11-16 2022-05-19 深圳市优必选科技股份有限公司 Data enhancement method and apparatus for gesture recognition, computer device, and storage medium
CN112380508A (en) * 2020-11-16 2021-02-19 西安电子科技大学 Man-machine verification method based on common knowledge
CN113256724A (en) * 2021-07-07 2021-08-13 上海影创信息科技有限公司 Handle inside-out vision 6-degree-of-freedom positioning method and system
CN114465724B (en) * 2022-02-24 2023-11-03 深圳软牛科技有限公司 Verification code generation and verification method, client, server and system
CN114465724A (en) * 2022-02-24 2022-05-10 深圳软牛科技有限公司 Verification code generation and verification method, client, server and system

Also Published As

Publication number Publication date
CN109684803B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN109684803A (en) Man-machine verification method based on gesture sliding
CN105955889B (en) A kind of graphical interfaces automated testing method
CN109034069B (en) Method and apparatus for generating information
US10162742B2 (en) System and method for end to end performance response time measurement based on graphic recognition
CN106855421A (en) A kind of automobile instrument automatization test system and method for testing based on machine vision
CN109389153A (en) A kind of holographic false proof code check method and device
CN111310156B (en) Automatic identification method and system for slider verification code
CN111309222B (en) Sliding block notch positioning and dragging track generation method for sliding block verification code
CN110503099B (en) Information identification method based on deep learning and related equipment
CN110222148A (en) Method for evaluating confidence and device suitable for syntactic analysis
CN109272003A (en) A kind of method and apparatus for eliminating unknown error in deep learning model
CN113763348A (en) Image quality determination method and device, electronic equipment and storage medium
CN108304243A (en) Interface creating method, device, computer equipment and storage medium
CN114511710A (en) Image target detection method based on convolutional neural network
CN106484990A (en) A kind of engine test data three-dimensional Waterfall plot is rebuild, is shown and analysis method
CN102377732A (en) Method for verifying operation of natural person
Bernal-Cárdenas et al. Translating video recordings of complex mobile app ui gestures into replayable scenarios
CN113034421A (en) Image detection method, device and storage medium
CN108921138B (en) Method and apparatus for generating information
CN110321867A (en) Shelter target detection method based on part constraint network
Arcaini et al. ROBY: a tool for robustness analysis of neural network classifiers
CN110585730A (en) Rhythm sensing method and device for game and related equipment
CN107135402A (en) A kind of method and device for recognizing TV station's icon
CN109119157A (en) A kind of prediction technique and system of infant development
CN112084889A (en) Image behavior recognition method and device, computing equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant