CN114463858A - Signature behavior identification method and system based on deep learning - Google Patents

Signature behavior identification method and system based on deep learning

Info

Publication number
CN114463858A
Authority
CN
China
Prior art keywords
behavior
image
signature
video
effective image
Prior art date
Legal status
Pending
Application number
CN202210034269.6A
Other languages
Chinese (zh)
Inventor
刘志忠
余敏
邓帅军
陈亚俊
钟瑞超
Current Assignee
Guangzhou Shuangzhao Electronic Technology Co ltd
Original Assignee
Guangzhou Shuangzhao Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Shuangzhao Electronic Technology Co ltd filed Critical Guangzhou Shuangzhao Electronic Technology Co ltd
Priority to CN202210034269.6A
Publication of CN114463858A
Current legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning

Abstract

The invention discloses a signature behavior identification method and system based on deep learning. The method comprises the following steps: receiving video data uploaded by a user and extracting a plurality of first images to be identified from the video data; detecting all first images in sequence according to a preset algorithm, and determining that the current first image is an effective image when it is detected that the signature pen region in the current first image intersects the hand region and the rotation angle of the signature pen is greater than a preset angle; and performing behavior recognition on all effective images in sequence and obtaining a signature behavior video from the video data according to all behavior recognition results, wherein one effective image corresponds to one behavior recognition result. By screening, as effective images, the first images in which the signature pen region intersects the hand region and the rotation angle of the signature pen is greater than the preset angle, the method avoids mistakenly identifying the behavior of the target picking up the pen without signing as a signature behavior, and improves the accuracy of signature behavior identification.

Description

Signature behavior identification method and system based on deep learning
Technical Field
The invention relates to the field of video data processing, in particular to a signature behavior identification method and system based on deep learning.
Background
In order to protect the rights and interests of consumers, financial and insurance institutions are required by regulators to standardize their sales behavior by recording audio and video when selling wealth management and insurance products. At present, a financial or insurance institution generally caches the video file locally and asynchronously uploads it to the cloud for storage after the whole video has been recorded, so that supervision departments can subsequently carry out compliance reviews. To ensure the compliance of business-handling videos, financial and insurance institutions generally review the videos manually. A large amount of video data is generated in the course of handling financial business, and the amount keeps growing; manually reviewing a single video takes 10 to 15 minutes, so manual review alone cannot keep up with the ever-increasing business demand. In addition, although special auxiliary equipment can be used to detect signature behavior, such equipment is complex to operate and costly, has poor general applicability, and is limited in its usage scenarios.
Existing signature detection systems usually decide the signature identification result according to whether a signature pen and a hand appear in the video, so the behavior of a target picking up the pen without signing is easily misjudged as signature behavior, which affects the accuracy of the identification result.
Disclosure of Invention
The invention provides a signature behavior recognition method and system based on deep learning, which can reduce time cost and improve the recognition accuracy of signature behaviors.
In order to solve the above technical problem, an embodiment of the present invention provides a signature behavior recognition method based on deep learning, including:
receiving video data uploaded by a user, and extracting a plurality of first images to be identified from the video data;
detecting all the first images in sequence according to a preset algorithm, and determining that the current first image is an effective image when it is detected that the signature pen region in the current first image intersects the hand region and the rotation angle of the signature pen is greater than a preset angle;
sequentially carrying out behavior recognition on all the effective images, and acquiring a signature behavior video from the video data according to all the behavior recognition results; wherein one of the effective images corresponds to one of the behavior recognition results.
Further, the detecting all the first images in sequence according to a preset algorithm and determining that the current first image is an effective image when it is detected that the signature pen region in the current first image intersects the hand region and the rotation angle of the signature pen is greater than a preset angle specifically comprises:
detecting all the first images in sequence according to a preset target detection algorithm to obtain signature pen detection results and hand detection results corresponding to the plurality of first images; wherein one first image corresponds to one signature pen detection result and one hand detection result;
selecting, as the effective image, the first image in which the signature pen region intersects the hand region and the rotation angle of the signature pen is greater than the preset angle according to the signature pen detection result and the hand detection result; wherein the preset angle takes a value in the range of 40 to 80 degrees.
Further, the sequentially performing behavior recognition on all the effective images, and acquiring a signature behavior video from the video data according to all the behavior recognition results specifically include:
sequentially carrying out behavior recognition on all the effective images according to a preset behavior recognition algorithm based on deep learning to obtain behavior recognition results corresponding to a plurality of effective images;
determining the behavior corresponding to the effective image according to the behavior identification result;
and determining whether video writing operation is carried out at the video moment corresponding to the effective image or not according to the first service state and the behavior corresponding to the effective image, and acquiring the signature behavior video from the video data.
Further, the determining, according to the behavior recognition result, a behavior corresponding to the effective image specifically includes:
if the behavior recognition result comprises a signature action and the probability of the signature action corresponding to the signature action is greater than a preset first threshold, determining that the behavior corresponding to the current effective image is the signature action;
otherwise, determining that the behavior corresponding to the current effective image is a non-signature behavior.
Further, the determining, according to the first service state and the behavior corresponding to the effective image, whether to perform a video write operation at a video time corresponding to the effective image, and acquiring the signature behavior video from the video data includes:
when the first service state is a non-signature state, judging whether the behavior corresponding to the effective image is a signature behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image;
if the behavior corresponding to the current effective image is a signature behavior, setting the continuous occurrence frequency of the non-signature behavior to zero, increasing the continuous occurrence frequency of the signature behavior by 1, and increasing the cumulative occurrence frequency of the signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, video writing operation is continuously carried out at the video moment corresponding to the current effective image, and when the continuous occurrence frequency of the signing behaviors or the accumulated occurrence frequency of the signing behaviors meet a preset first condition, the first service state is adjusted to be a signing state; when the video writing operation is not performed at the video moment corresponding to the last effective image, starting the video writing operation at the video moment corresponding to the current effective image, and taking the video moment corresponding to the current effective image as the starting moment of the signature behavior video;
if the behavior corresponding to the current effective image is a non-signature behavior, setting the continuous occurrence frequency of the signature behavior to zero, increasing the continuous occurrence frequency of the non-signature behavior by 1, and increasing the cumulative occurrence frequency of the non-signature behavior by 1; and when video writing operation is carried out at the video moment corresponding to the last effective image and the continuous occurrence frequency of the non-signature behaviors or the cumulative occurrence frequency of the non-signature behaviors meets a preset second condition, terminating the video writing operation at the video moment corresponding to the current effective image and deleting the video data stored in the process of carrying out the video writing operation.
Further, the determining, according to the first service state and the behavior corresponding to the effective image, whether to perform a video write operation at a video moment corresponding to the effective image, and acquiring the signature behavior video further includes:
when the first service state is a signature state, judging whether the behavior corresponding to the effective image is a signature behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image;
if the behavior corresponding to the current effective image is a signature behavior, continuing to perform video writing operation at the video moment corresponding to the effective image to serve as a video operation result corresponding to the effective image;
if the behavior corresponding to the current effective image is a non-signature behavior, setting the continuous occurrence frequency of the signature behavior to zero, increasing the continuous occurrence frequency of the non-signature behavior by 1, increasing the cumulative occurrence frequency of the non-signature behavior by 1, adjusting the first service state to be a non-signature state when the continuous occurrence frequency of the non-signature behavior or the cumulative occurrence frequency of the non-signature behavior meets a preset second condition, then terminating the video writing operation at the video moment corresponding to the current effective image, taking the video moment corresponding to the current effective image as the end moment of the video of the signature behavior, and acquiring the video of the signature behavior from the video data according to the start moment and the end moment of the video of the signature behavior.
Further, the receiving of the video data uploaded by the user and the extracting of the plurality of first images to be identified from the video data specifically include:
receiving video data uploaded by a user, decoding the video data, sequentially acquiring a plurality of initial images, and calculating image histograms of all the initial images;
according to the acquisition sequence, taking a first initial image as an initial frame image and taking a second initial image as a comparison frame image;
calculating the similarity of the image histogram of the initial frame image and the image histogram of the comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold value until all the initial images are sampled to obtain a plurality of first images to be identified;
if the similarity is smaller than the second threshold, taking the current initial frame image as the first image, taking the current comparison frame image as the initial frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image;
if the similarity is larger than or equal to the second threshold, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image.
In order to solve the same technical problem, the invention also provides a signature behavior recognition system based on deep learning, which comprises:
the system comprises a preprocessing module, a recognition module and a recognition module, wherein the preprocessing module is used for receiving video data uploaded by a user and extracting a plurality of first images to be recognized from the video data; sequentially detecting all the first images according to a preset algorithm, and when the fact that the sign pen area and the hand area in the current first image are intersected and the rotation angle of the sign pen is larger than a preset angle is detected, judging that the current first image is an effective image;
the behavior recognition module is used for sequentially carrying out behavior recognition on all the effective images and acquiring a signature behavior video from the video data according to all the behavior recognition results; wherein one of the effective images corresponds to one of the behavior recognition results.
Further, the preprocessing module further includes:
the video decoding unit is used for receiving video data uploaded by a user, decoding the video data, sequentially acquiring a plurality of initial images and calculating image histograms of all the initial images;
the sampling unit is used for taking the first initial image as an initial frame image and taking the second initial image as a comparison frame image according to the acquisition sequence; calculating the similarity of the image histogram of the initial frame image and the image histogram of the comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold value until all the initial images are sampled to obtain a plurality of first images; if the similarity is smaller than the second threshold, taking the current initial frame image as the first image, taking the current comparison frame image as the initial frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image; if the similarity is larger than or equal to the second threshold, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image;
the detection unit is used for sequentially detecting all the first images according to a preset target detection algorithm to obtain sign pen detection results and hand detection results corresponding to a plurality of first images; wherein one first image corresponds to one sign pen detection result and one hand detection result;
the selecting unit is used for selecting the first image, as the effective image, of which the sign pen area is intersected with the hand area and the rotation angle of the sign pen is larger than a preset angle according to the sign pen detection result and the hand detection result; wherein, the value range of the preset angle is as follows: 40 to 80 degrees.
Further, the behavior recognition module further includes:
the behavior recognition unit is used for sequentially carrying out behavior recognition on all the effective images according to a preset behavior recognition algorithm based on deep learning to obtain behavior recognition results corresponding to a plurality of effective images;
the behavior judging unit is used for judging the behavior corresponding to the current effective image according to the behavior identification result; if the behavior recognition result comprises a signature action and the probability of the signature action corresponding to the signature action is greater than a preset first threshold, determining that the behavior corresponding to the current effective image is the signature action; otherwise, determining that the behavior corresponding to the current effective image is a non-signature behavior;
the first processing unit is used for judging whether the behavior corresponding to the effective image is a signature behavior or not when the first service state is a non-signature state, and determining whether video writing operation is performed at the video moment corresponding to the effective image or not; if the behavior corresponding to the current effective image is a signature behavior, setting the continuous occurrence frequency of the non-signature behavior to zero, increasing the continuous occurrence frequency of the signature behavior by 1, and increasing the cumulative occurrence frequency of the signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, video writing operation is continuously carried out at the video moment corresponding to the current effective image, and when the continuous occurrence frequency of the signing behaviors or the accumulated occurrence frequency of the signing behaviors meet a preset first condition, the first service state is adjusted to be a signing state; when the video writing operation is not performed at the video moment corresponding to the last effective image, starting the video writing operation at the video moment corresponding to the current effective image, and taking the video moment corresponding to the current effective image as the starting moment of the signature behavior video; if the behavior corresponding to the current effective image is a non-signature behavior, setting the continuous occurrence frequency of the signature behavior to zero, increasing the continuous occurrence frequency of the non-signature behavior by 1, and increasing the cumulative occurrence frequency of the non-signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image and the continuous occurrence frequency of the non-signature behaviors or the cumulative occurrence frequency of the non-signature behaviors meets a preset second condition, the video writing operation is stopped at the video moment corresponding to the current effective image, and the video data stored in the process of carrying out the video writing operation is deleted;
the second processing unit is used for judging whether the behavior corresponding to the effective image is a signature behavior or not when the first service state is the signature state, and determining whether video writing operation is performed at the video moment corresponding to the effective image or not; if the behavior corresponding to the current effective image is a signature behavior, continuing to perform video writing operation at the video moment corresponding to the effective image to serve as a video operation result corresponding to the effective image; if the behavior corresponding to the current effective image is a non-signature behavior, setting the continuous occurrence frequency of the signature behavior to zero, increasing the continuous occurrence frequency of the non-signature behavior by 1, increasing the cumulative occurrence frequency of the non-signature behavior by 1, adjusting the first service state to be a non-signature state when the continuous occurrence frequency of the non-signature behavior or the cumulative occurrence frequency of the non-signature behavior meets a preset second condition, then terminating the video writing operation at the video moment corresponding to the current effective image, taking the video moment corresponding to the current effective image as the end moment of the video of the signature behavior, and acquiring the video of the signature behavior from the video data according to the start moment and the end moment of the video of the signature behavior.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the invention provides a signature behavior recognition method and system based on deep learning, which are used for performing subsequent behavior recognition by screening a first image, as an effective image, of which a signature pen region and a hand region are intersected and the rotation angle of a signature pen is larger than a preset angle, so that the situation that a behavior of taking up a target but not signing is recognized as a signature behavior by mistake in the signature behavior recognition process is avoided, and the accuracy of signature behavior recognition is improved.
Furthermore, the start moment and the end moment of the signature behavior video are determined according to the service states and behavior recognition results corresponding to all effective images, and the signature behavior video is obtained and stored, which removes a large number of irrelevant non-signature video frames and makes subsequent viewing by the user easier. Meanwhile, the video data uploaded by the user is preprocessed by decoding and sampling, which reduces the number of first images compared with conventional preprocessing and effectively reduces the amount of computation in the signature behavior identification process.
Drawings
FIG. 1: a flow diagram of an embodiment of the signature behavior recognition method based on deep learning provided by the invention;
FIG. 2: a structural schematic diagram of the signature behavior recognition system based on deep learning provided by the invention;
FIG. 3: a structural schematic diagram of the preprocessing module of the signature behavior recognition system based on deep learning provided by the invention;
FIG. 4: a structural schematic diagram of the behavior recognition module of the signature behavior recognition system based on deep learning provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
referring to fig. 1, a signature behavior recognition method based on deep learning according to an embodiment of the present invention includes steps S1 to S3, which include the following steps:
step S1: the method comprises the steps of receiving video data uploaded by a user, and extracting a plurality of first images to be identified from the video data.
Further, step S1 specifically includes step S11 to step S15, and each step specifically includes the following steps:
step S11: receiving video data uploaded by a user, decoding the video data, sequentially obtaining a plurality of initial images, and calculating image histograms of all the initial images.
In this embodiment, OpenCV software is used to decode video data uploaded by a user, and a plurality of initial images are sequentially acquired.
Step S12: according to the acquisition sequence, the first initial image is used as an initial frame image, and the second initial image is used as a comparison frame image.
Step S13: according to a preset similarity algorithm, calculating the similarity of the image histogram of the initial frame image and the image histogram of the comparison frame image, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold value until all the initial images are sampled, so as to obtain a plurality of first images to be identified.
In this embodiment, based on the Bhattacharyya distance statistic, the normalized correlation coefficient between the image histogram of the initial frame image and the image histogram of the comparison frame image is calculated as the similarity, and the current initial frame image is sampled according to how this similarity compares with a preset second threshold. The second threshold may be set based on multiple tests; for example, the second threshold is 0.2.
If the similarity is less than the second threshold, step S14 is executed, and if the similarity is greater than or equal to the second threshold, step S15 is executed.
Step S14: and taking the current initial frame image as a first image, taking the current comparison frame image as an initial frame image, taking the next initial image as a comparison frame image, and continuously sampling the next initial image.
Step S15: and discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuously sampling the next initial image.
In this embodiment, a plurality of first images to be identified are selected by sequentially sampling the initial images: initial images with higher similarity are discarded and those with lower similarity are retained, which reduces redundant information in the video and effectively reduces the amount of computation in the subsequent signature behavior identification process.
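As a concrete illustration of steps S11 to S15, the sketch below samples frames by histogram similarity with OpenCV. It is a minimal sketch rather than the patented implementation: the function and variable names are hypothetical, grayscale histograms are assumed, and the correlation-based comparison with a second threshold of 0.2 follows the example values given above.

```python
import cv2

def sample_first_images(video_path, second_threshold=0.2):
    """Decode a video and keep only frames whose histogram similarity to the
    current initial (reference) frame falls below the second threshold."""
    cap = cv2.VideoCapture(video_path)
    first_images = []
    ref_frame, ref_hist = None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
        cv2.normalize(hist, hist)
        if ref_hist is None:                      # the first initial image
            ref_frame, ref_hist = frame, hist
            continue
        # normalized correlation coefficient of the two histograms
        similarity = cv2.compareHist(ref_hist, hist, cv2.HISTCMP_CORREL)
        if similarity < second_threshold:
            # dissimilar enough: keep the initial frame as a first image and
            # let the comparison frame become the new initial frame (step S14)
            first_images.append(ref_frame)
            ref_frame, ref_hist = frame, hist
        # otherwise the comparison frame is discarded and sampling continues (step S15)
    cap.release()
    return first_images
```

Keeping the initial frame only when the correlation drops below the threshold mirrors the retain/discard rule of steps S14 and S15.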
Step S2: and sequentially detecting all the first images according to a preset algorithm, and when the condition that the sign pen area and the hand area in the current first image are intersected and the rotation angle of the sign pen is greater than a preset angle is detected, judging that the current first image is an effective image.
Further, step S2 specifically includes steps S21 to S22, as follows:
Step S21: detecting all the first images in sequence according to a preset target detection algorithm to obtain signature pen detection results and hand detection results corresponding to the first images; wherein one first image corresponds to one signature pen detection result and one hand detection result.
In this embodiment, the preset target detection algorithm is obtained by improving the initial target detection algorithm YOLOv5 so that it also outputs angle information. A first image is detected according to the preset target detection algorithm, and the resulting detection has the form [A, B, C], where A is the label name of the detected object, B is the confidence of the detection, and C is the position coordinates of the detected object.
As an example, a first image is detected according to the preset target detection algorithm, and the obtained signature pen detection result is [pen, 0.695, (464, 381, 99, 10, 62)]; here pen is the label name of the signature pen, 0.695 is the confidence of the pen detection, and (464, 381, 99, 10, 62) is the data of the pen's rotated rectangular box: the x coordinate of the center point, the y coordinate of the center point, the longest side, the shortest side and the rotation angle. From this data, the coordinates of the four vertices of the pen's rectangular box are calculated as (240, 225), (218, 181), (222, 179) and (244, 223), respectively.
As an example, the obtained hand detection result is [hand, 0.845, (533, 346, 642, 439)]; here hand is the label name of the hand, 0.845 is the confidence of the hand detection, and (533, 346, 642, 439) is the data of the hand's rectangular box, with (533, 346) the top-left vertex and (642, 439) the bottom-right vertex.
Step S22: selecting, as the effective image, the first image in which the signature pen region intersects the hand region and the rotation angle of the signature pen is greater than the preset angle, according to the signature pen detection result and the hand detection result; wherein the preset angle takes a value in the range of 40 to 80 degrees.
In this embodiment, the first images are screened against two conditions, the preset pen angle and the intersection of the pen region with the hand region, to obtain the effective images. This preliminarily identifies signature behavior, avoids mistakenly recognizing the behavior of the target picking up the pen without signing as signature behavior in the subsequent recognition step, and improves the precision of signature behavior recognition; a sketch of this screening check follows.
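A minimal sketch of the step S22 screening check, assuming the detection formats shown above: the signature pen as a rotated rectangle (center x, center y, long side, short side, angle) and the hand as an axis-aligned box (x1, y1, x2, y2). The function name and the default preset angle of 45 degrees (one value from the 40 to 80 degree range) are illustrative assumptions, not the patented implementation.

```python
import cv2
import numpy as np

def is_effective_image(pen_box, hand_box, preset_angle=45.0):
    """Return True when the pen region intersects the hand region and the
    pen's rotation angle exceeds the preset angle."""
    cx, cy, long_side, short_side, angle = pen_box   # rotated pen rectangle
    x1, y1, x2, y2 = hand_box                        # axis-aligned hand box
    if angle <= preset_angle:
        return False
    # four vertices of the rotated pen rectangle
    pen_pts = cv2.boxPoints(((cx, cy), (long_side, short_side), angle)).astype(np.float32)
    hand_pts = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]], dtype=np.float32)
    # a positive intersection area means the two regions overlap
    inter_area, _ = cv2.intersectConvexConvex(pen_pts, hand_pts)
    return inter_area > 0
```

With the example detections above, is_effective_image((464, 381, 99, 10, 62), (533, 346, 642, 439)) applies exactly the two screening conditions of step S22.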
Step S3: sequentially carrying out behavior recognition on all effective images, and acquiring signature behavior videos from video data according to all behavior recognition results; wherein one effective image corresponds to one behavior recognition result.
Further, step S3 specifically includes step S31 to step S33, and each step specifically includes the following steps:
step S31: and sequentially carrying out behavior recognition on all effective images according to a preset behavior recognition algorithm based on deep learning to obtain behavior recognition results corresponding to a plurality of effective images.
In this embodiment, behavior recognition is performed on the effective images in sequence based on the deep-learning behavior recognition algorithm SlowFast to obtain behavior recognition results corresponding to the plurality of effective images; each behavior recognition result contains the several most probable actions and their corresponding probabilities.
As an example, a behavior recognition result contains the action writing, denoting a signature action, with a probability of 0.448 (the full table of candidate actions and probabilities appears only as an embedded image, referenced as BDA0003466837000000101, in the original publication).
Step S32: and determining the behavior corresponding to the effective image according to the behavior identification result.
Further, step S32 specifically includes step S321 to step S323, and each step specifically includes the following steps:
step S321: and judging whether the behavior recognition result contains a signature action or not, and whether the signature action probability corresponding to the signature action is greater than a preset first threshold or not.
In this embodiment, the first threshold may be set based on multiple tests; for example, the first threshold is 0.3.
It should be noted that if the behavior recognition result includes a signature action and the signature action probability corresponding to that action is greater than the preset first threshold, step S322 is executed; otherwise, step S323 is executed.
Step S322: determining that the behavior corresponding to the current effective image is a signature behavior.
Step S323: determining that the behavior corresponding to the current effective image is a non-signature behavior.
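Steps S321 to S323 amount to a simple threshold check on the recognition result. The sketch below assumes the result is a list of (action label, probability) pairs, as in the writing/0.448 example above; the function name, the "writing" label string and the result format are assumptions rather than details taken from the patent.

```python
def is_signature_behavior(recognition_result, first_threshold=0.3):
    """recognition_result: list of (action_label, probability) pairs for one
    effective image, e.g. produced by the SlowFast behavior recognizer."""
    for label, probability in recognition_result:
        if label == "writing" and probability > first_threshold:
            return True   # signature behavior (step S322)
    return False          # non-signature behavior (step S323)

# e.g. is_signature_behavior([("writing", 0.448)]) returns True with the 0.3 threshold
```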
Step S33: and determining whether video writing operation is carried out at the video moment corresponding to the effective image according to the first service state and the behavior corresponding to the effective image, and acquiring a signature behavior video from the video data.
In this embodiment, the effective images are processed in the order in which they were acquired. Before step S33 is executed for the first time (for the behavior corresponding to the first effective image), the initial first service state is set to the non-signature state, and the continuous occurrence frequency of signature behavior, the cumulative occurrence frequency of signature behavior, the continuous occurrence frequency of non-signature behavior and the cumulative occurrence frequency of non-signature behavior are all initialized to 0.
Further, step S33 specifically includes steps S331 to S336, and each step specifically includes the following steps:
step S331: and when the first service state is a non-signature state, judging whether the behavior corresponding to the effective image is a signature behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image.
It should be noted that if the behavior corresponding to the current effective image is a signature behavior, step S332 is executed, and if it is a non-signature behavior, step S333 is executed.
Step S332: resetting the continuous occurrence frequency F_tune_num of the non-signature behavior to zero, increasing the continuous occurrence frequency T_tune_num of the signature behavior by 1, and increasing the cumulative occurrence frequency T_all_num of the signature behavior by 1. If a video writing operation was being performed at the video moment corresponding to the previous effective image, the video writing operation continues at the video moment corresponding to the current effective image, and the first service state is adjusted to the signature state when T_tune_num or T_all_num meets the preset first condition; if no video writing operation was being performed at the video moment corresponding to the previous effective image, a video writing operation is started at the video moment corresponding to the current effective image, and that moment is taken as the start moment of the signature behavior video.
In this embodiment, the preset first condition may be set based on multiple tests. As an example, the preset first condition is: the continuous occurrence frequency T_tune_num of the signature behavior is greater than or equal to 3, or the cumulative occurrence frequency T_all_num of the signature behavior is greater than or equal to 5 over 7 consecutively processed behavior recognition results.
Step S333: setting the continuous occurrence frequency T_tune_num of the signature behavior to zero, increasing the continuous occurrence frequency F_tune_num of the non-signature behavior by 1, and increasing the cumulative occurrence frequency F_all_num of the non-signature behavior by 1. If a video writing operation was being performed at the video moment corresponding to the previous effective image and F_tune_num or F_all_num meets the preset second condition, the video writing operation is terminated at the video moment corresponding to the current effective image and the video data saved during that video writing operation is deleted.
In this embodiment, the preset second condition may be set based on multiple tests. As an example, the preset second condition is: the continuous occurrence frequency F_tune_num of the non-signature behavior is greater than or equal to 3, or the cumulative occurrence frequency F_all_num of the non-signature behavior is greater than or equal to 5 over 7 consecutively processed behavior recognition results.
Step S334: and when the first service state is the signature state, judging whether the behavior corresponding to the effective image is the signature behavior, and determining whether video writing operation is carried out at the video moment corresponding to the effective image.
It should be noted that if the behavior corresponding to the current effective image is a signature behavior, step S335 is executed, and if it is a non-signature behavior, step S336 is executed.
Step S335: if the behavior corresponding to the current effective image is a signature behavior, the video writing operation continues at the video moment corresponding to the effective image.
Step S336: if the behavior corresponding to the current effective image is a non-signature behavior, setting T_tune_num to zero, increasing F_tune_num by 1 and increasing F_all_num by 1. When F_tune_num or F_all_num meets the preset second condition, the first service state is adjusted to the non-signature state, the video writing operation is terminated at the video moment corresponding to the current effective image, that moment is taken as the end moment of the signature behavior video, and the signature behavior video is obtained from the video data uploaded by the user according to its start moment and end moment.
In this embodiment, the start moment and the end moment of the signature behavior video are located according to the first service state, the behavior corresponding to the effective image, and whether a video writing operation was being performed at the video moment corresponding to the previous effective image, so that the signature behavior video is extracted from the video data. This removes a large number of irrelevant non-signature video frames and makes subsequent viewing by the user easier. The state logic of steps S331 to S336 is summarized in the sketch below.
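The state logic of steps S331 to S336 can be read as a small state machine. The sketch below is one possible reading rather than the patented implementation: the class and method names are hypothetical, the cumulative occurrence frequencies are approximated by a sliding window over the last 7 behavior recognition results (matching the example first and second conditions given above), and buffered video is represented only by its start and end moments.

```python
from collections import deque

class SignatureWriteStateMachine:
    """Tracks the first service state and decides, per effective image, whether
    a video writing operation is performed at its video moment."""

    def __init__(self):
        self.signature_state = False   # first service state: False = non-signature state
        self.writing = False           # whether a video writing operation is in progress
        self.T_tune_num = 0            # continuous occurrences of signature behavior
        self.F_tune_num = 0            # continuous occurrences of non-signature behavior
        self.recent = deque(maxlen=7)  # last 7 behavior results (1 = signature, 0 = not)
        self.start_moment = None       # start moment of the current candidate segment
        self.segments = []             # (start, end) pairs of confirmed signature videos

    def _first_condition(self):
        # example: >= 3 consecutive signatures, or >= 5 signatures in the last 7 results
        return self.T_tune_num >= 3 or sum(self.recent) >= 5

    def _second_condition(self):
        # example: >= 3 consecutive non-signatures, or >= 5 in the last 7 results
        return self.F_tune_num >= 3 or self.recent.count(0) >= 5

    def process(self, is_signature, video_moment):
        self.recent.append(1 if is_signature else 0)
        if is_signature:                               # steps S332 / S335
            self.F_tune_num = 0
            self.T_tune_num += 1
            if not self.writing:
                self.writing = True
                self.start_moment = video_moment
            if not self.signature_state and self._first_condition():
                self.signature_state = True
        else:                                          # steps S333 / S336
            self.T_tune_num = 0
            self.F_tune_num += 1
            if self.writing and self._second_condition():
                self.writing = False
                if self.signature_state:
                    # confirmed signature segment ends at the current video moment
                    self.signature_state = False
                    self.segments.append((self.start_moment, video_moment))
                # otherwise the buffered, unconfirmed segment is simply discarded
                self.start_moment = None
```

Feeding each effective image's behavior result and video moment into process(), in acquisition order, reproduces the start- and end-moment location described above under these assumptions; the signature behavior videos can then be cut from the uploaded video data using the recorded segments.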
In order to solve the same technical problem, the invention also provides a signature behavior recognition system based on deep learning, which comprises:
the system comprises a preprocessing module 1, a recognition module and a recognition module, wherein the preprocessing module is used for receiving video data uploaded by a user and extracting a plurality of first images to be recognized from the video data; sequentially detecting all first images according to a preset algorithm, and when the fact that a sign pen area and a hand area in the current first image are intersected and the rotation angle of the sign pen is larger than a preset angle is detected, judging that the current first image is an effective image;
the behavior recognition module 2 is used for sequentially carrying out behavior recognition on all effective images and acquiring a signature behavior video from video data according to all behavior recognition results; wherein one effective image corresponds to one behavior recognition result.
Further, the preprocessing module 1 further includes:
the video decoding unit is used for receiving video data uploaded by a user, decoding the video data, sequentially obtaining a plurality of initial images and calculating image histograms of all the initial images;
the sampling unit is used for taking the first initial image as an initial frame image and taking the second initial image as a comparison frame image according to the acquisition sequence; calculating the similarity of an image histogram of an initial frame image and an image histogram of a comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold value until all initial images are sampled to obtain a plurality of first images; if the similarity is smaller than a second threshold value, taking the current initial frame image as a first image, taking the current comparison frame image as an initial frame image, taking the next initial image as a comparison frame image, and continuously sampling the next initial image; if the similarity is larger than or equal to the second threshold, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image;
the detection unit is used for sequentially detecting all the first images according to a preset target detection algorithm to obtain sign pen detection results and hand detection results corresponding to a plurality of first images; wherein, one first image corresponds to one sign pen detection result and one hand detection result;
the selecting unit is used for selecting a first image as an effective image, wherein the first image is formed by intersecting a sign pen area and a hand area and has a sign pen rotation angle larger than a preset angle according to a sign pen detection result and a hand detection result; wherein, the value range of the preset angle is as follows: 40 to 80 degrees.
Further, the behavior recognition module 2 further includes:
the behavior recognition unit is used for sequentially carrying out behavior recognition on all effective images according to a preset behavior recognition algorithm based on deep learning to obtain behavior recognition results corresponding to a plurality of effective images;
the behavior judging unit is used for judging the behavior corresponding to the current effective image according to the behavior identification result; if the behavior recognition result contains a signature action and the signature action probability corresponding to the signature action is greater than a preset first threshold, determining that the behavior corresponding to the current effective image is the signature action; otherwise, determining that the behavior corresponding to the current effective image is a non-signature behavior;
the first processing unit is used for judging whether the behavior corresponding to the effective image is a signature behavior or not when the first service state is a non-signature state, and determining whether video writing operation is performed at the video moment corresponding to the effective image or not; if the behavior corresponding to the current effective image is a signature behavior, setting the continuous occurrence frequency of the non-signature behavior to zero, increasing the continuous occurrence frequency of the signature behavior by 1, and increasing the cumulative occurrence frequency of the signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, the video writing operation is continuously carried out at the video moment corresponding to the current effective image, and when the continuous occurrence frequency of the signing action or the accumulated occurrence frequency of the signing action meets a preset first condition, the first service state is adjusted to be the signing state; when the video writing operation is not performed at the video moment corresponding to the last effective image, the video writing operation is started at the video moment corresponding to the current effective image, and the video moment corresponding to the current effective image is taken as the starting moment of the signature behavior video; if the behavior corresponding to the current effective image is a non-signature behavior, setting the continuous occurrence frequency of the signature behavior to zero, increasing the continuous occurrence frequency of the non-signature behavior by 1, and increasing the cumulative occurrence frequency of the non-signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image and the continuous occurrence frequency of the non-signature behaviors or the accumulated occurrence frequency of the non-signature behaviors meet a preset second condition, the video writing operation is stopped at the video moment corresponding to the current effective image, and the video data stored in the process of carrying out the video writing operation is deleted;
the second processing unit is used for judging whether the behavior corresponding to the effective image is a signature behavior or not when the first service state is the signature state, and determining whether video writing operation is performed at the video moment corresponding to the effective image or not; if the behavior corresponding to the current effective image is a signature behavior, continuing to perform video writing operation at the video moment corresponding to the effective image as a video operation result corresponding to the effective image; if the behavior corresponding to the current effective image is a non-signature behavior, setting the continuous occurrence frequency of the signature behavior to zero, increasing the continuous occurrence frequency of the non-signature behavior by 1, increasing the cumulative occurrence frequency of the non-signature behavior by 1, adjusting the first service state to be a non-signature state when the continuous occurrence frequency of the non-signature behavior or the cumulative occurrence frequency of the non-signature behavior meets a preset second condition, then terminating the video writing operation at the video moment corresponding to the current effective image, taking the video moment corresponding to the current effective image as the end moment of the video of the signature behavior, and acquiring the video of the signature behavior from the video data according to the start moment and the end moment of the video of the signature behavior.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the invention provides a signature behavior recognition method and system based on deep learning, which are used for performing subsequent behavior recognition by screening a first image, as an effective image, of which a signature pen region and a hand region are intersected and the rotation angle of a signature pen is larger than a preset angle, so that the situation that a behavior of taking up a target but not signing is recognized as a signature behavior by mistake in the signature behavior recognition process is avoided, and the accuracy of signature behavior recognition is improved.
Furthermore, the start moment and the end moment of the signature behavior video are determined according to the service states and behavior recognition results corresponding to all effective images, and the signature behavior video is obtained and stored, which removes a large number of irrelevant non-signature video frames and makes subsequent viewing by the user easier. Meanwhile, the video data uploaded by the user is preprocessed by decoding and sampling, which reduces the number of first images compared with conventional preprocessing and effectively reduces the amount of computation in the signature behavior identification process.
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. It should be understood that any modifications, equivalents, improvements and the like, which come within the spirit and principle of the invention, may occur to those skilled in the art and are intended to be included within the scope of the invention.

Claims (10)

1. A signature behavior recognition method based on deep learning is characterized by comprising the following steps:
receiving video data uploaded by a user, and extracting a plurality of first images to be identified from the video data;
sequentially detecting all the first images according to a preset algorithm, and when the fact that the sign pen area and the hand area in the current first image are intersected and the rotation angle of the sign pen is larger than a preset angle is detected, judging that the current first image is an effective image;
sequentially carrying out behavior recognition on all the effective images, and acquiring a signature behavior video from the video data according to all the behavior recognition results; wherein one of the effective images corresponds to one of the behavior recognition results.
2. The method for recognizing signature behaviors based on deep learning as claimed in claim 1, wherein all the first images are sequentially detected according to a preset algorithm, and when it is detected that a sign pen region and a hand region in the current first image intersect and a rotation angle of the sign pen is greater than a preset angle, it is determined that the current first image is an effective image, specifically:
sequentially detecting all the first images according to a preset target detection algorithm to obtain sign pen detection results and hand detection results corresponding to a plurality of first images; wherein one first image corresponds to one sign pen detection result and one hand detection result;
selecting the first image, as the effective image, of which the sign pen area is intersected with the hand area and the rotation angle of the sign pen is larger than a preset angle according to the sign pen detection result and the hand detection result; wherein, the value range of the preset angle is as follows: 40 to 80 degrees.
3. The method as claimed in claim 1, wherein the step of sequentially performing behavior recognition on all the effective images and obtaining a signature behavior video from the video data according to all the behavior recognition results includes:
sequentially carrying out behavior recognition on all the effective images according to a preset behavior recognition algorithm based on deep learning to obtain behavior recognition results corresponding to a plurality of effective images;
determining the behavior corresponding to the effective image according to the behavior identification result;
and determining whether video writing operation is carried out at the video moment corresponding to the effective image or not according to the first service state and the behavior corresponding to the effective image, and acquiring the signature behavior video from the video data.
4. A signature behavior recognition method based on deep learning as claimed in claim 3, wherein the determining the behavior corresponding to the valid image according to the behavior recognition result specifically includes:
if the behavior recognition result comprises a signature action and the probability of the signature action corresponding to the signature action is greater than a preset first threshold, determining that the behavior corresponding to the current effective image is the signature action;
otherwise, determining that the behavior corresponding to the current effective image is a non-signature behavior.
5. The method as claimed in claim 3, wherein the determining whether to perform a video write operation at a video time corresponding to the valid image according to the first service status and the behavior corresponding to the valid image, and obtaining the video of the signing behavior from the video data includes:
when the first service state is a non-signature state, judging whether the behavior corresponding to the effective image is a signature behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image;
if the behavior corresponding to the current effective image is a signature behavior, setting the continuous occurrence frequency of the non-signature behavior to zero, increasing the continuous occurrence frequency of the signature behavior by 1, and increasing the cumulative occurrence frequency of the signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, video writing operation is continuously carried out at the video moment corresponding to the current effective image, and when the continuous occurrence frequency of the signing behaviors or the accumulated occurrence frequency of the signing behaviors meet a preset first condition, the first service state is adjusted to be a signing state; when the video writing operation is not performed at the video moment corresponding to the last effective image, starting the video writing operation at the video moment corresponding to the current effective image, and taking the video moment corresponding to the current effective image as the starting moment of the signature behavior video;
if the behavior corresponding to the current effective image is a non-signature behavior, setting the continuous occurrence frequency of the signature behavior to zero, increasing the continuous occurrence frequency of the non-signature behavior by 1, and increasing the cumulative occurrence frequency of the non-signature behavior by 1; and when video writing operation is carried out at the video moment corresponding to the last effective image and the continuous occurrence frequency of the non-signature behaviors or the cumulative occurrence frequency of the non-signature behaviors meets a preset second condition, terminating the video writing operation at the video moment corresponding to the current effective image and deleting the video data stored in the process of carrying out the video writing operation.
6. The method as claimed in claim 3, wherein the determining whether to perform a video write operation at a video time corresponding to the valid image and obtain the video of the signing behavior according to the first service status and the behavior corresponding to the valid image further comprises:
when the first service state is a signature state, judging whether the behavior corresponding to the effective image is a signature behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image;
if the behavior corresponding to the current effective image is a signature behavior, continuing to perform video writing operation at the video moment corresponding to the effective image to serve as a video operation result corresponding to the effective image;
if the behavior corresponding to the current effective image is a non-signature behavior, setting the continuous occurrence frequency of the signature behavior to zero, increasing the continuous occurrence frequency of the non-signature behavior by 1, increasing the cumulative occurrence frequency of the non-signature behavior by 1, adjusting the first service state to be a non-signature state when the continuous occurrence frequency of the non-signature behavior or the cumulative occurrence frequency of the non-signature behavior meets a preset second condition, then terminating the video writing operation at the video moment corresponding to the current effective image, taking the video moment corresponding to the current effective image as the end moment of the video of the signature behavior, and acquiring the video of the signature behavior from the video data according to the start moment and the end moment of the video of the signature behavior.
7. The method as claimed in claim 1, wherein receiving the video data uploaded by a user and extracting a plurality of first images to be identified from the video data specifically comprises:
receiving video data uploaded by a user, decoding the video data, sequentially acquiring a plurality of initial images, and calculating image histograms of all the initial images;
according to the acquisition sequence, taking a first initial image as an initial frame image and taking a second initial image as a comparison frame image;
calculating the similarity between the image histogram of the initial frame image and the image histogram of the comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the comparison of the similarity with a preset second threshold, until all the initial images are sampled to obtain a plurality of first images to be identified;
if the similarity is smaller than the second threshold, taking the current initial frame image as a first image, taking the current comparison frame image as the new initial frame image, taking the next initial image as the comparison frame image, and continuing the sampling;
if the similarity is greater than or equal to the second threshold, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing the sampling.
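A minimal sketch of this sampling loop, assuming OpenCV is used for decoding and histogram comparison; the per-channel histogram, the correlation metric, and the 0.95 value for the second threshold are illustrative choices, since the claim only requires a preset similarity algorithm and a preset second threshold.

```python
# Hypothetical sketch of histogram-based frame sampling.
import cv2

def sample_first_images(video_path: str, second_threshold: float = 0.95):
    cap = cv2.VideoCapture(video_path)
    first_images = []

    def hist(img):
        # 8x8x8 colour histogram, normalized so frames compare on distribution, not scale.
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        return cv2.normalize(h, h).flatten()

    ok, initial = cap.read()                     # first initial image -> initial frame image
    if not ok:
        return first_images
    initial_hist = hist(initial)

    while True:
        ok, compare = cap.read()                 # next initial image -> comparison frame image
        if not ok:
            break
        compare_hist = hist(compare)
        similarity = cv2.compareHist(initial_hist, compare_hist, cv2.HISTCMP_CORREL)
        if similarity < second_threshold:
            first_images.append(initial)         # keep the initial frame as a first image
            initial, initial_hist = compare, compare_hist   # comparison frame becomes initial frame
        # otherwise the comparison frame is a near-duplicate and is discarded
    cap.release()
    return first_images
```

A higher second threshold keeps more frames as first images; a lower one discards more near-duplicate frames and lightens the downstream detection and recognition load.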
8. A deep learning based signature behavior recognition system, comprising:
the system comprises a preprocessing module and a behavior recognition module, wherein the preprocessing module is used for receiving video data uploaded by a user and extracting a plurality of first images to be identified from the video data; sequentially detecting all the first images according to a preset algorithm, and judging that the current first image is an effective image when it is detected that the signature pen area in the current first image intersects the hand area and the rotation angle of the signature pen is greater than a preset angle;
the behavior recognition module is used for sequentially carrying out behavior recognition on all the effective images and acquiring a signature behavior video from the video data according to all the behavior recognition results; wherein one of the effective images corresponds to one of the behavior recognition results.
9. A deep learning based signature behavior recognition system as recited in claim 8, wherein the preprocessing module further comprises:
the video decoding unit is used for receiving video data uploaded by a user, decoding the video data, sequentially acquiring a plurality of initial images and calculating image histograms of all the initial images;
the sampling unit is used for taking the first initial image as an initial frame image and taking the second initial image as a comparison frame image according to the acquisition sequence; calculating the similarity of the image histogram of the initial frame image and the image histogram of the comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold value until all the initial images are sampled to obtain a plurality of first images; if the similarity is smaller than the second threshold, taking the current initial frame image as the first image, taking the current comparison frame image as the initial frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image; if the similarity is larger than or equal to the second threshold, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image;
the detection unit is used for sequentially detecting all the first images according to a preset target detection algorithm to obtain signature pen detection results and hand detection results corresponding to the plurality of first images; wherein one first image corresponds to one signature pen detection result and one hand detection result;
the selecting unit is used for selecting, according to the signature pen detection result and the hand detection result, a first image in which the signature pen area intersects the hand area and the rotation angle of the signature pen is greater than a preset angle as the effective image; wherein the preset angle ranges from 40 to 80 degrees.
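A minimal sketch of this effective-image test; the detector output format (one axis-aligned box per object plus a pen rotation angle in degrees) and the 60-degree default are assumptions consistent with, but not dictated by, the 40-to-80-degree range above.

```python
# Hypothetical sketch of the effective-image selection rule.
from typing import Tuple

Box = Tuple[float, float, float, float]    # (x1, y1, x2, y2)

def boxes_intersect(a: Box, b: Box) -> bool:
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def is_effective_image(pen_box: Box, hand_box: Box,
                       pen_angle_deg: float, preset_angle: float = 60.0) -> bool:
    """True when the signature pen area intersects the hand area and the pen is
    rotated beyond the preset angle, i.e. roughly tilted into a writing posture."""
    return boxes_intersect(pen_box, hand_box) and pen_angle_deg > preset_angle
```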
10. A deep learning based signature behavior recognition system as recited in claim 8, wherein the behavior recognition module further comprises:
the behavior recognition unit is used for sequentially carrying out behavior recognition on all the effective images according to a preset behavior recognition algorithm based on deep learning to obtain behavior recognition results corresponding to a plurality of effective images;
the behavior judging unit is used for judging the behavior corresponding to the current effective image according to the behavior recognition result; if the behavior recognition result comprises a signature action and the probability corresponding to the signature action is greater than a preset first threshold, determining that the behavior corresponding to the current effective image is a signature behavior; otherwise, determining that the behavior corresponding to the current effective image is a non-signature behavior (an illustrative sketch of this recognition and judging step follows this claim);
the first processing unit is used for judging whether the behavior corresponding to the effective image is a signature behavior when the first service state is a non-signature state, and determining whether a video writing operation is performed at the video moment corresponding to the effective image; if the behavior corresponding to the current effective image is a signature behavior, resetting the consecutive occurrence count of the non-signature behavior to zero, increasing the consecutive occurrence count of the signature behavior by 1, and increasing the cumulative occurrence count of the signature behavior by 1; when a video writing operation was performed at the video moment corresponding to the previous effective image, continuing the video writing operation at the video moment corresponding to the current effective image, and adjusting the first service state to a signature state when the consecutive occurrence count or the cumulative occurrence count of the signature behavior meets a preset first condition; when no video writing operation was performed at the video moment corresponding to the previous effective image, starting the video writing operation at the video moment corresponding to the current effective image and taking that video moment as the starting moment of the signature behavior video; if the behavior corresponding to the current effective image is a non-signature behavior, resetting the consecutive occurrence count of the signature behavior to zero, increasing the consecutive occurrence count of the non-signature behavior by 1, and increasing the cumulative occurrence count of the non-signature behavior by 1; when a video writing operation was performed at the video moment corresponding to the previous effective image and the consecutive occurrence count or the cumulative occurrence count of the non-signature behavior meets a preset second condition, terminating the video writing operation at the video moment corresponding to the current effective image and deleting the video data stored during the video writing operation;
the second processing unit is used for judging whether the behavior corresponding to the effective image is a signature behavior when the first service state is the signature state, and determining whether a video writing operation is performed at the video moment corresponding to the effective image; if the behavior corresponding to the current effective image is a signature behavior, continuing the video writing operation at the video moment corresponding to the effective image as the video operation result corresponding to the effective image; if the behavior corresponding to the current effective image is a non-signature behavior, resetting the consecutive occurrence count of the signature behavior to zero, increasing the consecutive occurrence count of the non-signature behavior by 1, and increasing the cumulative occurrence count of the non-signature behavior by 1; when the consecutive occurrence count or the cumulative occurrence count of the non-signature behavior meets a preset second condition, adjusting the first service state to a non-signature state, terminating the video writing operation at the video moment corresponding to the current effective image, taking that video moment as the end moment of the signature behavior video, and acquiring the signature behavior video from the video data according to the start moment and the end moment of the signature behavior video.
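The behavior recognition unit and behavior judging unit referenced in this claim can be pictured with a minimal sketch. The classifier (a torchvision ResNet-18 with a two-label head, whose weights would come from training) and the 0.8 value are assumptions standing in for the preset deep-learning behavior recognition algorithm and the preset first threshold, both of which the claims leave open; the first and second processing units, in turn, mirror the state handling sketched after claims 5 and 6.

```python
# Hypothetical sketch of the behavior recognition and judging step.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

LABELS = ["non-signature", "signature"]          # assumed label order
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(num_classes=len(LABELS))  # weights would be loaded from a trained checkpoint
model.eval()

def judge_behavior(effective_image, first_threshold: float = 0.8) -> str:
    """Return 'signature' only when the signature action is predicted with a
    probability above the first threshold; otherwise 'non-signature'."""
    with torch.no_grad():
        logits = model(preprocess(effective_image).unsqueeze(0))
        probs = F.softmax(logits, dim=1)[0]
    signature_prob = probs[LABELS.index("signature")].item()
    return "signature" if signature_prob > first_threshold else "non-signature"
```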
CN202210034269.6A 2022-01-12 2022-01-12 Signature behavior identification method and system based on deep learning Pending CN114463858A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210034269.6A CN114463858A (en) 2022-01-12 2022-01-12 Signature behavior identification method and system based on deep learning

Publications (1)

Publication Number Publication Date
CN114463858A true CN114463858A (en) 2022-05-10

Family

ID=81409996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210034269.6A Pending CN114463858A (en) 2022-01-12 2022-01-12 Signature behavior identification method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN114463858A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999005816A1 (en) * 1997-07-24 1999-02-04 Wondernet Ltd. System and method for authenticating signatures
JP2008046781A (en) * 2006-08-11 2008-02-28 National Institute Of Advanced Industrial & Technology Handwritten signature personal authentication system with time series data of contact force
JP2014027564A (en) * 2012-07-27 2014-02-06 Sharp Corp Verification device and electronic signature authentication method
CN105095709A (en) * 2015-09-09 2015-11-25 西南大学 On-line signature identification method and on-line signature identification system
CN109643176A (en) * 2016-08-17 2019-04-16 立顶科技有限公司 Stylus, touch-sensing system, touch-sensing controller and touch-sensing method
CN107657241A (en) * 2017-10-09 2018-02-02 河海大学常州校区 A kind of signature true or false identification system towards signature pen
DE102019104025A1 (en) * 2018-02-20 2019-08-22 RheinLand Versicherungs Aktiengesellschaft Method and system for carrying out an insurance transaction
CN111401826A (en) * 2020-02-14 2020-07-10 平安科技(深圳)有限公司 Double-recording method and device for signing electronic contract, computer equipment and storage medium
CN112016538A (en) * 2020-10-29 2020-12-01 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN112583601A (en) * 2020-12-04 2021-03-30 湖南环境生物职业技术学院 Financial system is with authorizing system of signing a seal based on thing networking
CN113095203A (en) * 2021-04-07 2021-07-09 中国工商银行股份有限公司 Client signature detection method and device in double-record data quality inspection
CN113313092A (en) * 2021-07-29 2021-08-27 太平金融科技服务(上海)有限公司深圳分公司 Handwritten signature recognition method, and claims settlement automation processing method, device and equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880782A (en) * 2023-02-16 2023-03-31 广州佰锐网络科技有限公司 AI-based signature action recognition positioning method, recognition training method and system
CN115880782B (en) * 2023-02-16 2023-08-08 广州佰锐网络科技有限公司 Signature action recognition positioning method based on AI, recognition training method and system

Similar Documents

Publication Publication Date Title
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
US9760789B2 (en) Robust cropping of license plate images
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
CN108960211B (en) Multi-target human body posture detection method and system
US10438055B2 (en) Human facial detection and recognition system
EP2660753B1 (en) Image processing method and apparatus
CN109409288B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111400528B (en) Image compression method, device, server and storage medium
CN111931548B (en) Face recognition system, method for establishing face recognition data and face recognition method
CN110796108B (en) Method, device and equipment for detecting face quality and storage medium
CN111275040A (en) Positioning method and device, electronic equipment and computer readable storage medium
CN111932582A (en) Target tracking method and device in video image
CN113158777A (en) Quality scoring method, quality scoring model training method and related device
CN111680546A (en) Attention detection method, attention detection device, electronic equipment and storage medium
CN114463858A (en) Signature behavior identification method and system based on deep learning
CN111368632A (en) Signature identification method and device
CN110555406B (en) Video moving target identification method based on Haar-like characteristics and CNN matching
CN115620083A (en) Model training method, face image quality evaluation method, device and medium
CN114972880A (en) Label identification method and device, electronic equipment and storage medium
CN115311632A (en) Vehicle weight recognition method and device based on multiple cameras
CN115393755A (en) Visual target tracking method, device, equipment and storage medium
CN114694209A (en) Video processing method and device, electronic equipment and computer storage medium
CN113177479A (en) Image classification method and device, electronic equipment and storage medium
CN113361426A (en) Vehicle loss assessment image acquisition method, medium, device and electronic equipment
CN112559342A (en) Method, device and equipment for acquiring picture test image and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination