CN114463858B - Signature behavior recognition method and system based on deep learning - Google Patents

Signature behavior recognition method and system based on deep learning

Info

Publication number
CN114463858B
CN114463858B
Authority
CN
China
Prior art keywords
behavior
video
image
signature
effective image
Prior art date
Legal status
Active
Application number
CN202210034269.6A
Other languages
Chinese (zh)
Other versions
CN114463858A (en)
Inventor
刘志忠
余敏
邓帅军
陈亚俊
钟瑞超
Current Assignee
Guangzhou Shuangzhao Electronic Technology Co., Ltd.
Original Assignee
Guangzhou Shuangzhao Electronic Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangzhou Shuangzhao Electronic Technology Co., Ltd.
Priority to CN202210034269.6A
Publication of CN114463858A
Application granted
Publication of CN114463858B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a signature behavior recognition method and system based on deep learning, comprising the following steps: receiving video data uploaded by a user and extracting a plurality of first images to be identified from the video data; detecting all the first images in sequence according to a preset algorithm, and judging the current first image to be a valid image when the sign pen area and the hand area in the current first image intersect and the rotation angle of the sign pen is larger than a preset angle; and performing behavior recognition on all the valid images in sequence and acquiring signature behavior videos from the video data according to all the behavior recognition results, wherein one valid image corresponds to one behavior recognition result. By screening, as valid images, the first images in which the sign pen area and the hand area intersect and the rotation angle of the sign pen is larger than the preset angle, the method avoids misidentifying as a signature behavior the case where a target merely picks up the pen without signing, and improves the accuracy of signature behavior recognition.

Description

Signature behavior recognition method and system based on deep learning
Technical Field
The invention relates to the field of video data processing, in particular to a signature behavior recognition method and system based on deep learning.
Background
To protect the rights and interests of consumers, financial and insurance institutions are required by the regulatory authorities to standardize their sales behavior by recording audio and video when selling financial products, whether directly or on an agency basis. At present, a financial or insurance institution generally caches the video file locally and asynchronously uploads the whole video to the cloud for storage after recording is finished, so that the supervision department can subsequently carry out a compliance examination. To ensure the compliance of business-handling videos, financial and insurance institutions generally review the videos manually. A large amount of video data is generated during financial transactions, and the amount keeps increasing; since manually reviewing one video takes 10 to 15 minutes, relying on manual review alone cannot meet the growing business needs. In addition, although special auxiliary equipment can be used to detect signature behaviors, such equipment is complex to operate, costly, poorly applicable in general, and limited by the usage scenario.
Existing signature detection systems generally determine the signature recognition result according to whether a sign pen and a hand appear in the video, so the behavior of a target picking up the pen without signing is easily misjudged as a signature behavior, which affects the accuracy of the recognition result.
Disclosure of Invention
The invention provides a signature behavior recognition method and a signature behavior recognition system based on deep learning, which reduce the time cost of review and improve the accuracy of signature behavior recognition.
In order to solve the above technical problems, an embodiment of the present invention provides a signature behavior recognition method based on deep learning, including:
Receiving video data uploaded by a user, and extracting a plurality of first images to be identified from the video data;
Detecting all the first images in sequence according to a preset algorithm, and judging that the current first image is an effective image when the sign pen area and the hand area in the current first image are intersected and the rotation angle of the sign pen is larger than a preset angle;
performing behavior recognition on all the effective images in sequence, and acquiring signature behavior videos from the video data according to all behavior recognition results; wherein one of the valid images corresponds to one of the behavior recognition results.
Further, detecting all the first images in turn according to a preset algorithm, and when it is detected that a sign pen area and a hand area in the current first image intersect and the rotation angle of the sign pen is greater than a preset angle, determining that the current first image is an effective image, specifically:
Detecting all the first images in sequence according to a preset target detection algorithm to obtain a plurality of sign pen detection results and hand detection results corresponding to the first images; wherein one of the first images corresponds to one of the sign pen detection results and one of the hand detection results;
Selecting the first image in which the sign pen area and the hand area intersect and the sign pen rotation angle is larger than a preset angle as the effective image, according to the sign pen detection result and the hand detection result; wherein the value range of the preset angle is 40 to 80 degrees.
Further, the behavior recognition is sequentially performed on all the effective images, and signature behavior videos are obtained from the video data according to all the behavior recognition results, specifically:
according to a preset behavior recognition algorithm based on deep learning, sequentially performing behavior recognition on all the effective images to obtain behavior recognition results corresponding to a plurality of the effective images;
Determining the behavior corresponding to the effective image according to the behavior identification result;
and determining whether a video writing operation is carried out at the video moment corresponding to the effective image according to the first service state and the behavior corresponding to the effective image, and acquiring the signature behavior video from the video data.
Further, the determining, according to the behavior recognition result, the behavior corresponding to the effective image specifically includes:
if the behavior identification result comprises a signature action and the signature action probability corresponding to the signature action is larger than a preset first threshold value, determining that the current behavior corresponding to the effective image is a signature behavior;
otherwise, determining that the behavior corresponding to the effective image is an unsigned behavior.
Further, the determining whether to perform video writing operation at the video moment corresponding to the effective image according to the first service state and the behavior corresponding to the effective image, and acquiring the signature behavior video from the video data includes:
When the first service state is in an unsigned state, judging whether the behavior corresponding to the effective image is a signed behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image;
If the current behavior corresponding to the effective image is a signature behavior, setting the continuous occurrence times of the non-signature behavior to zero, increasing the continuous occurrence times of the signature behavior by 1, and increasing the accumulated occurrence times of the signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, continuing to carry out video writing operation at the video moment corresponding to the current effective image, and adjusting the first service state to be a signature state when the number of continuous signature behaviors or the cumulative signature behavior number meets a preset first condition; when the video write operation is not performed at the video time corresponding to the last effective image, starting to perform the video write operation at the video time corresponding to the current effective image, and taking the video time corresponding to the current effective image as the starting time of the signature behavior video;
If the behavior corresponding to the current effective image is an unsigned behavior, setting the continuous occurrence times of the signed behavior to zero, increasing the continuous occurrence times of the unsigned behavior by 1, and increasing the accumulated occurrence times of the unsigned behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, and the continuous occurrence times of the unsigned behaviors or the accumulated occurrence times of the unsigned behaviors meet a preset second condition, stopping video writing operation at the video moment corresponding to the current effective image, and deleting video data stored in the process of carrying out video writing operation.
Further, the determining whether to perform video writing operation at the video moment corresponding to the effective image according to the first service state and the behavior corresponding to the effective image, and acquiring the signature behavior video, further includes:
when the first service state is a signature state, judging whether the behavior corresponding to the effective image is a signature behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image;
If the current behavior corresponding to the effective image is signature behavior, continuing to perform video writing operation at the video moment corresponding to the effective image as a video operation result corresponding to the effective image;
if the behavior corresponding to the current effective image is an unsigned behavior, setting the continuous occurrence times of signature behaviors to zero, increasing the continuous occurrence times of unsigned behaviors by 1, increasing the cumulative occurrence times of unsigned behaviors by 1, adjusting the first business state to be an unsigned state when the continuous occurrence times of unsigned behaviors or the cumulative occurrence times of unsigned behaviors meet a preset second condition, terminating a video writing operation at the video moment corresponding to the current effective image, taking the video moment corresponding to the current effective image as the ending moment of signature behavior video, and acquiring the signature behavior video from the video data according to the starting moment and the ending moment of signature behavior video.
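The two service-state branches described above amount to a small state machine driven by the per-image behavior labels. Below is a minimal Python sketch; the counter names, and the use of simple consecutive-count thresholds for the preset first and second conditions, are illustrative assumptions, since the patent leaves the concrete conditions as presets.

```python
class SignatureSegmenter:
    """Sketch of the first-service-state machine described above.

    The preset first/second conditions are modeled here as simple
    consecutive-count thresholds, which is an assumption; the text
    only says the conditions are preset.
    """

    def __init__(self, first_condition=3, second_condition=3):
        self.state = "unsigned"      # the first service state
        self.writing = False         # whether a video write operation is active
        self.start = None            # start moment of the current clip
        self.sig_run = 0             # continuous occurrences of signature behavior
        self.nosig_run = 0           # continuous occurrences of unsigned behavior
        self.first_condition = first_condition
        self.second_condition = second_condition
        self.segments = []           # completed (start, end) signature clips

    def feed(self, t, is_signature):
        """Process one valid image observed at video moment t."""
        if is_signature:
            self.nosig_run = 0
            self.sig_run += 1
            if not self.writing:
                # start writing; this moment is the clip's start time
                self.writing, self.start = True, t
            if self.state == "unsigned" and self.sig_run >= self.first_condition:
                self.state = "signed"
        else:
            self.sig_run = 0
            self.nosig_run += 1
            if self.writing and self.nosig_run >= self.second_condition:
                if self.state == "signed":
                    # end of a confirmed signature: keep the clip
                    self.segments.append((self.start, t))
                # in the unsigned state the partial clip is discarded instead
                self.state = "unsigned"
                self.writing, self.start = False, None
```

A run of signature images opens a clip, enough consecutive signature images confirm the signed state, and enough consecutive unsigned images close the clip (keeping it only if the signed state had been reached).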
Further, the receiving the video data uploaded by the user, and extracting a plurality of first images to be identified from the video data specifically includes:
Receiving video data uploaded by a user, decoding the video data, sequentially obtaining a plurality of initial images, and calculating image histograms of all the initial images;
taking a first initial image as an initial frame image and a second initial image as a comparison frame image according to the acquisition sequence;
Calculating the similarity of the image histogram of the initial frame image and the image histogram of the comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold until all the initial images are sampled, so as to obtain a plurality of first images to be identified;
If the similarity is smaller than the second threshold, taking the current initial frame image as the first image, taking the current comparison frame image as the initial frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image;
And if the similarity is greater than or equal to the second threshold, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image.
In order to solve the same technical problem, the invention also provides a signature behavior recognition system based on deep learning, which comprises:
The preprocessing module is used for receiving video data uploaded by a user and extracting a plurality of first images to be identified from the video data; detecting all the first images in sequence according to a preset algorithm, and judging that the current first image is an effective image when the sign pen area and the hand area in the current first image are intersected and the rotation angle of the sign pen is larger than a preset angle;
The behavior recognition module is used for sequentially carrying out behavior recognition on all the effective images and acquiring signature behavior videos from the video data according to all the behavior recognition results; wherein one of the valid images corresponds to one of the behavior recognition results.
Further, the preprocessing module further includes:
the video decoding unit is used for receiving video data uploaded by a user, decoding the video data, sequentially acquiring a plurality of initial images and calculating image histograms of all the initial images;
The sampling unit is used for taking a first initial image as an initial frame image and a second initial image as a comparison frame image according to the acquisition sequence; calculating the similarity of the image histogram of the initial frame image and the image histogram of the comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold until all the initial images are completely sampled to obtain a plurality of first images; if the similarity is smaller than the second threshold, taking the current initial frame image as the first image, taking the current comparison frame image as the initial frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image; if the similarity is greater than or equal to the second threshold, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image;
The detection unit is used for sequentially detecting all the first images according to a preset target detection algorithm to obtain a plurality of sign pen detection results and hand detection results corresponding to the first images; wherein one of the first images corresponds to one of the sign pen detection results and one of the hand detection results;
The selecting unit is used for selecting the first image in which the sign pen area and the hand area intersect and the sign pen rotation angle is larger than a preset angle as the effective image, according to the sign pen detection result and the hand detection result; wherein the value range of the preset angle is 40 to 80 degrees.
Further, the behavior recognition module further includes:
The behavior recognition unit is used for sequentially performing behavior recognition on all the effective images according to a preset behavior recognition algorithm based on deep learning to obtain behavior recognition results corresponding to a plurality of the effective images;
The behavior judging unit is used for judging the behavior corresponding to the current effective image according to the behavior identification result; if the behavior identification result comprises a signature action and the signature action probability corresponding to the signature action is larger than a preset first threshold value, determining that the current behavior corresponding to the effective image is a signature behavior; otherwise, determining the behavior corresponding to the current effective image to be an unsigned behavior;
The first processing unit is used for judging whether the behavior corresponding to the effective image is signature behavior or not when the first service state is in an unsigned state, and determining whether video writing operation is performed at the video moment corresponding to the effective image or not; if the current behavior corresponding to the effective image is a signature behavior, setting the continuous occurrence times of the non-signature behavior to zero, increasing the continuous occurrence times of the signature behavior by 1, and increasing the accumulated occurrence times of the signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, continuing to carry out video writing operation at the video moment corresponding to the current effective image, and adjusting the first service state to be a signature state when the number of continuous signature behaviors or the cumulative signature behavior number meets a preset first condition; when the video write operation is not performed at the video time corresponding to the last effective image, starting to perform the video write operation at the video time corresponding to the current effective image, and taking the video time corresponding to the current effective image as the starting time of the signature behavior video; if the behavior corresponding to the current effective image is an unsigned behavior, setting the continuous occurrence times of the signed behavior to zero, increasing the continuous occurrence times of the unsigned behavior by 1, and increasing the accumulated occurrence times of the unsigned behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, and the continuous occurrence times of the unsigned behaviors or the accumulated occurrence times of the unsigned behaviors meet a preset second condition, stopping video writing operation at the video moment corresponding to the current effective image, and deleting video data stored in the process of carrying out video writing operation;
The second processing unit is used for judging whether the behavior corresponding to the effective image is signature behavior or not when the first service state is signature state, and determining whether video writing operation is performed at the video moment corresponding to the effective image or not; if the current behavior corresponding to the effective image is signature behavior, continuing to perform video writing operation at the video moment corresponding to the effective image as a video operation result corresponding to the effective image; if the behavior corresponding to the current effective image is an unsigned behavior, setting the continuous occurrence times of signature behaviors to zero, increasing the continuous occurrence times of unsigned behaviors by 1, increasing the cumulative occurrence times of unsigned behaviors by 1, adjusting the first business state to be an unsigned state when the continuous occurrence times of unsigned behaviors or the cumulative occurrence times of unsigned behaviors meet a preset second condition, terminating a video writing operation at the video moment corresponding to the current effective image, taking the video moment corresponding to the current effective image as the ending moment of signature behavior video, and acquiring the signature behavior video from the video data according to the starting moment and the ending moment of signature behavior video.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
The invention provides a signature behavior recognition method and a system based on deep learning, which are used for carrying out subsequent behavior recognition by screening a first image which is intersected with a sign pen area and a hand area and has a rotation angle larger than a preset angle of the sign pen as an effective image, so that the situation that a behavior which is not signed and is taken up by a target is erroneously recognized as a signature behavior in the signature behavior recognition process is avoided, and the accuracy of signature behavior recognition is improved.
Further, according to the service states and the behavior recognition results corresponding to all the effective images, the starting time and the ending time of the signature behavior video are determined, the signature behavior video is acquired and stored, a large number of irrelevant non-signature video frames are reduced, and the user can conveniently view the signature behavior video subsequently. Meanwhile, the video data uploaded by the user is decoded and sampled, so that the number of first images obtained after conventional preprocessing is reduced, and the calculated amount of the signature behavior recognition process is effectively reduced.
Drawings
Fig. 1: the invention provides a flow diagram of one embodiment of a signature behavior recognition method based on deep learning;
Fig. 2: the invention provides a structural schematic diagram of a signature behavior recognition system based on deep learning;
Fig. 3: the invention provides a schematic structure diagram of a preprocessing module of a signature behavior recognition system based on deep learning;
Fig. 4: the invention provides a structure schematic diagram of a behavior recognition module of a signature behavior recognition system based on deep learning.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiment one:
Referring to fig. 1, a signature behavior recognition method based on deep learning provided in an embodiment of the present invention includes steps S1 to S3, where each step is specifically as follows:
step S1: and receiving video data uploaded by a user, and extracting a plurality of first images to be identified from the video data.
Further, step S1 specifically includes steps S11 to S15, each of which specifically includes:
step S11: receiving video data uploaded by a user, decoding the video data, sequentially obtaining a plurality of initial images, and calculating image histograms of all the initial images.
In this embodiment, OpenCV is used to decode the video data uploaded by the user, and a plurality of initial images are sequentially acquired.
Step S12: and taking the first initial image as an initial frame image and the second initial image as a comparison frame image according to the acquisition sequence.
Step S13: calculating the similarity of the image histogram of the initial frame image and the image histogram of the comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold until the sampling of all the initial images is completed, so as to obtain a plurality of first images to be identified.
In this embodiment, based on the statistical Bhattacharyya distance, the normalized correlation coefficient between the image histogram of the initial frame image and that of the comparison frame image is calculated as the similarity, and the current initial frame image is sampled according to the magnitude relation between the similarity and a preset second threshold. The second threshold may be set based on repeated tests; as an example, the second threshold is 0.2.
It should be noted that if the similarity is smaller than the second threshold, step S14 is executed, and if the similarity is greater than or equal to the second threshold, step S15 is executed.
Step S14: taking the current initial frame image as a first image, taking the current comparison frame image as an initial frame image, taking the next initial image as a comparison frame image, and continuing to sample the next initial image.
Step S15: and discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image.
In this embodiment, by sampling the initial images sequentially, selecting a plurality of first images to be identified, discarding the initial images with higher partial similarity, retaining the initial images with lower similarity, reducing redundant information in the video, and effectively reducing the calculation amount in the subsequent signature behavior identification process.
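The decode-and-sample procedure of steps S12 to S15 can be sketched as follows. This is a minimal pure-Python illustration: in practice the histograms would come from OpenCV (e.g. `cv2.calcHist`) and the correlation from `cv2.compareHist` with `HISTCMP_CORREL`; how the final reference frame is treated is not specified in the text, so it is simply left unsampled here.

```python
def hist_correlation(h1, h2):
    """Normalized correlation coefficient between two histograms,
    the similarity measure of step S13 (equivalent to OpenCV's
    HISTCMP_CORREL comparison)."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = (sum((a - m1) ** 2 for a in h1)
           * sum((b - m2) ** 2 for b in h2)) ** 0.5
    return num / den if den else 1.0  # constant histograms: treat as identical

def sample_frames(histograms, second_threshold=0.2):
    """Steps S12-S15: keep a frame only when its histogram differs
    enough (similarity below the second threshold) from the current
    initial frame. `histograms` holds one histogram per decoded frame,
    in order; returns indices of sampled first images."""
    if not histograms:
        return []
    kept = []
    ref = 0  # index of the current initial (reference) frame
    for i in range(1, len(histograms)):
        if hist_correlation(histograms[ref], histograms[i]) < second_threshold:
            kept.append(ref)  # dissimilar: the initial frame becomes a first image
            ref = i           # the comparison frame becomes the new initial frame
        # otherwise the comparison frame is discarded and ref is unchanged
    return kept
```

Similar consecutive frames collapse onto one reference frame, which matches the stated goal of discarding redundant video information before detection.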
Step S2: and detecting all the first images in sequence according to a preset algorithm, and judging the current first image as a valid image when the sign pen area and the hand area in the current first image are intersected and the rotation angle of the sign pen is larger than a preset angle.
Further, step S2 specifically includes steps S21 to S22, each of which specifically includes:
Step S21: detecting all the first images in sequence according to a preset target detection algorithm to obtain sign pen detection results and hand detection results corresponding to a plurality of first images; wherein, a first image corresponds to a sign pen detection result and a hand detection result.
In this embodiment, the initial target detection algorithm YOLOv is modified by adding angle information to obtain the preset target detection algorithm. The first image is detected according to the preset target detection algorithm, and the detection result takes the form [A, B, C], where A is the label name of the detected object, B is the confidence of the detection, and C is the position coordinates of the detected object.
As an example, the first image is detected according to the preset target detection algorithm, and the obtained sign pen detection result is [pen, 0.695, (464, 381, 99, 10, 62)], where pen is the label of the sign pen, 0.695 is the confidence of the sign pen detection, and (464, 381, 99, 10, 62) is the data information of the rotated rectangular frame of the sign pen: the center-point x coordinate, the center-point y coordinate, the longest side, the shortest side, and the rotation angle, respectively. From this data information, the four vertex coordinates of the rectangular frame of the sign pen are calculated as (240, 225), (218, 181), (222, 179) and (244, 223), respectively.
As an example, the first image is detected according to the preset target detection algorithm, and the obtained hand detection result is [hand, 0.845, (533, 346, 642, 439)], where hand is the label of the hand, 0.845 is the confidence of the hand detection, and (533, 346, 642, 439) is the data information of the rectangular frame of the hand, with (533, 346) being the upper-left vertex coordinates and (642, 439) the lower-right vertex coordinates.
Step S22: selecting a first image in which the sign pen area and the hand area intersect and the sign pen rotation angle is larger than a preset angle as an effective image, according to the sign pen detection result and the hand detection result; wherein the value range of the preset angle is 40 to 80 degrees.
In this embodiment, the first image is screened according to the preset sign pen angle and the screening condition that the sign pen area and the hand area intersect, so as to obtain an effective image, and the sign behaviors are primarily identified, so that the situation that the behavior which is not signed but is taken up by a target is erroneously identified as the sign behavior in the subsequent sign behavior identification process is avoided, and the accuracy of sign behavior identification is improved.
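The screening condition of steps S21 and S22 can be sketched as follows, assuming the usual rotated-rectangle parameterization for the pen box. Approximating the pen/hand overlap test by the pen's axis-aligned bounding box is an illustrative simplification, not necessarily the patent's exact intersection test.

```python
import math

def pen_vertices(cx, cy, long_side, short_side, angle_deg):
    """Four corners of the pen's rotated rectangle from the detector
    output (center x, center y, longest side, shortest side, rotation
    angle). The corner convention of the modified detector is not
    spelled out in the text; this uses the standard parameterization."""
    a = math.radians(angle_deg)
    hx, hy = long_side / 2.0, short_side / 2.0
    c, s = math.cos(a), math.sin(a)
    return [(cx + x * c - y * s, cy + x * s + y * c)
            for x, y in ((hx, hy), (-hx, hy), (-hx, -hy), (hx, -hy))]

def is_valid_image(pen, hand, preset_angle=40):
    """Step S22 screening: the first image counts as an effective image
    when the pen's rotation angle exceeds the preset angle (chosen from
    40-80 degrees) and the pen region overlaps the hand region.
    Overlap is approximated via the pen's axis-aligned bounding box."""
    cx, cy, long_side, short_side, angle = pen
    if angle <= preset_angle:
        return False
    xs, ys = zip(*pen_vertices(cx, cy, long_side, short_side, angle))
    x1, y1, x2, y2 = hand  # upper-left and lower-right corners of the hand box
    return min(xs) <= x2 and max(xs) >= x1 and min(ys) <= y2 and max(ys) >= y1
```

The angle check is what filters out a pen lying flat on the table; only a pen held at a writing-like angle inside the hand region passes.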
Step S3: performing behavior recognition on all the effective images in sequence, and acquiring signature behavior videos from video data according to all behavior recognition results; wherein one valid image corresponds to one behavior recognition result.
Further, step S3 specifically includes steps S31 to S33, each of which specifically includes:
Step S31: and according to a preset behavior recognition algorithm based on deep learning, sequentially performing behavior recognition on all the effective images to obtain behavior recognition results corresponding to a plurality of effective images.
In this embodiment, based on the deep-learning behavior recognition algorithm SlowFast, behavior recognition is sequentially performed on the effective images to obtain behavior recognition results corresponding to the plurality of effective images; each behavior recognition result includes the several most likely actions and their corresponding action probabilities.
As an example, a behavior recognition result includes the action writing with probability 0.448, where writing represents the signature action and 0.448 represents the signature action probability.
Step S32: and determining the behavior corresponding to the effective image according to the behavior identification result.
Further, step S32 specifically includes steps S321 to S323, each of which specifically includes:
step S321: judging whether the behavior recognition result contains a signature action or not, and judging whether the signature action probability corresponding to the signature action is larger than a preset first threshold value or not.
In this embodiment, the first threshold may be set based on a plurality of tests; as an example, the first threshold is 0.3.
It should be noted that, if the behavior recognition result includes a signature action and the signature action probability corresponding to the signature action is greater than the preset first threshold, step S322 is executed, otherwise, step S323 is executed.
Step S322: determining that the behavior corresponding to the current effective image is a signature behavior.
Step S323: determining that the behavior corresponding to the current effective image is an unsigned behavior.
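Steps S321 to S323 amount to a containment-plus-threshold check. The following is a hedged sketch, assuming the recognition result is an {action: probability} mapping and that the signature action is named "writing" as in the example above; both are assumptions about the result format rather than the patent's specification.

```python
# Sketch of steps S321-S323: the behavior is a signature behavior only if the
# recognition result contains the signature action AND its probability exceeds
# the first threshold (0.3 in this embodiment).

FIRST_THRESHOLD = 0.3
SIGNATURE_ACTION = "writing"

def is_signature_behavior(result, threshold=FIRST_THRESHOLD):
    """result maps each likely action to its probability (step S321)."""
    prob = result.get(SIGNATURE_ACTION)
    # Step S322 when both checks pass; step S323 otherwise.
    return prob is not None and prob > threshold
```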
Step S33: and determining whether video writing operation is carried out at the video moment corresponding to the effective image according to the first service state and the behavior corresponding to the effective image, and acquiring signature behavior video from video data.
In this embodiment, before step S33 is executed for the first effective image (the effective images being processed in the order in which they are obtained), the initial value of the first service state is set to the unsigned state, and the number of continuous occurrences of signature behavior, the cumulative number of occurrences of signature behavior, the number of continuous occurrences of unsigned behavior, and the cumulative number of occurrences of unsigned behavior are all initialized to 0.
Further, step S33 specifically includes steps S331 to S336, which are as follows:
Step S331: when the first service state is in an unsigned state, judging whether the behavior corresponding to the effective image is a signature behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image.
It should be noted that, if the behavior corresponding to the current valid image is a signature behavior, step S332 is executed, and if the behavior corresponding to the current valid image is an unsigned behavior, step S333 is executed.
Step S332: setting the number of continuous occurrences of unsigned behavior F_continue_num to zero, increasing the number of continuous occurrences of signature behavior T_continue_num by 1, and increasing the cumulative number of occurrences of signature behavior T_all_num by 1; when a video write operation was performed at the video moment corresponding to the last effective image, continuing the video write operation at the video moment corresponding to the current effective image, and adjusting the first service state to the signature state when T_continue_num or T_all_num satisfies the preset first condition; when no video write operation was performed at the video moment corresponding to the last effective image, starting the video write operation at the video moment corresponding to the current effective image and taking that video moment as the starting moment of the signature behavior video.
In this embodiment, the preset first condition may be set based on a plurality of trials. As an example, the preset first condition is: the number of continuous occurrences of signature behavior T_continue_num is greater than or equal to 3, or the cumulative number of occurrences of signature behavior T_all_num is greater than or equal to 5 within 7 consecutively processed behavior recognition results.
Step S333: setting the number of continuous occurrences of signature behavior T_continue_num to zero, increasing the number of continuous occurrences of unsigned behavior F_continue_num by 1, and increasing the cumulative number of occurrences of unsigned behavior F_all_num by 1; when a video write operation was performed at the video moment corresponding to the last effective image and F_continue_num or F_all_num satisfies the preset second condition, terminating the video write operation at the video moment corresponding to the current effective image and deleting the video data stored during that video write operation.
In this embodiment, the preset second condition may be set based on a plurality of trials. As an example, the preset second condition is: the number of continuous occurrences of unsigned behavior F_continue_num is greater than or equal to 3, or the cumulative number of occurrences of unsigned behavior F_all_num is greater than or equal to 5 within 7 consecutively processed behavior recognition results.
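The cumulative halves of the example first and second conditions ("at least 5 occurrences within the last 7 processed recognition results") can be checked with a fixed-length window. This is a minimal sketch under that assumption; the class name and the deque-based window are illustrative, the patent only states the condition itself.

```python
# Fixed-length window for the "N occurrences within the last W results" check.
from collections import deque

class WindowedCondition:
    def __init__(self, window=7, required=5):
        self.recent = deque(maxlen=window)   # last `window` recognition results
        self.required = required

    def update(self, occurred):
        """Record one result; return True when the cumulative condition holds."""
        self.recent.append(bool(occurred))
        return sum(self.recent) >= self.required
```

For instance, feeding the sequence True, True, False, True, True, False, True triggers the condition on the seventh update, since 5 of the last 7 results are occurrences.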
Step S334: when the first service state is a signature state, judging whether the behavior corresponding to the effective image is a signature behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image.
It should be noted that, if the behavior corresponding to the current valid image is a signature behavior, step S335 is executed, and if the behavior corresponding to the current valid image is an unsigned behavior, step S336 is executed.
Step S335: if the behavior corresponding to the current effective image is a signature behavior, continuing the video write operation at the video moment corresponding to the effective image.
Step S336: if the behavior corresponding to the current effective image is an unsigned behavior, setting the number of continuous occurrences of signature behavior T_continue_num to zero, increasing the number of continuous occurrences of unsigned behavior F_continue_num by 1, and increasing the cumulative number of occurrences of unsigned behavior F_all_num by 1; when F_continue_num or F_all_num satisfies the preset second condition, adjusting the first service state to the unsigned state, terminating the video write operation at the video moment corresponding to the current effective image, taking that video moment as the ending moment of the signature behavior video, and acquiring the signature behavior video from the video data uploaded by the user according to the starting moment and ending moment of the signature behavior video.
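Taken together, steps S331 to S336 make the first service state, the counters, and the video write operation behave like a small state machine. The sketch below is a simplified illustration under stated assumptions: only the continuous-occurrence halves of the first and second conditions are checked (3 or more in a row), and video I/O is reduced to recording (start, end) timestamps instead of writing real frames.

```python
# Simplified state machine for steps S331-S336. Counter names follow the
# embodiment; the "signed"/"unsigned" strings and the timestamp-only video
# write are illustrative assumptions.

class SignatureSegmenter:
    def __init__(self):
        self.state = "unsigned"       # first service state
        self.writing = False          # whether a video write operation is active
        self.T_continue_num = 0       # consecutive signature behaviors
        self.F_continue_num = 0       # consecutive unsigned behaviors
        self.start = None             # starting moment of the candidate clip
        self.segments = []            # confirmed signature behavior videos

    def feed(self, t, is_signature):
        if is_signature:                       # steps S332 / S335
            self.F_continue_num = 0
            self.T_continue_num += 1
            if not self.writing:               # tentative start of a clip
                self.writing, self.start = True, t
            if self.state == "unsigned" and self.T_continue_num >= 3:
                self.state = "signed"          # first condition met
        else:                                  # steps S333 / S336
            self.T_continue_num = 0
            self.F_continue_num += 1
            if self.writing and self.F_continue_num >= 3:  # second condition
                if self.state == "signed":
                    self.segments.append((self.start, t))  # keep the clip
                # in the unsigned state the stored data is simply discarded
                self.state, self.writing, self.start = "unsigned", False, None
```

Feeding five signature frames followed by several unsigned frames yields one confirmed segment, while a single signature frame followed by unsigned frames is discarded without producing a clip, mirroring the delete-on-false-start behavior of step S333.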
In this embodiment, the starting moment and ending moment of the signature behavior video are located according to the first service state, the behavior corresponding to the effective image, and whether a video write operation was performed at the video moment corresponding to the last effective image. The signature behavior video is thus intercepted from the video data, which removes a large number of irrelevant non-signature video frames and makes it convenient for the user to view the signature behavior video later.
In order to solve the same technical problem, the invention also provides a signature behavior recognition system based on deep learning, which comprises:
The preprocessing module 1 is used for receiving video data uploaded by a user and extracting a plurality of first images to be identified from the video data; detecting all the first images in sequence according to a preset algorithm, and judging the current first image as a valid image when the sign pen area and the hand area in the current first image are intersected and the rotation angle of the sign pen is larger than a preset angle;
The behavior recognition module 2 is used for sequentially performing behavior recognition on all the effective images and acquiring signature behavior videos from the video data according to all the behavior recognition results; wherein one valid image corresponds to one behavior recognition result.
Further, the preprocessing module 1 further includes:
the video decoding unit is used for receiving video data uploaded by a user, decoding the video data, sequentially acquiring a plurality of initial images, and calculating image histograms of all the initial images;
The sampling unit is used for taking the first initial image as an initial frame image and the second initial image as a comparison frame image according to the acquisition sequence; calculating the similarity of an image histogram of an initial frame image and an image histogram of a comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold until all initial images are completely sampled to obtain a plurality of first images; if the similarity is smaller than the second threshold, taking the current initial frame image as a first image, taking the current comparison frame image as an initial frame image, taking the next initial image as a comparison frame image, and continuing to sample the next initial image; if the similarity is greater than or equal to a second threshold value, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image;
the detection unit is used for sequentially detecting all the first images according to a preset target detection algorithm to obtain sign pen detection results and hand detection results corresponding to a plurality of first images; wherein, a first image corresponds to a sign pen detection result and a hand detection result;
the selecting unit is used for selecting a first image which is intersected with the sign pen area and the hand area and has a rotation angle larger than a preset angle of the sign pen as an effective image according to the sign pen detection result and the hand detection result; wherein, the value range of the preset angle is: 40 degrees to 80 degrees.
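The sampling unit's frame selection can be sketched as follows, assuming grayscale frames, 32-bin intensity histograms, and cosine similarity as the preset similarity algorithm; the patent leaves all three unspecified, and only the keep/discard logic follows the description above.

```python
# Sketch of histogram-similarity sampling: keep the initial frame as a "first
# image" whenever the comparison frame differs enough (similarity below the
# second threshold); otherwise discard the comparison frame and keep comparing
# later frames against the same initial frame.
import numpy as np

def histogram(frame, bins=32):
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h.astype(np.float64)

def cosine_similarity(h1, h2):
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom else 1.0

def sample_first_images(frames, second_threshold=0.98):
    first_images = []
    if not frames:
        return first_images
    ref = frames[0]                               # initial frame image
    for cand in frames[1:]:                       # comparison frame images
        if cosine_similarity(histogram(ref), histogram(cand)) < second_threshold:
            first_images.append(ref)              # keep the initial frame
            ref = cand                            # comparison frame becomes initial
        # similarity >= threshold: discard cand, ref unchanged
    return first_images
```

Near-duplicate consecutive frames are collapsed into a single representative, which is what reduces the number of first images and the downstream recognition workload.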
Further, the behavior recognition module 2 further includes:
the behavior recognition unit is used for sequentially performing behavior recognition on all the effective images according to a preset behavior recognition algorithm based on deep learning to obtain behavior recognition results corresponding to a plurality of the effective images;
The behavior judging unit is used for judging the behavior corresponding to the current effective image according to the behavior identification result; if the behavior identification result comprises a signature action and the signature action probability corresponding to the signature action is larger than a preset first threshold value, determining that the behavior corresponding to the current effective image is a signature behavior; otherwise, determining the behavior corresponding to the current valid image to be an unsigned behavior;
The first processing unit is used for judging whether the behavior corresponding to the effective image is signature behavior or not when the first service state is in an unsigned state, and determining whether video writing operation is performed at the video moment corresponding to the effective image or not; if the behavior corresponding to the current effective image is a signature behavior, setting the continuous occurrence times of the non-signature behavior to zero, increasing the continuous occurrence times of the signature behavior by 1, and increasing the accumulated occurrence times of the signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, continuing to carry out video writing operation at the video moment corresponding to the current effective image, and adjusting the first service state to be a signature state when the number of continuous signature actions or the cumulative signature action number meets a preset first condition; when the video write operation is not performed at the video time corresponding to the last effective image, starting to perform the video write operation at the video time corresponding to the current effective image, and taking the video time corresponding to the current effective image as the starting time of the signature behavior video; if the behavior corresponding to the current valid image is an unsigned behavior, setting the continuous occurrence times of the signed behavior to zero, increasing the continuous occurrence times of the unsigned behavior by 1, and increasing the accumulated occurrence times of the unsigned behavior by 1; when video write operation is carried out at the video moment corresponding to the last effective image and the number of continuous occurrences of the unsigned behavior or the number of cumulative occurrences of the unsigned behavior meets a preset second condition, terminating the video write operation at 
the video moment corresponding to the current effective image and deleting video data stored in the process of carrying out the video write operation;
The second processing unit is used for judging whether the behavior corresponding to the effective image is signature behavior or not when the first service state is signature state, and determining whether video writing operation is performed at the video moment corresponding to the effective image or not; if the behavior corresponding to the current effective image is signature behavior, continuing to perform video writing operation at the video moment corresponding to the effective image as a video operation result corresponding to the effective image; if the behavior corresponding to the current effective image is an unsigned behavior, setting the continuous occurrence times of the signed behavior to zero, increasing the continuous occurrence times of the unsigned behavior by 1, increasing the cumulative occurrence times of the unsigned behavior by 1, adjusting the first business state to be an unsigned state when the continuous occurrence times of the unsigned behavior or the cumulative occurrence times of the unsigned behavior meet a preset second condition, terminating the video writing operation at the video moment corresponding to the current effective image, taking the video moment corresponding to the current effective image as the ending moment of the signed behavior video, and acquiring the signed behavior video from the video data according to the starting moment and the ending moment of the signed behavior video.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described system, which is not described herein again.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
The invention provides a signature behavior recognition method and system based on deep learning. A first image in which the sign pen area intersects the hand area and the sign pen rotation angle is greater than a preset angle is screened out as an effective image for subsequent behavior recognition. This prevents a behavior in which a target merely picks up the pen without signing from being erroneously recognized as a signature behavior, improving the accuracy of signature behavior recognition.
Further, the starting moment and ending moment of the signature behavior video are determined according to the service states and the behavior recognition results corresponding to all the effective images, and the signature behavior video is acquired and stored, which removes a large number of irrelevant non-signature video frames and makes it convenient for the user to view the signature behavior video later. Meanwhile, the video data uploaded by the user is decoded and sampled, which reduces the number of first images relative to conventional preprocessing and effectively reduces the computation of the signature behavior recognition process.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present invention, and are not to be construed as limiting the scope of the invention. It should be noted that any modifications, equivalent substitutions, improvements, etc. made by those skilled in the art without departing from the spirit and principles of the present invention are intended to be included in the scope of the present invention.

Claims (9)

1. A signature behavior recognition method based on deep learning, comprising:
Receiving video data uploaded by a user, and extracting a plurality of first images to be identified from the video data;
Detecting all the first images in sequence according to a preset algorithm, and judging that the current first image is an effective image when the sign pen area and the hand area in the current first image are intersected and the rotation angle of the sign pen is larger than a preset angle;
Performing behavior recognition on all the effective images in sequence, and acquiring signature behavior videos from the video data according to all behavior recognition results; wherein one of the valid images corresponds to one of the behavior recognition results;
the method comprises the steps of sequentially carrying out behavior recognition on all the effective images, and obtaining signature behavior videos from the video data according to all behavior recognition results, wherein the specific steps are as follows:
according to a preset behavior recognition algorithm based on deep learning, sequentially performing behavior recognition on all the effective images to obtain behavior recognition results corresponding to a plurality of the effective images;
Determining the behavior corresponding to the effective image according to the behavior identification result;
and determining whether video writing operation is carried out at the video moment corresponding to the effective image according to the first service state and the action corresponding to the effective image, and acquiring the signature action video from the video data.
2. The method for identifying signature behavior based on deep learning as claimed in claim 1, wherein all the first images are sequentially detected according to a preset algorithm, and when it is detected that a sign pen area and a hand area in the current first image intersect, and the sign pen rotation angle is greater than a preset angle, the current first image is determined to be a valid image, specifically:
Detecting all the first images in sequence according to a preset target detection algorithm to obtain a plurality of sign pen detection results and hand detection results corresponding to the first images; wherein one of the first images corresponds to one of the sign pen detection results and one of the hand detection results;
Selecting the first image which is intersected with a sign pen area and a hand area and has a sign pen rotation angle larger than a preset angle as the effective image according to the sign pen detection result and the hand detection result; wherein, the value range of the preset angle is as follows: 40 degrees to 80 degrees.
3. The method for identifying signature behaviors based on deep learning as claimed in claim 1, wherein the determining the behavior corresponding to the valid image according to the behavior identification result is specifically:
if the behavior identification result comprises a signature action and the signature action probability corresponding to the signature action is larger than a preset first threshold value, determining that the current behavior corresponding to the effective image is a signature behavior;
otherwise, determining that the behavior corresponding to the effective image is an unsigned behavior.
4. The method for identifying signature behavior based on deep learning as claimed in claim 1, wherein said determining whether to perform video writing operation at a video time corresponding to said effective image according to a first service state and behavior corresponding to said effective image, and obtaining said signature behavior video from said video data, comprises:
When the first service state is in an unsigned state, judging whether the behavior corresponding to the effective image is a signed behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image;
If the current behavior corresponding to the effective image is a signature behavior, setting the continuous occurrence times of the non-signature behavior to zero, increasing the continuous occurrence times of the signature behavior by 1, and increasing the accumulated occurrence times of the signature behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, continuing to carry out video writing operation at the video moment corresponding to the current effective image, and adjusting the first service state to be a signature state when the number of continuous signature behaviors or the cumulative signature behavior number meets a preset first condition; when the video write operation is not performed at the video time corresponding to the last effective image, starting to perform the video write operation at the video time corresponding to the current effective image, and taking the video time corresponding to the current effective image as the starting time of the signature behavior video;
If the behavior corresponding to the current effective image is an unsigned behavior, setting the continuous occurrence times of the signed behavior to zero, increasing the continuous occurrence times of the unsigned behavior by 1, and increasing the accumulated occurrence times of the unsigned behavior by 1; when video writing operation is carried out at the video moment corresponding to the last effective image, and the continuous occurrence times of the unsigned behaviors or the accumulated occurrence times of the unsigned behaviors meet a preset second condition, stopping video writing operation at the video moment corresponding to the current effective image, and deleting video data stored in the process of carrying out video writing operation.
5. The method for identifying signature behavior based on deep learning as claimed in claim 1, wherein said determining whether to perform video writing operation at a video time corresponding to said effective image according to a first service state and behavior corresponding to said effective image, and acquiring said signature behavior video, further comprises:
when the first service state is a signature state, judging whether the behavior corresponding to the effective image is a signature behavior, and determining whether video writing operation is performed at the video moment corresponding to the effective image;
If the current behavior corresponding to the effective image is signature behavior, continuing to perform video writing operation at the video moment corresponding to the effective image as a video operation result corresponding to the effective image;
if the behavior corresponding to the current effective image is an unsigned behavior, setting the continuous occurrence times of signature behaviors to zero, increasing the continuous occurrence times of unsigned behaviors by 1, increasing the cumulative occurrence times of unsigned behaviors by 1, adjusting the first business state to be an unsigned state when the continuous occurrence times of unsigned behaviors or the cumulative occurrence times of unsigned behaviors meet a preset second condition, terminating a video writing operation at the video moment corresponding to the current effective image, taking the video moment corresponding to the current effective image as the ending moment of signature behavior video, and acquiring the signature behavior video from the video data according to the starting moment and the ending moment of signature behavior video.
6. The method for identifying signature behaviors based on deep learning as recited in claim 1, wherein the receiving video data uploaded by a user and extracting a plurality of first images to be identified from the video data comprises:
Receiving video data uploaded by a user, decoding the video data, sequentially obtaining a plurality of initial images, and calculating image histograms of all the initial images;
taking a first initial image as an initial frame image and a second initial image as a comparison frame image according to the acquisition sequence;
Calculating the similarity of the image histogram of the initial frame image and the image histogram of the comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold until all the initial images are sampled, so as to obtain a plurality of first images to be identified;
If the similarity is smaller than the second threshold, taking the current initial frame image as the first image, taking the current comparison frame image as the initial frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image;
And if the similarity is greater than or equal to the second threshold, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image.
7. A deep learning-based signature behavior recognition system, comprising:
The preprocessing module is used for receiving video data uploaded by a user and extracting a plurality of first images to be identified from the video data; detecting all the first images in sequence according to a preset algorithm, and judging that the current first image is an effective image when the sign pen area and the hand area in the current first image are intersected and the rotation angle of the sign pen is larger than a preset angle;
the behavior recognition module is used for sequentially carrying out behavior recognition on all the effective images and acquiring signature behavior videos from the video data according to all the behavior recognition results; wherein one of the valid images corresponds to one of the behavior recognition results;
The method comprises the steps of sequentially carrying out behavior recognition on all the effective images, and obtaining signature behavior videos from the video data according to all behavior recognition results, wherein the specific steps are as follows: according to a preset behavior recognition algorithm based on deep learning, sequentially performing behavior recognition on all the effective images to obtain behavior recognition results corresponding to a plurality of the effective images; determining the behavior corresponding to the effective image according to the behavior identification result; and determining whether video writing operation is carried out at the video moment corresponding to the effective image according to the first service state and the action corresponding to the effective image, and acquiring the signature action video from the video data.
8. The deep learning based signature behavior recognition system of claim 7, wherein the preprocessing module further comprises:
the video decoding unit is used for receiving video data uploaded by a user, decoding the video data, sequentially acquiring a plurality of initial images and calculating image histograms of all the initial images;
The sampling unit is used for taking a first initial image as an initial frame image and a second initial image as a comparison frame image according to the acquisition sequence; calculating the similarity of the image histogram of the initial frame image and the image histogram of the comparison frame image according to a preset similarity algorithm, and sampling the current initial frame image according to the size relation between the similarity and a preset second threshold until all the initial images are completely sampled to obtain a plurality of first images; if the similarity is smaller than the second threshold, taking the current initial frame image as the first image, taking the current comparison frame image as the initial frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image; if the similarity is greater than or equal to the second threshold, discarding the current comparison frame image, taking the next initial image as the comparison frame image, and continuing to sample the next initial image;
The detection unit is used for sequentially detecting all the first images according to a preset target detection algorithm to obtain a plurality of sign pen detection results and hand detection results corresponding to the first images; wherein one of the first images corresponds to one of the sign pen detection results and one of the hand detection results;
The selecting unit is used for selecting the first image which is intersected with the sign pen area and the hand area and has the sign pen rotation angle larger than a preset angle as the effective image according to the sign pen detection result and the hand detection result; wherein, the value range of the preset angle is as follows: 40 degrees to 80 degrees.
9. A deep learning based signature behavior recognition system as defined in claim 7, wherein the behavior recognition module further comprises:
The behavior recognition unit is used for sequentially performing behavior recognition on all the effective images according to a preset behavior recognition algorithm based on deep learning to obtain behavior recognition results corresponding to a plurality of the effective images;
The behavior judging unit is used for judging the behavior corresponding to the current effective image according to the behavior identification result; if the behavior identification result comprises a signature action and the signature action probability corresponding to the signature action is larger than a preset first threshold value, determining that the current behavior corresponding to the effective image is a signature behavior; otherwise, determining the behavior corresponding to the current effective image to be an unsigned behavior;
The first processing unit is used for, when the first service state is the unsigned state, judging whether the behavior corresponding to the effective image is a signature behavior and determining whether a video write operation is performed at the video moment corresponding to the effective image. If the behavior corresponding to the current effective image is a signature behavior, the consecutive occurrence count of the non-signature behavior is reset to zero, and the consecutive and cumulative occurrence counts of the signature behavior are each increased by 1; if a video write operation was performed at the video moment corresponding to the previous effective image, the video write operation continues at the video moment corresponding to the current effective image, and the first service state is adjusted to the signed state when the consecutive or cumulative occurrence count of the signature behavior meets a preset first condition; if no video write operation was performed at the video moment corresponding to the previous effective image, a video write operation is started at the video moment corresponding to the current effective image, and that video moment is taken as the starting moment of the signature behavior video. If the behavior corresponding to the current effective image is a non-signature behavior, the consecutive occurrence count of the signature behavior is reset to zero, and the consecutive and cumulative occurrence counts of the non-signature behavior are each increased by 1; when a video write operation was performed at the video moment corresponding to the previous effective image and the consecutive or cumulative occurrence count of the non-signature behavior meets a preset second condition, the video write operation is stopped at the video moment corresponding to the current effective image, and the video data stored during that write operation is deleted.
The second processing unit is used for, when the first service state is the signed state, judging whether the behavior corresponding to the effective image is a signature behavior and determining whether a video write operation is performed at the video moment corresponding to the effective image. If the behavior corresponding to the current effective image is a signature behavior, the video write operation continues at the video moment corresponding to the current effective image. If the behavior corresponding to the current effective image is a non-signature behavior, the consecutive occurrence count of the signature behavior is reset to zero, and the consecutive and cumulative occurrence counts of the non-signature behavior are each increased by 1; when the consecutive or cumulative occurrence count of the non-signature behavior meets the preset second condition, the first service state is adjusted to the unsigned state, the video write operation is terminated at the video moment corresponding to the current effective image, that video moment is taken as the ending moment of the signature behavior video, and the signature behavior video is obtained from the video data according to its starting and ending moments.
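Taken together, the two processing units amount to a small per-frame state machine driven by the classifier's signature/non-signature decisions. The sketch below is a hypothetical illustration only, not the patent's implementation: the class name `SignatureRecorder`, the threshold constants, and the choice to buffer frames in memory rather than write to a container file are all assumptions.

```python
# Illustrative sketch of the two processing units as one state machine.
# Threshold values for the "first condition" and "second condition" are assumed.
SIGN_STREAK, SIGN_TOTAL = 5, 8        # preset first condition (assumed values)
NOSIGN_STREAK, NOSIGN_TOTAL = 5, 8    # preset second condition (assumed values)

class SignatureRecorder:
    def __init__(self):
        self.signed = False           # first service state: False = unsigned
        self.writing = False          # whether a video write operation is active
        self.sign_run = self.sign_sum = 0      # consecutive / cumulative signature counts
        self.nosign_run = self.nosign_sum = 0  # consecutive / cumulative non-signature counts
        self.start_t = None           # starting moment of the signature behavior video
        self.buffer = []              # video data stored during the write operation

    def process(self, t, is_signature, frame):
        """Handle one effective image; return (start, end, clip) when a
        signature behavior video is completed, else None."""
        if is_signature:
            self.nosign_run = 0
            self.sign_run += 1
            self.sign_sum += 1
            if not self.writing:      # no write at the previous moment: start one
                self.writing, self.start_t = True, t
            self.buffer.append(frame)
            if not self.signed and (self.sign_run >= SIGN_STREAK
                                    or self.sign_sum >= SIGN_TOTAL):
                self.signed = True    # first condition met: adjust to signed state
        else:
            self.sign_run = 0
            self.nosign_run += 1
            self.nosign_sum += 1
            second = (self.nosign_run >= NOSIGN_STREAK
                      or self.nosign_sum >= NOSIGN_TOTAL)
            if self.writing and second:
                self.writing = False  # second condition met: stop writing
                if self.signed:       # second unit: keep the clip, mark end moment
                    self.signed = False
                    return self.start_t, t, list(self.buffer)
                self.buffer.clear()   # first unit: false start, delete stored data
            elif self.writing:
                self.buffer.append(frame)
        return None
```

With the assumed thresholds, five consecutive signature frames move the state to signed, and five consecutive non-signature frames afterwards close and return the clip; a shorter burst of signature frames is discarded as a false start.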
CN202210034269.6A 2022-01-12 2022-01-12 Signature behavior recognition method and system based on deep learning Active CN114463858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210034269.6A CN114463858B (en) 2022-01-12 2022-01-12 Signature behavior recognition method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210034269.6A CN114463858B (en) 2022-01-12 2022-01-12 Signature behavior recognition method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN114463858A CN114463858A (en) 2022-05-10
CN114463858B CN114463858B (en) 2024-05-24

Family

ID=81409996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210034269.6A Active CN114463858B (en) 2022-01-12 2022-01-12 Signature behavior recognition method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN114463858B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880782B (en) * 2023-02-16 2023-08-08 广州佰锐网络科技有限公司 Signature action recognition positioning method based on AI, recognition training method and system
CN118155284A (en) * 2024-03-20 2024-06-07 飞虎互动科技(北京)有限公司 Signature action detection method, signature action detection device, electronic equipment and readable storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999005816A1 (en) * 1997-07-24 1999-02-04 Wondernet Ltd. System and method for authenticating signatures
JP2008046781A (en) * 2006-08-11 2008-02-28 National Institute Of Advanced Industrial & Technology Handwritten signature personal authentication system with time series data of contact force
JP2014027564A (en) * 2012-07-27 2014-02-06 Sharp Corp Verification device and electronic signature authentication method
CN105095709A (en) * 2015-09-09 2015-11-25 西南大学 On-line signature identification method and on-line signature identification system
CN107657241A (en) * 2017-10-09 2018-02-02 河海大学常州校区 Signature authenticity identification system oriented to signature pens
CN109643176A (en) * 2016-08-17 2019-04-16 立顶科技有限公司 Stylus, touch-sensing system, touch-sensing controller and touch-sensing method
DE102019104025A1 (en) * 2018-02-20 2019-08-22 RheinLand Versicherungs Aktiengesellschaft Method and system for carrying out an insurance transaction
CN111339842A (en) * 2020-02-11 2020-06-26 深圳壹账通智能科技有限公司 Video jamming identification method and device and terminal equipment
CN111401826A (en) * 2020-02-14 2020-07-10 平安科技(深圳)有限公司 Double-recording method and device for signing electronic contract, computer equipment and storage medium
CN112016538A (en) * 2020-10-29 2020-12-01 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN112583601A (en) * 2020-12-04 2021-03-30 湖南环境生物职业技术学院 Internet-of-Things-based seal authorization system for a financial system
CN113095203A (en) * 2021-04-07 2021-07-09 中国工商银行股份有限公司 Client signature detection method and device in double-record data quality inspection
CN113313092A (en) * 2021-07-29 2021-08-27 太平金融科技服务(上海)有限公司深圳分公司 Handwritten signature recognition method, and claims settlement automation processing method, device and equipment


Also Published As

Publication number Publication date
CN114463858A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
US9760789B2 (en) Robust cropping of license plate images
CN109858555B (en) Image-based data processing method, device, equipment and readable storage medium
CN110827247B (en) Label identification method and device
CN114463858B (en) Signature behavior recognition method and system based on deep learning
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
CN109118473B (en) Angular point detection method based on neural network, storage medium and image processing system
CN110796108B (en) Method, device and equipment for detecting face quality and storage medium
EP2660753B1 (en) Image processing method and apparatus
CN109409288B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111400528B (en) Image compression method, device, server and storage medium
CN111967286A (en) Method and device for identifying information bearing medium, computer equipment and medium
US10922535B2 (en) Method and device for identifying wrist, method for identifying gesture, electronic equipment and computer-readable storage medium
CN112149663A (en) RPA and AI combined image character extraction method and device and electronic equipment
US10423817B2 (en) Latent fingerprint ridge flow map improvement
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN111275040A (en) Positioning method and device, electronic equipment and computer readable storage medium
CN111368632A (en) Signature identification method and device
CN111932582A (en) Target tracking method and device in video image
CN111680546A (en) Attention detection method, attention detection device, electronic equipment and storage medium
WO2023241102A1 (en) Label recognition method and apparatus, and electronic device and storage medium
CN112668462A (en) Vehicle loss detection model training method, vehicle loss detection device, vehicle loss detection equipment and vehicle loss detection medium
CN111178254A (en) Signature identification method and device
CN114821194B (en) Equipment running state identification method and device
CN113177479A (en) Image classification method and device, electronic equipment and storage medium
CN112559342A (en) Method, device and equipment for acquiring picture test image and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant