CN115565097A - Method and device for detecting compliance of personnel behaviors in transaction scene - Google Patents
Method and device for detecting compliance of personnel behaviors in transaction scene
- Publication number
- CN115565097A (application CN202211043586.0A)
- Authority
- CN
- China
- Prior art keywords
- person
- personnel
- video image
- feature information
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/02—Banking, e.g. interest calculation or account maintenance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Business, Economics & Management (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Accounting & Taxation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Finance (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Development Economics (AREA)
- Economics (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- Technology Law (AREA)
- General Business, Economics & Management (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of financial transactions, and in particular to a method and a device for detecting whether the behavior of personnel in a transaction scene is compliant. The method comprises the following steps: after a video image of a scene to be detected is obtained, feature extraction is performed on the video image by a pre-trained detection model to obtain the personnel feature information contained in the video image; whether a person leaves the seat or is replaced midway is then determined from the personnel feature information, and the behavioral compliance of the transacting personnel is determined accordingly. On the one hand, because a detection model is used to extract features from the video image, the extracted personnel feature information is more accurate; on the other hand, because leaving the seat or changing persons midway is determined from the personnel feature information, the judgment result is more accurate.
Description
Technical Field
The invention relates to the technical field of financial transactions, and in particular to a method and a device for detecting whether the behavior of personnel in a transaction scene is compliant.
Background
When a bank handles services such as insurance and fund transactions, the transaction is recorded on video as required by laws and regulations, and the recorded video is quality-inspected and supervised to judge whether it meets the requirements of the service specification. For the two parties to a transaction in the video, it must be judged whether a person leaves or is replaced in the middle of the transaction.
Most prior art is based on face detection: whether someone leaves or is replaced during the transaction is judged simply from the number of detected faces, or by means of face detection and tracking. Some techniques also use face pose estimation, for example judging that a person has left the seat from the head deflection angle, which is clearly too coarse. Because bank transaction scenes are varied and the transaction venues and recording equipment differ from place to place, it cannot be guaranteed that all faces in the videos look straight at the camera; in a small number of videos the two transaction parties even have their backs to the camera. Judging simply from the deflection angle therefore has low accuracy. In addition, relying on face tracking can only determine whether a face exists in the monitored area, and cannot judge a midway change of person. Prior-art detection of whether personnel behavior in transaction scenes is compliant is therefore limited: it generalizes poorly across scenes and has low detection accuracy.
Disclosure of Invention
The invention provides a method and a device for detecting whether the behavior of personnel in a transaction scene is compliant, to solve the technical problem of low accuracy in the prior art when detecting the compliance of personnel behavior in transaction scenes.
In one aspect, the invention provides a method for detecting whether personnel behavior in a transaction scene is compliant, comprising the following steps:
acquiring a video image of a scene to be detected;
inputting the video image into a pre-trained detection model to obtain the personnel feature information in the video image, the detection model being used for performing feature extraction on the video image to obtain the personnel feature information contained in the video image;
and determining, according to the personnel feature information, whether a person leaves the seat or is replaced midway; if so, determining that the personnel behavior is non-compliant, and otherwise determining that it is compliant.
According to the method for detecting whether personnel behavior in a transaction scene is compliant provided by the invention, the personnel feature information comprises one or more of person head feature information, person face feature information and person hand feature information.
According to the method for detecting whether personnel behavior in a transaction scene is compliant provided by the invention, the personnel feature information comprises person head feature information and person face feature information;
the inputting of the video image into a pre-trained detection model to obtain the personnel feature information in the video image comprises:
performing, by the detection model, feature detection on the video image to obtain a person head detection area in the video image;
performing feature extraction on the person head detection area to obtain the person head feature information;
expanding the person head detection area to obtain an expanded person face detection area;
performing feature extraction on the person face detection area to obtain the person face feature information;
and associating the person face feature information with the person head feature information.
According to the method for detecting whether personnel behavior in a transaction scene is compliant provided by the invention, the personnel feature information further comprises the person hand feature information;
the inputting of the video image into a pre-trained detection model to obtain the personnel feature information in the video image further comprises:
detecting, by the detection model, person hand features in the video image to obtain the person hand feature information in the video image;
and associating the person hand feature information with the person face feature information and the person head feature information.
According to the method for detecting whether personnel behavior in a transaction scene is compliant provided by the invention, determining whether a person leaves the seat midway according to the personnel feature information comprises:
determining, from the video image, whether the personnel feature information continues to be acquired, and if it is not acquired within a first preset time period, determining that the person has left the seat midway;
or, when an obstruction is detected in the person head detection area of the video image, obtaining the position information of the person's head and/or face before the occlusion and predicting, with a tracking algorithm, the position of the person's head and/or face after the occlusion ends; obtaining the actual position of the person's head and/or face in the video image after the occlusion ends and comparing the predicted position with the actual position; if the difference between them exceeds a preset first threshold, determining that the person left the seat midway, and otherwise determining that the person did not leave the seat.
According to the method for detecting whether personnel behavior in a transaction scene is compliant provided by the invention, determining whether a midway change of person has occurred according to the personnel feature information comprises:
when tracking of the personnel feature information in the video image with a tracking algorithm is interrupted and a new person is detected in the video image, acquiring the personnel feature information of the new person; and comparing the personnel feature information of the new person with the personnel feature information acquired before the interruption; if the difference between them exceeds a preset second threshold, determining that a midway change of person has occurred, and otherwise determining that no midway change of person has occurred.
According to the method for detecting whether personnel behavior in a transaction scene is compliant provided by the invention, the method further comprises: when an obstruction is detected in the person head detection area of the video image, further identifying the obstruction to obtain feature information of the obstruction.
According to the method for detecting whether personnel behavior in a transaction scene is compliant provided by the invention, before determining whether a person leaves the seat or is replaced midway according to the personnel feature information, the method further comprises:
acquiring two pieces of personnel feature information from the video images corresponding to the two transaction parties respectively, and determining that the two transaction parties are in the same frame when the variation of neither piece of personnel feature information over a second preset time period exceeds a third threshold.
According to the method for detecting whether personnel behavior in a transaction scene is compliant provided by the invention, the method further comprises:
after the person face feature information is obtained, determining the quality of the person face feature information with a preset method; if the quality of the person face feature information obtained this time is higher than that obtained previously, taking the person face feature information obtained this time as the latest person face feature information, and associating the latest person face feature information with the person head feature information.
In another aspect, the invention also provides a device for detecting whether personnel behavior in a transaction scene is compliant, comprising:
an acquisition module for acquiring a video image of a scene to be detected;
a feature extraction module for inputting the video image into a pre-trained detection model to obtain the personnel feature information in the video image, the detection model being used for performing feature extraction on the video image to obtain the personnel feature information contained in the video image;
and a determination module for determining, according to the personnel feature information, whether a person leaves the seat or is replaced midway; if so, determining that the personnel behavior is non-compliant, and otherwise determining that it is compliant.
In another aspect, the present invention further provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing, when executing the computer program, any one of the above methods for detecting whether personnel behavior in a transaction scene is compliant.
In another aspect, the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing any one of the above methods for detecting whether personnel behavior in a transaction scene is compliant.
In another aspect, the present invention further provides a computer program product comprising a computer program which, when executed by a processor, implements any one of the above methods for detecting whether personnel behavior in a transaction scene is compliant.
According to the method and device for detecting whether personnel behavior in a transaction scene is compliant provided by the invention, after the video image of the scene to be detected is obtained, features are extracted from the video image by a pre-trained detection model to obtain the personnel feature information contained in the video image; whether a person leaves the seat or is replaced midway is then determined from the personnel feature information, and the behavioral compliance of the transacting personnel is determined accordingly. On the one hand, because a detection model is used to extract features from the video image, the extracted personnel feature information is more accurate; on the other hand, because leaving the seat or changing persons midway is determined from the personnel feature information, the judgment result is more accurate.
Drawings
To illustrate the technical solutions of the present invention or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method for detecting compliance of personnel in a transaction scenario according to the present invention;
FIG. 2 is a second schematic flow chart of a method for detecting compliance of personnel in a transaction scenario according to the present invention;
FIG. 3 is a schematic structural diagram of a device for detecting compliance of personnel in a transaction scenario according to the present invention;
fig. 4 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art from the embodiments herein without creative effort fall within the protection scope of the present invention.
When a bank handles insurance, fund, wealth-management and other services, audio and video must be recorded as required by relevant laws and regulations; the two parties to the transaction must appear in the same frame in the dual-recording video, and no absence or midway change of person may occur. The prior art relies on face tracking and can only determine whether a face exists in the monitored area; it cannot judge a midway change of person. In the invention, a detection model is used to extract features from the video image, so the extracted personnel feature information is more accurate; at the same time, whether a person leaves the seat or is replaced midway is determined from the personnel feature information, so the judgment result is more accurate.
The technical solution of the present invention is further explained below with reference to fig. 1 to 4.
The first embodiment is as follows:
referring to fig. 1, the present embodiment provides a method for detecting whether a behavior of a person in a transaction scenario is compliant, including:
step 101: and acquiring a video image of the scene to be detected.
The method of this embodiment is mainly directed at transaction scenes in which both transaction parties appear in the same picture, which can generally be captured by one camera. In other scenes the two transaction parties may appear in two pictures; in that case the method of the present application can be used to monitor the transacting person in each picture, or the two pictures can be spliced in time order and then monitored with the method of the present application.
In this embodiment, the video images of the transaction scene are captured by one or more cameras; for example, the video images of the two transaction parties' scenes are captured by two cameras respectively. Pictures are then extracted from the video at a specified frame rate, preprocessed and input into the detection model.
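The "extract pictures at a specified frame rate" step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name and the rounding policy are assumptions, and a real pipeline would feed the selected frames through preprocessing before the detection model.

```python
def sample_frame_indices(total_frames: int, video_fps: float, sample_fps: float):
    """Return the indices of the frames to extract when downsampling a video
    recorded at `video_fps` to roughly `sample_fps` pictures per second.

    Hypothetical helper: the patent only states that pictures are extracted
    at a specified frame rate; the stride-based sampling here is one common way.
    """
    if sample_fps <= 0 or video_fps <= 0:
        raise ValueError("frame rates must be positive")
    # Keep every Nth frame, where N is the ratio of the two rates.
    step = max(1, round(video_fps / sample_fps))
    return list(range(0, total_frames, step))
```

For a 25 fps recording sampled at 5 fps, every fifth frame is kept.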
Step 102: inputting the video image into a pre-trained detection model to obtain personnel characteristic information in the video image; the detection model is used for carrying out feature extraction on the video image so as to obtain the personnel feature information included in the video image. Generally, the person feature information mainly refers to facial feature information of a person.
Generally, the obtained personnel feature information is multidimensional, for example, a plurality of different feature extraction models are adopted to respectively extract the feature information with different dimensions, so as to obtain the multidimensional personnel feature information.
The detection model of the embodiment adopts a YOLOv5 model, and the YOLOv5 model is trained for multiple times, and it is determined that training of the YOLOv5 model is completed after verifying that the accuracy of the feature information extracted by the YOLOv5 model meets the requirement.
Step 103: and determining whether the person leaves or changes the person midway according to the person characteristic information, if so, determining that the person behavior is not in compliance, and otherwise, determining that the person behavior is in compliance.
Generally, during a two-party transaction the transacting person may not face the camera, for example showing the camera a side or back view, so the face information in the video image is not clear enough. In such a scene, if face feature information alone is used as the reference for determining the personnel feature information in the current video image, and the timing information of the pictures is then used to judge whether a person leaves or is replaced midway, the unclear face feature information makes the judgment result insufficiently accurate. To overcome this defect, referring to fig. 2, collecting the personnel feature information in this embodiment specifically comprises:
step 201: the detection model carries out feature detection on the video image to obtain a person head detection area in the video image.
Step 202: and extracting the characteristics of the person head detection area to obtain the person head characteristic information.
Step 203: and expanding the head detection area of the person to obtain an expanded face detection area of the person. For example, after the human head detection region H is detected, the length and width of the human head detection region H are respectively expanded to two times of the original length and width by taking the center point of H as the center on the video image, so as to obtain an expanded human face detection region, and facial feature extraction is performed from the expanded human face detection region, so that all facial information can be extracted, and information omission is avoided. The face feature extraction candidate region can be regarded as a face feature extraction candidate region, and face feature extraction is performed on the face feature extraction candidate region to obtain the person face feature information.
Step 204: and extracting the characteristics of the person face detection area to obtain the person face characteristic information.
In this embodiment, a convolutional neural network judges whether a human face exists in the designated face detection area of the image, and if so returns the position and confidence of the face. Attributes such as the size, position and spacing of facial features, e.g. the irises, nose wings and mouth corners, are then determined, and a feature vector describing the face is formed from the spatial position relation with a standard face and from the image features computed after affine transformation.
Step 205: associating the person face feature information with the person head feature information, i.e. associating the feature vector representing the person's facial feature information with the feature vector representing the person's head information.
Because a human hand is easily misrecognized as a human head when the hand is close to the camera or the lighting is poor, the detection model of this embodiment is trained in advance to recognize person hand feature information accurately. The person hand feature information is extracted and associated with the head feature information and face feature information, making the detection result more accurate.
This embodiment therefore introduces the association of head feature detection with face feature detection, which well overcomes the prior-art problem of low detection accuracy when the user's side or back faces the camera.
Specifically, a training data set needs to be collected to train YOLOv5 in this embodiment: head and hand areas are first labelled in pictures extracted from banking transaction videos and from videos of daily scenes such as classrooms and restaurants, and the detection model is trained with YOLOv5. The training data set of this embodiment consists of four parts: (1) the SCUT-HEAD data set, a head detection data set for classroom scenes; (2) the Brainwash data set, a head detection data set for restaurant scenes; (3) pictures extracted from banking service videos with heads and hands labelled; (4) images of common everyday scenes with heads and hands labelled. A detection model trained on this data set can be used for image detection in multiple scenes and has higher detection accuracy. In testing, the mAP (a detection metric) of the detection model of this embodiment on the validation set reached 0.908.
In this embodiment, determining whether a person leaves the seat midway according to the personnel feature information comprises: determining from the video image whether the personnel feature information continues to be acquired, and if it is not acquired within a first preset time period, determining that the person has left the seat midway. The first preset time period is set according to the specific scene; for example, if no personnel feature information is detected within 3 minutes, the person is judged to have left the seat midway. According to relevant regulations, various cards, forms, documents and the like must be displayed in videos recorded during bank transactions, and the face may be temporarily occluded during display. When an obstruction is detected in the person head detection area of the video image, the position information of the person's head and/or face before the occlusion is obtained, and the position of the head and/or face after the occlusion ends is predicted with the SORT (Simple Online and Realtime Tracking) algorithm. The actual position of the head and/or face in the video image after the occlusion ends is then obtained and compared with the predicted position: if the difference exceeds a preset first threshold, the person is judged to have left the seat midway, and otherwise the person is judged not to have left. The first threshold is set by the skilled person according to the actual scene.
Specifically, when an obstruction is detected in the person head detection area of the video image, the position information of the person's head and/or face in the previous frame or frames is obtained, and the position after the occlusion ends is predicted with the SORT tracking algorithm; the actual position of the head and/or face in one or more frames after the occlusion ends is then obtained and compared with the prediction, and if the difference exceeds the preset first threshold, the person is judged to have left the seat midway, and otherwise not.
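The predicted-versus-actual position comparison above can be sketched with a simple constant-velocity extrapolation standing in for the Kalman prediction a SORT-style tracker would supply. Everything here is an illustrative assumption: the function names, the use of box centres as "position", and the Euclidean distance as the comparison metric.

```python
def predict_position(history):
    """Extrapolate the next (x, y) centre from the last two observed
    positions under a constant-velocity assumption — a stand-in for the
    Kalman-filter prediction a SORT tracker would provide."""
    if len(history) < 2:
        return history[-1]
    (px, py), (cx, cy) = history[-2], history[-1]
    return (2 * cx - px, 2 * cy - py)

def left_seat_after_occlusion(history, actual, first_threshold):
    """Flag a midway departure when the head/face position observed after
    the occlusion ends deviates from the predicted position by more than
    `first_threshold` (Euclidean distance, in pixels)."""
    pred = predict_position(history)
    dist = ((pred[0] - actual[0]) ** 2 + (pred[1] - actual[1]) ** 2) ** 0.5
    return dist > first_threshold
```

If the person merely held up a document and sat still, the post-occlusion position matches the prediction and no departure is flagged; a large deviation suggests the track was broken by an actual departure.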
Determining whether a midway change of person has occurred according to the personnel feature information in this embodiment comprises: when tracking of the personnel feature information in the video image with the tracking algorithm is interrupted and a new person is detected in the video image, acquiring the personnel feature information of the new person, and comparing it with the personnel feature information acquired before the interruption; if the difference between them exceeds a preset second threshold, a midway change of person is judged to have occurred, and otherwise not. The second threshold can be set according to the actual scene.
In one embodiment, when an obstruction is detected in the person head detection area of the video image, the obstruction is further identified to obtain its feature information. For example, the position of the obstruction in the current picture is identified; when an obstruction is detected, a separate processing module performs position detection, feature recognition and the like on it to determine the relevant information of the obstruction.
Since various cards, forms, documents and the like must be displayed in videos recorded during bank transactions, card or paper areas meeting certain specifications, such as identity cards, work permits, post qualification certificates, risk assessment forms and signed documents, are detected in videos of banking scenes. In this embodiment, when an obstruction is detected in the person head detection area of the video image, it is further checked whether the obstruction is a text obstruction; if so, feature extraction is performed on it to obtain the text information it contains, and the text information is associated with the corresponding personnel feature information. Where necessary, the text information can also serve as a basis for feature information judgment.
In some scenes, whether both transaction parties appear in the same frame is checked when the transaction starts; that is, both parties must appear together in the dual-recorded video, and neither an absence nor a midway person change may occur. In this embodiment, two pieces of personnel feature information are respectively obtained from the video images corresponding to the two transaction parties, and when the change in each piece of personnel feature information within a second preset time period does not exceed a third threshold, the two parties are determined to be in the same frame. The third threshold can be set according to the actual scene.
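The same-frame judgment can be illustrated as below, assuming each party's personnel feature information is sampled as a sequence of feature vectors over the second preset time period and the "change value" is measured as the largest frame-to-frame cosine distance (the text leaves the measure unspecified). The third threshold of 0.3 is hypothetical.

```python
import numpy as np

def max_frame_change(features) -> float:
    """Largest frame-to-frame cosine distance within the time window."""
    feats = [f / np.linalg.norm(f) for f in features]
    return max(
        (1.0 - float(np.dot(feats[i], feats[i + 1])) for i in range(len(feats) - 1)),
        default=0.0,
    )

def both_parties_in_frame(feats_a, feats_b, third_threshold: float = 0.3) -> bool:
    """Both parties are judged to be in the same frame when neither party's
    feature information changes by more than the third threshold over the
    second preset time period."""
    return (max_frame_change(feats_a) <= third_threshold
            and max_frame_change(feats_b) <= third_threshold)
```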
In an actual scene, due to the angle between the user and the camera, frontal face feature information may not be obtainable from the start. Therefore, each time a person's face feature information is obtained, its quality is evaluated with a preset method. The quality of the face features is computed by a convolutional neural network model: the face region and its feature points are input, and the model measures how much the features extracted from that region and those feature points contribute to identifying the corresponding person. If the quality of the face feature information obtained this time is higher than that obtained last time, the newly obtained face feature information is taken as the person's latest face feature information and is associated with the person's head feature information; otherwise, the face feature information is not updated until the quality of a later detection reaches a certain threshold. For example, the convolutional neural network outputs a score between 0 and 1 that measures whether the input face region is suitable for face feature matching; the higher the score, the better the matching effect, i.e., the higher the quality of the person's face feature information.
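The quality-gated update described above might be sketched as follows. The `FaceRecord` structure and the `min_quality` floor are illustrative assumptions; the quality score is taken to be the 0-1 output of the convolutional neural network mentioned in the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FaceRecord:
    head_id: int                       # associated person head feature track
    embedding: Optional[list] = None   # latest face feature information
    quality: float = 0.0               # CNN quality score in [0, 1]

def update_face(record: FaceRecord, embedding: list, quality: float,
                min_quality: float = 0.0) -> bool:
    """Replace the stored face feature only when the new detection is above
    the quality floor and strictly better than what is already stored."""
    if quality >= min_quality and quality > record.quality:
        record.embedding = embedding
        record.quality = quality
        return True
    return False
```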
Specifically, to improve the efficiency of the method of this embodiment, container technology is used and the implementation steps are distributed across a distributed computer system, improving detection efficiency. In testing, the YOLOv5 model used for facial feature extraction and the facial-feature-quality computation were packaged into containers, and each model and its related server software were packaged so that RPC or HTTP services can be provided externally. A single NVIDIA V100 GPU processes 150 frames per second. Subsequently, combined with the comprehensive logic judgment of the text-classification detection information, extracting 4 frames per second from a video of about 25 minutes yields 6000 frames in total; with distributed deployment the processing time is within 3 seconds, whereas processing the 6000 extracted frames on a single V100 GPU takes 40 seconds.
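The throughput figures quoted above are internally consistent, as this small arithmetic sketch shows: 25 minutes at 4 extracted frames per second gives 6000 frames, and a single V100 at 150 frames per second needs 40 seconds for them.

```python
def frames_to_extract(video_minutes: float, fps_extracted: float) -> int:
    """Number of frames sampled from the video at the extraction rate."""
    return int(video_minutes * 60 * fps_extracted)

def single_gpu_seconds(n_frames: int, gpu_throughput_fps: float) -> float:
    """Wall-clock time to process the frames on one GPU."""
    return n_frames / gpu_throughput_fps

n = frames_to_extract(25, 4)       # 6000 frames from a 25-minute video
t = single_gpu_seconds(n, 150)     # 40.0 seconds on one V100 at 150 frames/s
```

The roughly 3-second distributed figure then implies spreading the work over on the order of a dozen such containers.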
Example two:
The following describes the device for detecting compliance of personnel behaviors in a transaction scene; the device described below and the method for detecting compliance of personnel behaviors in a transaction scene described above may be referred to in correspondence with each other.
The embodiment provides a device for detecting whether a person behavior in a transaction scenario is compliant, as shown in fig. 3, including: an acquisition module 301, a feature extraction module 302, and a determination module 303.
The acquiring module 301 is configured to acquire a video image of a scene to be detected. The feature extraction module 302 is configured to input the video image into a pre-trained detection model to obtain the personnel feature information in the video image; the detection model is used for performing feature extraction on the video image to obtain the personnel feature information included in the video image. The determining module 303 is configured to determine, according to the personnel feature information, whether a midway absence or a midway person change exists; if so, the personnel behavior is determined to be non-compliant; otherwise, the personnel behavior is determined to be compliant.
The implementation method of the functions of each module in this embodiment corresponds to that in the first embodiment, and is not described again in this embodiment.
Example three:
This embodiment provides an entity structure schematic diagram of an electronic device. As shown in fig. 4, the electronic device may include: a processor (processor) 410, a communication interface 420, a memory (memory) 430 and a communication bus 440, wherein the processor 410, the communication interface 420 and the memory 430 communicate with each other via the communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform the method for detecting compliance of personnel behaviors in a transaction scene, the method comprising: acquiring a video image of a scene to be detected; inputting the video image into a pre-trained detection model to obtain personnel feature information in the video image, the detection model being used for performing feature extraction on the video image to obtain the personnel feature information included in the video image; and determining, according to the personnel feature information, whether a midway absence or a midway person change exists; if so, determining that the personnel behavior is non-compliant; otherwise, determining that the personnel behavior is compliant.
In addition, the logic instructions in the memory 430 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention further provides a computer program product, the computer program product including a computer program stored on a non-transitory computer-readable storage medium, wherein when the computer program is executed by a processor, a computer can execute the method for detecting compliance of personnel behaviors in a transaction scene provided above, the method including: acquiring a video image of a scene to be detected; inputting the video image into a pre-trained detection model to obtain personnel feature information in the video image, the detection model being used for performing feature extraction on the video image to obtain the personnel feature information included in the video image; and determining, according to the personnel feature information, whether a midway absence or a midway person change exists; if so, determining that the personnel behavior is non-compliant; otherwise, determining that the personnel behavior is compliant.
In another aspect, the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, it implements the method for detecting compliance of personnel behaviors in a transaction scene provided above, the method including: acquiring a video image of a scene to be detected; inputting the video image into a pre-trained detection model to obtain personnel feature information in the video image, the detection model being used for performing feature extraction on the video image to obtain the personnel feature information included in the video image; and determining, according to the personnel feature information, whether a midway absence or a midway person change exists; if so, determining that the personnel behavior is non-compliant; otherwise, determining that the personnel behavior is compliant.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (13)
1. A method for detecting whether a person behavior in a transaction scene is in compliance or not is characterized by comprising the following steps:
acquiring a video image of a scene to be detected;
inputting the video image into a pre-trained detection model to obtain personnel characteristic information in the video image; the detection model is used for carrying out feature extraction on the video image so as to obtain personnel feature information included in the video image;
and determining, according to the personnel feature information, whether a midway absence or a midway person change exists; if so, determining that the personnel behavior is non-compliant; otherwise, determining that the personnel behavior is compliant.
2. The transaction scenario personnel behavior compliance detection method of claim 1, wherein the personnel characteristic information comprises one or more of personnel head characteristic information, personnel face characteristic information, and personnel hand characteristic information.
3. The transaction scenario personnel behavior compliance detection method according to claim 1, wherein the personnel feature information includes personnel head feature information and personnel face feature information;
the inputting the video image into a pre-trained detection model to obtain the personnel feature information in the video image comprises:
the detection model carries out feature detection on the video image to obtain a person head detection area in the video image;
extracting the characteristics of the person head detection area to obtain the person head characteristic information;
expanding the person head detection area to obtain an expanded person face detection area;
extracting features of the person face detection area to obtain the person face feature information;
associating the person facial feature information with the person head feature information.
4. The method of claim 3, wherein the person characteristic information further includes person hand characteristic information;
the step of inputting the video image into a pre-trained detection model to obtain the personnel feature information in the video image further comprises:
the detection model detects the hand characteristics of the personnel in the video image to acquire the hand characteristic information of the personnel in the video image;
and associating the person hand feature information with the person face feature information and the person head feature information.
5. The method for detecting compliance of personnel behaviors in a transaction scene according to claim 1, wherein the determining whether a midway absence exists according to the personnel feature information comprises:
determining whether the personnel feature information is continuously acquired from the video image, and if the personnel feature information is not acquired within a first preset time period, determining that a midway absence exists;
or, when an occluding object is detected in the person head detection area of the video image, obtaining position information of the head and/or face of the person before the occlusion, and using a tracking algorithm to predict the position information of the head and/or face of the person after the occlusion ends; obtaining the actual position information of the head and/or face of the person in the video image after the occlusion ends, comparing the predicted position information with the actual position information, and if the difference between the two exceeds a preset first threshold, determining that a midway absence exists; otherwise, determining that no midway absence exists.
6. The method for detecting compliance of personnel behaviors in a transaction scene according to claim 1, wherein the determining whether a midway person change exists according to the personnel feature information comprises:
when tracking of the personnel feature information in the video image by a tracking algorithm is interrupted and a new person is detected in the video image, acquiring the personnel feature information of the new person; and comparing the personnel feature information of the new person with the personnel feature information acquired before the interruption, and if the difference between the two exceeds a preset second threshold, determining that a midway person change exists; otherwise, determining that no midway person change exists.
7. The method of claim 5, further comprising: when an occluding object is detected in the person head detection area of the video image, further identifying the occluding object to obtain feature information of the occluding object, and detecting whether the occluding object is a text occluder; if so, performing feature extraction on the text occluder to obtain the text information included in the text occluder.
8. The method for detecting compliance of personnel behaviors in a transaction scene according to claim 1, further comprising, before determining whether a midway absence or a midway person change exists according to the personnel feature information:
respectively acquiring two pieces of personnel feature information from the video images corresponding to the two transaction parties, and determining that the two transaction parties are in the same frame when the change values of the two pieces of personnel feature information within a second preset time period do not exceed a third threshold.
9. The method of claim 3, further comprising:
after the face feature information of the person is obtained, the quality of the face feature information of the person is determined by adopting a preset method, if the quality of the face feature information of the person obtained this time is higher than that of the face feature information of the person obtained last time, the face feature information of the person obtained this time is used as the latest face feature information of the person, and the latest face feature information of the person is associated with the head feature information of the person.
10. A device for detecting whether a person acts in a transaction scene is compliant or not is characterized by comprising:
the acquisition module is used for acquiring a video image of a scene to be detected;
the characteristic extraction module is used for inputting the video image into a pre-trained detection model to obtain personnel characteristic information in the video image; the detection model is used for carrying out feature extraction on the video image so as to obtain personnel feature information included in the video image;
and the determining module is configured to determine, according to the personnel feature information, whether a midway absence or a midway person change exists; if so, determine that the personnel behavior is non-compliant; otherwise, determine that the personnel behavior is compliant.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method for detecting compliance of a person's behavior in a transaction scenario according to any one of claims 1 to 9.
12. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method for detecting compliance of trading scenario personnel behaviors of any one of claims 1 to 9.
13. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method for detecting compliance of trading scenario personnel behaviors of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211043586.0A CN115565097A (en) | 2022-08-29 | 2022-08-29 | Method and device for detecting compliance of personnel behaviors in transaction scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211043586.0A CN115565097A (en) | 2022-08-29 | 2022-08-29 | Method and device for detecting compliance of personnel behaviors in transaction scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115565097A true CN115565097A (en) | 2023-01-03 |
Family
ID=84738778
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211043586.0A Pending CN115565097A (en) | 2022-08-29 | 2022-08-29 | Method and device for detecting compliance of personnel behaviors in transaction scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115565097A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116363623A (en) * | 2023-01-28 | 2023-06-30 | 苏州飞搜科技有限公司 | Vehicle detection method based on millimeter wave radar and vision fusion |
CN116363623B (en) * | 2023-01-28 | 2023-10-20 | 苏州飞搜科技有限公司 | Vehicle detection method based on millimeter wave radar and vision fusion |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||