CN114882427A - Risk identification method, device and system and computer equipment - Google Patents

Risk identification method, device and system and computer equipment Download PDF

Info

Publication number
CN114882427A
Authority
CN
China
Prior art keywords
behavior
transaction
area
type
risk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110078552.4A
Other languages
Chinese (zh)
Inventor
付烁
周宇飞
王兵
张梦阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110078552.4A priority Critical patent/CN114882427A/en
Publication of CN114882427A publication Critical patent/CN114882427A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82Protecting input, output or interconnection devices
    • G06F21/83Protecting input, output or interconnection devices input devices, e.g. keyboards, mice or controllers thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F19/00Complete banking systems; Coded card-freed arrangements adapted for dispensing or receiving monies or the like and posting such transactions to existing accounts, e.g. automatic teller machines
    • G07F19/20Automatic teller machines [ATMs]
    • G07F19/207Surveillance aspects at ATMs

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Computer Security & Cryptography (AREA)
  • Accounting & Taxation (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Software Systems (AREA)
  • Game Theory and Decision Science (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

A method of risk identification, the method comprising: acquiring image data of a transaction device, the image data comprising an image of a transaction area and an image of a keyboard area of the transaction device; determining from the image data a first behaviour type of the transaction area and a second behaviour type of the keyboard area; and determining a risk of a transaction behaviour according to the two behaviour types.

Description

Risk identification method, device and system and computer equipment
Technical Field
The present application relates to the field of identification, and in particular, to a method, an apparatus, a system, and a computer device for risk identification.
Background
Because self-service transaction devices require no dedicated staff and are convenient, fast, and available around the clock, users and merchants increasingly tend to use unattended transaction devices to complete transactions, and self-service transaction devices are becoming more and more widespread. As a result, many crimes have arisen in which these devices are used to steal other people's money and users' identity information. The traditional method for identifying whether a user's transaction behavior is risky records the user's identity information through facial recognition. This method can only trace the identity by facial recognition after the user has already performed the risky behavior, lacks real-time capability, and cannot effectively identify offenders who deliberately cover their faces. Therefore, how to provide a real-time and effective risk identification method for transaction behaviors on self-service transaction devices has become a technical problem to be urgently solved.
Disclosure of Invention
The present application provides a risk identification method, a risk identification apparatus, and a computing device, so that the risk of a user's transaction behavior when the user uses a self-service transaction device can be accurately evaluated in real time and the occurrence of illegal behaviors can be reduced.
In a first aspect, a method for risk identification is provided, including: acquiring image data of a transaction device, wherein the transaction device includes a transaction area and a keyboard area, and the image data includes an image of the transaction area and an image of the keyboard area; determining, according to the image data, a first behavior type of the transaction area and a second behavior type of the keyboard area; and determining the risk of a transaction behavior according to the first behavior type and the second behavior type. By this method, the transaction device can be divided into the transaction area and the keyboard area, the behavior types of the two areas are automatically determined from the image data, and the risk of the transaction behavior is efficiently identified.
As a possible implementation, the first behavior type includes transaction area normal behavior and transaction area risk behavior; the second behavior type includes keyboard area normal behavior and keyboard area risk behavior.
As another possible implementation, determining the risk of the transaction behavior according to the first behavior type and the second behavior type includes: determining that the transaction behavior is a risk behavior when at least one of the first behavior type and the second behavior type is a risk behavior. By this method, the behavior types of the transaction area and the keyboard area are judged jointly, and the risk of the transaction behavior can be identified more accurately.
As another possible implementation, determining the first behavior type of the transaction area and the second behavior type of the keyboard area according to the image data further includes: obtaining, according to the image data, the probability that the first behavior type of the transaction area is a risk behavior and the probability that the second behavior type of the keyboard area is a risk behavior. Determining the risk of the transaction behavior according to the first behavior type and the second behavior type then includes: determining that the transaction behavior is a risk behavior when at least one of the probability that the first behavior type is a risk behavior and the probability that the second behavior type is a risk behavior is greater than or equal to a first threshold. Using the probabilities in this way further improves the accuracy of risk identification.
As another possible implementation, determining the first behavior type of the transaction area according to the image data includes: intercepting an image of the transaction area from the image data, and using the image data of the transaction area as the input of a classification network to obtain the first behavior type. The classification network can effectively identify the behavior type of the transaction area, which improves the accuracy of risk identification.
As another possible implementation, determining the second behavior type of the keyboard area according to the image data includes: intercepting an image of the keyboard area from the image data, and using the image data of the keyboard area as the input of an adversarial network to obtain the second behavior type. The adversarial network learns the abnormal behavior types of the keyboard area during use, which avoids behavior-recognition failures caused by the interference of normal hand motions in the keyboard area.
As another possible implementation, the transaction device includes: an automatic teller machine, a cash recycling machine, a virtual counter system, an automatic vending machine, an automatic ticket vending machine, an automatic recharging machine, or an automatic payment machine.
In a second aspect, the present application provides a risk identification apparatus comprising modules for performing the method of the first aspect or any one of its possible implementations.
In a third aspect, the present application provides a risk identification system comprising a transaction device and a computing device. The transaction device includes a camera, a transaction area, and a keyboard area; the camera is configured to acquire image data of the transaction device, and the image data includes an image of the transaction area and an image of the keyboard area. The computing device is configured to perform the operation steps of the method in the first aspect or any one of its possible implementations. With this system, the computing device can remotely identify risk behaviors on the transaction device, provide unified risk management for transaction devices, and reduce manual workload.
In a fourth aspect, the present application provides a transaction device comprising a camera, a transaction area, a keyboard area, and a processor. The camera is configured to acquire image data of the transaction device, the image data including an image of the transaction area and an image of the keyboard area, and the processor is configured to perform the operation steps of the method in the first aspect or any one of its possible implementations. With this transaction device, the risk of a user's transaction behavior can be monitored in real time, and illegal behaviors can be prevented in real time.
In a fifth aspect, the present application provides a computer-readable storage medium having stored therein instructions, which, when executed on a computer, cause the computer to perform the operational steps of the method according to the first aspect or any one of the possible implementations of the first aspect.
In a sixth aspect, the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the operational steps of the method of the first aspect or any one of the possible implementations of the first aspect.
The present application can further combine to provide more implementations on the basis of the implementations provided by the above aspects.
Drawings
Fig. 1 is a schematic structural diagram of a risk identification system according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a method for risk identification provided herein;
FIG. 3 is a schematic flow chart of acquiring the behavior types of the transaction area provided by the present application;
FIG. 4 is an exemplary diagram of key areas of the behavior of a trading area provided by an embodiment of the present application;
fig. 5 is a diagram illustrating a structure of resNet according to this embodiment;
fig. 6 is a flowchart illustrating a calculation of a 3 × 3 convolution kernel according to an embodiment of the present disclosure;
fig. 7 is a structural diagram of a classification network according to an embodiment of the present application;
FIG. 8 is a flowchart illustrating a method for obtaining a behavior type of a keyboard region according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a GAN provided in an embodiment of the present application;
FIG. 10 is a schematic illustration of an interface for displaying a risk of a transaction provided herein;
FIG. 11 is a schematic illustration of an interface for displaying a risk of a transaction provided herein;
FIG. 12 is a schematic illustration of an interface for displaying a risk of a transaction provided herein;
FIG. 13 is a schematic illustration of an interface for displaying a risk of a transaction provided herein;
FIG. 14 is a schematic structural diagram of an apparatus for risk identification provided in an embodiment of the present application;
fig. 15 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the drawings.
Fig. 1 is a schematic structural diagram of a risk identification system 100 according to an embodiment of the present disclosure. As shown in the figure, the system 100 includes a transaction device 101, a computing device 110, and a display screen 111. The transaction device 101 includes a camera 102, a display screen 103, a keyboard area 104, and a transaction area 105. A user uses the transaction device to complete transaction behaviors, including payment operations, recharging operations, and information queries. The transaction device may be an automated teller machine (ATM), a cash recycling system (CRS), a virtual counter system (VTS), a vending machine, a ticket vending machine, a recharging machine, or a payment machine. Transaction behaviors occur mainly in the keyboard area 104 and the transaction area 105.
The camera 102 may capture images of the keyboard area 104 and the transaction area 105 and, after capturing the images, transmit them to the computing device 110 via a network, so as to monitor the user's transaction behaviors in the keyboard area 104 and the transaction area 105. Optionally, the camera may be disposed outside the transaction device, and the angle of the camera may be adjusted to ensure that the user's transaction behaviors in the keyboard area 104 and the transaction area 105 are fully captured.
The keyboard area 104 is used for user interaction with the transaction device, such as entering a password or entering a command as directed by the transaction device. The transaction area 105 is used by a user to complete a transaction using a payment medium, where the payment medium includes a financial card or a mobile phone. The financial card may further include: a bank card, a transportation card, a stored-value card, a purchase card, or a custom card. Ways of using a payment medium include card-insertion transactions, card-swipe transactions, and code-scanning using a two-dimensional code.
The system 100 includes two display screens. The display screen 103 is disposed on the transaction device and is used to display the risk of the transaction behavior to the user; optionally, the user can also operate the display screen through the keyboard or by touching the display screen to complete the interaction with the transaction device. The display screen 111 is disposed outside the transaction device and is used by an administrator of the transaction device to obtain the risk of the transaction behavior on the current transaction device.
The computing device 110 may obtain the video or images captured by the camera and may use an artificial intelligence algorithm to identify the type of transaction behavior in the video or images, so as to further determine whether the user's behavior is risky. Risk behaviors on a transaction device include the following two categories:
Keyboard area risk behaviors: installing a camera, damaging the keyboard, and modifying the keyboard protective cover.
Transaction area risk behaviors: installing an illegal card-swiping device, cracking and rapidly reading cards, modifying the card reader, and pasting a fake two-dimensional code.
After recognizing a risk behavior, the computing device 110 may issue an alert to the user via the display screen 103 and an alert to the administrator of the transaction device via the display screen 111.
Optionally, the computing device may also obtain the risk value of the current transaction behavior in real time and display it on the display screen 103 and the display screen 111. Optionally, the administrator of the transaction device may also control the transaction device through the computing device; for example, after the administrator finds that the user's transaction behavior has been identified as a risk behavior, the current transaction device may be locked to prevent the user from continuing to operate it.
In a specific implementation, the computing device may be a software module deployed in a server, a single server, a server cluster composed of a plurality of servers, or a cloud computing service center, which is not limited in the embodiments of the present application. A server is a device that provides computing services. In the embodiments of the present application, the server may be an X86 server. An X86 server, also called a complex instruction set computer (CISC) architecture server, is commonly known as a personal computer (PC) server; it is based on the PC architecture, uses an Intel processor chip or another processor chip compatible with the x86 instruction set, and runs a server operating system.
It should be noted that the computing device 110 may be connected to at least one transaction device. It should be understood that the number of transaction devices connected to the computing device in the system 100 is not limited in the present application; fig. 1 merely uses a system including one transaction device as an example.
Alternatively, the computing device 110 may also be a software module deployed in the transaction device.
It should be noted that the system architecture shown in fig. 1 is merely an example provided to better explain the risk identification method of the present application, and does not constitute a limitation to the embodiments of the present application.
Based on the system shown in fig. 1, an embodiment of the present application provides a behavior detection method, which can divide the transaction device into a keyboard area and a transaction area, separately monitor the user's behaviors in the two areas, obtain the risk of the user's transaction behavior in real time, and reduce the workload of manual monitoring. For details, reference is made to the following description of the embodiments.
Next, the risk identification method provided in the present application is further described in detail with reference to fig. 2. Fig. 2 is a schematic flowchart of the risk identification method provided in the present application. As shown in the figure, the method includes:
S201, the computing device acquires image data of the transaction area and the keyboard area.
The camera may sample the collected video data at a set time interval, convert it into image data, and send the image data to the computing device for the next operation. The video data collected by the camera should include complete images of the transaction area and the keyboard area. Optionally, the camera may also send the video data directly to the computing device, and the computing device samples the video into image data at the set time interval before performing the next operation.
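For illustration only, the Python sketch below shows one way the sampling described above could be implemented with OpenCV; the patent does not prescribe any particular library, and the function name, default interval, and frame-rate fallback are assumptions.

```python
import cv2

def sample_frames(video_source, interval_s=1.0):
    """Sample a camera or video stream into image frames at a set time interval.

    `video_source` may be a device index or a stream/file URL; `interval_s`
    is an assumed default, the description only requires a "set time interval".
    """
    cap = cv2.VideoCapture(video_source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0        # fall back if FPS is unreported
    step = max(1, int(round(fps * interval_s)))
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)                   # image data sent on to the computing device
        index += 1
    cap.release()
    return frames
```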
S202, the computing device obtains the behavior type of the transaction area according to the image data. Fig. 3 is a schematic flowchart of obtaining the behavior type of the transaction area provided in the present application. As shown in the figure, this step may further include:
S2021, the computing device identifies the position of the transaction area in the image from the image data.
When the relative position of the camera and the transaction device remains unchanged, for example when the camera is mounted on the transaction device, or when the camera and the transaction device are both fixed in the same space so that the shooting direction and shooting distance of the camera relative to the transaction area are fixed, the position of the transaction area in each frame of image data remains unchanged. In this case, the coordinate region of the transaction area may be calculated for one image, and the same coordinate region may then be selected in every frame.
Optionally, the coordinate region of the transaction area in each frame may also be identified using an object detection algorithm. Object detection algorithms include those based on traditional image processing and those based on deep learning. For an object detection algorithm based on traditional image processing, features of the image may first be extracted, converting the image data into information that identifies the attributes of the transaction area. Common feature extraction methods include the scale-invariant feature transform (SIFT), the histogram of oriented gradients (HOG), and the difference of Gaussians (DoG). A classifier is then used to recognize and classify the extracted features, the image features are matched against the known image features of the transaction area, and the coordinate region of the transaction area is finally obtained. Commonly used classifiers include support vector machines (SVM) and adaptive boosting (AdaBoost). For a deep-learning-based object detection algorithm, a region-based convolutional neural network (RCNN), a spatial pyramid pooling (SPP) network, a fast region-based convolutional neural network (Fast-RCNN), a faster region-based convolutional neural network (Faster-RCNN), or similar optimized algorithms may be used. What these have in common is that bounding boxes of possible objects in the image are found first, a classifier is then used to determine the class of the object in each box, and the coordinate region of the transaction area is finally obtained. Algorithms such as you only look once (YOLO) and the single-shot multi-box detector (SSD) can also be used to identify the transaction area and its coordinate region directly and simultaneously.
Since the transaction area on some transaction devices looks similar to the surrounding parts of the device, the coordinate region of the transaction area cannot be accurately identified directly with an object detection algorithm. In this case, an object with distinctive features on the transaction device in the image, such as a colored label or a reflective display screen, may first be identified using the object detection algorithms described above. The coordinate region of the transaction area is then calculated from the relative distance between the transaction area on the device and the identified object.
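The landmark-offset calculation just described amounts to simple coordinate arithmetic. The sketch below is a minimal illustration; the box format, the offset, and the size values are hypothetical per-device calibration parameters, not values given in the patent.

```python
def transaction_area_from_landmark(landmark_box, offset, size):
    """Derive the transaction-area box from a detected landmark on the device.

    `landmark_box` is (x, y, w, h) of an easily detected object (e.g. a
    colored label); `offset` is the known (dx, dy) from that object to the
    transaction area and `size` its (width, height). All values would be
    measured once per transaction-device model (assumed calibration step).
    """
    x, y, _, _ = landmark_box
    dx, dy = offset
    w, h = size
    return (x + dx, y + dy, w, h)
```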
When the coordinate region of the transaction area is identified using an object detection algorithm, the relative position of the camera with respect to the transaction device may vary.
S2022, the computing device determines the key area of the transaction area behavior.
A rectangle that completely covers the outline of the coordinate region obtained in step S2021 may be used as the key area of the transaction area behavior. Fig. 4 is an exemplary diagram of a key area of the transaction area behavior provided in an embodiment of the present application. Since the range of the user's motions in the transaction area may exceed the transaction area itself, the rectangular area is larger than the originally identified coordinate region of the transaction area; the specific enlargement may be set according to an empirical value, which is not limited in this solution. For example, as shown in the figure, 401 is the coordinate region obtained in step S2021, and 402 is the determined key area of the transaction area behavior.
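As a minimal sketch of the enlargement step just described, the following function grows the detected coordinate region 401 into a key area 402 by an empirical margin; the margin ratio and the clipping to the image bounds are assumptions, since the patent leaves the exact value open.

```python
def expand_to_key_area(box, margin_ratio, image_w, image_h):
    """Grow the detected coordinate region (x, y, w, h) into the behavior key area.

    `margin_ratio` is the empirical enlargement factor mentioned above; the
    result is clipped to the image boundaries.
    """
    x, y, w, h = box
    mx, my = int(w * margin_ratio), int(h * margin_ratio)
    x0, y0 = max(0, x - mx), max(0, y - my)
    x1, y1 = min(image_w, x + w + mx), min(image_h, y + h + my)
    return (x0, y0, x1 - x0, y1 - y0)
```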
S2023, the computing device determines the behavior type of the transaction area.
The computing device intercepts the key area of the transaction area behavior from the acquired image data. Assume the image P of the key area has size M × N, with M pixels in each row and N pixels in each column. The intercepted image is input into a classification network, and the behavior type can be identified; the behavior types include three classes: background, normal transaction area behavior, and risk transaction area behavior. How a classification network is used to identify the behavior of the transaction area is described below, taking a residual network (resNet) as an example.
Fig. 5 is a structural example diagram of resNet according to this embodiment. After the matrix x is input into resNet, it first passes through 64 convolution kernels of size 3 × 3. Fig. 6 is a flowchart of the computation of a 3 × 3 convolution kernel according to an embodiment of the present application, where 601 is a 3 × 3 convolution kernel and 602 is the input 6 × 4 matrix x. It should be noted that the size and values of the matrix x are only used to illustrate the computation flow of the convolution kernel and do not limit the present application. Starting from the upper-left corner of the matrix, a 3 × 3 computation region is taken, each pixel in the computation region is multiplied by the corresponding element of the convolution kernel, and the sum of all the products is used as the new value of the pixel at the center of the computation region; for example, after the first computation the matrix 602 becomes the matrix 603. The computation region is then shifted by one element and the matrix 604 is computed in the same way. Continuing to move the computation region across and down the matrix eventually yields a 4 × 2 matrix 605. When the matrix x passes through multiple 3 × 3 convolution kernels at the same time, each convolution kernel is used to compute over the matrix, and the resulting 4 × 2 matrices are stacked together to obtain a multidimensional matrix 606. For example, when the matrix x passes through four 3 × 3 convolution kernels, a 4 × 4 × 2 matrix is finally obtained.
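The sliding-window computation described above can be reproduced in a few lines of NumPy. The sketch below is purely illustrative: the input values and kernels are stand-ins, and, like the description, it performs the element-wise multiply-and-sum without flipping the kernel.

```python
import numpy as np

def conv2d_valid(x, k):
    """Slide the kernel over x with stride 1 and no padding.

    Each output element is the sum of the element-wise products between the
    kernel and the patch it covers, as described for matrices 602-605.
    """
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(24, dtype=float).reshape(6, 4)   # stand-in for the 6 x 4 input matrix 602
k = np.ones((3, 3)) / 9.0                      # stand-in 3 x 3 kernel (values illustrative)

single = conv2d_valid(x, k)                    # shape (4, 2), matching matrix 605
stacked = np.stack([conv2d_valid(x, np.random.rand(3, 3)) for _ in range(4)])
print(single.shape, stacked.shape)             # (4, 2) and (4, 4, 2), cf. matrix 606
```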
Then, the matrix x that has passed through the convolution kernels is input into a rectified linear unit (ReLU) for computation. As shown in formula I, the ReLU function is a piecewise linear function: when x is negative, the output is 0; when x is positive, the output remains unchanged.
ReLU(x) = max(0, x)        (formula I)
After that, the matrix x passes through another 64 convolution kernels of size 3 × 3, the output is added to the matrix x that was originally input (the residual connection), and the result is passed through ReLU again to obtain the output of the resNet block.
A complete classification network can stack multiple resNet blocks. Fig. 7 is a structural diagram of a classification network provided in this embodiment. As shown in the figure, the classification network includes 64 convolution kernels of size 7 × 7, 16 resNet blocks, and one fully-connected layer. The fully-connected layer is also a convolution kernel whose size equals the size of the matrix output by the last resNet block multiplied by the number of types output by the classification network; for example, when the matrix output by the last resNet block has size 3 × 3 × 5 and the classification network outputs three types, the convolution kernel of the fully-connected layer has size 3 × 3 × 5 × 3.
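The following PyTorch sketch mirrors the structure just described (a 7 × 7 stem, 16 stacked residual blocks, and a final classification layer). PyTorch, the class names, the three-channel input, and the use of average pooling followed by a linear layer in place of the convolutional fully-connected layer are assumptions made for a compact, runnable example, not details taken from the patent.

```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, as in Fig. 5."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.conv1(x))
        y = self.conv2(y)
        return self.relu(y + x)                  # add the original input back, then ReLU

class TransactionAreaClassifier(nn.Module):
    """7x7 stem, 16 stacked residual blocks, and a final classification layer."""
    def __init__(self, num_classes: int = 3):    # background / normal / risk behavior
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),  # 64 kernels of 7x7
            nn.ReLU(),
        )
        self.blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(16)])
        self.pool = nn.AdaptiveAvgPool2d(1)      # simplification: pool before the classifier
        self.fc = nn.Linear(64, num_classes)     # plays the role of the fully-connected layer

    def forward(self, x):
        y = self.blocks(self.stem(x))
        return self.fc(torch.flatten(self.pool(y), 1))

logits = TransactionAreaClassifier()(torch.randn(1, 3, 224, 224))  # a key-area crop
probs = torch.softmax(logits, dim=1)             # probability of each behavior type
```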
Before the classification network is used to identify the behavior type of the transaction area, the parameters of the convolution kernels in the classification network need to be trained; this training step can be completed before the risk identification method is started. The specific training method includes the following steps (a simplified code sketch follows the list):
1) Use the camera to collect images of multiple users' behaviors in the transaction area, manually identify the behavior type of the transaction area in each image, and label it; the size of each image is consistent with that of the key area of the transaction area.
2) Set initial values for the parameters of the convolution kernels in the classification network; the initial values may be arbitrary, which is not limited in the present application.
3) Input an image collected by the camera into the classification network, and calculate the error between the classification result output by the network and the label.
4) Using gradient descent, update the parameters of each convolution kernel in reverse order, starting from the nodes of the last layer of the classification network, according to the partial derivative of the error with respect to the parameter to be trained.
5) Return to step 3) and step 4) with the updated parameter values until the obtained error value is smaller than a preset threshold.
6) Repeat steps 3), 4), and 5) until all images collected by the camera have been processed, obtaining the final trained classification network.
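A simplified training sketch corresponding to steps 1) to 6) is given below. The stand-in model, the synthetic data, the learning rate, and the stopping threshold are all assumptions; in practice the classifier sketched after Fig. 7 and real labelled key-area crops would be used.

```python
import torch
from torch import nn

# Stand-in classifier; in practice this would be the ResNet-style network
# sketched above (TransactionAreaClassifier). The dataset below is synthetic.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=7, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3),
)
images = torch.randn(32, 3, 224, 224)      # stand-in for labelled key-area crops (step 1)
labels = torch.randint(0, 3, (32,))        # 0 = background, 1 = normal, 2 = risk behavior

criterion = nn.CrossEntropyLoss()          # error between network output and label (step 3)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)   # gradient descent (step 4)

for epoch in range(50):                    # steps 5-6: iterate until the error is small
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()                        # back-propagate the partial derivatives
    optimizer.step()
    if loss.item() < 0.05:                 # stand-in for the preset error threshold
        break
```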
When the trained classification network is used to identify the behavior type of the transaction area, the image P of the key area of the transaction area is input into the classification network; after passing sequentially through the 64 convolution kernels of size 7 × 7, the 16 identical resNet blocks, and finally the fully-connected layer, the behavior type of the transaction area and the probability of that behavior type are identified.
Optionally, a very deep convolutional network (VGG) or a deep convolutional network (DSNet) may also be used as the classification network; the method is similar to that of using resNet and is not repeated here.
Optionally, the specific types of normal transaction area behaviors and of risk transaction area behaviors may also be used as the output; for example, the behavior types of the transaction area may be classified into six types: background, inserting a card, removing a card, installing an illegal card-swiping device, damaging the card reader, and modifying the card reader. In this case, the size of the convolution kernel of the fully-connected layer in the classification network and the training process also need to be modified accordingly.
After the behavior type of the transaction area is identified, the computing device also needs to judge the risk degree of the transaction behavior comprehensively, in combination with the behavior type of the keyboard area.
S203, the computing device obtains the behavior type of the keyboard area according to the image data. Fig. 8 is a schematic flowchart of obtaining the behavior type of the keyboard area provided in the present application. As shown in the figure, this step specifically includes:
S2031, the computing device identifies the position of the keyboard area in the image from the image data.
The method by which the computing device identifies the position of the keyboard area in the image is similar to the method used in S2021 to identify the position of the transaction area, except that the keyboard area is covered by a black shield for privacy protection, while the other areas of the transaction device are normally not black. Therefore, when an object detection algorithm is used to identify the coordinate region of the keyboard area in the image, the color features of the pixels in the image can be extracted directly, and the black region can be taken as the coordinate region of the keyboard area.
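As an illustration of the color-based localization described above, the following OpenCV sketch takes the largest dark region in the frame as the keyboard-area coordinate region; the grey-level cut-off and the use of contours are assumptions, and the return convention corresponds to OpenCV 4.x.

```python
import cv2
import numpy as np

def locate_keyboard_area(image_bgr, dark_threshold=50):
    """Locate the keyboard zone by its black privacy shield.

    `dark_threshold` is an assumed grey-level cut-off; the largest connected
    dark region is taken as the keyboard-area coordinate region.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mask = (gray < dark_threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)        # (x, y, w, h) of the keyboard area
```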
S2032, the computing device determines a key area of the behavior of the keyboard area.
Similarly to S2022, a rectangle that completely covers the outline of the coordinate region obtained in step S2031 may be used as the key area of the keyboard area behavior.
S2033, the computing device determines the behavior type of the keyboard area.
There is usually a platform near the keyboard area, and the user may make various hand motions while waiting, for example placing items such as bags or mobile phones on it. These behaviors interfere with the identification of the behavior type of the keyboard area and cannot be handled by a simple classification network. However, the types of risk behaviors in the keyboard area are limited, such as installing a camera, damaging the keyboard, and modifying the keyboard protective cover, so an anomaly detection algorithm based on a generative adversarial network (GAN) can be used to recognize only the risk behaviors of the keyboard area.
First, the computing device intercepts the key area of the keyboard area behavior from the acquired image data; assume the image P of the key area has size M × N, with M pixels in each row and N pixels in each column. The intercepted image is input into the GAN, so that the type of risk behavior in the keyboard area can be identified. Fig. 9 is a schematic structural diagram of a GAN provided in the present application. As shown in the figure, the GAN includes two networks: a generator G 802 and a discriminator D 801. G is used to generate a picture from the matrix of an input picture. D is used to judge whether a picture is a real picture; its input is matrix data x representing a picture, and its output is the probability 804 that x is a real picture. When the output probability is 1, x is certainly a real picture; when the output probability is 0, x cannot be a real picture. In a GAN, the generator and the discriminator can be gradient operators or neural network models.
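The sketch below illustrates the generator/discriminator structure just described with two small PyTorch networks; the latent dimension, image size, and layer widths are arbitrary assumptions, and the discriminator ends in a sigmoid so that its output can be read as the probability 804 that the input is a real picture.

```python
import torch
from torch import nn

LATENT_DIM, IMG_DIM = 64, 64 * 64        # assumed sizes; the description does not fix M x N

# Generator G (802): turns an input vector into a flattened grey-scale picture.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator D (801): maps a flattened picture x to the probability that it is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(1, LATENT_DIM)
fake_picture = generator(z)              # a generated picture G(z)
p_real = discriminator(fake_picture)     # output 804: near 1 means "real", near 0 means "not real"
```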
Before the GAN is used to identify the behavior type of the keyboard area, the parameters of the generator and the discriminator in the GAN need to be trained; this training step can be completed before the risk identification method of the present application is started. The specific training method includes the following steps (a simplified code sketch follows the list):
1) Use the camera to collect image data x 803 of multiple users' risk behaviors in the keyboard area; the size of each image is consistent with that of the key area of the keyboard area.
2) Input randomly generated noise data z into the generator to obtain a new picture G(z), and input G(z) into the discriminator for judgment.
3) The discriminator calculates the probability D(G(z)) that G(z) is a real picture and the probability D(x) for the image data x of risk behaviors collected by the camera, and updates the parameters of the discriminator according to the difference between the two probabilities, using formula II:
∇_{θ_d} (1/m) Σ_{i=1}^{m} [ log D(x^{(i)}) + log(1 − D(G(z^{(i)}))) ]        (formula II)
4) Update the parameters of the generator using formula III:
∇_{θ_g} (1/m) Σ_{i=1}^{m} log(1 − D(G(z^{(i)})))        (formula III)
5) Repeat step 3) and step 4) until the parameters of the discriminator and the parameters of the generator no longer change.
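The following self-contained sketch corresponds to training steps 1) to 5); the binary cross-entropy losses are the usual practical equivalent of formulas II and III (with the common non-saturating generator objective), and the network sizes, optimizer, and learning rates are assumptions rather than details from the patent.

```python
import torch
from torch import nn

LATENT, IMG = 64, 64 * 64                  # assumed sizes, not fixed by the description
G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

real_x = torch.rand(16, IMG)               # step 1: key-area crops of keyboard risk behaviors
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()
ones, zeros = torch.ones(16, 1), torch.zeros(16, 1)

for step in range(200):                    # step 5: repeat until the parameters stabilize
    z = torch.randn(16, LATENT)            # step 2: random noise fed to the generator
    fake = G(z)                            # G(z), a newly generated picture

    # Step 3 / formula II: raise D(x) for real images and lower D(G(z)) for generated ones.
    opt_d.zero_grad()
    loss_d = bce(D(real_x), ones) + bce(D(fake.detach()), zeros)
    loss_d.backward()
    opt_d.step()

    # Step 4 / formula III: update G so that D(G(z)) increases (non-saturating form).
    opt_g.zero_grad()
    loss_g = bce(D(fake), ones)
    loss_g.backward()
    opt_g.step()
```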
When the trained GAN is used to identify the behavior type of the keyboard area, the image data P 803 of the key area of the keyboard area is input into the generator to obtain a newly generated picture, and the generated picture is input into the discriminator to obtain the probability that the picture is a real picture. When this probability produced by the discriminator is greater than or equal to a first threshold, the input image data corresponds to a risk behavior; when the probability is lower than the first threshold, the input image corresponds to a normal behavior.
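A minimal sketch of this inference step is shown below; it assumes a trained generator that accepts an image-shaped input (for example an encoder-decoder generator) and a discriminator that returns a probability, and the default threshold value is only a placeholder for the first threshold.

```python
import torch

def keyboard_risk_score(image_p, generator, discriminator, first_threshold=0.5):
    """Score a flattened key-area crop P of the keyboard zone.

    Follows the inference procedure described above: P is fed to the trained
    generator, the generated picture is fed to the discriminator, and the
    resulting "real picture" probability is compared with the first threshold.
    """
    with torch.no_grad():
        generated = generator(image_p)                    # newly generated picture
        probability = discriminator(generated).item()
    return probability, probability >= first_threshold    # True means risk behavior
```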
S204, the computing device determines the risk of the transaction behavior according to the behavior type of the transaction area and the behavior type of the keyboard area.
When at least one of the behavior types of the transaction area and the keyboard area is a risk behavior, the transaction behavior is determined to be a risk behavior.
Optionally, the transaction behavior is determined to be a risk behavior when the probability that at least one of the behavior types of the transaction area and the keyboard area is a risk behavior is greater than a second threshold.
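The decision rule described in S204 reduces to a one-line comparison; in the sketch below the threshold default is a placeholder for the second threshold, which the patent does not fix.

```python
def is_risk_behaviour(p_transaction_risk, p_keyboard_risk, second_threshold=0.5):
    """Flag the transaction behavior as risky when either region-level
    probability reaches the threshold; 0.5 is only a placeholder value."""
    return p_transaction_risk >= second_threshold or p_keyboard_risk >= second_threshold
```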
After the user's transaction behavior is determined to be a risk behavior, the computing device may send an alert to the administrator via the display screen 111, and after receiving the alert the administrator may restrict the user's transaction operations; alternatively, the computing device sends an alert to the user via the display screen 103 of the transaction device.
Optionally, the computing device may also send the behavior types of the transaction area and the keyboard area, and the probabilities of those behavior types, in real time to the administrator through the display screen 111 or to the user through the display screen 103 of the transaction device.
Optionally, the alert or the real-time risk information may also be sent to the administrator's terminal device via a network.
Fig. 10 is an interface 1000 for displaying the risk of a transaction behavior provided by the present application. The interface 1000 includes a warning sign 1003, the model 1001 of the device on which the risk behavior occurs, and the location 1002 of the device. For example, when a risk behavior occurs on transaction device 0001 of a certain bank in a certain area, the interface shown in fig. 10 can be presented on both the display screen 103 and the display screen 111.
Alternatively, when the computing device is connected to multiple transaction devices, fig. 11 is another interface 1100 for displaying the risk of transaction behaviors provided herein, which presents all transaction devices at risk to the administrator on a display screen. The interface 1100 may include the device model, device location, and risk of each transaction device to which the computing device is connected; for example, fig. 11 illustrates the risk of 5 devices connected to the computing device, where the users at device 1 and device 3 exhibit risk behaviors.
Fig. 12 is another interface 1200 for displaying the risk of a transaction behavior provided by the present application. The interface 1200 includes the model 1201 of the device, the location 1202 of the device, the probability 1203 that the transaction area behavior is a risk behavior, and the probability 1204 that the keyboard area behavior is a risk behavior, and is used to present the risk of the transaction behavior to the user using the transaction device. When the transaction behavior is a risk behavior, a warning sign 1205 can be used on the interface to alert the user. For example, the interface 1200 is the display interface of transaction device 0001 of a certain bank in a certain area: the probability that the device's transaction area behavior is a risk behavior is 80%, the probability that its keyboard area behavior is a risk behavior is 50%, a risk behavior has therefore occurred on the transaction device, and a warning sign is displayed in the middle of the interface.
Alternatively, when the computing device is connected to multiple transaction devices, fig. 13 is another interface 1300 provided herein for displaying the risk of transaction behaviors, which presents the risk behaviors of all transaction devices connected to the computing device to the administrator on a display screen. The interface 1300 may include the device model, device location, the probability that the device's transaction area behavior is a risk behavior, the probability that its keyboard area behavior is a risk behavior, and the risk of each transaction device to which the computing device is connected; for example, fig. 13 illustrates the risk of 5 devices connected to the computing device, where the users at device 1 and device 3 exhibit risk behaviors.
In summary, the risk identification method provided by the present application can identify the risk of a transaction behavior remotely based on the camera images, without manual inspection. The method uses different algorithms to directly identify the behavior types of the transaction area and the keyboard area, which can reduce misjudgments of transaction-behavior risk caused by indirect behaviors of the user such as loitering or occlusion.
It should be noted that, for simplicity of description, the above method embodiments are described as a series or combination of actions, but those skilled in the art should understand that the present application is not limited by the order or combination of the actions described.
Other reasonable combinations of steps that can be conceived by one skilled in the art from the above description also fall within the protection scope of the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the present application.
The method for risk identification provided by the embodiment of the present application is described in detail above with reference to fig. 2 to 13, and the apparatus and the computer device for risk identification provided by the embodiment of the present application are further described below with reference to fig. 14 and 15.
Fig. 14 shows a risk identification apparatus 1400 provided in the present application, including: an acquiring unit 1401, a processing unit 1402, and a determining unit 1403.
The acquiring unit 1401 is configured to acquire image data of a transaction device, where the transaction device includes a transaction area and a keyboard area, and the image data includes an image of the transaction area and an image of the keyboard area.
The processing unit 1402 is configured to determine a first behavior type of the transaction area and a second behavior type of the keyboard area according to the image data.
The determining unit 1403 is configured to determine the risk of the transaction behavior according to the first behavior type and the second behavior type.
It should be understood that the risk identification apparatus 1400 of the embodiments of the present application may be implemented by a general-purpose processor such as a central processing unit (CPU), or by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the methods shown in fig. 2 to fig. 5 are implemented by software, the apparatus 1400 and its modules may also be software modules.
Optionally, the determining unit 1403 is further configured to determine that the transaction behavior is a risk behavior when at least one of the first behavior type and the second behavior type is a risk behavior.
Optionally, the processing unit 1402 obtains, according to the image data, the probability that the first behavior type of the transaction area is a risk behavior and the probability that the second behavior type of the keyboard area is a risk behavior; the determining unit 1403 is configured to determine that the transaction behavior is a risk behavior when at least one of the two probabilities is greater than or equal to a first threshold.
Optionally, the processing unit 1402 is further configured to intercept an image of the transaction area according to the image data, and obtain the first behavior type using the image data of the transaction area as an input of the classification network.
Optionally, the processing unit 1402 is further configured to intercept an image of the keyboard area according to the image data, and obtain the second behavior type using the image data of the keyboard area as an input of the adversarial network.
The risk identification apparatus 1400 according to the embodiment of the present application may correspond to performing the method described in the embodiment of the present application, and the above and other operations and/or functions of each unit in the risk identification apparatus 1400 are respectively for implementing corresponding flows of each method in fig. 2 to fig. 13, and are not described herein again for brevity.
In summary, in the device 1400 for risk identification provided in the embodiment of the present application, the processing unit may comprehensively determine the risk of the transaction behavior by combining the behavior types of the transaction area and the keyboard area, so as to reduce the probability of misjudgment.
Fig. 15 is a schematic diagram of a computer device 1500 according to an embodiment of the present disclosure. As shown in the figure, the computer device 1500 includes a processor 1501, a storage 1502, a communication interface 1503, a bus 1504, and a memory 1505. The processor 1501, the storage 1502, the communication interface 1503, and the memory 1505 communicate via the bus 1504, or may communicate by other means such as wireless transmission. The memory 1505 is used to store computer-executable instructions, and the processor 1501 is used to execute the computer-executable instructions stored in the memory 1505 to implement the following operation steps:
acquiring image data of transaction equipment, wherein the transaction equipment comprises a transaction area and a keyboard area, and the image data comprises an image of the transaction area and an image of the keyboard area;
determining a first behavior type of the transaction area and a second behavior type of the keyboard area according to the image data;
determining a risk of a transaction activity based on the first and second activity types.
It should be appreciated that, in the embodiments of the present application, the processor 1501 may be a CPU, and the processor 1501 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or any conventional processor or the like.
The memory 1502 may include read-only memory and random access memory, and provides instructions and data to the processor 1501. The memory 1502 may also include non-volatile random access memory. For example, the memory 1502 may also store device type information.
The memory 1502 may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and direct rambus RAM (DR RAM).
The bus 1504 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. However, for clarity of description, the various buses are all labeled as the bus 1504 in the figure.
It should be understood that the computer device 1500 according to the embodiment of the present application may correspond to the risk identification apparatus 1400 in the embodiment of the present application, and may correspond to a computing device executing the method 200 shown in fig. 2 to 13 in the embodiment of the present application, and the above and other operations and/or functions of each module in the computer device 1500 are respectively to implement the corresponding flows of each method in the figures, and are not described herein again for brevity.
In summary, the computer device provided in the embodiment of the present application can automatically identify and monitor the risk of the transaction device in real time, thereby improving the security of the transaction behavior.
The application also provides a risk identification system, which comprises the transaction device and the computing device. The transaction equipment comprises a camera, a transaction area and a keyboard area, wherein the camera is used for acquiring image data of the transaction equipment, and the image data comprises an image of the transaction area and an image of the keyboard area. The computing device is configured to implement the operational steps of the method performed by the corresponding subject matter in any one of the possible implementations of the first aspect and the first aspect as described above. Through the system, the computing equipment can remotely identify the risk behaviors of the transaction equipment, provide uniform risk management for the transaction equipment and reduce the manual workload.
The application also provides transaction equipment which comprises a camera, a transaction area, a keyboard area and a processor. The camera is configured to obtain image data of the transaction device, the image data includes an image of the transaction area and an image of the keyboard area, and the processor is configured to execute the operation steps of the method performed by the corresponding subject in any one of the above-mentioned first aspect and the first possible implementation manner. By the transaction equipment, the risk of the transaction behavior of the user using the transaction equipment can be monitored in real time, and illegal behaviors can be prevented in real time.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state drive (SSD).
The foregoing is only illustrative of the present application. Those skilled in the art can conceive of changes or substitutions based on the specific embodiments provided in the present application, and all such changes or substitutions are intended to be included within the scope of the present application.

Claims (17)

1. A method of risk identification, the method comprising:
acquiring image data of transaction equipment, wherein the transaction equipment comprises a transaction area and a keyboard area, and the image data comprises an image of the transaction area and an image of the keyboard area;
determining a first behavior type of the transaction area and a second behavior type of the keyboard area according to the image data;
determining a risk of a transaction activity based on the first and second activity types.
2. The method of claim 1, wherein the first behavior type comprises: normal behavior in the trading area and risk behavior in the trading area; the second behavior type includes: normal behavior in the keyboard region and risky behavior in the keyboard region.
3. The method of claim 2, wherein determining the risk of the transaction based on the first type of behavior and the second type of behavior comprises:
determining that the transaction behavior is a risk behavior when at least one of the first behavior type and the second behavior type is a risk behavior.
4. The method of claim 2, wherein determining the first type of behavior of the transaction area and the second type of behavior of the keypad area based on the image data further comprises:
acquiring the probability that the first action type of the transaction area is risk action and the probability that the second action type of the keyboard area is risk action according to the image data;
the determining a risk of a transaction behavior according to the first behavior type and the second behavior type comprises:
determining that the transaction behavior is a risky behavior when at least one of the probability that the first behavior type is a risky behavior and the probability that the second behavior type is a risky behavior is greater than or equal to a first threshold.
5. The method of claim 1, wherein said determining a first type of behavior of the transaction region from the image data comprises:
intercepting an image of the transaction area according to the image data;
and using the image data of the transaction area as the input of a classification network to obtain the first behavior type.
6. The method of claim 1, wherein determining the second type of behavior for the keypad region from the image data comprises:
intercepting an image of the keyboard area according to the image data;
the second behavior type is derived using the image data of the keypad region as input against a network.
7. The method of any of claims 1 to 6, wherein the transaction device comprises: an automatic teller machine, a cash recycling machine, a virtual counter system, an automatic vending machine, an automatic ticket vending machine, an automatic recharging machine, or an automatic payment machine.
8. An apparatus for risk identification, the apparatus comprising an acquisition unit, a processing unit and a determination unit:
the acquisition unit is used for acquiring image data of transaction equipment, the transaction equipment comprises a transaction area and a keyboard area, and the image data comprises an image of the transaction area and an image of the keyboard area;
the processing unit is used for determining a first action type of the transaction area and a second action type of the keyboard area according to the image data;
the determining unit is used for determining the risk of the transaction behavior according to the first behavior type and the second behavior type.
9. The apparatus of claim 8, wherein the first behavior type comprises: normal behavior in the trading area and risk behavior in the trading area; the second behavior type includes: normal behavior in the keyboard region and risky behavior in the keyboard region.
10. The apparatus of claim 9, wherein the determining unit is further configured to:
determining that the transaction behavior is a risk behavior when at least one of the first behavior type and the second behavior type is a risk behavior.
11. The apparatus of claim 9, wherein the processing unit is further configured to:
acquiring the probability that the first action type of the transaction area is risk action and the probability that the second action type of the keyboard area is risk action according to the image data;
the determining unit is further configured to:
determining that the transaction behavior is a risky behavior when at least one of the probability that the first behavior type is a risky behavior and the probability that the second behavior type is a risky behavior is greater than or equal to a first threshold.
12. The apparatus of claim 9, wherein the processing unit is further configured to:
intercepting an image of the transaction area according to the image data;
and using the image data of the transaction area as the input of a classification network to obtain the first behavior type.
13. The apparatus of claim 9, wherein the processing unit is further configured to:
intercepting an image of the keyboard area according to the image data;
the second behavior type is derived using the image data of the keypad region as input against a network.
14. The apparatus of any of claims 8 to 13, wherein the transaction device comprises: an automatic teller machine, a cash recycling machine, a virtual counter system, an automatic vending machine, an automatic ticket vending machine, an automatic recharging machine, or an automatic payment machine.
15. A risk identification system is characterized in that the system comprises a transaction device and a computing device, wherein the transaction device comprises a camera, a transaction area and a keyboard area, the camera is used for acquiring image data of the transaction device, and the image data comprises an image of the transaction area and an image of the keyboard area; the computing device is configured to perform the operational steps of the method of any of claims 1-7.
16. A computer device comprising a processor and a memory, the memory storing computer-executable instructions, the processor executing the computer-executable instructions to cause the computer device to perform the operational steps of the method of any one of claims 1-7.
17. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the operational steps of the method of any of claims 1-6.
CN202110078552.4A 2021-01-21 2021-01-21 Risk identification method, device and system and computer equipment Pending CN114882427A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110078552.4A CN114882427A (en) 2021-01-21 2021-01-21 Risk identification method, device and system and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110078552.4A CN114882427A (en) 2021-01-21 2021-01-21 Risk identification method, device and system and computer equipment

Publications (1)

Publication Number Publication Date
CN114882427A true CN114882427A (en) 2022-08-09

Family

ID=82667683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110078552.4A Pending CN114882427A (en) 2021-01-21 2021-01-21 Risk identification method, device and system and computer equipment

Country Status (1)

Country Link
CN (1) CN114882427A (en)

Similar Documents

Publication Publication Date Title
US10944767B2 (en) Identifying artificial artifacts in input data to detect adversarial attacks
US8761517B2 (en) Human activity determination from video
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
Deb et al. Look locally infer globally: A generalizable face anti-spoofing approach
US20120075450A1 (en) Activity determination as function of transaction log
CN109345375B (en) Suspicious money laundering behavior identification method and device
US20200160680A1 (en) Techniques to provide and process video data of automatic teller machine video streams to perform suspicious activity detection
US20220207117A1 (en) Data theft prevention method and related product
CA3151157A1 (en) System, method, apparatus, and computer program product for utilizing machine learning to process an image of a mobile device to determine a mobile device integrity status
Sequeira et al. A realistic evaluation of iris presentation attack detection
US20230147685A1 (en) Generalized anomaly detection
Manikandan et al. A neural network aided attuned scheme for gun detection in video surveillance images
Agarwal et al. Deceiving the protector: Fooling face presentation attack detection algorithms
Rashid et al. On the design of embedded solutions to banknote recognition
CN114882427A (en) Risk identification method, device and system and computer equipment
EP4105825A1 (en) Generalised anomaly detection
WO2023272594A1 (en) Image forgery detection via pixel-metadata consistency analysis
Devi et al. Deep learn helmets-enhancing security at ATMs
CN113762249A (en) Image attack detection and image attack detection model training method and device
Maheshwari et al. Bilingual text detection in natural scene images using invariant moments
CN110956102A (en) Bank counter monitoring method and device, computer equipment and storage medium
Sehgal Palm recognition using LBP and SVM
Szczepanik et al. Security lock system for mobile devices based on fingerprint recognition algorithm
EP4361971A1 (en) Training images generation for fraudulent document detection
Jain Anomalous Behavior Detection in ATM using Artificial Intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination