WO2020232889A1 - Check cash withdrawal method, apparatus, device and computer-readable storage medium - Google Patents

Check cash withdrawal method, apparatus, device and computer-readable storage medium

Info

Publication number
WO2020232889A1
WO2020232889A1 (PCT/CN2019/103212)
Authority
WO
WIPO (PCT)
Prior art keywords
check
user
cash withdrawal
image
information
Prior art date
Application number
PCT/CN2019/103212
Other languages
English (en)
French (fr)
Inventor
廖敏
赵学亮
陈鲲
付毅民
王军
蓝福潮
谭卓
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2020232889A1 publication Critical patent/WO2020232889A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02Banking, e.g. interest calculation or account maintenance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07DHANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D7/00Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D7/06Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency using wave or particle radiation
    • G07D7/12Visible light, infrared or ultraviolet radiation

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a check cash withdrawal method, apparatus, device, and computer-readable storage medium.
  • In the current check cash withdrawal process, the bank typically provides a dedicated check withdrawal counter: the user must bring the check to the counter, where bank staff provide the cash withdrawal service.
  • Because bank counters are not staffed 24 hours a day, users who need the check withdrawal service can only obtain it within the specified business hours.
  • The inventors realized that the current check cash withdrawal process is therefore inconvenient for users.
  • The main purpose of this application is to provide a check cash withdrawal method, apparatus, device, and computer-readable storage medium, aiming to solve the technical problem in the prior art that users cannot handle check cash withdrawal on their own.
  • this application provides a check cash withdrawal method, which includes the following steps:
  • the check is verified for authenticity, and when verification passes, the check face information (the information printed on the face of the check) is extracted;
  • when the identity information input by the user is received and is detected to be the first identity information, the user's head image collected by the camera is acquired in real time and living body detection is performed;
  • if the living body detection passes, the user's head image is compared with the first face image; if the face comparison passes, a password verification prompt is output; the cash withdrawal password input by the user is received, and if it is consistent with the first cash withdrawal password, cash is dispensed based on the check face information.
  • this application also provides a check cash withdrawal device, which includes:
  • the authenticity verification module is used to verify the authenticity of the check and, when verification passes, extract the check face information;
  • the acquiring module is used to acquire the first identity information, the first cash withdrawal password, and the first face image taken by the camera, and to store the first identity information, the first cash withdrawal password, the first face image, and the check face information in association;
  • the living body detection module is configured to obtain the user's head image collected by the camera in real time when the identity information input by the user is received and it is detected that the identity information is the first identity information, and perform living body detection;
  • the face comparison module is configured to perform face comparison between the user's head image and the first face image if the living body detection passes;
  • the prompt module is used to output a password verification prompt if the face comparison is passed;
  • the cash withdrawal module is configured to receive the cash withdrawal password input by the user and, if the cash withdrawal password is detected to be consistent with the first cash withdrawal password, dispense cash based on the check face information.
  • this application also provides a check cash withdrawal device, comprising: a memory, a processor, and a check cash withdrawal program stored on the memory and runnable on the processor; when the check withdrawal program is executed by the processor, the steps of the check withdrawal method described above are implemented.
  • the present application also provides a non-volatile computer-readable storage medium storing a check withdrawal program; when the check withdrawal program is executed by a processor, the steps of the check withdrawal method described above are implemented.
  • Figure 1 is a schematic structural diagram of a check withdrawal device in the hardware operating environment involved in an embodiment of this application;
  • Figure 2 is a schematic flowchart of the first embodiment of the check withdrawal method of this application;
  • Figure 3 is a detailed flowchart of step S10 in Figure 2;
  • Figure 4 is a schematic diagram of the functional modules of an embodiment of the check withdrawal device of this application.
  • FIG. 1 is a schematic diagram of the structure of a check withdrawal device in a hardware operating environment involved in a solution of an embodiment of the application.
  • the check withdrawal device may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the memory 1005 may be a high-speed RAM, or a non-volatile memory such as a magnetic disk memory.
  • the memory 1005 may also be a storage device independent of the foregoing processor 1001.
  • the memory 1005 which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and a check withdrawal program.
  • the network interface 1004 is mainly used to connect to a back-end server and perform data communication with the back-end server;
  • the user interface 1003 is mainly used to connect to a client (user side) and perform data communication with the client;
  • the processor 1001 can be used to call a check withdrawal program stored in the memory 1005, and execute the operations of the following check withdrawal methods in various embodiments.
  • Fig. 2 is a schematic flow chart of a first embodiment of a check withdrawal method of this application.
  • the check withdrawal method includes:
  • Step S10: the check is verified for authenticity, and when verification passes, the check face information is extracted;
  • the check withdrawal method is applied to a check withdrawal device that integrates multiple sub-devices, including: sub-device 1 (a check authenticity verifier), sub-device 2 (a server), and sub-device 3 (a cash dispenser).
  • The check authenticity verifier checks the anti-counterfeit features of the check. For example, where all checks are printed with newly added fluorescent fibers and a two-color shading, the verifier can check whether the check contains the fluorescent fibers and/or whether its shading is a two-color shading; if the check contains the fluorescent fibers and/or its shading is two-color, the check passes the authenticity verification.
  • When the check passes verification, its check face information can be extracted by OCR (optical character recognition) technology. The check face information includes the date of issue, the name of the paying bank, the drawer's account number, the amount, and other information (expanded or reduced according to actual needs; no restriction is imposed here).
  • Step S20: obtain the first identity information, the first cash withdrawal password, and the first face image taken by the camera, and store the first identity information, the first cash withdrawal password, the first face image, and the check face information in association;
  • prompt information may be output, for example, prompting the user to input the first identity information and the first withdrawal password through a preset keyboard (physical keyboard or virtual keyboard on the screen).
  • the first identity information may be ID number A or telephone number B, etc.
  • the first withdrawal password may be a 6-digit digital password.
  • Step S30: when the identity information input by the user is received and is detected to be the first identity information, acquire the user's head image collected by the camera in real time and perform living body detection;
  • When the user needs to cash a check, the user first enters the previously reserved identity information (that is, the first identity information entered in step S20).
  • For example, if the identity information entered by the current user is ID number A, the bank database is searched; once ID number A is confirmed to be stored there, the camera is enabled, the user's head image collected by the camera is acquired, and living body detection is performed on that image.
  • This embodiment does not limit the specific implementation of living body detection.
  • Living body detection technology can prevent users from committing fraud with masks, photos, and similar means.
  • Step S40: if the living body detection passes, perform face comparison between the user's head image and the first face image;
  • If the living body detection passes, the collected head image can be authenticated, specifically by face comparison with the first face image previously obtained from the user (that is, the first face image stored in association with the first identity information in step S20). This embodiment does not limit the specific implementation of face comparison. For example, first extract facial features from the user's head image and from the first face image, then compare the two sets of features; if the feature matching rate exceeds a preset threshold, such as 90%, the face comparison is determined to pass.
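The threshold-based comparison described above can be sketched as follows. The patent does not specify a feature extractor or similarity metric, so the feature vectors, the function names, and the use of cosine similarity here are illustrative assumptions:

```python
import math

def cosine_similarity(f1, f2):
    """Cosine similarity between two facial feature vectors (assumed metric)."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2)

def faces_match(live_features, enrolled_features, threshold=0.90):
    """Pass the face comparison if the matching rate exceeds the preset
    threshold (90% in the embodiment's example)."""
    return cosine_similarity(live_features, enrolled_features) >= threshold
```

In practice the feature vectors would come from a face-recognition model; any metric that yields a bounded matching rate fits the scheme described above.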
  • Step S50: if the face comparison passes, output a password verification prompt;
  • That is, a further prompt (by voice or on-screen display) is issued asking the user to enter the cash withdrawal password.
  • Step S60: receive the cash withdrawal password input by the user, and if the cash withdrawal password is detected to be consistent with the first cash withdrawal password, dispense cash based on the check face information (if the amount on the check is X, the amount dispensed is also X).
  • That is, when the passwords match, the withdrawal amount is determined from the check face information and the cash is dispensed.
  • In summary: the check is verified for authenticity, and when verification passes, the check face information is extracted; the first identity information input by the user, the first cash withdrawal password, and the first face image taken by the camera are obtained and stored in association with the check face information; when the identity information input by the user is received and detected to be the first identity information, the user's head image collected by the camera is acquired in real time and living body detection is performed; if the living body detection passes, the user's head image is compared with the first face image; if the face comparison passes, a password verification prompt is output; the cash withdrawal password input by the user is received, and if it is consistent with the first cash withdrawal password, cash is dispensed based on the check face information.
  • In this way, the security of the check cash withdrawal service is improved, and so is the convenience for users of handling it on their own.
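The gating sequence of steps S30-S60 (identity lookup, then liveness, then face comparison, then password, then dispensing) can be sketched as a single function. The field names and return strings below are illustrative, not part of the patent:

```python
def check_withdrawal_flow(identity, enrolled, liveness_ok, face_match_ok, password):
    """Sequential gates of steps S30-S60. `enrolled` holds the first identity
    information, first withdrawal password, and check amount stored in step
    S20 (keys here are hypothetical names)."""
    if identity != enrolled["first_identity"]:
        return "reject: unknown identity"      # S30 lookup fails
    if not liveness_ok:
        return "reject: liveness failed"       # S30 liveness gate
    if not face_match_ok:
        return "reject: face mismatch"         # S40 face comparison gate
    if password != enrolled["first_password"]:
        return "reject: wrong password"        # S60 password gate
    return f"dispense {enrolled['amount']}"    # S60 dispense cash
```

Each gate must pass before the next is evaluated, mirroring the order of the method's steps.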
  • FIG. 3 is a detailed flowchart of step S10 in FIG. 2.
  • step S10 includes:
  • Step S101: obtain the anti-counterfeit information of the check, and compare it with the standard anti-counterfeit information to obtain a similarity;
  • This step specifically includes: irradiating the check with ultraviolet light, obtaining the UV anti-counterfeit shading image of the check under the ultraviolet light, and comparing that image with the standard UV anti-counterfeit shading image to obtain the similarity.
  • A genuine check carries a UV anti-counterfeit shading image, and the UV anti-counterfeit shading image of a genuine check is called the standard UV anti-counterfeit shading image. Therefore, when verifying a check, the user places the check in a designated position where it is irradiated by ultraviolet light; under this light the UV anti-counterfeit shading becomes clearly visible. A photo of the check is then taken, and the UV anti-counterfeit shading image of the check is extracted from the photo. In this embodiment, the RGB values of the pixels of the standard UV anti-counterfeit shading image are fixed:
  • R(x1), G(x2), B(x3), where x1, x2, and x3 each take values in [0, 255] and are set according to actual needs.
  • The specific way to extract the UV anti-counterfeit shading image of the check from the photo is: extract the pixels whose RGB values are R(x1), G(x2), B(x3) from the photo, keeping the relative positions between the extracted pixels the same as in the photo. In this way the UV anti-counterfeit shading image of the check (i.e., the anti-counterfeit information of the check) is obtained.
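The pixel-extraction step can be sketched as below, treating the photo as a 2-D grid of RGB tuples and producing a mask that preserves relative positions. The tolerance parameter is an added assumption for robustness to sensor noise and is not described in the patent:

```python
def extract_shading_pixels(photo, target_rgb, tol=0):
    """Return a binary mask marking pixels whose RGB values match
    (R(x1), G(x2), B(x3)) within `tol`, keeping the relative positions
    from the photo. `photo` is a 2-D list of (R, G, B) tuples."""
    mask = []
    for row in photo:
        mask.append([
            1 if all(abs(c - t) <= tol for c, t in zip(px, target_rgb)) else 0
            for px in row
        ])
    return mask
```

The resulting mask is the extracted UV shading image, ready to be fingerprinted and compared against the standard image.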
  • The UV anti-counterfeit shading image of the check is then compared with the standard UV anti-counterfeit shading image (i.e., the standard anti-counterfeit information) to obtain the similarity.
  • For example, a fingerprint code 1 corresponding to the UV anti-counterfeit shading image of the check and a fingerprint code 2 corresponding to the standard UV anti-counterfeit shading image can be computed, and the two codes compared bit by bit to see whether they have the same value at the same bit position. If the values agree at n bit positions, then n/49 is the similarity of the two images (the fingerprint code in this embodiment being 49 bits long).
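The bitwise comparison can be sketched as follows, with the fingerprint codes represented as equal-length bit strings. How the 49-bit codes are derived from the images (e.g. a perceptual hash) is not specified by the patent, so that step is left abstract:

```python
def fingerprint_similarity(code1, code2):
    """Compare two equal-length binary fingerprint codes bit by bit;
    similarity = matching bits / total bits (n/49 for the 49-bit codes
    of the embodiment)."""
    assert len(code1) == len(code2), "fingerprint codes must be equal length"
    n = sum(1 for a, b in zip(code1, code2) if a == b)
    return n / len(code1)
```

The similarity is then tested against the preset threshold (e.g. 85%) in step S102.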
  • Step S102: if the similarity is greater than a preset threshold, the verification passes, and the check face information is then extracted by OCR.
  • the preset threshold is set according to actual needs.
  • The threshold can be set relatively high, for example 85%. That is, after the similarity is calculated, it is checked whether the similarity is greater than 85%; if so, the check face information is extracted through OCR (optical character recognition) technology.
  • The check face information includes the date of issue, the name of the paying bank, the drawer's account number, the amount, and other information (expanded or reduced according to actual needs; not limited here).
  • The position of each field of the check face information on the check is fixed (for example, the date of issue at position 1, the paying bank's name at position 2, the drawer's account at position 3, and the amount at position 4), so OCR can be applied to each fixed region.
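Because the field positions are fixed, extraction can crop each region and hand it to an OCR engine. The region coordinates and field names below are hypothetical, and `ocr` stands in for any real text-recognition callable (such as one wrapping Tesseract):

```python
# Hypothetical field positions (left, top, right, bottom) in pixels;
# the real layout depends on the check template.
FIELD_REGIONS = {
    "issue_date":     (40, 20, 200, 50),
    "paying_bank":    (40, 60, 300, 90),
    "drawer_account": (40, 100, 300, 130),
    "amount":         (320, 100, 480, 130),
}

def crop_field(image, region):
    """Crop one fixed field region from the check image (a list of pixel rows)."""
    left, top, right, bottom = region
    return [row[left:right] for row in image[top:bottom]]

def extract_check_face_fields(image, ocr):
    """Run the given OCR callable on each fixed region of the check face."""
    return {name: ocr(crop_field(image, region))
            for name, region in FIELD_REGIONS.items()}
```

Fixing the regions in advance keeps the OCR step simple and makes each recognized string unambiguous about which field it belongs to.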
  • the steps of acquiring a user's head image collected by a camera in real time and performing live body detection include:
  • In this embodiment, a binocular camera is used to collect the user's head images.
  • A binocular camera is a camera device with two lenses; compared with a monocular camera, it can capture images from two different viewing angles simultaneously, i.e., two user head images corresponding to different angles (the binocular head images).
  • Living body detection is then performed based on the binocular head images.
  • Because the binocular head images carry more information, they are more conducive to living body detection and the result is more reliable. For example, since the lighting on the user's face differs between viewing angles, the two captured images also differ in texture characteristics; such texture characteristics are difficult to forge with photos, masks, and the like, and therefore provide stronger anti-spoofing ability.
  • the step of performing living body detection based on the binocular user's head image includes:
  • The head images collected by the binocular camera comprise a left-view image and a right-view image, and the disparity image corresponding to the left and right views can be obtained by a binocular stereo matching algorithm.
  • This embodiment does not limit which binocular stereo matching algorithm is used; examples include the SAD (Sum of Absolute Differences) algorithm and the SSD (Sum of Squared Differences) algorithm.
  • the process of calculating the parallax image corresponding to the binocular user's head image through the binocular stereo matching algorithm is as follows:
  • The purpose of matching cost computation is to measure the correlation between a pixel to be matched and its candidate pixels. Whether or not two pixels are corresponding (homologous) points, a matching cost can be computed via the cost function: the smaller the cost, the greater the correlation and the higher the probability that they correspond.
  • In practice a disparity search range D is specified, limiting the search to D candidates, and a three-dimensional matrix C of size W × H × D (W the image width, H the image height) stores the matching cost of every pixel at every disparity within the range. Matrix C is usually called the DSI (Disparity Space Image).
  • Matching cost measures such as the absolute grayscale difference, the sum of absolute grayscale differences, and the normalized correlation coefficient can be used to compute the cost between two pixels.
  • the fundamental purpose of cost aggregation is to allow the cost value to accurately reflect the correlation between pixels.
  • The matching cost computed in the previous step usually considers only local information: the cost is computed from pixel information within a fixed-size window around the two pixels, which is easily affected by image noise; and in weakly textured or repetitively textured regions, this cost value may not accurately reflect the correlation between pixels.
  • the connection between adjacent pixels is established through cost aggregation, and the cost matrix is optimized with certain criteria (such as adjacent pixels should have continuous disparity values).
  • the new cost value of each pixel under a certain disparity will be recalculated according to the cost value of its neighboring pixels under the same disparity value or nearby disparity values to obtain a new DSI.
  • Commonly used cost aggregation methods include scan line method, dynamic programming method, path aggregation method in SGM algorithm, etc.
  • Disparity computation determines the optimal disparity of each pixel from the cost matrix S after aggregation, usually with the Winner-Takes-All (WTA) algorithm: among the cost values of a pixel over all disparities, the disparity corresponding to the smallest cost value is selected as the optimal disparity.
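The WTA selection over the aggregated cost volume can be sketched directly. Here the DSI is assumed to be indexed as `dsi[y][x][d]`, which is one of several equivalent layouts:

```python
def winner_takes_all(dsi):
    """dsi[y][x][d] = aggregated matching cost of pixel (x, y) at disparity d.
    For each pixel, pick the disparity with the minimum cost."""
    disparity = []
    for row in dsi:
        disparity.append([
            min(range(len(costs)), key=lambda d: costs[d])
            for costs in row
        ])
    return disparity
```

Production stereo pipelines (e.g. SGM implementations) perform the same argmin, typically vectorized over the whole cost volume.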
  • Disparity image optimization further refines the disparity image obtained in the previous step to improve its quality, including removing false disparities, appropriate smoothing, and sub-pixel accuracy refinement.
  • For example, the Left-Right Check algorithm eliminates false disparities caused by occlusion and noise; a small-connected-region removal algorithm eliminates isolated outliers; and smoothing filters such as the median filter and the bilateral filter smooth the disparity image.
  • After the disparity image corresponding to the binocular head images has been obtained, it can be converted into a depth image through the following disparity-depth conversion formula: depth = f × baseline / disp, where:
  • depth represents the depth value of the depth map
  • f represents the normalized focal length
  • baseline represents the distance between the optical centers of the two cameras, called the baseline distance
  • disp represents the disparity value.
  • the unit of parallax is expressed in pixels, and the unit of depth value is often expressed in millimeters.
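The conversion formula with the variables defined above can be written out as a small function; the guard for zero disparity (an invalid or occluded pixel) is an added assumption:

```python
def disparity_to_depth(disp, f, baseline):
    """depth = f * baseline / disp. Disparity is in pixels, baseline in mm,
    and f is the focal length in pixels, so the depth comes out in mm."""
    if disp <= 0:
        return float("inf")  # invalid or occluded pixel: no finite depth
    return f * baseline / disp
```

For example, with f = 700 px and a 60 mm baseline, a 35-pixel disparity maps to a depth of 1200 mm.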
  • The gray-level co-occurrence matrix is obtained by counting, over the image, pairs of pixels that keep a fixed displacement and have given gray levels. Take any point (x, y) in an N × N image and another point (x+a, y+b) offset from it, and let the gray values of this point pair be (g1, g2). As (x, y) moves across the entire image, many (g1, g2) values are obtained; if the number of gray levels is k, there are k² possible (g1, g2) combinations.
  • the gray-level co-occurrence matrix of the image can reflect the comprehensive information of the image gray-level about the direction, the adjacent interval, and the range of change.
  • Statistics computed from the gray-level co-occurrence matrix are used as texture features of the depth image, and the texture features are concatenated into a gray-level co-occurrence matrix feature vector.
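A minimal co-occurrence matrix and two common statistics can be sketched as below. The patent does not list which statistics it uses, so contrast and energy are chosen here as illustrative examples:

```python
def glcm(image, offset, levels):
    """Gray-level co-occurrence matrix of a 2-D image with `levels` gray
    levels; offset (a, b) is the fixed displacement between pixel pairs."""
    a, b = offset
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            x2, y2 = x + a, y + b
            if 0 <= x2 < w and 0 <= y2 < h:
                m[image[y][x]][image[y2][x2]] += 1  # count the (g1, g2) pair
    return m

def glcm_features(m):
    """Two common texture statistics derived from a normalized GLCM."""
    total = sum(sum(row) for row in m) or 1
    p = [[v / total for v in row] for row in m]
    k = len(p)
    contrast = sum((i - j) ** 2 * p[i][j] for i in range(k) for j in range(k))
    energy = sum(v * v for row in p for v in row)
    return {"contrast": contrast, "energy": energy}
```

Computing such statistics for several offsets and concatenating them yields the feature vector fed to the living body classifier.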
  • From this it is determined whether the detection object corresponding to the binocular head images is a living body.
  • Specifically, the texture features of the face depth map are extracted and concatenated into a gray-level co-occurrence matrix feature vector, and a support-vector-machine-based living body classification model classifies that feature vector; the classification result finally determines whether the current detection object is a living body.
  • The gray-level co-occurrence matrix feature vectors are extracted in advance from the face images of a large number of living and non-living users, the living and non-living samples are labeled, and the feature vectors are used as training samples for machine learning, yielding a living body classification model that can distinguish living from non-living bodies and thus perform living body detection on a gray-level co-occurrence matrix feature vector.
  • Existing living body detection methods are usually action-based: the user must perform the corresponding actions according to action instructions given by the system, i.e., the user's cooperation is required.
  • This embodiment performs living body detection based on the binocular camera; the user can complete the detection without cooperating with any actions, and the detection result is more reliable and accurate.
  • After step S60, the method further includes:
  • Step S70: determine the issuing user based on the check face information, and generate a check withdrawal record from the current time and the dispensed amount;
  • Step S80: send the check withdrawal record to the terminal of the issuing user.
  • Specifically, the issuing user can be determined from the drawer's account number in the check face information, so the mobile phone number reserved by the issuing user can be obtained from the bank database; the check withdrawal record consisting of the current time and the dispensed amount is then sent to that number, so that the issuing user learns promptly that the check has been cashed.
  • After step S80, the method further includes:
  • The bank back-end server monitors the entire process of the user's check withdrawal.
  • Specifically, the moment in step S30 at which the identity information input by the user is received and detected to be the first identity information is taken as the start time of the surveillance video, the moment in step S60 at which cash is dispensed based on the check face information is taken as the end time, and the surveillance video and the check withdrawal record are stored together in association.
  • FIG. 4 is a schematic diagram of functional modules of an embodiment of a check withdrawal device of the present application.
  • the check withdrawal device includes:
  • the authenticity verification module 10 is used to verify the authenticity of the check and, when verification passes, extract the check face information;
  • the obtaining module 20 is configured to obtain the first identity information, the first cash withdrawal password, and the first face image taken by the camera, and to store the first identity information, the first cash withdrawal password, the first face image, and the check face information in association;
  • the living body detection module 30 is configured to, when the identity information input by the user is received and it is detected that the identity information is the first identity information, obtain the user's head image collected by the camera in real time, and perform living body detection;
  • the face comparison module 40 is configured to perform face comparison between the user's head image and the first face image if the living body detection passes;
  • the prompt module 50 is configured to output a password verification prompt if the face comparison is passed;
  • the cash withdrawal module 60 is configured to receive the cash withdrawal password input by the user and, if the cash withdrawal password is detected to be consistent with the first cash withdrawal password, dispense cash based on the check face information.
  • In summary: the check is verified for authenticity, and when verification passes, the check face information is extracted; the first identity information input by the user, the first cash withdrawal password, and the first face image taken by the camera are obtained and stored in association with the check face information; when the identity information input by the user is received and detected to be the first identity information, the user's head image collected by the camera is acquired in real time and living body detection is performed; if the living body detection passes, the user's head image is compared with the first face image; if the face comparison passes, a password verification prompt is output; the cash withdrawal password input by the user is received, and if it is consistent with the first cash withdrawal password, cash is dispensed based on the check face information.
  • In this way, the security of the check cash withdrawal service is improved, and so is the convenience for users of handling it on their own.
  • An embodiment of the present application also provides a computer-readable storage medium storing a check withdrawal program; when the check withdrawal program is executed by a processor, the steps of the check withdrawal method described above are implemented.
  • When executed by the processor, the check withdrawal program realizes the step of verifying the authenticity of the check and extracting the check face information, which includes: obtaining the anti-counterfeit information of the check and comparing it with the standard anti-counterfeit information to obtain a similarity; and, if the similarity is greater than a preset threshold, passing the verification and then extracting the check face information by OCR.
  • When executed by the processor, the check withdrawal program realizes the step of obtaining the anti-counterfeit information of the check and comparing it with the standard anti-counterfeit information, which includes: irradiating the check with ultraviolet light, obtaining the UV anti-counterfeit shading image of the check, and comparing it with the standard UV anti-counterfeit shading image to obtain the similarity.
  • The step of acquiring the user's head image collected by the camera in real time and performing living body detection includes: collecting the user's head images with a binocular camera and performing living body detection based on the binocular head images.
  • When executed by the processor, the check withdrawal program realizes the step of performing living body detection based on the binocular head images, which includes determining whether the detection object corresponding to the binocular head images is a living body.
  • the ticket issuing user After receiving the cash withdrawal password input by the user, if it is detected that the cash withdrawal password is consistent with the first cash withdrawal password, after the step of issuing banknotes based on the ticket face information, the ticket issuing user is determined based on the ticket face information The current time and the amount of cash out, generate check cash withdrawal records;
  • the check withdrawal record is sent to the terminal of the issuing user.
  • the evidence storage unit is used to obtain the monitoring video of the whole process of the user's check withdrawal, and store the monitoring video and the check withdrawal record in association.


Abstract

This application relates to the field of artificial intelligence and discloses a check cash withdrawal method, apparatus, device, and computer-readable storage medium. The check cash withdrawal method includes: verifying the authenticity of a check and, when the verification passes, extracting the face information of the check; obtaining and storing in association first identity information and a first withdrawal password entered by the user, together with a first face image captured by a camera; when the first identity information entered by the user is received, acquiring the user's head image from the camera in real time and performing liveness detection; if the liveness detection passes, comparing the user's head image with the first face image; if the face comparison passes, outputting a password verification prompt; receiving the withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information. This application improves both the security of the check cashing service and the convenience with which users can cash checks.

Description

Check cash withdrawal method, apparatus, device, and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on May 23, 2019, with application number 201910432137.7 and entitled "Check cash withdrawal method, apparatus, device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence, and in particular to a check cash withdrawal method, apparatus, device, and computer-readable storage medium.
Background
In the current check cashing process, a bank typically provides a check cashing counter: the user must bring the check to the counter, where bank staff provide the cashing service. However, because bank counters are not staffed 24 hours a day, users who need to cash a check can only do so during prescribed time periods. The inventors realized that the existing check cashing process is therefore inconvenient for users.
Summary
The main purpose of this application is to provide a check cash withdrawal method, apparatus, device, and computer-readable storage medium, aiming to solve the technical problem in the prior art that users cannot cash checks on a self-service basis.
To achieve the above purpose, this application provides a check cash withdrawal method comprising the following steps:
verifying the authenticity of a check and, when the verification passes, extracting the face information of the check;
obtaining first identity information and a first withdrawal password entered by the user, and a first face image captured by a camera, and storing the first identity information, the first withdrawal password, the first face image, and the check face information in association;
when identity information entered by the user is received and the identity information is detected to be the first identity information, acquiring in real time the user's head image captured by the camera and performing liveness detection;
if the liveness detection passes, comparing the user's head image with the first face image;
if the face comparison passes, outputting a password verification prompt;
receiving a withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information.
In addition, to achieve the above purpose, this application further provides a check cash withdrawal apparatus comprising:
an authenticity verification module for verifying the authenticity of a check and, when the verification passes, extracting the face information of the check;
an acquisition module for obtaining the first identity information and first withdrawal password entered by the user, and the first face image captured by a camera, and storing the first identity information, first withdrawal password, first face image, and check face information in association;
a liveness detection module for, when identity information entered by the user is received and detected to be the first identity information, acquiring in real time the user's head image captured by the camera and performing liveness detection;
a face comparison module for comparing the user's head image with the first face image if the liveness detection passes;
a prompt module for outputting a password verification prompt if the face comparison passes;
a cash dispensing module for receiving the withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information.
In addition, to achieve the above purpose, this application further provides a check cash withdrawal device comprising: a memory, a processor, and a check withdrawal program stored on the memory and executable on the processor, the check withdrawal program implementing the steps of the check cash withdrawal method described above when executed by the processor.
In addition, to achieve the above purpose, this application further provides a non-volatile computer-readable storage medium on which a check withdrawal program is stored, the check withdrawal program implementing the steps of the check cash withdrawal method described above when executed by a processor.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the check cash withdrawal device in the hardware operating environment involved in the embodiments of this application;
FIG. 2 is a schematic flowchart of the first embodiment of the check cash withdrawal method of this application;
FIG. 3 is a detailed flowchart of step S10 in FIG. 2;
FIG. 4 is a schematic diagram of the functional modules of an embodiment of the check cash withdrawal apparatus of this application.
Detailed Description
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
As shown in FIG. 1, FIG. 1 is a schematic structural diagram of the check cash withdrawal device in the hardware operating environment involved in the embodiments of this application.
As shown in FIG. 1, the check cash withdrawal device may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 implements connection and communication among these components. The user interface 1003 may include a display and an input unit such as a keyboard, and optionally may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface). The memory 1005 may be high-speed RAM or stable non-volatile memory such as disk storage, and may optionally be a storage device independent of the aforementioned processor 1001.
As shown in FIG. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a check withdrawal program.
In the check cash withdrawal device shown in FIG. 1, the network interface 1004 is mainly used to connect to and exchange data with a backend server; the user interface 1003 is mainly used to connect to and exchange data with a client; and the processor 1001 may be used to call the check withdrawal program stored in the memory 1005 and perform the operations of the following embodiments of the check cash withdrawal method.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of the first embodiment of the check cash withdrawal method of this application.
In the first embodiment of the check cash withdrawal method of this application, the method includes:
Step S10: verifying the authenticity of a check and, when the verification passes, extracting the face information of the check;
In this embodiment, the check cash withdrawal method is applied to a check cash withdrawal device/apparatus that integrates multiple sub-devices, including: sub-device 1 (a check authenticity verifier), sub-device 2 (a server), and sub-device 3 (a cash dispenser).
In this embodiment, the check can be verified by the check authenticity verifier against the check's anti-counterfeiting features. Checks universally use anti-counterfeiting techniques; for example, all checks incorporate a new type of fluorescent fiber and are printed with two-color shading. The verifier can test whether the check contains the fluorescent fiber, or whether its shading is two-color; if the check contains the fluorescent fiber and/or its shading is two-color, the verification passes. When the verification passes, the face information of the check is extracted. In this embodiment, the face information can be extracted by OCR and includes the date of issue, the name of the paying bank, the drawer's account number, the amount to be dispensed, and so on (the list may be expanded or reduced as actually needed and is not limited here).
Step S20: obtaining the first identity information and first withdrawal password entered by the user, and the first face image captured by the camera, and storing the first identity information, first withdrawal password, first face image, and check face information in association;
In this embodiment, after step S10 is completed, a prompt can be output, for example prompting the user to enter the first identity information and the first withdrawal password via a preset keyboard (a physical keyboard or an on-screen virtual keyboard). The first identity information may be ID card number A or telephone number B, and the first withdrawal password may be a six-digit numeric password. After the user enters the first identity information and presses the confirm key, then enters the first withdrawal password and presses the confirm key again, the entry is considered complete, and the camera is activated to photograph the user, producing the first face image. The obtained first identity information, first withdrawal password, first face image, and check face information are stored in association, for example in the database of a bank or another secure institution.
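The associated storage of step S20 can be sketched as a simple keyed store. This is a minimal in-memory stand-in for the bank database mentioned above; the field names and the sample ID number are illustrative only, and hashing the password is an added precaution the text does not mandate:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class WithdrawalRegistration:
    identity: str        # e.g. ID card number or phone number
    password_hash: str   # the six-digit withdrawal password, stored hashed
    face_image: bytes    # first face image captured by the camera
    face_info: dict      # check face fields extracted by OCR

registrations = {}  # keyed by the first identity information

def register(identity, password, face_image, face_info):
    """Store identity, password, face image and check face info in association."""
    registrations[identity] = WithdrawalRegistration(
        identity, hashlib.sha256(password.encode()).hexdigest(),
        face_image, face_info)

register("110101199001011234", "123456", b"<jpeg bytes>",
         {"amount": "5000.00", "paying_bank": "Example Bank"})
print("110101199001011234" in registrations)  # → True
```

A later lookup by the entered identity information (step S30) simply indexes this store; if the key is absent, the withdrawal flow does not proceed.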
Step S30: when identity information entered by the user is received and the identity information is detected to be the first identity information, acquiring in real time the user's head image captured by the camera and performing liveness detection;
In this embodiment, when the user wants to cash the check, the user first enters the previously registered identity information (i.e., the first identity information entered in step S20). For example, if the identity information currently entered is ID card number A, the bank database is searched; once it is confirmed that ID card number A is stored in the database, the camera is activated, the user's head image is captured, and liveness detection is performed on it.
This embodiment does not limit the specific implementation of liveness detection. Liveness detection prevents fraud committed with masks, photographs, and similar means.
Step S40: if the liveness detection passes, comparing the user's head image with the first face image;
In this embodiment, if the liveness detection passes, the captured user head image can be used for identity verification, specifically by face comparison against the user's previously obtained first face image (i.e., the first face image stored in association with the first identity information in step S20). The specific face comparison method is not limited. For example, facial features can first be extracted from the user's head image and the first face image respectively and then compared; if the feature match rate exceeds a preset threshold, such as 90%, the face comparison is deemed passed.
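The feature comparison can be illustrated with a small sketch. It assumes face feature vectors (embeddings) have already been extracted by some face-recognition model, and uses cosine similarity with a 0.9 threshold as a stand-in for the unspecified "feature match rate":

```python
import numpy as np

def face_match(embedding_a: np.ndarray, embedding_b: np.ndarray,
               threshold: float = 0.90) -> bool:
    """Compare two face feature vectors; pass if similarity exceeds the threshold."""
    a = embedding_a / np.linalg.norm(embedding_a)
    b = embedding_b / np.linalg.norm(embedding_b)
    similarity = float(np.dot(a, b))  # cosine similarity in [-1, 1]
    return similarity > threshold

# Toy 4-dimensional "features": nearly identical vectors pass, dissimilar ones fail.
stored = np.array([0.9, 0.1, 0.3, 0.2])
live = np.array([0.88, 0.12, 0.31, 0.19])
print(face_match(stored, live))  # → True
```

Real deployments would obtain the embeddings from a trained face-recognition network and calibrate the threshold on labelled pairs.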
Step S50: if the face comparison passes, outputting a password verification prompt;
In this embodiment, if the face comparison passes, a further prompt is issued (by voice or on-screen display) asking the user to enter the withdrawal password.
Step S60: receiving the withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information (if the cashable amount on the check face is X, the dispensed amount is also X).
In this embodiment, if the withdrawal password currently entered by the user matches the previously set withdrawal password (i.e., the first withdrawal password stored in association with the first identity information in step S20), the amount to dispense is determined from the check face information and the cash is dispensed.
In this embodiment, the authenticity of the check is verified and, when the verification passes, the face information of the check is extracted; the first identity information and first withdrawal password entered by the user and the first face image captured by the camera are obtained and stored in association with the check face information; when identity information entered by the user is received and detected to be the first identity information, the user's head image captured by the camera is acquired in real time and liveness detection is performed; if the liveness detection passes, the user's head image is compared with the first face image; if the face comparison passes, a password verification prompt is output; the withdrawal password entered by the user is received and, if it matches the first withdrawal password, cash is dispensed based on the check face information. Through this embodiment, both the security of the check cashing service and the convenience with which users can cash checks are improved.
Further, referring to FIG. 3, FIG. 3 is a detailed flowchart of step S10 in FIG. 2.
In an embodiment of the check cash withdrawal method of this application, step S10 includes:
Step S101: obtaining anti-counterfeiting information of the check and comparing the anti-counterfeiting information with standard anti-counterfeiting information to obtain a similarity;
In this embodiment, this step specifically includes: emitting ultraviolet light at the check, obtaining the UR anti-counterfeiting shading image of the check under ultraviolet illumination, and comparing the UR anti-counterfeiting shading image with the standard UR anti-counterfeiting shading image to obtain the similarity.
In this embodiment, a genuine check carries a UR anti-counterfeiting shading image, which is referred to as the standard UR anti-counterfeiting shading image. When verifying a check, the user therefore places it at a designated position, where it is exposed to ultraviolet illumination; in this environment the check's UR anti-counterfeiting shading becomes clearly visible. The check is then photographed and its UR shading image is extracted from the photograph. Because the RGB values of the pixels of the standard UR shading are fixed, for example R(x1), G(x2), B(x3), where x1, x2, x3 take values in [0, 255] and are set as actually needed, the extraction works as follows: pixels whose RGB values are R(x1), G(x2), B(x3) are extracted from the photograph, with the relative positions among the extracted pixels kept the same as in the photograph; this yields the check's UR shading image (i.e., the check's anti-counterfeiting information). The check's UR shading image is then compared with the standard UR shading image (i.e., the standard anti-counterfeiting information) to obtain a similarity. For example: first convert each image to grayscale, then normalize the grayscale image to a specific size such as 7×7, then simplify the gray levels to reduce computation, for example dividing all gray values by 5; then compute the average gray value and compare each of the 7×7 = 49 pixels with the average, recording 1 if the pixel is larger and 0 otherwise, which yields a 49-bit binary fingerprint code for the image. Processing the two images this way gives fingerprint code 1 for the check's UR shading image and fingerprint code 2 for the standard image. The codes are then compared bit by bit; if the values agree at n positions, n/49 is the similarity of the two images.
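The 49-bit fingerprint comparison described above can be sketched as follows. Nearest-neighbour sampling stands in for proper image resizing, and the input is assumed to be an already-extracted grayscale shading image:

```python
import numpy as np

def fingerprint(gray: np.ndarray) -> np.ndarray:
    """49-bit fingerprint: resize to 7x7, coarsen gray levels, threshold at the mean."""
    h, w = gray.shape
    rows = np.arange(7) * h // 7          # nearest-neighbour sample positions
    cols = np.arange(7) * w // 7
    small = gray[np.ix_(rows, cols)].astype(np.int32)
    small //= 5                            # simplify gray levels to cut computation
    return (small > small.mean()).astype(np.uint8).ravel()  # 49 bits

def similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Fraction of fingerprint bits that agree, i.e. n / 49 as in the embodiment."""
    return float(np.mean(fingerprint(img_a) == fingerprint(img_b)))

# A shading image compared with itself is fully similar.
img = np.random.default_rng(0).integers(0, 256, size=(70, 70))
print(similarity(img, img))  # → 1.0
```

A forged check whose shading differs would flip many fingerprint bits, pushing the similarity below the 85% threshold discussed next.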
Step S102: if the similarity is greater than the preset threshold, the verification passes, and the face information of the check is then extracted by OCR.
In this embodiment, the preset threshold is set as actually needed. To prevent the use of forged checks, the threshold can be set relatively high, for example 85%: after the similarity is computed, it is checked whether the similarity exceeds 85%, and if so the face information of the check is extracted by OCR (Optical Character Recognition). In this embodiment, the face information includes the date of issue, the name of the paying bank, the drawer's account number, the amount to be dispensed, and so on (expandable or reducible as actually needed and not limited here). Because the positions of these fields on the check are fixed (date of issue at position 1, paying bank name at position 2, drawer's account number at position 3, amount at position 4), OCR can be run directly on the regions at positions 1 to 4 to obtain the face information, which improves both the efficiency and the accuracy of the recognition.
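Region-restricted OCR can be sketched like this. The region coordinates are hypothetical, and a stub stands in for the OCR engine (e.g. Tesseract in a real system) to keep the example self-contained:

```python
import numpy as np

# Fixed field regions on the check face, as (top, bottom, left, right) pixel
# coordinates. The exact numbers are invented; a real deployment would measure
# them from the check template.
FIELD_REGIONS = {
    "issue_date":     (10, 40, 500, 700),
    "paying_bank":    (50, 80, 40, 400),
    "drawer_account": (90, 120, 40, 400),
    "amount":         (90, 120, 500, 700),
}

def extract_face_info(check_image: np.ndarray, ocr) -> dict:
    """Run OCR only on the fixed regions where each field is printed."""
    info = {}
    for field, (top, bottom, left, right) in FIELD_REGIONS.items():
        crop = check_image[top:bottom, left:right]
        info[field] = ocr(crop)
    return info

# Demo with a stub OCR that just reports the crop size.
image = np.zeros((200, 800), dtype=np.uint8)
stub_ocr = lambda crop: f"{crop.shape[0]}x{crop.shape[1]} region"
print(extract_face_info(image, stub_ocr)["amount"])  # → 30x200 region
```

Restricting recognition to known regions is what yields the efficiency and accuracy gain claimed above: the engine never has to segment the whole check.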
Further, in an embodiment of the check cash withdrawal method of this application, the step of acquiring in real time the user's head image captured by the camera and performing liveness detection includes:
acquiring in real time the user's head image captured by a binocular camera, obtaining a binocular user head image;
In this embodiment, a binocular camera is used to capture the user's head image in order to improve the accuracy of identity verification. A binocular camera is a camera device with two lenses; unlike the image captured by a monocular camera, a binocular camera captures images from two different angles, i.e., two user head images corresponding to different viewpoints can be captured simultaneously (the binocular user head image).
performing liveness detection based on the binocular user head image.
In this embodiment, because the binocular user head image carries more information, it is better suited to liveness detection and the detection result is more credible. For example, the lighting on the user's face differs between viewpoints, so the two images captured by the binocular camera also have different texture features, and these texture features are hard to forge with photographs, masks, and the like, giving strong anti-spoofing capability.
Further, in an embodiment of the check cash withdrawal method of this application, the step of performing liveness detection based on the binocular user head image includes:
computing the disparity image corresponding to the binocular user head image based on a preset binocular stereo matching algorithm;
In this embodiment, the user head images captured by the binocular camera comprise a left-view user head image and a right-view user head image, and the disparity image corresponding to the left and right views can be obtained by a binocular stereo matching algorithm.
This embodiment does not limit the preset binocular stereo matching algorithm; examples include the SAD (Sum of Absolute Differences) and SSD (Sum of Squared Differences) algorithms.
In this embodiment, the processing flow for computing the disparity image corresponding to the binocular user head image by binocular stereo matching is as follows:
(1) Matching cost computation
The purpose of matching cost computation is to measure the correlation between the pixel to be matched and each candidate pixel. Whether or not two pixels are corresponding points, their matching cost can be computed with a matching cost function; the smaller the cost, the higher the correlation and the higher the probability that they are corresponding points.
Before searching for its corresponding point, each pixel is usually assigned a disparity search range D, and the search is restricted to D. A three-dimensional matrix C of size W×H×D (W the image width, H the image height) stores the matching cost of each pixel at each disparity within the range. The matrix C is commonly called the DSI (Disparity Space Image).
There are many ways to compute matching costs, such as absolute gray-level difference, sum of absolute gray-level differences, and normalized correlation coefficients.
(2) Cost aggregation
The fundamental purpose of cost aggregation is to make the cost values accurately reflect the correlation between pixels. The cost computation of the previous step usually considers only local information, computing the cost from pixels inside a window of some size around the two pixels; this is easily affected by image noise, and in weakly textured or repetitively textured regions the cost may well fail to reflect the true correlation between pixels.
Cost aggregation therefore establishes relations between neighbouring pixels and optimizes the cost matrix under certain criteria (such as that adjacent pixels should have continuous disparity values). The new cost of each pixel at a given disparity is recomputed from the costs of its neighbouring pixels at the same or nearby disparities, yielding a new DSI. Common cost aggregation methods include scanline methods, dynamic programming, and the path aggregation of the SGM algorithm.
(3) Disparity computation
Disparity computation determines the optimal disparity of each pixel from the aggregated cost matrix S, usually with the winner-takes-all (WTA) algorithm: among all the cost values of a pixel across its disparities, the disparity with the smallest cost is chosen as the optimal disparity.
(4) Disparity optimization
The purpose of disparity optimization is to further refine the disparity image obtained in the previous step and improve its quality, including removing erroneous disparities, appropriate smoothing, and sub-pixel accuracy optimization. A left-right consistency check is generally used to remove erroneous disparities caused by occlusion and noise; a small-connected-region removal algorithm is used to eliminate isolated outliers; and smoothing algorithms such as the median filter and bilateral filter are applied to smooth the disparity image.
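Steps (1) and (3) above, cost computation and winner-takes-all selection, can be illustrated with a bare-bones sketch (no window aggregation, no left-right check; a production system would use a full pipeline such as SGM):

```python
import numpy as np

def wta_disparity(left: np.ndarray, right: np.ndarray, max_disp: int = 8) -> np.ndarray:
    """Absolute-difference cost volume (the DSI) plus winner-takes-all selection."""
    h, w = left.shape
    cost = np.full((h, w, max_disp), 255 * 4, dtype=np.int32)  # sentinel cost
    for d in range(max_disp):
        # Cost of matching left pixel (y, x) with right pixel (y, x - d).
        cost[:, d:, d] = np.abs(left[:, d:].astype(np.int32)
                                - right[:, :w - d].astype(np.int32))
    return cost.argmin(axis=2)  # WTA: lowest-cost disparity per pixel

# A left view that is the right view shifted 3 px should yield disparity ~3.
rng = np.random.default_rng(1)
right = rng.integers(0, 256, size=(20, 40))
left = np.roll(right, 3, axis=1)
disp = wta_disparity(left, right)
print(int(np.median(disp[:, 5:])))  # → 3
```

The median is used in the demo because, without aggregation, occasional pixels tie at zero cost for a wrong disparity; this is exactly the noise that step (2) and step (4) are there to suppress.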
converting the disparity image into a depth image and computing the gray-level co-occurrence matrix of the depth image;
In this embodiment, after the disparity image corresponding to the binocular user head image is obtained, it can be converted into a depth image with the following disparity-to-depth conversion formula:
depth = (f * baseline) / disp;
where depth is the depth value of the depth map, f is the normalized focal length, baseline is the distance between the optical centres of the two cameras (called the baseline distance), and disp is the disparity value. Disparity is expressed in pixels, while depth values are usually expressed in millimetres.
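The conversion formula can be applied per pixel, guarding against zero disparity (which indicates no match rather than infinite depth):

```python
import numpy as np

def disparity_to_depth(disp: np.ndarray, f: float, baseline: float) -> np.ndarray:
    """depth = (f * baseline) / disp, with zero-disparity pixels left at depth 0."""
    disp = disp.astype(np.float64)
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = (f * baseline) / disp[valid]
    return depth

# With f = 700 px and baseline = 60 mm, a 21-px disparity maps to 2000 mm.
disp = np.array([[21.0, 0.0], [42.0, 14.0]])
print(disparity_to_depth(disp, f=700.0, baseline=60.0))
```

The focal length and baseline values here are illustrative; in practice they come from stereo calibration of the binocular camera.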
After the depth image is obtained, its gray-level co-occurrence matrix (GLCM) must be computed. The GLCM is obtained by counting, over pixel pairs separated by a given offset in the image, the joint occurrence of their gray levels. Take any point (x, y) in an N×N image and another point (x+a, y+b) offset from it, and let the gray values of this pair be (g1, g2). Moving the point (x, y) over the whole image produces various (g1, g2) values; if the number of gray levels is k, there are k² possible combinations of (g1, g2). Counting the occurrences of each (g1, g2) value over the whole image, arranging the counts into a matrix, and then normalizing them by the total number of occurrences into probabilities P(g1, g2) yields the gray-level co-occurrence matrix. The GLCM of an image reflects comprehensive information about the image's gray levels with respect to direction, neighbouring interval, and variation amplitude.
extracting texture features of the depth image from the gray-level co-occurrence matrix and concatenating the texture features into a GLCM feature vector;
Because the GLCM contains a large amount of data, it is generally not used directly as a texture-discriminating feature; instead, statistics built on it serve as the texture features, including: energy, entropy, contrast, homogeneity, correlation, variance, sum average, sum variance, sum entropy, difference variance, difference average, difference entropy, information measures of correlation, and the maximal correlation coefficient.
This embodiment specifically uses the statistics generated from the GLCM as the texture features of the depth image and concatenates the texture features into a GLCM feature vector.
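A minimal GLCM and a few of the statistics listed above (energy, entropy, contrast, homogeneity) can be computed as follows, assuming the depth image has already been quantized to a small number of gray levels:

```python
import numpy as np

def glcm(gray: np.ndarray, offset=(0, 1), levels: int = 8) -> np.ndarray:
    """Count co-occurrences of gray levels (g1, g2) at the given offset, normalized."""
    dy, dx = offset
    a = gray[:gray.shape[0] - dy, :gray.shape[1] - dx]  # the point (x, y)
    b = gray[dy:, dx:]                                   # the offset point
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)  # tally each (g1, g2) pair
    return m / m.sum()                       # probabilities P(g1, g2)

def glcm_features(p: np.ndarray) -> np.ndarray:
    """A few classic Haralick-style statistics, concatenated into a vector."""
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([energy, entropy, contrast, homogeneity])

# A toy "depth image": a smooth horizontal ramp quantized to 8 levels.
depth = np.tile(np.arange(8, dtype=np.int64), (8, 1))
features = glcm_features(glcm(depth))
print(features.shape)  # → (4,)
```

The full feature vector of the embodiment would concatenate all fourteen statistics; libraries such as scikit-image provide ready-made `graycomatrix`/`graycoprops` implementations.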
inputting the GLCM feature vector into a preset liveness classification model for classification to obtain a classification result;
determining, based on the classification result, whether the detection object corresponding to the binocular user head image is a living body.
In this embodiment, after the GLCM of the face depth map is obtained, texture features of the face depth map are extracted and concatenated into a GLCM feature vector, so that a support-vector-machine liveness classification model can classify the GLCM feature vector; finally, whether the current detection object is a living body is determined from the classification result.
Before liveness detection is performed, GLCM feature vectors are extracted in advance from a large number of face images of live and non-live subjects, the samples are labelled as live or non-live, and machine learning is then performed with the GLCM feature vectors as training samples, producing a liveness classification model able to distinguish live from non-live subjects, so that the model can perform liveness detection based on GLCM feature vectors.
Existing liveness detection usually relies on user actions: the user must perform motions according to the system's instructions, i.e., existing liveness detection requires the user's cooperation. This embodiment performs liveness detection with a binocular camera; the user can complete detection without performing any motions, and the liveness detection result is more credible and accurate.
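The train-then-classify flow can be sketched with synthetic feature vectors. A hand-rolled hinge-loss (linear-SVM-style) learner stands in for a real SVM library such as scikit-learn's `SVC`, and the "live" and "spoof" feature distributions are invented purely for illustration:

```python
import numpy as np

# Synthetic training set of 4-dimensional GLCM-style feature vectors. Live faces
# (real depth variation) and flat spoofs (photos) are given separable statistics
# only for the demo; real features would come from actual depth maps.
rng = np.random.default_rng(7)
live = rng.normal(loc=[0.2, 2.5, 1.5, 0.6], scale=0.1, size=(50, 4))
spoof = rng.normal(loc=[0.8, 0.5, 0.1, 0.95], scale=0.1, size=(50, 4))
X = np.vstack([live, spoof])
y = np.array([1] * 50 + [-1] * 50)  # 1 = live, -1 = non-live

# Sub-gradient descent on the hinge loss (the linear-SVM objective).
w, b = np.zeros(4), 0.0
for _ in range(200):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) < 1:            # margin violated → update
            w += 0.01 * (yi * xi - 0.001 * w)
            b += 0.01 * yi
        else:
            w -= 0.01 * 0.001 * w            # regularization only

def is_live(features: np.ndarray) -> bool:
    """Classify a GLCM feature vector as live (True) or non-live (False)."""
    return features @ w + b > 0

print(is_live(np.array([0.2, 2.5, 1.5, 0.6])))   # live-like vector → True
print(is_live(np.array([0.8, 0.5, 0.1, 0.95])))  # spoof-like vector → False
```

In practice one would train on labelled GLCM vectors from real live and spoof sessions and use a kernel SVM if the classes are not linearly separable.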
Further, in an embodiment of the check cash withdrawal method of this application, after step S60 the method further includes:
Step S70: determining the issuing user from the check face information and generating a check withdrawal record based on the current time and the dispensed amount;
Step S80: sending the check withdrawal record to the issuing user's terminal.
In this embodiment, the issuing user can be determined from the drawer's account number in the check face information, and the mobile number the issuing user registered with the bank can then be retrieved from the bank database. The check withdrawal record, composed of the current time and the dispensed amount, is then sent to that mobile number so that the issuing user promptly learns that the check has been cashed.
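Generating the withdrawal record from the check face information can be sketched as follows; the field names are illustrative, and the SMS delivery to the drawer's registered number is omitted:

```python
from datetime import datetime, timezone

def make_withdrawal_record(face_info: dict, amount: str) -> dict:
    """Build the check withdrawal record from the current time and dispensed amount."""
    return {
        "drawer_account": face_info["drawer_account"],
        "amount": amount,
        "time": datetime.now(timezone.utc).isoformat(),
    }

# The record would then be looked up against the bank database to find the
# drawer's registered phone number and sent via an SMS gateway.
record = make_withdrawal_record({"drawer_account": "6222020200112233"}, "5000.00")
print(sorted(record))  # → ['amount', 'drawer_account', 'time']
```

Keeping the record minimal (account, amount, timestamp) matches what the text says is sent to the issuing user's terminal.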
Further, in an embodiment of the check cash withdrawal method of this application, after step S80 the method further includes:
obtaining surveillance video of the user's entire check cashing process and storing the surveillance video in association with the check withdrawal record.
In this embodiment, to further guarantee the security of the user's bank account while protecting the bank's own legitimate rights and interests, for every check cashing by a user the bank's backend server stores the surveillance video of the entire process bundled with the check withdrawal record (for example, taking the moment in step S30 when "identity information entered by the user is received and detected to be the first identity information" as the start of the surveillance video, and the moment in step S60 when "cash is dispensed based on the check face information" as its end). From the surveillance video, the true identity of the withdrawer and the circumstances of the withdrawal can be established, preventing others from illegally stealing funds from the user's account and also preventing certain users from maliciously making false accusations against the bank.
Referring to FIG. 4, FIG. 4 is a schematic diagram of the functional modules of an embodiment of the check cash withdrawal apparatus of this application.
In an embodiment of the check cash withdrawal apparatus of this application, the apparatus includes:
an authenticity verification module 10 for verifying the authenticity of a check and, when the verification passes, extracting the face information of the check;
an acquisition module 20 for obtaining the first identity information and first withdrawal password entered by the user, and the first face image captured by a camera, and storing the first identity information, first withdrawal password, first face image, and check face information in association;
a liveness detection module 30 for, when identity information entered by the user is received and detected to be the first identity information, acquiring in real time the user's head image captured by the camera and performing liveness detection;
a face comparison module 40 for comparing the user's head image with the first face image if the liveness detection passes;
a prompt module 50 for outputting a password verification prompt if the face comparison passes;
a cash dispensing module 60 for receiving the withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information.
In this embodiment, the authenticity of the check is verified and, when the verification passes, the face information of the check is extracted; the first identity information and first withdrawal password entered by the user and the first face image captured by the camera are obtained and stored in association with the check face information; when identity information entered by the user is received and detected to be the first identity information, the user's head image captured by the camera is acquired in real time and liveness detection is performed; if the liveness detection passes, the user's head image is compared with the first face image; if the face comparison passes, a password verification prompt is output; the withdrawal password entered by the user is received and, if it matches the first withdrawal password, cash is dispensed based on the check face information. Through this embodiment, both the security of the check cashing service and the convenience with which users can cash checks are improved.
In addition, an embodiment of this application further proposes a computer-readable storage medium on which a check withdrawal program is stored; when the check withdrawal program is executed by a processor, the steps of the check cash withdrawal method described above are implemented.
Optionally, in a specific embodiment, when the check withdrawal program is executed by the processor to implement the step of verifying the authenticity of the check and, when the verification passes, extracting the face information of the check, the following steps are included:
obtaining anti-counterfeiting information of the check and comparing the anti-counterfeiting information with standard anti-counterfeiting information to obtain a similarity;
if the similarity is greater than a preset threshold, the verification passes, and the face information of the check is then extracted by OCR.
Optionally, in a specific embodiment, when the check withdrawal program is executed by the processor to implement the step of obtaining the anti-counterfeiting information of the check and comparing it with the standard anti-counterfeiting information to obtain a similarity, the following steps are included:
emitting ultraviolet light at the check and obtaining the UR anti-counterfeiting shading image of the check under ultraviolet illumination;
comparing the UR anti-counterfeiting shading image with the standard UR anti-counterfeiting shading image to obtain the similarity.
Optionally, in a specific embodiment, when the check withdrawal program is executed by the processor to implement the step of acquiring in real time the user's head image captured by the camera and performing liveness detection, the following steps are included:
acquiring in real time the user's head image captured by a binocular camera, obtaining a binocular user head image;
performing liveness detection based on the binocular user head image.
Optionally, in a specific embodiment, when the check withdrawal program is executed by the processor to implement the step of performing liveness detection based on the binocular user head image, the following steps are included:
computing the disparity image corresponding to the binocular user head image based on a preset binocular stereo matching algorithm;
converting the disparity image into a depth image and computing the gray-level co-occurrence matrix of the depth image;
extracting texture features of the depth image from the gray-level co-occurrence matrix and concatenating the texture features into a GLCM feature vector;
inputting the GLCM feature vector into a preset liveness classification model for classification to obtain a classification result;
determining, based on the classification result, whether the detection object corresponding to the binocular user head image is a living body.
Optionally, in a specific embodiment, when executed by the processor, the check withdrawal program further implements the following steps of the check cash withdrawal method:
after the step of receiving the withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information: determining the issuing user from the check face information and generating a check withdrawal record based on the current time and the dispensed amount;
sending the check withdrawal record to the issuing user's terminal.
Optionally, in a specific embodiment, when executed by the processor, the check withdrawal program further implements the following step of the check cash withdrawal method:
obtaining surveillance video of the user's entire check cashing process and storing the surveillance video in association with the check withdrawal record.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. This computer software product is stored in a storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device) to perform the methods described in the embodiments of this application.

Claims (20)

  1. A check cash withdrawal method, comprising the following steps:
    verifying the authenticity of a check and, when the verification passes, extracting the face information of the check;
    obtaining first identity information and a first withdrawal password entered by a user, and a first face image captured by a camera, and storing the first identity information, the first withdrawal password, the first face image, and the check face information in association;
    when identity information entered by the user is received and the identity information is detected to be the first identity information, acquiring in real time the user's head image captured by the camera and performing liveness detection;
    if the liveness detection passes, comparing the user's head image with the first face image;
    if the face comparison passes, outputting a password verification prompt;
    receiving a withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information.
  2. The check cash withdrawal method of claim 1, wherein the step of verifying the authenticity of the check and, when the verification passes, extracting the face information of the check comprises:
    obtaining anti-counterfeiting information of the check and comparing the anti-counterfeiting information with standard anti-counterfeiting information to obtain a similarity;
    if the similarity is greater than a preset threshold, the verification passes, and the face information of the check is then extracted by OCR.
  3. The check cash withdrawal method of claim 2, wherein the step of obtaining the anti-counterfeiting information of the check and comparing it with the standard anti-counterfeiting information to obtain a similarity comprises:
    emitting ultraviolet light at the check and obtaining the UR anti-counterfeiting shading image of the check under ultraviolet illumination;
    comparing the UR anti-counterfeiting shading image with the standard UR anti-counterfeiting shading image to obtain the similarity.
  4. The check cash withdrawal method of claim 1, wherein the step of acquiring in real time the user's head image captured by the camera and performing liveness detection comprises:
    acquiring in real time the user's head image captured by a binocular camera, obtaining a binocular user head image;
    performing liveness detection based on the binocular user head image.
  5. The check cash withdrawal method of claim 4, wherein the step of performing liveness detection based on the binocular user head image comprises:
    computing the disparity image corresponding to the binocular user head image based on a preset binocular stereo matching algorithm;
    converting the disparity image into a depth image and computing the gray-level co-occurrence matrix of the depth image;
    extracting texture features of the depth image from the gray-level co-occurrence matrix and concatenating the texture features into a GLCM feature vector;
    inputting the GLCM feature vector into a preset liveness classification model for classification to obtain a classification result;
    determining, based on the classification result, whether the detection object corresponding to the binocular user head image is a living body.
  6. The check cash withdrawal method of any one of claims 1 to 5, further comprising, after the step of receiving the withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information:
    determining the issuing user from the check face information and generating a check withdrawal record based on the current time and the dispensed amount;
    sending the check withdrawal record to the issuing user's terminal.
  7. The check cash withdrawal method of claim 6, further comprising, after the step of sending the check withdrawal record to the issuing user's terminal:
    obtaining surveillance video of the user's entire check cashing process and storing the surveillance video in association with the check withdrawal record.
  8. A check cash withdrawal apparatus, comprising:
    an authenticity verification module for verifying the authenticity of a check and, when the verification passes, extracting the face information of the check;
    an acquisition module for obtaining the first identity information and first withdrawal password entered by the user, and the first face image captured by a camera, and storing the first identity information, first withdrawal password, first face image, and check face information in association;
    a liveness detection module for, when identity information entered by the user is received and detected to be the first identity information, acquiring in real time the user's head image captured by the camera and performing liveness detection;
    a face comparison module for comparing the user's head image with the first face image if the liveness detection passes;
    a prompt module for outputting a password verification prompt if the face comparison passes;
    a cash dispensing module for receiving the withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information.
  9. The check cash withdrawal apparatus of claim 8, wherein the authenticity verification module comprises:
    a verification sub-unit for obtaining anti-counterfeiting information of the check and comparing the anti-counterfeiting information with standard anti-counterfeiting information to obtain a similarity;
    an extraction sub-unit for, if the similarity is greater than a preset threshold, passing the verification and then extracting the face information of the check by OCR.
  10. The check cash withdrawal apparatus of claim 8, wherein the liveness detection module comprises:
    a head image acquisition unit for acquiring in real time the user's head image captured by a binocular camera, obtaining a binocular user head image;
    a liveness detection unit for performing liveness detection based on the binocular user head image.
  11. The check cash withdrawal apparatus of claim 10, wherein the liveness detection unit comprises:
    a first computation sub-unit for computing the disparity image corresponding to the binocular user head image based on a preset binocular stereo matching algorithm;
    a second computation sub-unit for converting the disparity image into a depth image and computing the gray-level co-occurrence matrix of the depth image;
    a concatenation sub-unit for extracting texture features of the depth image from the gray-level co-occurrence matrix and concatenating the texture features into a GLCM feature vector;
    a classification sub-unit for inputting the GLCM feature vector into a preset liveness classification model for classification to obtain a classification result;
    a determination sub-unit for determining, based on the classification result, whether the detection object corresponding to the binocular user head image is a living body.
  12. The check cash withdrawal apparatus of any one of claims 9 to 11, further comprising:
    a recording module for determining the issuing user from the check face information and generating a check withdrawal record based on the current time and the dispensed amount;
    a notification module for sending the check withdrawal record to the issuing user's terminal.
  13. The check cash withdrawal apparatus of claim 12, further comprising:
    an evidence storage unit for obtaining surveillance video of the user's entire check cashing process and storing the surveillance video in association with the check withdrawal record.
  14. A check cash withdrawal device, comprising: a memory, a processor, and a check withdrawal program stored on the memory and executable on the processor, the check withdrawal program, when executed by the processor, implementing the following steps of a check cash withdrawal method:
    verifying the authenticity of a check and, when the verification passes, extracting the face information of the check;
    obtaining first identity information and a first withdrawal password entered by a user, and a first face image captured by a camera, and storing the first identity information, the first withdrawal password, the first face image, and the check face information in association;
    when identity information entered by the user is received and the identity information is detected to be the first identity information, acquiring in real time the user's head image captured by the camera and performing liveness detection;
    if the liveness detection passes, comparing the user's head image with the first face image;
    if the face comparison passes, outputting a password verification prompt;
    receiving a withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information.
  15. The check cash withdrawal device of claim 14, wherein when the check withdrawal program is executed by the processor to implement the step of verifying the authenticity of the check and, when the verification passes, extracting the face information of the check, the following steps are included:
    obtaining anti-counterfeiting information of the check and comparing the anti-counterfeiting information with standard anti-counterfeiting information to obtain a similarity;
    if the similarity is greater than a preset threshold, the verification passes, and the face information of the check is then extracted by OCR.
  16. The check cash withdrawal device of claim 15, wherein when the check withdrawal program is executed by the processor to implement the step of obtaining the anti-counterfeiting information of the check and comparing it with the standard anti-counterfeiting information to obtain a similarity, the following steps are included:
    emitting ultraviolet light at the check and obtaining the UR anti-counterfeiting shading image of the check under ultraviolet illumination;
    comparing the UR anti-counterfeiting shading image with the standard UR anti-counterfeiting shading image to obtain the similarity.
  17. The check cash withdrawal device of claim 14, wherein when the check withdrawal program is executed by the processor to implement the step of acquiring in real time the user's head image captured by the camera and performing liveness detection, the following steps are included:
    acquiring in real time the user's head image captured by a binocular camera, obtaining a binocular user head image;
    performing liveness detection based on the binocular user head image.
  18. The check cash withdrawal device of claim 17, wherein when the check withdrawal program is executed by the processor to implement the step of performing liveness detection based on the binocular user head image, the following steps are included:
    computing the disparity image corresponding to the binocular user head image based on a preset binocular stereo matching algorithm;
    converting the disparity image into a depth image and computing the gray-level co-occurrence matrix of the depth image;
    extracting texture features of the depth image from the gray-level co-occurrence matrix and concatenating the texture features into a GLCM feature vector;
    inputting the GLCM feature vector into a preset liveness classification model for classification to obtain a classification result;
    determining, based on the classification result, whether the detection object corresponding to the binocular user head image is a living body.
  19. The check cash withdrawal device of any one of claims 14 to 18, wherein when executed by the processor, the check withdrawal program further implements the following steps of the check cash withdrawal method:
    after the step of receiving the withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information: determining the issuing user from the check face information and generating a check withdrawal record based on the current time and the dispensed amount;
    sending the check withdrawal record to the issuing user's terminal.
  20. A non-volatile computer-readable storage medium on which a check withdrawal program is stored, the check withdrawal program, when executed by a processor, implementing the following steps of a check cash withdrawal method:
    verifying the authenticity of a check and, when the verification passes, extracting the face information of the check;
    obtaining first identity information and a first withdrawal password entered by a user, and a first face image captured by a camera, and storing the first identity information, the first withdrawal password, the first face image, and the check face information in association;
    when identity information entered by the user is received and the identity information is detected to be the first identity information, acquiring in real time the user's head image captured by the camera and performing liveness detection;
    if the liveness detection passes, comparing the user's head image with the first face image;
    if the face comparison passes, outputting a password verification prompt;
    receiving a withdrawal password entered by the user and, if the withdrawal password is detected to match the first withdrawal password, dispensing cash based on the check face information.
PCT/CN2019/103212 2019-05-23 2019-08-29 Check cash withdrawal method, apparatus, device and computer-readable storage medium WO2020232889A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910432137.7A CN110321793A (zh) 2019-05-23 2019-05-23 Check cash withdrawal method, apparatus, device and computer-readable storage medium
CN201910432137.7 2019-05-23

Publications (1)

Publication Number Publication Date
WO2020232889A1 true WO2020232889A1 (zh) 2020-11-26

Family

ID=68118834

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103212 WO2020232889A1 (zh) 2019-05-23 2019-08-29 支票取现方法、装置、设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN110321793A (zh)
WO (1) WO2020232889A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115424353A * 2022-09-07 2022-12-02 Hangyin Consumer Finance Co., Ltd. Business user feature recognition method and system based on an AI model

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060115797A1 (en) * 2004-01-06 2006-06-01 Gray Stuart F Bar codes or radio frequency identification tags on paper currency, checks, credit/debit cards and personal identification
CN101176105A * 2005-03-09 2008-05-07 Diebold, Incorporated Automated banking machine system and method for check acceptance and cash dispensing
CN101510337A * 2008-02-14 2009-08-19 Beijing Yinrong Technology Co., Ltd. Method and device for scheduled self-service cash withdrawal
CN103605958A * 2013-11-12 2014-02-26 Beijing University of Technology Living-body face detection method based on gray-level co-occurrence matrix and wavelet analysis
CN106407914A * 2016-08-31 2017-02-15 Beijing Megvii Technology Co., Ltd. Method and device for face detection and remote teller machine system
CN107274592A * 2017-07-17 2017-10-20 Shenzhen Best Machinery & Electronics Co., Ltd. Deposit/withdrawal financial terminal capable of recognizing checks and application method thereof
CN107393220A * 2017-08-30 2017-11-24 Chongqing CloudWalk Technology Co., Ltd. Bank self-service withdrawal terminal and withdrawal method based on face recognition
CN107451575A * 2017-08-08 2017-12-08 University of Jinan Face anti-spoofing detection method in an identity authentication system
CN107610320A * 2017-09-06 2018-01-19 Shenzhen Yihua Computer Co., Ltd. Bill recognition method and device
CN108446690A * 2018-05-31 2018-08-24 Beijing University of Technology Face liveness detection method based on multi-view dynamic features
CN108985134A * 2017-06-01 2018-12-11 Chongqing CloudWalk Technology Co., Ltd. Binocular-camera-based face liveness detection and face-swipe transaction method and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050009506A * 2003-07-16 2005-01-25 Han Sung-hee User identification apparatus and method for an automated teller machine
US20100063928A1 * 2008-09-11 2010-03-11 Hart Mandi C Electronic check cashing system
CN103065149B * 2012-12-21 2016-05-04 Shanghai Jiao Tong University Method for extracting and quantifying the fruit phenotype of netted muskmelon
CN104143140B * 2014-04-02 2018-03-13 Shenzhen Yanlian Computing System Co., Ltd. Check cashing method, cashing system, and stored-value loading system
CN105023010B * 2015-08-17 2018-11-06 Institute of Semiconductors, Chinese Academy of Sciences Face liveness detection method and system
CN106023211B * 2016-05-24 2019-02-26 Shenzhen Qianhai Yongyida Robot Co., Ltd. Robot image localization method and system based on deep learning
CN108470337A * 2018-04-02 2018-08-31 Jiangmen Central Hospital Quantitative analysis method and system for subsolid pulmonary nodules based on deep image features
CN109218710B * 2018-09-11 2019-10-08 Ningbo University Free-viewpoint video quality assessment method



Also Published As

Publication number Publication date
CN110321793A (zh) 2019-10-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19929582

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19929582

Country of ref document: EP

Kind code of ref document: A1