WO2022126964A1 - Service data verification method, apparatus, device and storage medium - Google Patents

Service data verification method, apparatus, device and storage medium

Info

Publication number
WO2022126964A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
data
voice
information
reply
Prior art date
Application number
PCT/CN2021/090188
Other languages
English (en)
French (fr)
Inventor
史其选
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2022126964A1

Classifications

    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06Q 40/08: Insurance
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/172: Classification, e.g. identification
    • G06V 40/70: Multimodal biometrics, e.g. combining information from different biometric modalities
    • G10L 15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue

Definitions

  • The present application relates to the field of big data and blockchain technology, and in particular to a business data verification method, apparatus, device, and storage medium.
  • At present, when a user chooses to purchase an insurance product, the user first selects the relevant package on the purchase page, fills in the relevant information, and submits it to the insurance review server. The insurance review server then generates policy information from the submitted information, the policyholder checks it, and only after the policy information has been verified is the corresponding policy generated.
  • The main purpose of the present application is to solve the technical problem that existing business data verification methods yield results whose accuracy is difficult to evaluate, leading to low verification accuracy and low reliability.
  • A first aspect of the present application provides a business data verification method, including: acquiring business data to be verified; acquiring first biometric information of the current user, and querying corresponding second biometric information from a preset biometric database according to the user identity information in the business data; comparing the first biometric information with the second biometric information to obtain an identification result; if the identification result is consistent, extracting the data content to be confirmed from the business data; calling a preset AI voice conversion model to convert the data content into broadcast voice; broadcasting the broadcast voice and obtaining the user's reply voice in real time; matching the reply voice against preset answer data to obtain a matching result; and, if the matching result is that the reply voice is consistent with the answer data, outputting the business data.
  • A second aspect of the present application provides a business data verification apparatus, comprising: an acquisition module for acquiring the business data to be verified; a query module for acquiring the first biometric information of the current user and querying the corresponding second biometric information from the preset biometric database according to the user identity information in the business data; an identification module for comparing the first biometric information with the second biometric information to obtain an identification result; an extraction module for extracting the data content to be confirmed from the business data if the identification result is consistent; a voice conversion module for calling the preset AI voice conversion model to convert the data content into broadcast voice; a broadcast module for broadcasting the broadcast voice and obtaining the user's reply voice in real time; a matching module for matching the reply voice against the preset answer data to obtain a matching result; and an information output module for outputting the business data if the matching result is that the reply voice is consistent with the answer data.
  • A third aspect of the present application provides a business data verification device, including a memory and at least one processor, where instructions are stored in the memory; the at least one processor invokes the instructions in the memory so that the business data verification device performs the following steps of the business data verification method: acquiring the business data to be verified; acquiring the first biometric information of the current user, and querying the corresponding second biometric information from a preset biometric database according to the user identity information in the business data; comparing the first biometric information with the second biometric information to obtain an identification result; if the identification result is consistent, extracting the data content to be confirmed from the business data; calling the preset AI voice conversion model to convert the data content into broadcast voice; broadcasting the broadcast voice and obtaining the user's reply voice in real time; matching the reply voice against the preset answer data to obtain a matching result; and, if the matching result is that the reply voice is consistent with the answer data, outputting the business data.
  • A fourth aspect of the present application provides a computer-readable storage medium in which instructions are stored; when the instructions run on a computer, the computer performs the following steps of the business data verification method: acquiring the business data to be verified; acquiring the first biometric information of the current user, and querying the corresponding second biometric information from the preset biometric database according to the user identity information in the business data; comparing the first biometric information with the second biometric information to obtain an identification result; if the identification result is consistent, extracting the data content to be confirmed from the business data; calling the preset AI voice conversion model to convert the data content into broadcast voice; broadcasting the broadcast voice and obtaining the user's reply voice in real time; matching the reply voice against the preset answer data to obtain a matching result; and, if the matching result is that the reply voice is consistent with the answer data, outputting the business data.
  • In the technical solution provided by the present application, the business data to be verified and the first biometric information of the current user are obtained, and the corresponding second biometric information is queried from the preset biometric database according to the user identity information in the business data; the first biometric information is compared with the second biometric information to obtain an identification result; if the identification result is consistent, the data content to be confirmed is extracted from the policy information; the preset AI voice conversion model is called to convert the data content into broadcast voice; the broadcast voice is broadcast and the current user's reply voice is obtained in real time; the reply voice is matched against the preset answer data to obtain a matching result; and if the matching result is that the reply voice is consistent with the answer data, the business data is output.
  • This technical solution ensures the authenticity and accuracy of the result obtained after the business data is verified, and improves the accuracy and reliability of the verification result.
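A minimal sketch of that overall flow, for orientation only. Every callable wired into the pipeline below (database lookup, camera capture, TTS, ASR, answer lookup) is a hypothetical stand-in supplied by the caller, and the dictionary key `items_to_confirm` is an invented example; none of these names are defined by the application itself.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class VerificationPipeline:
    fetch_policy: Callable[[str], dict]          # business data to be verified
    capture_biometric: Callable[[], bytes]       # first biometric information (live)
    query_biometric_db: Callable[[str], bytes]   # second biometric information (stored)
    biometrics_match: Callable[[bytes, bytes], bool]
    text_to_speech: Callable[[str], bytes]       # preset AI voice conversion model
    play_and_record_reply: Callable[[bytes], bytes]
    speech_to_text: Callable[[bytes], str]
    answer_for: Callable[[str], str]             # preset answer data for each item

    def run(self, user_id: str) -> Optional[dict]:
        policy = self.fetch_policy(user_id)
        if not self.biometrics_match(self.capture_biometric(),
                                     self.query_biometric_db(user_id)):
            return None                          # identity check failed: stop here
        # "items_to_confirm" is a hypothetical field name for the data content to confirm
        for item in policy.get("items_to_confirm", []):
            reply = self.speech_to_text(
                self.play_and_record_reply(self.text_to_speech(item)))
            if reply.strip() != self.answer_for(item).strip():
                return None                      # reply does not match the answer data
        return policy                            # all items confirmed: output the data
```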
  • FIG. 1 is a schematic diagram of a first embodiment of a service data verification method in an embodiment of the application
  • FIG. 2 is a schematic diagram of a second embodiment of a service data verification method in an embodiment of the present application
  • FIG. 3 is a schematic diagram of a third embodiment of a service data verification method in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a fourth embodiment of a business data verification method in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a fifth embodiment of the service data verification method in the embodiment of the present application.
  • FIG. 6 is a schematic diagram of an embodiment of a service data verification apparatus in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another embodiment of the apparatus for verifying service data in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an embodiment of a service data verification device in an embodiment of the present application.
  • The embodiments of the present application provide a business data verification method, apparatus, device, and storage medium.
  • By acquiring the business data to be verified and the biometric information of the current user, the corresponding biometric information is queried from a biometric database and compared, which ensures that the identity of the current user is consistent with the user identity in the business data. The preset AI voice conversion model is then called to convert the data content to be confirmed in the business data into broadcast voice, which is played to the user; the user's reply voice is obtained and matched against the answer data, and if the match is consistent, the business data is output.
  • This method ensures the authenticity of the entire business data verification process and improves the accuracy and reliability of the verification results.
  • the first embodiment of the service data verification method in the embodiment of the present application includes:
  • The execution subject of the present application may be a business data verification apparatus, or may be a terminal or a server, which is not specifically limited here.
  • The embodiments of the present application are described by taking the server as the execution subject as an example.
  • In this embodiment, the business data to be verified is the policy information of the applicant that needs to be verified, the current business data verification process is the user's underwriting process, and obtaining the business data to be verified means obtaining the policy information to be verified.
  • The applicant enters the relevant personal identity information in the insurance application system. After receiving this information, the system retrieves the policy information corresponding to the user's personal identity information from the policy information database on the system.
  • The user's personal identity information includes basic items such as the name and ID number, and the policy information includes data such as the policy number, the identity information of the insured, the type of insurance, and the insurance content.
  • The corresponding policy information can therefore be obtained by entering the personal identity information.
  • The system then prompts the user to turn on the mobile camera and collects the biometric information of the current operating user in real time as the first biometric information for identity verification.
  • The applicant identity database of the insurance application system is connected to the biometric information database of the public security network; according to the applicant identity information in the policy information, the corresponding biometric information is extracted from that database as the second biometric information.
  • The first biometric information of the user is information on physiological characteristics (fingerprint, iris, face, DNA, etc.) or behavioral characteristics (gait, keystroke habits, etc.) inherent to the person.
  • The identification here mainly compares physiological characteristics. Specifically, the collected facial image can be selected as the identification basis and compared against the corresponding face information retrieved from the public security network.
  • Each piece of characteristic information is compared, and the identification result is then fed back to the server as the basis for whether to perform the next step.
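As an illustration of the comparison step only: the embeddings below would come from whatever face-recognition model the system uses (the application does not name one), and the 0.8 threshold is an arbitrary example value.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identities_match(live_embedding: np.ndarray,
                     stored_embedding: np.ndarray,
                     threshold: float = 0.8) -> bool:
    """Return True when the live face (first biometric information) and the
    stored face (second biometric information) are judged to be the same
    person, i.e. the identification result is 'consistent'."""
    return cosine_similarity(live_embedding, stored_embedding) >= threshold
```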
  • The insurance application data in the policy information that needs to be confirmed by the insured person includes the identity information of the insured person, the type of insurance, the insurance claim information, and so on.
  • The insurance application system extracts the application data to be confirmed and calls the AI voice conversion model configured on the system, inputting the application data into the AI voice conversion model.
  • AI voice conversion technology is used to convert the text into speech and adjust the sound effect, generating the broadcast voice.
  • Converting data content into voice with AI voice conversion technology belongs to the prior art, so it is not repeated here.
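The application only states that a preset "AI voice conversion model" turns the data content into broadcast voice; the concrete engine is not specified. The sketch below uses the off-the-shelf pyttsx3 engine purely as an illustration, with example rate and volume settings.

```python
import pyttsx3


def make_broadcast_voice(data_content: str, out_path: str = "broadcast.wav") -> str:
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)    # adjust the sound effect: speaking rate
    engine.setProperty("volume", 0.9)  # adjust the sound effect: volume
    engine.save_to_file(data_content, out_path)
    engine.runAndWait()
    return out_path                    # path of the generated broadcast voice
```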
  • After receiving the prompt message that the user has started the policy information confirmation process, the insurance application system extracts the corresponding broadcast voice according to the policy information and broadcasts it to the applicant. After each item of policy information that needs to be confirmed is broadcast, the user must give a voice confirmation reply. During this process, the system collects the user's reply voice in real time and stores it in the storage unit.
  • The insurance application system pre-configures a reply setting, that is, the answer data, which is used to check and match the user's reply. The reply voice is preprocessed to extract a clean human voice, speech recognition is then performed on it to obtain the reply data, and the answer data is matched against the reply data to obtain a matching result. Only when the user's reply is consistent with the answer data can the policy information be generated.
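A hedged sketch of the "recognize the reply voice, then match it against the preset answer data" step. It uses the third-party SpeechRecognition package purely as an illustration (the application does not name a recognition engine), and the expected answer "确认" ("confirm") is an invented example value.

```python
import speech_recognition as sr


def reply_matches(reply_wav_path: str, answer_data: str = "确认") -> bool:
    recognizer = sr.Recognizer()
    with sr.AudioFile(reply_wav_path) as source:
        audio = recognizer.record(source)             # the stored reply voice
    try:
        reply_data = recognizer.recognize_google(audio, language="zh-CN")
    except sr.UnknownValueError:                      # speech could not be recognized
        return False
    return reply_data.strip() == answer_data.strip()  # compare with the answer data
```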
  • In this way, the matching result between the user's reply data and the preset answer data is obtained.
  • If the matching result is consistent, it indicates that the user has completed the verification of the policy information and confirmed all of its contents; the policy information is then output and the underwriting process ends.
  • If the matching result is inconsistent, it indicates that the user has doubts about the policy information, or that the policy information contains errors and needs to be changed; the user can then apply to interrupt the underwriting process and report the situation to the insurance company.
  • In this embodiment, by acquiring the business data to be verified and the biometric information of the current user, the corresponding biometric information is queried from the biometric database and compared to ensure that the identity of the current user is consistent with the user identity in the business data. The AI voice conversion technology in the preset AI voice conversion model is then used to convert the data content that needs to be confirmed into broadcast voice, which is played to the user; the current user's reply voice is obtained and matched against the answer data, and if the match is consistent, the business data is output.
  • the second embodiment of the service data verification method in the embodiment of the present application includes:
  • In this embodiment, the business data to be verified is the policy information of the applicant that needs to be verified, the current business data verification process is the user's underwriting process, and obtaining the business data to be verified means obtaining the policy information to be verified.
  • The applicant enters the relevant personal identity information in the insurance application system. After receiving this information, the system retrieves the policy information corresponding to the user's personal identity information from the policy information database on the system.
  • The user's personal identity information includes basic items such as the name and ID number, and the policy information includes data such as the policy number, the identity information of the insured, the type of insurance, and the insurance content.
  • The corresponding policy information can therefore be obtained by entering the personal identity information.
  • The system then prompts the user to turn on the mobile camera and collects the biometric information of the current operating user in real time as the first biometric information for identity verification.
  • The applicant identity database of the insurance application system is connected to the biometric information database of the public security network; according to the applicant identity information in the policy information, the corresponding biometric information is extracted from that database as the second biometric information.
  • Face image collection technology is used to collect the face information of the applicant on the spot, where the face information includes the front face, left face and right face, and the front face image, left face image and right face image are obtained in real time.
  • The front face image, left face image and right face image are formatted to an image standard, and are then processed according to the pixels of the front face photo, left face photo and right face photo, respectively.
  • Pixel normalization of the front face image, left face image and right face image reduces, or even eliminates, the difference between the face information in the face information database and the facial feature information of the collected face images, which improves the accuracy.
  • The feature values of the processed front face image, left face image and right face image are extracted as the first feature value of the front face, the first feature value of the left face and the first feature value of the right face.
  • The feature values here refer to the facial features of the human face, including the eyes, eyebrows, nose, mouth and ears, and may also include the face shape, hair or other facial features.
  • From the face information database connected to the public security system, the face information data corresponding to the applicant is extracted, including the front face photo, left face photo and right face photo, and the feature values contained in each photo are extracted as the second feature value of the front face, the second feature value of the left face and the second feature value of the right face. A front face comparison threshold M1, a first side face comparison threshold M2 and a second side face comparison threshold M3 are set.
  • According to a preset correction formula, the corrected front face similarity score R1' and the corrected left face similarity score R2' are calculated; according to a correlation influence formula, the correlation-corrected left face similarity score R2'' is calculated; according to the correction formula, the corrected right face similarity score R3' is calculated, and according to the correlation influence formula, the correlation-corrected right face similarity score R3'' is calculated. Since the front face image, left face image and right face image are obtained at the same time, when the front face comparison result deviates from the threshold because of a misaligned face angle or other reasons, the left and right faces may also be misaligned with the stored face images, which increases the deviation between the side face comparison results and their thresholds, that is, the error increases. It is therefore necessary to eliminate the error influence of the front face on the left face and of the left face on the right face.
  • The correlation influence formula can be used for this correction.
  • R1' and R2'' are normalized according to a calculation formula to obtain the first face similarity score R.
  • Because the face proportions in the collected face images and the face photos in the face information database are not completely aligned, that is, the facial features and face shape are not completely aligned, comparison errors may arise. Taking the position of the eyes as the benchmark and correcting the degree of misalignment between the face image and the face photo improves the accuracy of the comparison.
  • i is a positive number;
  • γi denotes the alignment correction coefficient of Ri;
  • γ1 denotes the alignment correction coefficient of R1, that is, the alignment correction coefficient of the front face;
  • γ2 denotes the alignment correction coefficient of R2, that is, the alignment correction coefficient of the left face;
  • γ3 denotes the alignment correction coefficient of R3, that is, the alignment correction coefficient of the right face;
  • α1 is the original size of the front face;
  • α2 is the original size of the left face;
  • α3 is the original size of the right face;
  • β1 is the corrected size of the front face;
  • β2 is the corrected size of the left face;
  • β3 is the corrected size of the right face.
  • A threshold deviation formula is applied, in which:
  • i is a positive number, 1 ≤ i ≤ 2;
  • Q1 is the deviation rate of the corrected front face similarity;
  • Q2 is the deviation rate of the corrected side face similarity.
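The application's exact correction, correlation-influence and normalization formulas are not reproduced in this text (they appear as equations in the original filing). The sketch below only mirrors the general structure described above, with per-view similarity scores R1/R2/R3, per-view alignment correction coefficients derived from an original size and a corrected size, a fused overall score R, and a threshold deviation rate; the arithmetic is a placeholder, not the patented formula.

```python
import numpy as np


def alignment_coefficient(original_size: float, corrected_size: float) -> float:
    # Placeholder: ratio of corrected to original face size for one view.
    return corrected_size / original_size


def fused_similarity(scores, coefficients, weights=(0.5, 0.25, 0.25)) -> float:
    """scores = (R1, R2, R3) for front/left/right; coefficients = (g1, g2, g3)."""
    corrected = [s * g for s, g in zip(scores, coefficients)]
    return float(np.average(corrected, weights=weights))


def deviation_rate(corrected_score: float, threshold: float) -> float:
    # Placeholder for the threshold deviation rates Q1/Q2 described above.
    return abs(corrected_score - threshold) / threshold
```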
  • The insurance application data in the policy information that needs to be confirmed by the insured person includes the identity information of the insured person, the type of insurance, the insurance claim information, and so on.
  • The insurance application system extracts the application data to be confirmed and calls the AI voice conversion model configured on the system, inputting the application data into the AI voice conversion model.
  • AI voice conversion technology is used to convert the text into speech and adjust the sound effect, generating the broadcast voice.
  • Converting data content into voice with AI voice conversion technology belongs to the prior art, so it is not repeated here.
  • After receiving the prompt message that the user has started the policy information confirmation process, the insurance application system extracts the corresponding broadcast voice according to the policy information and broadcasts it to the applicant. After each item of policy information that needs to be confirmed is broadcast, the user must give a voice confirmation reply. During this process, the system collects the user's reply voice in real time and stores it in the storage unit.
  • The insurance application system pre-configures a reply setting, that is, the answer data, which is used to check and match the user's reply. The reply voice is preprocessed to extract a clean human voice, speech recognition is then performed on it to obtain the reply data, and the answer data is matched against the reply data to obtain a matching result. Only when the user's reply is consistent with the answer data can the policy information be generated.
  • In this way, the matching result between the user's reply data and the preset answer data is obtained.
  • If the matching result is consistent, it indicates that the user has completed the verification of the policy information and confirmed all of its contents; the policy information is then output and the underwriting process ends.
  • If the matching result is inconsistent, it indicates that the user has doubts about the policy information, or that the policy information contains errors and needs to be changed; the user can then apply to interrupt the underwriting process and report the situation to the insurance company.
  • In this embodiment, the identity of the current user is verified through face recognition technology to ensure that it is consistent with the user identity in the business data to be verified; AI voice conversion technology is then used to broadcast the data content that the user needs to confirm during the verification process, the reply voice is obtained, and the confirmation of the data content to be confirmed is checked against the reply voice.
  • Because the identity of the current user is verified, the authenticity of the verification process is ensured and the reliability of the verification result is improved.
  • the third embodiment of the service data verification method in the embodiment of the present application includes:
  • In this embodiment, the business data to be verified is the policy information of the applicant that needs to be verified, the current business data verification process is the user's underwriting process, and obtaining the business data to be verified means obtaining the policy information to be verified.
  • The applicant enters the relevant personal identity information in the insurance application system. After receiving this information, the system retrieves the policy information corresponding to the user's personal information from the policy information database on the system.
  • The corresponding policy information can therefore be obtained by entering the personal identity information.
  • The system then prompts the user to turn on the mobile camera and collects the biometric information of the current operating user in real time as the first biometric information for verifying the identity of the insured.
  • The applicant identity database of the insurance application system is connected to the biometric information database of the public security network; according to the applicant identity information in the policy information, the corresponding biometric information is extracted from that database as the second biometric information. The first biometric information of the user is information on physiological characteristics (fingerprint, iris, face, DNA, etc.) or behavioral characteristics (gait, keystroke habits, etc.) inherent to the person.
  • The identification here mainly compares physiological characteristics. Specifically, the collected facial image can be selected as the identification basis and compared against the corresponding face information retrieved from the public security network.
  • Each piece of characteristic information is compared, and the identification result is then fed back to the insurance application system as the basis for whether to perform the next step.
  • The insurance application data in the policy information that needs to be confirmed by the insured person includes the identity information of the insured person, the type of insurance, the insurance claim information, and so on.
  • The insurance application system extracts the application data to be confirmed, calls the AI voice conversion model configured on the system, and inputs the application data into the AI voice conversion model.
  • AI voice conversion technology is used to convert the text into speech and adjust the sound effect, generating the broadcast voice. Converting data content into voice with AI voice conversion technology belongs to the prior art, so it is not repeated here.
  • After receiving the prompt message that the user has started the policy information confirmation process, the insurance application system extracts the corresponding broadcast voice according to the policy information and broadcasts it to the applicant. After each item of policy information that needs to be confirmed is broadcast, the user must give a voice confirmation reply. During this process, the system collects the user's reply voice in real time and stores it in the storage unit.
  • The reply voice obtained by the system may be mixed with environmental noise, so the reply voice needs to be preprocessed.
  • The preprocessing mainly consists of silence removal, noise processing and speech enhancement.
  • The voice signal of the reply voice is extracted, the speech and non-speech segments are distinguished within the signal, the starting point of the speech is accurately determined, and the effective speech segment is then detected from the continuous voice stream. This involves two aspects: detecting the starting point of the valid speech, that is, the front end point, and detecting the end point of the valid speech, that is, the back end point.
  • Noise suppression is then performed on the basis of the stable spectral characteristics of the background noise.
  • The amplitude of background noise is very stable at one or a few spectral components. Assuming that a short segment at the beginning is background noise, the signal is grouped and Fourier-transformed starting from this initial background noise, and the spectra of these groups are averaged to obtain the average spectrum of the noise.
  • The noise reduction step reversely compensates the noisy speech to obtain denoised speech, and spectral subtraction and its improved forms, from the family of enhancement algorithms based on short-time spectral estimation, are used to eliminate the influence of environmental noise on the speech.
  • In this way, effective clean speech is obtained.
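A hedged illustration of the preprocessing just described: a simple energy-based endpoint detector plus basic spectral subtraction. The frame size, energy threshold and the assumption that the first few frames are background noise are example choices, not values taken from the application.

```python
import numpy as np


def detect_endpoints(signal: np.ndarray, frame_len: int = 400,
                     energy_ratio: float = 3.0) -> tuple:
    """Return (front end point, back end point) of the valid speech, in samples."""
    frames = signal[: len(signal) - len(signal) % frame_len].reshape(-1, frame_len)
    energy = (frames ** 2).sum(axis=1)
    threshold = energy[:5].mean() * energy_ratio        # assume a short leading noise segment
    voiced = np.where(energy > threshold)[0]
    if voiced.size == 0:
        return 0, len(signal)
    return int(voiced[0] * frame_len), int((voiced[-1] + 1) * frame_len)


def spectral_subtraction(noisy: np.ndarray, noise_frames: int = 5,
                         frame_len: int = 400) -> np.ndarray:
    """Subtract the averaged noise magnitude spectrum, frame by frame."""
    frames = noisy[: len(noisy) - len(noisy) % frame_len].reshape(-1, frame_len)
    spectra = np.fft.rfft(frames, axis=1)
    noise_mag = np.abs(spectra[:noise_frames]).mean(axis=0)   # averaged noise spectrum
    cleaned_mag = np.maximum(np.abs(spectra) - noise_mag, 0.0)
    cleaned = cleaned_mag * np.exp(1j * np.angle(spectra))    # keep the noisy phase
    return np.fft.irfft(cleaned, n=frame_len, axis=1).reshape(-1)
```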
  • The speech signal and speech waveform are extracted from the clean speech, and acoustic features are extracted for each frame of the waveform, yielding a multi-dimensional vector, that is, the acoustic feature parameters.
  • Pre-filtering is performed first, followed by A/D conversion, and pre-emphasis is applied with a first-order finite impulse response high-pass filter.
  • Frame-by-frame processing is then performed: a Hamming window is applied to each frame of speech to reduce the Gibbs effect, a fast Fourier transform converts the time-domain signal into the power spectrum of the signal, and a set of triangular window filters linearly distributed on the Mel frequency scale (24 triangular window filters in total) filters the power spectrum. The logarithm of the filter-bank outputs is taken, the correlation between the dimensions of the signal is removed, and the signal is mapped to a low-dimensional space. Spectral weighting is applied to suppress the low-order and high-order parameters, the cepstral mean is subtracted, and differential parameters characterizing the dynamic properties of the speech are added to the speech features, finally giving the acoustic feature parameters.
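A compact illustration of this acoustic-feature step using librosa's standard MFCC pipeline (pre-emphasis, windowed FFT, Mel filterbank, log, DCT) plus delta features and cepstral mean subtraction. The 24 Mel filters and 13 cepstral coefficients are the kind of values the description suggests, not parameters fixed by the filing.

```python
import librosa
import numpy as np


def acoustic_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    y = librosa.effects.preemphasis(y)                        # first-order high-pass pre-emphasis
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_mels=24, window="hamming")  # Mel filterbank + log + DCT
    delta = librosa.feature.delta(mfcc)                       # differential (dynamic) parameters
    feats = np.vstack([mfcc, delta])
    return feats - feats.mean(axis=1, keepdims=True)          # cepstral mean subtraction
```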
  • The acoustic feature parameters are input into the language model. The language model first calculates the distance between the feature vector sequence of the speech and each pronunciation template, and then uses the tools in the language model to judge and correct the grammatical results and the semantics; in particular, homophones must be resolved through the context to determine the intended word. The reply data is finally obtained.
  • The correct answer data is preset, the reply data is obtained after speech recognition processing, and the applicant's reply is matched against the answer data to obtain the matching result. When the matching result is consistent, the insured can continue the insurance application; if the matching result is inconsistent, the system outputs a prompt message indicating that an error occurred while the user was checking the information.
  • In this way, the matching result between the user's reply data and the preset answer data is obtained.
  • If the matching result is consistent, it indicates that the user has completed the verification of the policy information and confirmed all of its contents.
  • If the matching result is inconsistent, it indicates that the user has doubts about the policy information, or that the policy information contains errors and needs to be changed; the user can then apply to interrupt the underwriting process and report the situation to the insurance company.
  • In this embodiment, the data information to be confirmed in the policy information is converted into voice information and broadcast, the current user's confirmation reply voice is obtained, and voice recognition technology is used to identify and check the current user's reply. This ensures that the current user understands the content of the data to be confirmed and confirms that the information is correct, thereby improving the accuracy of the verification result.
  • the fourth embodiment of the business data verification method in the embodiment of the present application includes:
  • In this embodiment, the current business data verification process is the user's underwriting process, and obtaining the business data to be verified means obtaining the policy information to be verified.
  • The applicant enters the relevant personal information in the insurance application system. After receiving this information, the system retrieves the policy information corresponding to the user's personal information from the insurance policy business database on the system. The user's personal information includes basic identity items such as the name and ID number, and the policy information includes data such as the policy number, the identity information of the insured, the type of insurance, and the insurance content.
  • The corresponding policy information can therefore be obtained by entering the personal identity information.
  • The system then prompts the user to turn on the mobile camera and collects the biometric information of the current operating user in real time as the first biometric information for verifying the identity of the insured.
  • The applicant identity database of the insurance application system is connected to the biometric information database of the public security network; according to the applicant identity information in the policy information, the corresponding biometric information is extracted from that database as the second biometric information. The first biometric information of the user is information on physiological characteristics (fingerprint, iris, face, DNA, etc.) or behavioral characteristics (gait, keystroke habits, etc.) inherent to the person.
  • The identification here mainly compares physiological characteristics. Specifically, the collected facial image can be selected as the identification basis and compared against the corresponding face information retrieved from the public security network.
  • Each piece of characteristic information is compared, and the identification result is then fed back to the insurance application system as the basis for whether to perform the next step.
  • The insurance application data in the policy information that needs to be confirmed by the insured person includes the identity information of the insured person, the type of insurance, the insurance claim information, and so on.
  • The insurance application system extracts the application data to be confirmed and calls the AI voice conversion model configured on the system, inputting the application data into the AI voice conversion model.
  • AI voice conversion technology is used to convert the text into speech and adjust the sound effect, generating the broadcast voice.
  • Converting data content into voice with AI voice conversion technology belongs to the prior art, so it is not repeated here.
  • After receiving the prompt message that the user has started the policy information confirmation process, the insurance application system extracts the corresponding broadcast voice according to the policy information and broadcasts it to the applicant. After each item in the policy information that needs to be confirmed is broadcast, the user must confirm and reply by voice. During this process, the system collects the user's reply voice in real time and stores it in the storage unit.
  • The answer data is first stored in an answer data storage window: the characters of each byte of the answer data are extracted as first characters, and the first characters are stored in the random access memory of the answer data storage window.
  • The random access memory of the answer data storage window consists of 16 individually addressable distributed random access memories, each of which has two independent read ports.
  • Starting from the first individually addressable distributed random access memory, the answer data storage window places the first characters sequentially into the 16 memories; if first characters remain after the last memory has been filled in a pass, placement continues again from the first memory, cycling through the 16 memories in sequence until all first characters of the predetermined bytes have been placed. Each individually addressable distributed random access memory stores up to 1 kilobyte of first characters.
  • Speech recognition is performed on the reply voice: after preprocessing, clean speech is obtained, and the acoustic features are then extracted.
  • Each character in the reply speech is extracted as a second character.
  • The hash value of each second character is calculated in the hash calculation unit, the character index corresponding to the second character is determined from the hash value, and the character index is stored in the random access memory of the hash calculation unit, where the character index is the location information, in the random access memory of the answer data window, corresponding to that second character.
  • According to the character index, read addresses for the answer data storage window are generated and stored; 32 bytes of first characters are read from the 16 individually addressable distributed random access memories according to each read address, and 26 bytes of valid first characters are selected from the 32 bytes according to the lower 4 bits of the read address. In the initiated match search command, the 4-byte second character stored in the lower bits of the shift register is matched, according to the character index, against a 4-byte first character within the 26-byte first characters. If the 4-byte second character matches, then in the second cycle at least two character indices corresponding to the current second character are obtained, and according to these indices the current second character in bits 16 to 23 of the shift register is matched against the corresponding 2-byte first character within the 26 bytes. If the current second character matches successfully, matching continues with the next current second character, and so on until the matching ends; if the current second character fails to match, the matching is ended.
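A simplified software analogue of the hash-indexed matching scheme sketched above. The original describes a hardware-style layout (16 individually addressable distributed RAMs, a shift register, 4-byte then 2-byte compares); here the same idea, indexing the answer data by short character groups and looking reply groups up by hash, is reduced to an ordinary Python dictionary, so it is an illustration of the principle rather than the scheme itself.

```python
from collections import defaultdict

GROUP = 4  # compare in 4-character groups, as in the description


def build_answer_index(answer_data: str) -> dict:
    index = defaultdict(list)
    for pos in range(len(answer_data) - GROUP + 1):
        index[answer_data[pos:pos + GROUP]].append(pos)   # character group -> positions
    return index


def reply_matches_answer(reply_data: str, answer_data: str) -> bool:
    """True when the reply reproduces the answer data group by group
    (any trailing fragment shorter than GROUP is ignored in this sketch)."""
    if len(reply_data) != len(answer_data):
        return False
    index = build_answer_index(answer_data)
    expected = 0
    for pos in range(0, len(reply_data) - GROUP + 1, GROUP):
        group = reply_data[pos:pos + GROUP]
        if expected not in index.get(group, []):   # group missing or out of place
            return False
        expected += GROUP
    return True
```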
  • In this way, the matching result between the user's reply data and the preset answer data is obtained.
  • If the matching result is consistent, it indicates that the user has completed the verification of the policy information and confirmed all of its contents.
  • If the matching result is inconsistent, it indicates that the user has doubts about the policy information, or that the policy information contains errors and needs to be changed; the user can then apply to interrupt the underwriting process and report the situation to the insurance company.
  • In this embodiment, speech recognition technology is used to recognize the obtained reply voice of the current user, and the result is checked against the preset answer data. This ensures that the current user's confirmation reply is true and accurate, which improves the efficiency and reliability of the review and also improves the authenticity and accuracy of the verification results.
  • the fifth embodiment of the service data verification method in the embodiment of the present application includes:
  • In this embodiment, the business data to be verified is the policy information of the applicant that needs to be verified, the current business data verification process is the user's underwriting process, and obtaining the business data to be verified means obtaining the policy information to be verified.
  • The applicant enters the relevant personal information in the insurance application system. After receiving this information, the system retrieves the policy information corresponding to the user's personal information from the insurance policy information database on the system. The user's personal information includes basic identity items such as the name and ID number, and the policy information includes data such as the policy number, the identity information of the insured, the type of insurance, and the insurance content.
  • The corresponding policy information can therefore be obtained by entering the personal identity information.
  • The system then prompts the user to turn on the mobile camera and collects the biometric information of the current operating user in real time as the first biometric information for verifying the identity of the insured.
  • The applicant identity database of the insurance application system is connected to the biometric information database of the public security network; according to the applicant identity information in the policy information, the corresponding biometric information is extracted from that database as the second biometric information. The first biometric information of the user is information on physiological characteristics (fingerprint, iris, face, DNA, etc.) or behavioral characteristics (gait, keystroke habits, etc.) inherent to the person.
  • The identification here mainly compares physiological characteristics. Specifically, the collected facial image can be selected as the identification basis and compared against the corresponding face information retrieved from the public security network.
  • Each piece of characteristic information is compared, and the identification result is then fed back to the insurance application system as the basis for whether to perform the next step.
  • The insurance application data in the policy information that needs to be confirmed by the insured person includes the identity information of the insured person, the type of insurance, the insurance claim information, and so on.
  • The insurance application system extracts the application data to be confirmed, calls the AI voice conversion model configured on the system, and inputs the application data into the AI voice conversion model.
  • AI voice conversion technology is used to convert the text into speech and adjust the sound effect, generating the broadcast voice. Converting data content into voice with AI voice conversion technology belongs to the prior art, so it is not repeated here.
  • After receiving the prompt message that the user has started the policy information confirmation process, the insurance application system extracts the corresponding broadcast voice according to the policy information and broadcasts it to the applicant. After each item of policy information that needs to be confirmed is broadcast, the user must give a voice confirmation reply. During this process, the system collects the user's reply voice in real time and stores it in the storage unit.
  • The insurance application system pre-configures a reply setting, that is, the answer data, which is used to check and match the user's reply. The reply voice is preprocessed to extract a clean human voice, speech recognition is then performed on it to obtain the reply data, and the answer data is matched against the reply data to obtain a matching result. Only when the user's reply is consistent with the answer data can the policy information be generated.
  • In this way, the matching result between the user's reply data and the preset answer data is obtained.
  • If the matching result is consistent, it indicates that the user has completed the verification of the policy information and confirmed all of its contents.
  • If the matching result is inconsistent, it indicates that the user has doubts about the policy information, or that the policy information contains errors and needs to be changed; the user can then apply to interrupt the underwriting process and report the situation to the insurance company.
  • Obtaining the video information of the business data verification process means that, after the policy information is confirmed, the video information of the user during the underwriting process can be obtained in real time based on remote communication technology.
  • The communication technology is mainly implemented through the camera function of the mobile terminal: when the user starts the underwriting process, the camera of the mobile terminal is already turned on and records the entire underwriting process.
  • In this embodiment, the business data verification security rules are the underwriting security rules, and the verification behavior is the underwriting behavior.
  • The real-time monitoring process mainly uses the camera of the mobile terminal to collect real-time video of the underwriting process and send it to the relevant personnel of the corresponding insurance company, who monitor the entire underwriting process in real time. The relevant personnel can use remote communication technology to hold a real-time remote dialogue with the user and provide guidance and prompts for problems the user encounters during underwriting.
  • The real-time monitoring of the underwriting process mainly uses the mobile camera of the insured for remote communication, and gives timely voice prompts for underwriting behavior that does not meet the underwriting security rules.
  • Underwriting behavior that does not meet the underwriting security rules mainly refers to the insured person leaving the camera range during the application process, or interference by unrelated personnel during the application process.
  • In such cases, voice communication technology is used to remind the user in real time.
  • The relevant personnel of the insurance company can remind the user in time, and can also apply to interrupt the underwriting process according to the situation, to ensure that the user's underwriting behavior is safe and effective throughout the process; relevant video records are made in time to facilitate later retrospection of the insurance process.
  • The camera function of the mobile terminal is used to record the entire underwriting process, generate an underwriting process video, and store the underwriting process video in the storage unit.
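A minimal sketch of recording the underwriting session with the device camera using OpenCV, as an illustration of this step. The codec, frame rate, frame cap and output path are example choices; how the file is later pushed to the storage unit or streamed to the insurer's reviewers is outside this snippet.

```python
import cv2


def record_underwriting_video(out_path: str = "underwriting.avi",
                              max_frames: int = 30 * 60 * 5) -> str:
    cap = cv2.VideoCapture(0)                      # the terminal's camera
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"XVID"),
                             30.0, (width, height))
    frames = 0
    while frames < max_frames:
        ok, frame = cap.read()
        if not ok:
            break                                  # camera closed or stream ended
        writer.write(frame)
        frames += 1
    cap.release()
    writer.release()
    return out_path                                # stored for later retrospection
```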
  • When needed, the underwriting process video can be extracted from the storage unit.
  • In this embodiment, the identity of the current user is identified and verified to ensure that it is consistent with the user identity in the business data to be verified; the AI voice conversion technology in the AI voice conversion model converts the parts of the business data that require attention into broadcast voice and broadcasts them; and remote communication technology is then used to monitor the entire business data verification process. This largely ensures the safety and compliance of the verification process, thereby increasing the accuracy and reliability of the verification results.
  • an embodiment of the service data verification device in the embodiment of the present application includes:
  • a query module 602 configured to obtain the first biometric information of the current user, and query the corresponding second biometric information from a preset biometric database according to the user information in the business data;
  • An identification module 603, configured to compare and identify the first biometric information and the second biometric information to obtain an identification result
  • Extraction module 604 configured to extract the data content to be confirmed in the business data if the identification result is consistent
  • the voice conversion module 605 is used to call the preset AI voice conversion model, and convert the data content into broadcast voice;
  • the broadcast module 606 is used to broadcast the broadcast voice, and obtain the reply voice of the user in real time;
  • a matching module 607 configured to match the reply voice based on preset answer data to obtain a matching result
  • the information output module 608 is configured to output the service data if the matching result is that the reply voice is consistent with the answer data.
  • The business data verification apparatus is used to perform the steps of the above business data verification method: the business data to be verified and the user's biometric information are acquired, the corresponding biometric information is queried from the biometric database and compared to ensure that the user identity is consistent with the user identity in the business data to be verified, the AI voice conversion model is then used to convert the data content to be confirmed in the business data into broadcast voice, which is played to the user, the user's reply voice is obtained and matched against the answer data, and if the match is consistent, the business data is output.
  • By running the steps of the business data verification method of the above embodiment, the apparatus improves the authenticity and reliability of the verification process and also improves the accuracy of the verification result.
  • another embodiment of the apparatus for verifying service data in the embodiment of the present application includes:
  • a query module 602 configured to obtain the first biometric information of the current user, and query the corresponding second biometric information from a preset biometric database according to the user identity information in the business data;
  • An identification module 603, configured to compare and identify the first biometric information and the second biometric information to obtain an identification result
  • Extraction module 604 configured to extract the data content to be confirmed in the business data if the identification result is consistent
  • the voice conversion module 605 is used to call the preset AI voice conversion model, and convert the data content into broadcast voice;
  • the broadcast module 606 is used to broadcast the broadcast voice, and obtain the reply voice of the user in real time;
  • a matching module 607 configured to match the reply voice based on preset answer data to obtain a matching result
  • the information output module 608 is configured to output the service data if the matching result is that the reply voice is consistent with the answer data.
  • the identification module 603 includes:
  • a first feature extraction unit 6031 configured to perform feature extraction of face information based on the first biometric information to obtain the first feature value of the face
  • the second feature extraction unit 6032 is configured to perform feature extraction of face information based on the second biometric information to obtain the second feature value of the face;
  • The feature value comparison unit 6033 is configured to compare the first feature value of the face with the second feature value of the face based on a preset face comparison threshold, to obtain a recognition result.
  • the feature value comparison unit 6033 is specifically used for:
  • the face similarity score is corrected and calculated to obtain the corrected face similarity score
  • the matching module 607 includes:
  • a voice recognition unit 6071 configured to perform voice recognition on the reply voice based on the voice recognition technology to obtain reply data
  • the data matching unit 6072 is configured to compare and match the reply data with the preset answer data to obtain a matching result.
  • the speech recognition unit 6071 is specifically used for:
  • language processing is performed on the acoustic feature parameters to obtain reply data.
  • The data matching unit 6072 is specifically configured to:
  • extract the characters of each byte of the answer data to obtain first characters;
  • extract the characters of each byte of the reply data to obtain second characters;
  • perform a hash calculation on the second characters to obtain a character index corresponding to each second character;
  • compare and match the first characters with the second characters based on the character indexes to obtain a matching result.
  • Optionally, the monitoring module 609 is specifically configured to:
  • obtain video information of the business data verification process in real time based on remote communication technology;
  • monitor the verification behavior in the business data verification process based on preset business data verification security rules to obtain monitoring information;
  • give a voice prompt for any verification behavior in the monitoring information that does not conform to the business data verification security rules;
  • record the business data verification process to generate a business data verification process video, and store the video in a storage unit.
  • In the embodiments of the present application, a monitoring module is added to the above business data verification device to obtain video information of the business data verification process in real time and to monitor the verification behavior; during monitoring, any verification behavior that does not conform to the business data verification security rules is prompted in time.
  • The device thereby improves the authenticity of the business data verification process, realizes monitoring of the verification behavior, and improves the accuracy and reliability of the verification result.
  • FIG. 8 is a schematic structural diagram of a business data verification device provided by an embodiment of the present application.
  • The business data verification device 800 may vary greatly in configuration or performance, and may include one or more central processing units (CPUs) 810 (for example, one or more processors), a memory 820, and one or more storage media 830 (for example, one or more mass storage devices) storing application programs 833 or data 832.
  • The memory 820 and the storage medium 830 may be short-term storage or persistent storage.
  • The program stored in the storage medium 830 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the business data verification device 800.
  • The processor 810 may be configured to communicate with the storage medium 830 to execute a series of instruction operations in the storage medium 830 on the business data verification device 800.
  • The business data verification device 800 may also include one or more power supplies 840, one or more wired or wireless network interfaces 880, one or more input/output interfaces 860, and/or one or more operating systems 831, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc.
  • The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms.
  • A blockchain is essentially a decentralized database: a chain of data blocks generated in association with one another using cryptographic methods, in which each data block contains a batch of network transaction information used to verify the validity of its information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • The present application also provides a computer-readable storage medium.
  • The computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • The computer-readable storage medium may also be a volatile computer-readable storage medium.
  • The computer-readable storage medium stores instructions that, when executed on a computer, cause the computer to execute the steps of the business data verification method.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solutions of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
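
The module structure summarized above maps onto a small software pipeline. The sketch below is only an illustration: every helper callable (face capture, biometric database query, face comparison, text-to-speech, audio playback, reply recording, answer lookup) and the `user_id` field are hypothetical stand-ins rather than interfaces defined by this application.

```python
# Minimal sketch of the module pipeline (modules 601-608) described above.
# All injected callables are hypothetical stand-ins, not APIs of this application.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class BusinessDataVerifier:
    capture_face: Callable[[], bytes]              # acquire first biometric info
    query_biometric_db: Callable[[str], bytes]     # query module 602: second biometric info
    faces_match: Callable[[bytes, bytes], bool]    # recognition module 603
    extract_content: Callable[[dict], str]         # extraction module 604
    text_to_speech: Callable[[str], bytes]         # voice conversion module 605 (AI voice model)
    play_audio: Callable[[bytes], None]            # broadcast module 606
    record_reply: Callable[[], bytes]              # broadcast module 606: reply capture
    reply_matches: Callable[[bytes, str], bool]    # matching module 607
    answer_for: Callable[[dict], str]              # preset answer data lookup

    def verify(self, business_data: dict) -> Optional[dict]:
        """Returns the business data if verification passes, otherwise None."""
        first_bio = self.capture_face()
        second_bio = self.query_biometric_db(business_data["user_id"])
        if not self.faces_match(first_bio, second_bio):        # identity inconsistent
            return None
        content = self.extract_content(business_data)          # data content to be confirmed
        self.play_audio(self.text_to_speech(content))          # broadcast voice
        reply = self.record_reply()                            # user's reply voice
        if self.reply_matches(reply, self.answer_for(business_data)):
            return business_data                               # information output module 608
        return None
```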

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Computer Security & Cryptography (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Finance (AREA)
  • Computational Linguistics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A business data verification method, apparatus, device, and storage medium, relating to the field of big data. The method comprises: acquiring business data to be verified (101); acquiring first biometric information of the current user, and querying corresponding second biometric information from a preset biometric database according to user identity information in the business data (102); performing comparison and recognition, and if the recognition result is that the two are consistent, extracting the data content to be confirmed in the business data (104); calling a preset AI voice conversion model to convert the data content into broadcast voice for broadcasting, and acquiring the user's reply voice; and matching the reply voice on the basis of preset answer data, and if the matching result is that the reply voice is consistent with the answer data, outputting the business data (108). The method ensures the security and compliance of the business data verification process, and improves the accuracy and reliability of the verification result. In addition, the method relates to the field of blockchain technology, and the business data can be stored in a blockchain.

Description

业务数据核验方法、装置、设备及存储介质
本申请要求于2020年12月15日提交中国专利局、申请号为202011472837.8、发明名称为“业务数据核验方法、装置、设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在申请中。
技术领域
本申请涉及大数据领域和区块链技术领域,尤其涉及一种业务数据核验方法、装置、设备及存储介质。
背景技术
目前,用户选择购买某款保险产品时,首先在购买页面选择相关套餐,然后填入相关信息,填完相关信息之后提交至保险审核服务器,最后是保险审核服务器根据用户提交的相关信息生成保单信息,然后投保人对其进行核对,在保单信息核对通过后,才能生成相应的投保信息。
然而,发明人意识到,这种对投保人信息的审核和确认方法费时费力,且用户核对保单信息时难以保证是实际投保人本人在进行核对,相关的核验机制也难以对核验结果的真实性和准确性进行评估,从而导致核验结果的准确度低,且核验结果的可靠性不强。
发明内容
本申请的主要目的在于解决现有的业务数据的核对方式难以对核验结果的准确性进行评估,从而导致数据核验准确度低、可靠性不强的技术问题。
本申请第一方面提供了一种业务数据核验方法,包括:获取待核验的业务数据;获取当前用户的第一生物特征信息,并根据所述业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;调用预设的AI语音转换模型,将所述数据内容转换为播报语音;对所述播报语音进行播报,并实时获取用户的回复语音;基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;若所述匹配结果为所述回复语音与所述答案数据一致,则输出所述业务数据。
本申请第二方面提供了一种业务数据核验装置,包括:获取模块,用于获取待核验的业务数据;查询模块,用于获取当前用户的第一生物特征信息,并根据所述业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;识别模块,用于将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;提取模块,用于若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;语音转换模块,用于调用预设的AI语音转换模型,将所述数据内容转换为播报语音;播报模块,用于对所述播报语音进行播报,并实时获取用户的回复语音;匹配模块,用于基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;信息输出模块,用于若所述匹配结果为所述回复语音与所述答案数据一致,则输出所述业务数据。
本申请第三方面提供了一种业务数据核验设备,包括:存储器和至少一个处理器,所述存储器中存储有指令;所述至少一个处理器调用所述存储器中的所述指令,以使得所述业务数据核验设备执行如下所述的业务数据核验方法的步骤:获取待核验的业务数据;获取当前用户的第一生物特征信息,并根据所述业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;调用预设的AI语音转换模型,将所述数据内容转换为播报语音;对所述播报语音进行播报,并实时获取用户的回复语音;基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;若所述匹配结果为所述回复语音与所述答案数据一致,则输出所述业务数据。
本申请的第四方面提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当 其在计算机上运行时,使得计算机执行如下所述的业务数据核验方法的步骤:获取待核验的业务数据;获取当前用户的第一生物特征信息,并根据所述业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;调用预设的AI语音转换模型,将所述数据内容转换为播报语音;对所述播报语音进行播报,并实时获取用户的回复语音;基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;若所述匹配结果为所述回复语音与所述答案数据一致,则输出所述业务数据。
本申请提供的技术方案中,通过获取待核验的业务数据和当前用户的第一生物特征信息,并根据业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;将第一生物特征信息与第二生物特征信息进行比对识别,得到识别结果;若所述识别结果为一致,则提取保单信息中,待确认的数据内容;调用预设的AI语音转换模型,将所述数据内容转换为播报语音;对播报语音进行播报,并实时获取当前用户的回复语音;基于预设的答案数据,对回复语音进行匹配,得到匹配结果;若匹配结果为回复语音与答案数据一致,则输出业务数据。本申请实施例中,通过该技术方案,保证了业务数据核验之后得到的核验结果的真实性和准确性,提高了核验结果的准确度和可靠性。
附图说明
图1为本申请实施例中业务数据核验方法的第一个实施例示意图;
图2为本申请实施例中业务数据核验方法的第二个实施例示意图;
图3为本申请实施例中业务数据核验方法的第三个实施例示意图;
图4为本申请实施例中业务数据核验方法的第四个实施例示意图;
图5为本申请实施例中业务数据核验方法的第五个实施例示意图;
图6为本申请实施例中业务数据核验装置的一个实施例示意图;
图7为本申请实施例中业务数据核验装置的另一个实施例示意图;
图8为本申请实施例中业务数据核验设备的一个实施例示意图。
具体实施方式
本申请实施例提供了一种业务数据核验方法、装置、设备及存储介质,通过获取带核验的业务数据和用户的生物特征信息,从生物特征数据库中查询到与之相对应的生物特征信息,并进行比对,确保当前用户身份与业务数据中的用户身份一致,再调用预设的AI语音转换模型,将业务数据中待确认的数据内容转换为播报语音,对用户进行播放,并获取用户的回复语音,并基于答案数据,对回复语音进行匹配,如果匹配为一致,则输出业务数据。通过该方法确保了整个业务数据核验过程的真实性,提高了审核之后得到的核验结果的准确度和可靠性。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”或“具有”及其任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
为便于理解,下面对本申请实施例的具体流程进行描述,请参阅图1,本申请实施例中业务数据核验方法的第一个实施例包括:
101,获取待核验的业务数据;
可以理解的是,本申请的执行主体可以为业务数据核验装置,还可以是终端或者服务器,具体此处不做限定。本申请实施例以服务器为执行主体为例进行说明。
如果待核验的业务数据是需要核验的投保人投保的保单信息,则当前的业务数据核验过程为用户的核保过程,获取待核验的业务数据就是获取待核验的保单信息。投保用户在投保系统上输入相关的个人身份信息,投保系统在收到相关个人身份信息之后,从系统上的保单信息数据库中调取与用户个 人身份信息相对应的保单信息,其中用户的个人身份信息包括了姓名、身份证号等基本的个人身份信息,保单信息包括了保单编号、投保人的身份信息、保险种类、保险内容等数据信息。
102,获取当前用户的第一生物特征信息,并根据业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
用户在进入投保系统之后,输入个人的身份信息可以获取对应的保单信息,在利用保单信息启动核保流程之前,系统提示用户开启移动端摄像头,实时采集当前操作用户的生物特征信息,作为第一生物特征信息,来进行身份核验。
投保系统中的投保人身份信息数据库与公安网的生物特征信息数据库进行对接,根据保单信息中的投保人的身份信息,从生物特征信息数据库中提取出相对应的生物特征信息,作为第二生物特征信息。其中,用户的第一生物特征信息是包括人体所固有的生理特征(指纹、虹膜、面相、DNA等)或行为特征(步态、击键习惯等)的信息。
103,将第一生物特征信息与第二生物特征信息进行比对识别,得到识别结果;
获取到第一生物特征信息和第二生物特征信息之后,基于生物特征识别技术,利用人体所固有的生理特征(指纹、虹膜、面相、DNA等)或行为特征(步态、击键习惯等)来进行个人身份识别验证,这里的身份识别验证操主要是利用生理特征进行信息识别和比对,具体的,可以选择采集到的面部图像作为识别依据,调取公安网中对应的人脸信息进行比对识别。
将第一生物特征信息与第二生物特征信息进行识别之后,对比每个特征信息,然后得到一个识别结果反馈给服务器,作为是否执行下一步骤的依据。
104,若识别结果为一致,则提取业务数据中,待确认的数据内容;
若第一生物特征信息与第二生物特征信息识别对比一致,证明现在核保过程的操作用户与实际投保人一致,即当前核保过程是由实际投保人进行,则从投保系统上提取对应的保单信息中需要投保人进行确认的投保数据,其中,保单信息中需要投保人确认的投保数据包括:投保人身份信息,保险种类,保险的理赔信息等。
105,调用预设的AI语音转换模型,将数据内容转换为播报语音;
当用户开始准备进行保单信息的确认过程时,投保系统提取出待确认的投保数据,并调用系统上设置有的AI语音转换模型,将这些投保数据输入至AI语音转换模型中,利用该模型中的AI语音转换技术,进行语音转换,并调整声音效果,生成播报语音。其中,利用AI语音转换技术将数据内容转换成语音的技术属于现有技术,故在此不再赘述。
106,对播报语音进行播报,并实时获取用户的回复语音;
投保系统在接收到用户启动保单信息确认流程的提示信息之后,根据保单信息提取出相应的播报语音,并对投保人进行播报。在对保单信息中的每一项需要确认的内容进行播报后,用户都要进行语音的确认回复,在此过程中,系统会实时采集用户的回复语音并存储在存储单元中。
107,基于预设的答案数据,对回复语音进行匹配,得到匹配结果;
对于保单信息中待确认的投保数据,投保系统会预先进行一个回复设置,即答案数据,用于对用户的回复进行核对和匹配,在获取到用户的回复语音之后,先对用户的回复语音进行预处理,提取出干净的人声,然后再利用语音识别技术,对回复语音进行语音识别处理,经过语音识别处理之后得到回复数据,再利用答案数据,对其进行匹配,得到一个匹配结果,只有当用户的回复与答案数据一致时,才能生成投保信息。
108,若匹配结果为回复语音与答案数据一致,则输出业务数据。
获取到用户回复数据与预设的答案数据的匹配结果,当匹配结果显示为一致时,表明用户已经对保单信息核对完成,并且对保单信息的所有内容都进行了确认,则系统会输出经过用户对保单信息确认之后输出保单信息,此时整个核保流程结束。当匹配结果不一致时,表明用户对于保单信息存有疑虑,或者保单信息出现错误内容,需要进行更改,则用户可以申请中断核保流程,并将该情况反馈给保险公司。
本申请实施例中,通过获取带核验的业务数据和当前用户的生物特征信息,从生物特征数据库中查询到与之相对应的生物特征信息,并进行比对,确保当前用户身份与业务数据中的用户身份一致, 再利用预设的AI语音转换模型中的AI语音转换技术,将业务数据中所需确认的数据内容转换成播报语音,对用户进行播放,并获取当前用户的回复语音,基于答案数据对回复语音进行匹配,如果匹配一致,则输出业务数据。通过本申请实施例的技术方案,保证了审核之后得到的核验结果的真实性和准确性,提高了核验结果的准确度和可靠性。
请参阅图2,本申请实施例中业务数据核验方法的第二个实施例包括:
201,获取待核验的业务数据;
如果待核验的业务数据是需要核验的投保人投保的保单信息,则当前的业务数据核验过程为用户的核保过程,获取待核验的业务数据就是获取待核验的保单信息。投保用户在投保系统上输入相关的个人身份信息,投保系统在收到相关个人身份信息之后,从系统上的保单信息数据库中调取与用户个人身份信息相对应的保单信息,其中用户的个人身份信息包括了姓名、身份证号等基本的个人身份信息,保单信息包括了保单编号、投保人的身份信息、保险种类、保险内容等数据信息。
202,获取当前用户的第一生物特征信息,并根据业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
用户在进入投保系统之后,输入个人的身份信息可以获取对应的保单信息,在利用保单信息启动核保流程之前,系统提示用户开启移动端摄像头,实时采集当前操作用户的生物特征信息,作为第一生物特征信息,来进行身份核验。投保系统中的投保人身份信息数据库与公安网的生物特征信息数据库进行对接,根据保单信息中的投保人的身份信息,从生物特征信息数据库中提取出相对应的生物特征信息,作为第二生物特征信息。
203,基于第一生物特征信息,进行人脸信息特征提取,得到人脸第一特征值;
利用人脸图像采集技术,现场采集投保人的人脸信息,其中人脸信息包括正脸、左脸和右脸的信息,实时获得正脸图像、左脸图像和右脸图像。将正脸图像、左脸图像和右脸图像进行图像标准格式化处理,然后将正脸图像、左脸图像和右脸图像分别以正脸照片、左脸照片和右脸照片的像素为标准进行正脸图像、左脸图像、右脸图像像素归一化处理,使得人脸信息数据库中的人脸信息与采集的人脸图像的人脸特征信息差减小、甚至消失,提高准确度。
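
As an illustration of the pixel-normalization step above, the following sketch resizes each captured face image to the pixel dimensions of the corresponding database photo, so that resolution differences do not inflate the feature gap; OpenCV is used here only as an assumed image-processing backend.

```python
import cv2  # assumed image-processing backend; any resize routine would do


def normalize_to_reference(live_img, ref_photo):
    """Resize a captured face image (front, left or right view) to the pixel
    dimensions of the corresponding photo in the face information database."""
    ref_h, ref_w = ref_photo.shape[:2]
    return cv2.resize(live_img, (ref_w, ref_h), interpolation=cv2.INTER_LINEAR)


# front_img = normalize_to_reference(front_live, front_db_photo)
# left_img  = normalize_to_reference(left_live,  left_db_photo)
# right_img = normalize_to_reference(right_live, right_db_photo)
```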
提取经过处理后的正脸图像、左脸图像、右脸图像的特征值,作为正脸第一特征值、左脸第一特征值和右脸第一特征值。其中,这里特征值指的是人脸的五官,包括眼睛、眉毛、鼻子、嘴巴和耳朵,也可以包括脸型、头发,或其他脸部特征。
204,基于第二生物特征信息,进行人脸信息特征提取,得到人脸第二特征值;
从对接公安系统的人脸信息数据库中,提取出与投保人对应的人脸信息数据,包括正脸照片、左脸照片和右脸照片,并根据各照片包含的人脸信息数据,提取出人脸信息的特征值,包括正脸第二特征值,左脸第二特征值,右脸第二特征值,并设置正脸对比阈值M2、第一侧脸对比阈值M2和第二侧脸对比阈值M3。
205,将人脸第一特征值与人脸第二特征值进行匹配对比,得到人脸相似度分值;
分别提取正脸照片、左脸照片和右脸照片中眼睛到正脸照片、左脸照片和右脸照片下边沿的最短距离,分别记为正脸原始尺寸α1、左脸原始尺寸α2和右脸原始尺寸α3;分别提取正脸图像、左脸图像和右脸图像中眼睛到正脸图像、左脸图像和右脸图像下边沿的最短距离,分别记为正脸校正尺寸β1、左脸校正尺寸β2和右脸校正尺寸β3。
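
The "original sizes" α1–α3 and "corrected sizes" β1–β3 above are plain pixel distances from the eyes to the bottom edge of each image; a minimal helper, assuming the eye landmark coordinate is supplied by an external face-landmark detector:

```python
def eye_to_bottom_distance(eye_y: float, image_height: int) -> float:
    """Shortest distance from the eye landmark to the bottom edge of a face
    image. Applied to the database photos it yields alpha1..alpha3; applied to
    the captured images it yields beta1..beta3."""
    return float(image_height) - float(eye_y)
```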
计算正脸对齐校正系数、左脸对齐校正系数和右脸对齐校正系数,将正脸第一特征值和正脸第二特征值匹配比对,记下对应正脸相似度分值R1;将左脸第一特征值和左脸第二特征值匹配比对,记下对应左脸相似度分值R2;将右脸第一特征值和右脸第二特征值匹配比对记下对应右脸相似度分值R3。
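
The embodiment does not fix a particular measure for matching the first and second face feature values; the sketch below scores R1, R2 and R3 with cosine similarity of feature vectors, which is an assumption made only for illustration.

```python
import numpy as np


def similarity_score(live_features: np.ndarray, db_features: np.ndarray) -> float:
    """Cosine similarity between a captured-image feature vector and the
    corresponding database-photo feature vector, rescaled to [0, 1]."""
    cos = float(np.dot(live_features, db_features) /
                (np.linalg.norm(live_features) * np.linalg.norm(db_features) + 1e-12))
    return 0.5 * (cos + 1.0)


# R1 = similarity_score(front_live_feat, front_db_feat)   # front-face score
# R2 = similarity_score(left_live_feat,  left_db_feat)    # left-face score
# R3 = similarity_score(right_live_feat, right_db_feat)   # right-face score
```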
206,基于预设的校正公式,对人脸相似度分值进行校正计算,得到校正人脸相似度分值;
根据预设的校正公式,计算校正正脸相似度分值R1’和校正左脸相似度分值R2’;根据关联影响公式,计算关联校正左脸相似度分值R2”;根据校正公式,计算校正右脸相似度分值R3’,根据关联影响公式,计算关联校正右脸相似度分值R3”;由于正脸图像、左脸图像和右脸图像是同时获得的,当正脸图像比对结果与阈值有偏差,由于人脸角度没对齐或其他原因,同时可能会存在左脸和 右脸没有对齐人脸图像,导致侧脸图像比对结果与阈值偏差增大,即导致误差增大,所以需要消除正脸对左脸的误差影响和左脸对右脸的误差影响,在此可利用关联影响公式进行修正。
综合对正脸和左脸的比对结果,对R1’和R2”进行归一化处理,得到第一人脸相似度分值R,其中R的计算公式为:
Figure PCTCN2021090188-appb-000001
对R1’、R2”和R3”进行归一化处理,得到第二人脸相似度分值R’,R’的计算公式为:
Figure PCTCN2021090188-appb-000002
其中,校正公式为:Ri′=δi·Ri
在该校正公式中,i为正数,1≤i≤3,δi表示Ri对齐校正系数;δ1表示R1的对齐校正系数,即正脸对齐校正系数;δ2表示R2的对齐校正系数,即左脸对齐校正系数;δ3表示R3的对齐校正系数,即右脸对齐校正系数。
在对现场采集的人脸图像进行像素归一化处理的时候,人脸图像和人脸信息数据库的人脸照片中的人脸比不是完全对齐的,即人脸的五官、脸型等并未完全对齐,可能导致对比的误差,以眼睛的位置为基准,对人脸图像和人脸照片中的人脸错开的程度进行校正,能够提高对比的精度。
另外,计算对齐校正系数的公式为:
Figure PCTCN2021090188-appb-000003
其中,i为正数,1≤i≤3,δi表示Ri对齐校正系数;δ1表示R1的对齐校正系数,即正脸对齐校正系数;δ2表示R2的对齐校正系数,即左脸对齐校正系数;δ3表示R3的对齐校正系数,即右脸对齐校正系数,α1为正脸原始尺寸,α2为左脸原始尺寸,α3为右脸原始尺寸,β1为正脸校正尺寸、β2为左脸校正尺寸,β3为右脸校正尺寸。
另外,关联影响公式为:Ri″=(1+Q(i-1)²)·Ri′
其中,i为正数,2≤i≤3,Q1为校正正脸相似度偏差率;Q2为校正侧脸相似度偏差率,R2’为校正左脸相似度分值,R3’为校正右脸相似度分值,R2”为关联校正左脸相似度分值,R3”为关联校正右脸相似度分值。
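
The correction and related-influence steps above reduce to two one-line functions. The alignment correction coefficients δ1–δ3 (derived from the α and β sizes) and the normalization that yields R and R' are published as formula images and are therefore taken here as externally supplied values; the related-influence formula is applied as Ri″ = (1 + Q(i−1)²)·Ri′, matching the definitions of R2″ and R3″ given above.

```python
def corrected_score(R_i: float, delta_i: float) -> float:
    """Correction formula Ri' = delta_i * Ri, where delta_i is the alignment
    correction coefficient for the front/left/right view."""
    return delta_i * R_i


def related_corrected_score(R_i_prime: float, Q_prev: float) -> float:
    """Related-influence formula Ri'' = (1 + Q(i-1)^2) * Ri', which propagates
    the front-face deviation rate into the left-face score (i = 2) and the
    side-face deviation rate into the right-face score (i = 3)."""
    return (1.0 + Q_prev ** 2) * R_i_prime
```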
207,将校正人脸相似度分值与人脸对比阈值进行比对识别,得到识别结果;
将R与第一侧脸比对阈值M2进行比对,若R大于第一侧脸比对阈值M2,利用阈值偏差公式计算校正侧脸相似度偏差率Q2,若R小于或等于第一侧脸比对阈值M2,则人脸一致性检验不通过。
将R1’与正脸比对阈值进行比对,若R1’大于正脸比对阈值M1,利用阈值偏差公式计算校正正脸相似度偏差率Q1,若R1’小于或等于正脸比对阈值M1,则人脸一致性检验不通过;
将R’与第二侧脸比对阈值进行比对,若R’大于第二侧脸比对阈值,则人脸一致性比对通过;若R’小于或等于第一侧脸比对阈值,则人脸一致性检验不通过。
综合对正脸、左脸和右脸的比对结果,对R1’、R2”和R3”进行归一化处理,避免了投保人被冒用个人照片进行身份验证从而启动投保流程的情况。设置正脸比对阈值M1、第一侧脸比对阈值M2和第二侧脸比对阈值M3依次对正脸、左脸和右脸人证是否一致进行比对,一旦校正后的相似度分值小于或等于对应的阈值,则停止后续的运算步骤,直接判断为人脸一致性检验不通过。
其中,阈值偏差公式为:
Figure PCTCN2021090188-appb-000004
其中,i为正数,1≤i≤2,Q1为校正正脸相似度偏差率;Q2为校正侧脸相似度偏差率。
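
A sketch of the cascaded threshold comparison described in steps 206–207 above. The deviation rates Q1/Q2 and the normalized scores R and R' are computed with the formulas published as images, so they are passed in here as precomputed values; any score at or below its threshold stops the remaining steps and fails the face consistency check.

```python
def face_consistency_check(R1_corr: float, R_combined: float, R_final: float,
                           M1: float, M2: float, M3: float) -> bool:
    """R1_corr:    corrected front-face similarity score R1'
       R_combined: first combined similarity score R (from R1' and R2'')
       R_final:    second combined similarity score R' (from R1', R2'', R3'')
       M1, M2, M3: front-face, first side-face and second side-face thresholds."""
    if R1_corr <= M1:          # front-face comparison fails
        return False
    if R_combined <= M2:       # first side-face comparison fails
        return False
    return R_final > M3        # second side-face comparison decides the result
```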
208,若识别结果为一致,则提取业务数据中,待确认的数据内容;
若第一生物特征信息与第二生物特征信息识别对比一致,证明现在核保过程的操作用户与实际投保人一致,即当前核保过程是由实际投保人进行,则从投保系统上提取对应的保单信息中需要投保人进行确认的投保数据,其中,保单信息中需要投保人确认的投保数据包括:投保人身份信息,保险种 类,保险的理赔信息等。
209,调用预设的AI语音转换模型,将数据内容转换为播报语音;
当用户开始准备进行保单信息的确认过程时,投保系统提取出待确认的投保数据,并调用系统上设置有的AI语音转换模型,将这些投保数据输入至AI语音转换模型中,利用该模型中的AI语音转换技术,进行语音转换,并调整声音效果,生成播报语音。其中,利用AI语音转换技术将数据内容转换成语音的技术属于现有技术,故在此不再赘述。
210,对播报语音进行播报,并实时获取用户的回复语音;
投保系统在接收到用户启动保单信息确认流程的提示信息之后,根据保单信息提取出相应的播报语音,并对投保人进行播报。在对保单信息中的每一项需要确认的内容进行播报后,用户都要进行语音的确认回复,在此过程中,系统会实时采集用户的回复语音并存储在存储单元中。
211,基于预设的答案数据,对回复语音进行匹配,得到匹配结果;
对于保单信息中待确认的投保数据,投保系统会预先进行一个回复设置,即答案数据,用于对用户的回复进行核对和匹配,在获取到用户的回复语音之后,先对用户的回复语音进行预处理,提取出干净的人声,然后再利用语音识别技术,对回复语音进行语音识别处理,经过语音识别处理之后得到回复数据,再利用答案数据,对其进行匹配,得到一个匹配结果,只有当用户的回复与答案数据一致时,才能生成投保信息。
212,若匹配结果为回复语音与答案数据一致,则输出业务数据。
获取到用户回复数据与预设的答案数据的匹配结果,当匹配结果显示为一致时,表明用户已经对保单信息核对完成,并且对保单信息的所有内容都进行了确认,则系统会输出经过用户对保单信息确认之后输出用户保单信息,此时整个核保流程结束。当匹配结果不一致时,表明用户对于保单信息存有疑虑,或者保单信息出现错误内容,需要进行更改,则用户可以申请中断核保流程,并将该情况反馈给保险公司。
本申请实施例中,通过人脸识别技术对当前用户进行身份识别,确保当前用户与待核验的业务数据中的用户身份一致,再利用AI语音转换技术,在核验过程中语音播报用户所需确认的数据内容并获取回复语音,根据回复语音验证对于待确认的数据内容的确认结果,通过该方法,核验了当前用户的身份,确保了核验过程的真实性,提高了核验结果的可靠性。
请参阅图3,本申请实施例中业务数据核验方法的第三个实施例包括:
301,获取待核验的业务数据;
如果待核验的业务数据是需要核验的投保人投保的保单信息,则当前的业务数据核验过程为用户的核保过程,获取待核验的业务数据就是获取待核验的保单信息。投保用户在投保系统上输入相关的个人身份信息,投保系统在收到相关个人身份信息之后,从系统上的保单信息数据库中调取与用户个人信息相对应的保单信息。
302,获取当前用户的第一生物特征信息,并根据业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
用户在进入投保系统之后,输入个人的身份信息可以获取对应的保单信息,在利用保单信息启动核保流程之前,系统提示用户开启移动端摄像头,实时采集当前操作用户的生物特征信息,作为第一生物特征信息,来进行投保人身份核验。
投保系统中的投保人身份信息数据库与公安网的生物特征信息数据库进行对接,根据保单信息中的投保人的身份信息,从生物特征信息数据库中提取出相对应的生物特征信息,作为第二生物特征信息,其中,用户的第一生物特征信息是包括人体所固有的生理特征(指纹、虹膜、面相、DNA等)或行为特征(步态、击键习惯等)的信息。
303,将第一生物特征信息与第二生物特征信息进行比对识别,得到识别结果;
获取到第一生物特征信息和第二生物特征信息之后,基于生物特征识别技术,利用人体所固有的生理特征(指纹、虹膜、面相、DNA等)或行为特征(步态、击键习惯等)来进行个人身份识别验证,这里的身份识别验证操主要是利用生理特征进行信息识别和比对,具体的,可以选择采集到的面部图像作为识别依据,调取公安网中对应的人脸信息进行比对识别。
将第一生物特征信息与第二生物特征信息进行识别之后,对比每个特征信息,然后得到一个识别结果反馈给投保信息,作为是否执行下一步骤的依据。
304,若识别结果为一致,则提取业务数据中,待确认的数据内容;
若第一生物特征信息与第二生物特征信息识别对比一致,证明现在核保过程的操作用户与实际投保人一致,即当前核保过程是由实际投保人进行,则从投保系统上提取对应的保单信息中需要投保人进行确认的投保数据,其中,保单信息中需要投保人确认的投保数据包括:投保人身份信息,保险种类,保险的理赔信息等。
305,调用预设的AI语音转换模型,将数据内容转换为播报语音;
当用户开始准备进行保单信息的确认过程时,投保系统提取出待确认的投保数据,并调用系统上设置有的AI语音转换模型,将这些投保数据输入至AI语音转换模型中,利用该模型中的AI语音转换技术,进行语音转换,并调整声音效果,生成播报语音。其中,利用AI语音转换技术将数据内容转换成语音的技术属于现有技术,故在此不再赘述。
306,对播报语音进行播报,并实时获取用户的回复语音;
投保系统在接收到用户启动保单信息确认流程的提示信息之后,根据保单信息提取出相应的播报语音,并对投保人进行播报。在对保单信息中的每一项需要确认的内容进行播报后,用户都要进行语音的确认回复,在此过程中,系统会实时采集用户的回复语音并存储在存储单元中。
307,对回复语音进行预处理,得到清洁语音;
因为用户在进行语音回复的过程中,周围的环境不可控,系统获取到的相关回复语音中可能掺杂有环境噪声,所以需要对回复语音进行预处理。其中,预处理主要是静音切除、噪声处理和语音增强。
首先提取回复语音的语音信号,在语音信号中将语音和非语音信号时段区分开来,准确地确定出语音信号的起始点然后从连续的语音流中检测出有效的语音段。它包括两个方面,检测出有效语音的起始点即前端点,检测出有效语音的结束点即后端点。
然后进行噪声抑制,稳定背景噪音频谱特征,在某一或几个频谱处幅度非常稳定,假设开始一小段背景是背景噪音,从起始背景噪音开始进行分组、Fourier变换,对这些分组求平均得到噪声的频谱。降噪过程是将含噪语音反向补偿之后得到降噪后的语音,然后再利用基于短时谱估计增强算法中的谱减法及其改进形式消除环境噪声对语音的影响。
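
A compact spectral-subtraction sketch of the noise-reduction step, assuming, as the text does, that the opening segment of the recording contains only background noise; the frame length, hop size and noise-segment length are illustrative assumptions.

```python
import numpy as np


def spectral_subtraction(signal: np.ndarray, sr: int, noise_seconds: float = 0.25,
                         frame: int = 512, hop: int = 256) -> np.ndarray:
    """Average the magnitude spectrum of the leading background-noise segment,
    subtract it from every frame, and rebuild the waveform by overlap-add
    (up to a constant window-overlap factor)."""
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    spec = np.stack([np.fft.rfft(window * signal[i * hop:i * hop + frame])
                     for i in range(n_frames)])
    noise_frames = max(1, int(noise_seconds * sr) // hop)
    noise_mag = np.abs(spec[:noise_frames]).mean(axis=0)       # noise spectrum estimate
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)            # subtract the noise spectrum
    clean_spec = mag * np.exp(1j * np.angle(spec))             # keep the noisy phase
    out = np.zeros(len(signal))
    frames_time = np.fft.irfft(clean_spec, n=frame)
    for i in range(n_frames):
        out[i * hop:i * hop + frame] += window * frames_time[i]
    return out
```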
308,对清洁语音进行声学特征提取,得到声学特征参数;
接收到的回复语音经过预处理以后便得到有效的清洁语音,从清洁语音中提取出语音信号和语音波形,并对每一帧波形进行声学特征提取,便可以得到一个多维向量,即声学特征参数。
在提取声学特征参数的过程中,首先进行预滤波处理,然后经过A/D变换,通过一个一阶有限激励响应高通滤波器进行预加重,根据语音的短时平稳特性,语音可以以帧为单位进行分帧处理,采用哈明窗对一帧语音进行加窗,以减小吉布斯效应的影响,再进行快速傅里叶变换,将时域信号变换成为信号的功率谱,用一组Mel频标上线性分布的三角窗滤波器(共24个三角窗滤波器),对信号的功率谱滤波,基于三角窗滤波器组的输出求取对数,去除各维信号之间的相关性,将信号映射到低维空间,并进行谱加权,抑制其低阶和高阶参数,然后进行倒谱均值减,在语音特征中加入表征语音动态特性的差分参数,最后得到声学特征参数。
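
The acoustic feature extraction described above is a standard MFCC-style pipeline (pre-emphasis, Hamming-windowed framing, FFT power spectrum, a 24-filter Mel bank, log compression, cepstral features plus difference features). The sketch below leans on librosa as a stand-in implementation; the number of cepstral coefficients and the frame/hop lengths are assumptions, while the 24 Mel filters follow the text.

```python
import numpy as np
import librosa


def acoustic_features(clean: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Return one acoustic feature vector per frame: MFCCs plus first-order
    delta (dynamic) features."""
    emphasized = np.append(clean[0], clean[1:] - 0.97 * clean[:-1])   # pre-emphasis
    mfcc = librosa.feature.mfcc(y=emphasized, sr=sr, n_mfcc=13,
                                n_fft=512, hop_length=160, n_mels=24,
                                window="hamming")
    delta = librosa.feature.delta(mfcc)                               # dynamic features
    return np.vstack([mfcc, delta]).T
```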
309,基于预设的语言模型,对声学特征参数进行语言处理,得到回复数据;
将声学特征参数输入至语言模型中,语言模型首先计算语音的特征矢量序列和每个发音模板之间的距离,然后利用语言模型中的工具,对语法结果和语义学进行判断纠正,特别是一些同音字则必须通过上下文结构才能确定词义,最后得到回复数据。
310,将回复数据与预设的答案数据进行比对匹配,得到匹配结果;
对于投保信息中所需要提醒投保人确认的部分,预先设置好正确的答案数据,根据语音识别处理之后得到相关回复数据,并基于答案数据与投保人的回复进行匹配,得到匹配结果,当匹配结果为一致之后,投保人才能继续进行投保。如果匹配结果为不一致,则系统会输出提示信息,表明用户核对信息出错。
311,若匹配结果为回复数据与答案数据一致,则输出业务数据。
获取到用户回复数据与预设的答案数据的匹配结果,当匹配结果显示为一致时,表明用户已经对保单信息核对完成,并且对保单信息的所有内容都进行了确认,则系统会输出经过用户对保单信息确认之后生成的用户投保信息,此时整个核保流程结束。当匹配结果不一致时,表明用户对于保单信息存有疑虑,或者保单信息出现错误内容,需要进行更改,则用户可以申请中断核保流程,并将该情况反馈给保险公司。
本申请实施例,通过AI语音转换技术,将保单信息中待确认的数据信息转换成为语音信息进行播报,获取当前用户的确认回复语音,并利用语音识别技术,对当前用户的回复进行识别验证,确保当前用户明确了待确认的数据内容,并且确认信息正确,由此提高了核验结果的准确度。
请参阅图4,本申请实施例中业务数据核验方法的第四个实施例包括:
401,获取待核验的业务数据;
如果待核验的业务数据是需要核验的投保人投保的保单信息,则当前的业务数据核验过程为用户的核保过程,获取待核验的业务数据就是获取待核验的保单信息。投保用户在投保系统上输入相关的个人信息,投保系统在收到相关个人信息之后,从系统上的保单业务数据库中调取与用户个人信息相对应的保单信息,其中用户的个人信息包括了姓名、身份证号等基本的个人身份信息,保单信息包括了保单编号、投保人的身份信息、保险种类、保险内容等数据信息。
402,获取当前用户的第一生物特征信息,并根据业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
用户在进入投保系统之后,输入个人的身份信息可以获取对应的保单信息,在利用保单信息启动核保流程之前,系统提示用户开启移动端摄像头,实时采集当前操作用户的生物特征信息,作为第一生物特征信息,来进行投保人身份核验。
投保系统中的投保人身份信息数据库与公安网的生物特征信息数据库进行对接,根据保单信息中的投保人的身份信息,从生物特征信息数据库中提取出相对应的生物特征信息,作为第二生物特征信息,其中,用户的第一生物特征信息是包括人体所固有的生理特征(指纹、虹膜、面相、DNA等)或行为特征(步态、击键习惯等)的信息。
403,将第一生物特征信息与第二生物特征信息进行比对识别,得到识别结果;
获取到第一生物特征信息和第二生物特征信息之后,基于生物特征识别技术,利用人体所固有的生理特征(指纹、虹膜、面相、DNA等)或行为特征(步态、击键习惯等)来进行个人身份识别验证,这里的身份识别验证操主要是利用生理特征进行信息识别和比对,具体的,可以选择采集到的面部图像作为识别依据,调取公安网中对应的人脸信息进行比对识别。
将第一生物特征信息与第二生物特征信息进行识别之后,对比每个特征信息,然后得到一个识别结果反馈给投保信息,作为是否执行下一步骤的依据。
404,若识别结果为一致,则提取业务数据中,待确认的数据内容;
若第一生物特征信息与第二生物特征信息识别对比一致,证明现在核保过程的操作用户与实际投保人一致,即当前核保过程是由实际投保人进行,则从投保系统上提取对应的保单信息中需要投保人进行确认的投保数据,其中,保单信息中需要投保人确认的投保数据包括:投保人身份信息,保险种类,保险的理赔信息等。
405,调用预设的AI语音转换模型,将数据内容转换为播报语音;
当用户开始准备进行保单信息的确认过程时,投保系统提取出待确认的投保数据,并调用系统上设置有的AI语音转换模型,将这些投保数据输入至AI语音转换模型中,利用该模型中的AI语音转换技术,进行语音转换,并调整声音效果,生成播报语音。其中,利用AI语音转换技术将数据内容转换成语音的技术属于现有技术,故在此不再赘述。
406,对播报语音进行播报,并实时获取用户的回复语音;
投保系统在接收到用户启动保单信息确认流程的提示信息之后,根据保单信息提取出相应的播报语音,并对投保人进行播报。在对保单信息中的每一项需要确认的内容进行播报后,用户都要进行语音的确认回复,在此过程中,系统会实时采集用户的回复语音并存储在存储单元中。
407,基于语音识别技术,对回复语音进行语音识别,得到回复数据;
获取到用户的回复语音之后,利用语音识别技术,对回复语音进行语音识别处理,根据回复语音相对应的声音波形,从声音波形中识别出相对应的特征参数,然后根据这些特征参数将其转换成为回复数据。
408,提取答案数据中各字节的字符,得到第一字符;
在将答案数据与回复语音进行匹配的过程中,首先将答案数据存储在答案数据存储窗口中,并提取答案数据中各字节的字符,得到第一字符,将第一字符存储在答案数据存储窗口中的随机存储器中。
答案数据存储窗口的随机存储器为16个可单独寻址的分布式随机存储器,其中,每个可单独寻址的分布式随机存储器均具有两个独立的读端口。
答案数据存储窗口从第1个可单独寻址的分布式随机存储器开始,将第一字符顺序放入16个可单独寻址的分布式随机存储器中,如果将第一字符放入最后1个可单独寻址的分布式随机存储器后,还存在未放入的第一字符,则将未放入的第一字符从第1个可单独寻址的分布式随机存储器开始,再次顺序放入16个可单独寻址的分布式随机存储器,依次循环,直到将预定字节的第一字符全部放入16个可单独寻址的分布式随机存储器中为止,其中,每个可单独寻址的分布式随机存储器最多存储1千字节第一字符。
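
Modeled in software, the answer-data storage window above is a round-robin write across 16 banks, each capped at 1 KB; plain Python lists stand in for the individually addressable distributed RAMs.

```python
def distribute_answer_bytes(answer: bytes, banks: int = 16,
                            bank_capacity: int = 1024) -> list:
    """Write the first characters into the 16 distributed RAMs in round-robin
    order, wrapping back to the first bank after the last one."""
    rams = [[] for _ in range(banks)]
    for i, byte in enumerate(answer):
        bank = i % banks
        if len(rams[bank]) >= bank_capacity:
            raise ValueError("answer data exceeds the 16 x 1 KB storage window")
        rams[bank].append(byte)
    return rams
```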
409,提取回复数据中各字节的字符,得到第二字符;
利用语音识别技术,对回复语音进行语音识别,经过预处理之后得到清洁语音,然后再进行声学特征的提取,利用提取到的声学特征经过预设的声学模型进行模式匹配和预设的语言模型进行语言处理,提取出回复语音中的各字符,得到第二字符。
410,对第二字符进行哈希计算,得到与每个第二字符相对应的字符索引;
在哈希计算单元中计算第二字符的哈希值,根据哈希值确定与第二字符对应的字符索引,并将字符索引存储在哈希计算单元的随机存储器中,其中,字符索引就是与第二字符相对应的答案数据窗口的随机存储器中的位置信息。
411,基于字符索引,对第一字符和第二字符进行比对匹配,得到匹配结果;
根据字符索引,产生并存储答案数据存储窗口的读地址,根据读地址从答案数据存储窗口中的16个可单独寻址的分布式随机存储器中读取并存储32字节的第一字符,并根据读地址的低4位从32字节的第一字符中选择并存储有效的26字节的第一字符;在发起的匹配查找命令中,根据字符索引对移位寄存器中的低24位中存储的4字节的第二字符和26字节的第一字符中的4字节第一字符进行匹配;如果4字节的第二字符匹配命中,在第二个周期内,获取与当前第二字符对应的至少两个字符索引,并根据与当前第二字符对应的至少两个字符索引对移位寄存器的16至23位中的当前第二字符和26字节的第一字符中相应的2字节第一字符进行匹配操作,如果当前第二字符匹配成功,则继续对下一个当前第二字符进行匹配,依次类推,直到匹配结束,如果当前第二字符匹配失败,则结束匹配。
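
In software terms, the matching flow above amounts to building a hash index from answer-data chunks to their positions and then verifying each reply chunk by direct comparison. The sketch below keeps the 4-byte chunk size from the description but abstracts away the 32-byte reads, the 26-byte selection and the shift-register staging of the hardware design.

```python
from collections import defaultdict


def match_reply_to_answer(answer: bytes, reply: bytes, chunk: int = 4) -> bool:
    """Hash-indexed comparison of reply data against answer data; returns True
    only if every reply chunk is found at a matching position in the answer."""
    if len(reply) < chunk:
        return reply in answer
    index = defaultdict(list)
    for pos in range(len(answer) - chunk + 1):
        index[hash(answer[pos:pos + chunk])].append(pos)        # character index per chunk
    for pos in range(0, len(reply) - chunk + 1, chunk):
        piece = reply[pos:pos + chunk]
        if not any(answer[c:c + chunk] == piece
                   for c in index.get(hash(piece), [])):
            return False                                        # chunk failed to match
    return reply[-chunk:] in answer                             # verify any trailing partial chunk
```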
412,若匹配结果为一致,则输出业务数据。
获取到用户回复数据与预设的答案数据的匹配结果,当匹配结果显示为一致时,表明用户已经对保单信息核对完成,并且对保单信息的所有内容都进行了确认,则系统会输出经过用户对保单信息确认之后生成的用户投保信息,此时整个核保流程结束。当匹配结果不一致时,表明用户对于保单信息存有疑虑,或者保单信息出现错误内容,需要进行更改,则用户可以申请中断核保流程,并将该情况反馈给保险公司。
本申请实施例中,利用语音识别技术,对获取到的当前用户的回复语音进行识别,并根据预设的答案数据进行验证和比对,确保当前用户的确认回复真实、准确,从而提高审核的可靠性,也提高了核验结果的真实性和准确度。
请参阅图5,本申请实施例中业务数据核验方法的第五个实施例包括:
501,获取待核验的业务数据;
如果待核验的业务数据是需要核验的投保人投保的保单信息,则当前的业务数据核验过程为用户的核保过程,获取待核验的业务数据就是获取待核验的保单信息。投保用户在投保系统上输入相关的个人信息,投保系统在收到相关个人信息之后,从系统上的保单信息数据库中调取与用户个人信息相 对应的保单信息,其中用户的个人信息包括了姓名、身份证号等基本的个人身份信息,保单信息包括了保单编号、投保人的身份信息、保险种类、保险内容等数据信息。
502,获取当前用户的第一生物特征信息,并根据业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
用户在进入投保系统之后,输入个人的身份信息可以获取对应的保单信息,在利用保单信息启动核保流程之前,系统提示用户开启移动端摄像头,实时采集当前操作用户的生物特征信息,作为第一生物特征信息,来进行投保人身份核验。
投保系统中的投保人身份信息数据库与公安网的生物特征信息数据库进行对接,根据保单信息中的投保人的身份信息,从生物特征信息数据库中提取出相对应的生物特征信息,作为第二生物特征信息,其中,用户的第一生物特征信息是包括人体所固有的生理特征(指纹、虹膜、面相、DNA等)或行为特征(步态、击键习惯等)的信息。
503,将第一生物特征信息与第二生物特征信息进行比对识别,得到识别结果;
获取到第一生物特征信息和第二生物特征信息之后,基于生物特征识别技术,利用人体所固有的生理特征(指纹、虹膜、面相、DNA等)或行为特征(步态、击键习惯等)来进行个人身份识别验证,这里的身份识别验证操主要是利用生理特征进行信息识别和比对,具体的,可以选择采集到的面部图像作为识别依据,调取公安网中对应的人脸信息进行比对识别。
将第一生物特征信息与第二生物特征信息进行识别之后,对比每个特征信息,然后得到一个识别结果反馈给投保信息,作为是否执行下一步骤的依据。
504,若识别结果为一致,则提取业务数据中,待确认的数据内容;
若第一生物特征信息与第二生物特征信息识别对比一致,证明现在核保过程的操作用户与实际投保人一致,即当前核保过程是由实际投保人进行,则从投保系统上提取对应的保单信息中需要投保人进行确认的投保数据,其中,保单信息中需要投保人确认的投保数据包括:投保人身份信息,保险种类,保险的理赔信息等。
505,调用预设的AI语音转换模型,将数据内容转换为播报语音;
当用户开始准备进行保单信息的确认过程时,投保系统提取出待确认的投保数据,并调用系统上设置有的AI语音转换模型,将这些投保数据输入至AI语音转换模型中,利用该模型中的AI语音转换技术,进行语音转换,并调整声音效果,生成播报语音。其中,利用AI语音转换技术将数据内容转换成语音的技术属于现有技术,故在此不再赘述。
506,对播报语音进行播报,并实时获取用户的回复语音;
投保系统在接收到用户启动保单信息确认流程的提示信息之后,根据保单信息提取出相应的播报语音,并对投保人进行播报。在对保单信息中的每一项需要确认的内容进行播报后,用户都要进行语音的确认回复,在此过程中,系统会实时采集用户的回复语音并存储在存储单元中。
507,基于预设的答案数据,对回复语音进行匹配,得到匹配结果;
对于保单信息中待确认的投保数据,投保系统会预先进行一个回复设置,即答案数据,用于对用户的回复进行核对和匹配,在获取到用户的回复语音之后,先对用户的回复语音进行预处理,提取出干净的人声,然后再利用语音识别技术,对回复语音进行语音识别处理,经过语音识别处理之后得到回复数据,再利用答案数据,对其进行匹配,得到一个匹配结果,只有当用户的回复与答案数据一致时,才能生成投保信息。
508,如匹配结果为回复语音与答案数据一致,则输出业务数据;
获取到用户回复数据与预设的答案数据的匹配结果,当匹配结果显示为一致时,表明用户已经对保单信息核对完成,并且对保单信息的所有内容都进行了确认,则系统会输出经过用户对保单信息确认之后生成的用户投保信息,此时整个核保流程结束。当匹配结果不一致时,表明用户对于保单信息存有疑虑,或者保单信息出现错误内容,需要进行更改,则用户可以申请中断核保流程,并将该情况反馈给保险公司。
509,基于远程通讯技术,实时获取业务数据核验过程的视频信息;
当该业务数据核验过程为核保过程时,获取业务数据核验过程的视频信息就是当保单信息被确认 之后,基于远程通讯技术,可以实时获取用户在核保过程中的视频信息,具体的,远程通讯技术主要利用移动端的摄像功能进行实现,即在用户启动核保流程时,移动端的摄像功能便已经开启,对整个核保过程进行记录。
510,基于预设的业务数据核验安全规则,对业务数据核验过程中的核验行为进行监测,得到监测信息;
当业务数据核验过程为核保过程时,业务数据核验安全规则就是核保安全规则,核验行为就是核保行为。在实时获取核保过程的视频信息中,对用户的核保行为进行监测,根据规定的核保安全规则,对核保过程中不符合规定的核保行为进行监测并及时利用远程通讯的语音通讯功能作出相应提示。实时监测过程主要是利用移动端的摄像头实时采集核保过程的实时状态视频,并发送给对应的保险公司相关人员,然后保险公司相关人员对整个核保过程进行实时监测,在此过程中,保险公司相关人员可以利用远程通讯技术与用户进行实时远程对话,对用户在核保过程中出现的问题进行指导和提示。
511,对监测信息中不符合业务数据核验安全规则的核验行为进行语音提示;
实时监测核保过程主要是利用投保人的移动端摄像功能进行远程通讯,对于不符合核保安全规定的核保行为及时进行语音提示。其中不符合核保安全规定的核保行为主要是指投保人在投保过程中出现的离开了摄像范围,或者在投保过程中出现了不相干的人员进行干涉等情况,当实时监测到这些不符合核保安全规定的核保行为时,利用语音通讯技术对用户进行实时提醒,具体的,如果在监测过程中发现用户出现了不符合安全规定的行为或者在投保过程中出现了第三人进行干扰,则保险公司相关人员可以及时提醒用户,并还可以根据情况,申请中断核保流程,确保用户在整个核保过程中的核保行为是安全有效的,并及时做好相关视频记录,方便后续对投保流程进行回溯。
512,对业务数据核验过程进行录像,生成业务数据核验过程视频,并将业务数据核验过程视频存储在存储单元中。
在整个核保过程中,利用移动端的摄像功能将整个核保过程进行录像,生成核保过程视频,并将核保过程视频存储在存储单元中。当后续需要进行核保过程的追溯时,可从存储单元中提取出核保过程视频。
本申请实施例中,通过结合人脸识别技术、AI语音转换技术和远程通讯技术,对当前用户进行身份识别和验证,确保与待核验的业务数据中的用户身份一致,并利用AI语音转换模型中的AI语音转换技术将待核验的业务数据中所需注意的部分转换为播报语音,并进行播报,再利用远程通讯技术对整个业务数据核验过程进行监测,极大程度上保证了业务数据核验过程的安全性和合规性,从而提高了核验结果的准确度和可靠性。
上面对本申请实施例中业务数据核验方法进行了描述,下面对本申请实施例中业务数据核验装置进行描述,请参阅图6,本申请实施例中业务数据核验装置一个实施例包括:
获取模块601,用于获取待核验的业务数据;
查询模块602,用于获取当前用户的第一生物特征信息,并根据所述业务数据中的用户信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
识别模块603,用于将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;
提取模块604,用于若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;
语音转换模块605,用于调用预设的AI语音转换模型,将所述数据内容转换为播报语音;
播报模块606,用于对所述播报语音进行播报,并实时获取用户的回复语音;
匹配模块607,用于基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;
信息输出模块608,用于若所述匹配结果为所述回复语音与所述答案数据一致,则输出所述业务数据。
本申请实施例中,利用该业务数据核验装置执行上述业务数据核验方法的步骤,通过获取带核验的业务数据和用户的生物特征信息,从生物特征数据库中查询到与之相对应的生物特征信息,并进行比对,确保用户身份与待核验的业务数据中的用户身份一致,再利用AI语音转换模型,将业务数据中所需确认的数据内容转换成播报语音,对用户进行播放,并获取用户的回复语音,基于答案数据对 回复语音进行匹配,如果匹配为一致,则输出业务数据。该装置运行上述实施例的业务数据核验方法的步骤,提高了核验过程的真实性和可靠性,也提高了核验结果的准确度。
请参阅图7,本申请实施例中业务数据核验装置的另一个实施例包括:
获取模块601,用于获取待核验的业务数据;
查询模块602,用于获取当前用户的第一生物特征信息,并根据所述业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
识别模块603,用于将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;
提取模块604,用于若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;
语音转换模块605,用于调用预设的AI语音转换模型,将所述数据内容转换为播报语音;
播报模块606,用于对所述播报语音进行播报,并实时获取用户的回复语音;
匹配模块607,用于基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;
信息输出模块608,用于若所述匹配结果为所述回复语音与答案数据一致,则输出所述业务数据。
可选的,识别模块603包括:
第一特征提取单元6031,用于基于所述第一生物特征信息,进行人脸信息特征提取,得到人脸第一特征值;
第二特征提取单元6032,用于基于所述第二生物特征信息,进行人脸信息特征提取,得到人脸第二特征值;
特征值比对单元6033,用于基于预设的人脸对比阈值,将所述人脸对比阈值、所述人脸第一特征值和人脸第二特征值进行比对识别,得到识别结果。
可选的,特征值比对单元6033具体用于:
将所述人脸第一特征值与所述人脸第二特征值进行匹配对比,得到人脸相似度分值;
基于预设的校正公式,对人脸相似度分值进行校正计算,得到校正人脸相似度分值;
将所述校正人脸相似度分值与所述人脸对比阈值进行比对识别,得到识别结果。
可选的,匹配模块607包括:
语音识别单元6071,用于基于语音识别技术,对所述回复语音进行语音识别,得到回复数据;
数据匹配单元6072,用于将所述回复数据与预设的答案数据进行比对匹配,得到匹配结果。
可选的,语音识别单元6071具体用于:
对所述回复语音进行预处理,得到清洁语音;
对所述清洁语音进行声学特征提取,得到声学特征参数;
基于预设的语言模型,对所述声学特征参数进行语言处理,得到回复数据。
可选的,数据匹配单元6072具体用于:
提取所述答案数据中各字节的字符,得到第一字符;
提取所述回复数据中各字节的字符,得到第二字符;
对所述第二字符进行哈希计算,得到与每个所述第二字符相对应的字符索引;
基于所述字符索引,对所述第一字符和所述第二字符进行比对匹配,得到匹配结果。
可选的,监测模块609具体用于:
基于远程通讯技术,实时获取业务数据核验过程的视频信息;
基于预设的业务数据核验安全规则,对所述业务数据核验过程中的核验行为进行监测,得到监测信息;
对所述监测信息中不符合业务数据核验安全规则的所述核验行为进行语音提示;
对所述业务数据核验过程进行录像,生成核保过程视频,并将所述业务数据核验过程视频存储在存储单元中。
本申请实施例中,通过在上述的业务数据核验装置中增加一个监测模块,用于实时获取业务数据核验过程的视频信息,并对核验行为进行监测,并可以在监测过程中对不符合业务数据核验安全规则的核验行为及时进行提示。通过该装置提高了业务数据核验过程的真实性,实现了对核验行为的监测, 同时提高了核验结果的准确度和可靠性。
上面图6和图7从模块化功能实体的角度对本申请实施例中的业务数据核验装置进行详细描述,下面从硬件处理的角度对本申请实施例中业务数据核验设备进行详细描述。
图8是本申请实施例提供的一种业务数据核验设备的结构示意图,该业务数据核验设备800可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上处理器(central processing units,CPU)810(例如,一个或一个以上处理器)和存储器820,一个或一个以上存储应用程序833或数据832的存储介质830(例如一个或一个以上海量存储设备)。其中,存储器820和存储介质830可以是短暂存储或持久存储。存储在存储介质830的程序可以包括一个或一个以上模块(图示没标出),每个模块可以包括对业务数据核验设备800中的一系列指令操作。更进一步地,处理器810可以设置为与存储介质830通信,在业务数据核验设备800上执行存储介质830中的一系列指令操作。
业务数据核验设备800还可以包括一个或一个以上电源840,一个或一个以上有线或无线网络接口880,一个或一个以上输入输出接口860,和/或,一个或一个以上操作系统831,例如Windows Serve,Mac OS X,Unix,Linux,FreeBSD等等。本领域技术人员可以理解,图8示出的业务数据核验设备结构并不构成对业务数据核验设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
本申请所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。
本申请还提供一种计算机可读存储介质,该计算机可读存储介质可以为非易失性计算机可读存储介质,该计算机可读存储介质也可以为易失性计算机可读存储介质,所述计算机可读存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行所述业务数据核验方法的步骤。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (20)

  1. 一种业务数据核验方法,其特征在于,所述业务数据核验方法包括:
    获取待核验的业务数据;
    获取当前用户的第一生物特征信息,并根据所述业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
    将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;
    若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;
    调用预设的AI语音转换模型,将所述数据内容转换为播报语音;
    对所述播报语音进行播报,并实时获取用户的回复语音;
    基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;
    若所述匹配结果为所述回复语音与所述答案数据一致,则输出所述业务数据。
  2. 根据权利要求1所述的业务数据核验方法,其特征在于,所述将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果包括:
    基于所述第一生物特征信息,进行人脸信息特征提取,得到人脸第一特征值;
    基于所述第二生物特征信息,进行人脸信息特征提取,得到人脸第二特征值;
    基于预设的人脸对比阈值,将所述人脸对比阈值、所述人脸第一特征值和人脸第二特征值进行比对识别,得到识别结果。
  3. 根据权利要求2所述的业务数据核验方法,其特征在于,所述基于预设的人脸对比阈值,将所述人脸对比阈值、所述人脸第一特征值和人脸第二特征值进行比对识别,得到识别结果包括:
    将所述人脸第一特征值与所述人脸第二特征值进行匹配对比,得到人脸相似度分值;
    基于预设的校正公式,对人脸相似度分值进行校正计算,得到校正人脸相似度分值;
    将所述校正人脸相似度分值与所述人脸对比阈值进行比对识别,得到识别结果。
  4. 根据权利要求1-3中任一项所述的业务数据核验方法,其特征在于,所述基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果包括:
    基于语音识别技术,对所述回复语音进行语音识别,得到回复数据;
    将所述回复数据与预设的答案数据进行比对匹配,得到匹配结果。
  5. 根据权利要求4所述的业务数据核验方法,其特征在于,所述基于语音识别技术,对所述回复语音进行语音识别,得到回复数据包括:
    对所述回复语音进行预处理,得到清洁语音;
    对所述清洁语音进行声学特征提取,得到声学特征参数;
    基于预设的语言模型,对所述声学特征参数进行语言处理,得到回复数据。
  6. 根据权利要求4所述的业务数据核验方法,其特征在于,所述将所述回复数据与预设的答案数据进行比对匹配,得到匹配结果包括:
    提取所述答案数据中各字节的字符,得到第一字符;
    提取所述回复数据中各字节的字符,得到第二字符;
    对所述第二字符进行哈希计算,得到与每个所述第二字符相对应的字符索引;
    基于所述字符索引,对所述第一字符和所述第二字符进行比对匹配,得到匹配结果。
  7. 根据权利要求1-3中任一项所述的业务数据核验方法,其特征在于,在所述获取待核验的业务数据之后,还包括:
    基于远程通讯技术,实时获取业务数据核验过程的视频信息;
    基于预设的业务数据核验安全规则,对所述业务数据核验过程中的核验行为进行监测,得到监测信息;
    对所述监测信息中不符合业务数据核验安全规则的所述核验行为进行语音提示;
    对所述业务数据核验过程进行录像,生成业务数据核验过程视频,并将所述业务数据核验过程视频存储在存储单元中。
  8. 一种业务数据核验设备,其中,所述业务数据核验设备包括:存储器和至少一个处理器,所述存储器中存储有指令;
    所述至少一个处理器调用所述存储器中的所述指令,以使得所述业务数据核验设备执行如下所述的业务数据核验方法的步骤:
    获取待核验的业务数据;
    获取当前用户的第一生物特征信息,并根据所述业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
    将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;
    若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;
    调用预设的AI语音转换模型,将所述数据内容转换为播报语音;
    对所述播报语音进行播报,并实时获取用户的回复语音;
    基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;
    若所述匹配结果为所述回复语音与所述答案数据一致,则输出所述业务数据。
  9. 根据权利要求8所述的业务数据核验设备,其中,所述业务数据核验设备执行所述将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果的步骤时,包括:
    基于所述第一生物特征信息,进行人脸信息特征提取,得到人脸第一特征值;
    基于所述第二生物特征信息,进行人脸信息特征提取,得到人脸第二特征值;
    基于预设的人脸对比阈值,将所述人脸对比阈值、所述人脸第一特征值和人脸第二特征值进行比对识别,得到识别结果。
  10. 根据权利要求9所述的业务数据核验设备,其中,所述业务数据核验设备执行所述基于预设的人脸对比阈值,将所述人脸对比阈值、所述人脸第一特征值和人脸第二特征值进行比对识别,得到识别结果的步骤时,包括:
    将所述人脸第一特征值与所述人脸第二特征值进行匹配对比,得到人脸相似度分值;
    基于预设的校正公式,对人脸相似度分值进行校正计算,得到校正人脸相似度分值;
    将所述校正人脸相似度分值与所述人脸对比阈值进行比对识别,得到识别结果。
  11. 根据权利要求8-10任一项所述的业务数据核验设备,其中,所述业务数据核验设备执行所述基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果的步骤时,包括:
    基于语音识别技术,对所述回复语音进行语音识别,得到回复数据;
    将所述回复数据与预设的答案数据进行比对匹配,得到匹配结果。
  12. 根据权利要求11所述的业务数据核验设备,其中,所述业务数据核验设备执行所述基于语音识别技术,对所述回复语音进行语音识别,得到回复数据的步骤时,包括:
    对所述回复语音进行预处理,得到清洁语音;
    对所述清洁语音进行声学特征提取,得到声学特征参数;
    基于预设的语言模型,对所述声学特征参数进行语言处理,得到回复数据。
  13. 根据权利要求11所述的业务数据核验设备,其中,所述业务数据核验设备执行所述将所述回复数据与预设的答案数据进行比对匹配,得到匹配结果的步骤时,包括:
    提取所述答案数据中各字节的字符,得到第一字符;
    提取所述回复数据中各字节的字符,得到第二字符;
    对所述第二字符进行哈希计算,得到与每个所述第二字符相对应的字符索引;
    基于所述字符索引,对所述第一字符和所述第二字符进行比对匹配,得到匹配结果。
  14. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,其中,所述计算机程序被处理器执行时实现如下所述的业务数据核验方法的步骤:
    获取待核验的业务数据;
    获取当前用户的第一生物特征信息,并根据所述业务数据中的用户身份信息从预设的生物特征数据库中,查询对应的第二生物特征信息;
    将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;
    若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;
    调用预设的AI语音转换模型,将所述数据内容转换为播报语音;
    对所述播报语音进行播报,并实时获取用户的回复语音;
    基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;
    若所述匹配结果为所述回复语音与所述答案数据一致,则输出所述业务数据。
  15. 根据权利要求14所述的计算机可读存储介质,其中,所述计算机程序被处理器执行时实现所述将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果的步骤时,包括:
    基于所述第一生物特征信息,进行人脸信息特征提取,得到人脸第一特征值;
    基于所述第二生物特征信息,进行人脸信息特征提取,得到人脸第二特征值;
    基于预设的人脸对比阈值,将所述人脸对比阈值、所述人脸第一特征值和人脸第二特征值进行比对识别,得到识别结果。
  16. 根据权利要求15所述的计算机可读存储介质,其中,所述计算机程序被处理器执行时实现所述基于预设的人脸对比阈值,将所述人脸对比阈值、所述人脸第一特征值和人脸第二特征值进行比对识别,得到识别结果的步骤时,包括:
    将所述人脸第一特征值与所述人脸第二特征值进行匹配对比,得到人脸相似度分值;
    基于预设的校正公式,对人脸相似度分值进行校正计算,得到校正人脸相似度分值;
    将所述校正人脸相似度分值与所述人脸对比阈值进行比对识别,得到识别结果。
  17. 根据权利要求14-16中任一项所述的计算机可读存储介质,其中,所述计算机程序被处理器执行时实现所述基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果的步骤时,包括:
    基于语音识别技术,对所述回复语音进行语音识别,得到回复数据;
    将所述回复数据与预设的答案数据进行比对匹配,得到匹配结果。
  18. 根据权利要求17所述的计算机可读存储介质,其中,所述计算机程序被处理器执行时实现所述基于语音识别技术,对所述回复语音进行语音识别,得到回复数据的步骤时,包括:
    对所述回复语音进行预处理,得到清洁语音;
    对所述清洁语音进行声学特征提取,得到声学特征参数;
    基于预设的语言模型,对所述声学特征参数进行语言处理,得到回复数据。
  19. 根据权利要求17所述的计算机可读存储介质,其中,所述计算机程序被处理器执行时实现所述将所述回复数据与预设的答案数据进行比对匹配,得到匹配结果的步骤时,包括:
    提取所述答案数据中各字节的字符,得到第一字符;
    提取所述回复数据中各字节的字符,得到第二字符;
    对所述第二字符进行哈希计算,得到与每个所述第二字符相对应的字符索引;
    基于所述字符索引,对所述第一字符和所述第二字符进行比对匹配,得到匹配结果。
  20. 一种业务数据核验装置,其中,所述业务数据核验装置包括:
    获取模块,用于获取待核验的业务数据;
    查询模块,用于获取当前用户的第一生物特征信息,并根据所述业务数据中的用户身份信息从预 设的生物特征数据库中,查询对应的第二生物特征信息;
    识别模块,用于将所述第一生物特征信息与所述第二生物特征信息进行比对识别,得到识别结果;
    提取模块,用于若所述识别结果为一致,则提取所述业务数据中,待确认的数据内容;
    语音转换模块,用于调用预设的AI语音转换模型,将所述数据内容转换为播报语音;
    播报模块,用于对所述播报语音进行播报,并实时获取用户的回复语音;
    匹配模块,用于基于预设的答案数据,对所述回复语音进行匹配,得到匹配结果;
    信息输出模块,用于若所述匹配结果为所述回复语音与所述答案数据一致,则输出所述业务数据。
PCT/CN2021/090188 2020-12-15 2021-04-27 业务数据核验方法、装置、设备及存储介质 WO2022126964A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011472837.8 2020-12-15
CN202011472837.8A CN112541174A (zh) 2020-12-15 2020-12-15 业务数据核验方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022126964A1 true WO2022126964A1 (zh) 2022-06-23

Family

ID=75018675

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/090188 WO2022126964A1 (zh) 2020-12-15 2021-04-27 业务数据核验方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN112541174A (zh)
WO (1) WO2022126964A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541174A (zh) * 2020-12-15 2021-03-23 平安科技(深圳)有限公司 业务数据核验方法、装置、设备及存储介质
CN115712843B (zh) * 2022-12-01 2023-10-27 北京国联视讯信息技术股份有限公司 基于人工智能的数据匹配检测处理方法及系统
CN117172714A (zh) * 2023-09-20 2023-12-05 公诚管理咨询有限公司 应用于通信工程的安全生产费核查方法、系统及电子设备
CN117494092B (zh) * 2023-11-14 2024-06-04 深圳市策城软件有限公司 基于生物活体识别的景区门票无感核验方法、系统及介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108288080A (zh) * 2017-12-01 2018-07-17 国政通科技股份有限公司 身份信息核验方法、装置、介质及计算设备
US20180300468A1 (en) * 2016-08-15 2018-10-18 Goertek Inc. User registration method and device for smart robots
CN109003190A (zh) * 2018-06-11 2018-12-14 中国平安人寿保险股份有限公司 一种核保方法、计算机可读存储介质及终端设备
CN109274845A (zh) * 2018-08-31 2019-01-25 平安科技(深圳)有限公司 智能语音自动回访方法、装置、计算机设备及存储介质
CN111276148A (zh) * 2020-01-14 2020-06-12 中国平安人寿保险股份有限公司 基于卷积神经网络的回访方法、系统及存储介质
CN112541174A (zh) * 2020-12-15 2021-03-23 平安科技(深圳)有限公司 业务数据核验方法、装置、设备及存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996139B (zh) * 2009-08-28 2015-11-25 百度在线网络技术(北京)有限公司 数据匹配方法和数据匹配装置
CN109087429B (zh) * 2018-09-19 2020-12-04 重庆第二师范学院 基于人脸识别技术的图书馆借书证人证一致性检验的方法
CN109660678A (zh) * 2018-12-07 2019-04-19 深圳前海微众银行股份有限公司 电核系统实现方法、系统及可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180300468A1 (en) * 2016-08-15 2018-10-18 Goertek Inc. User registration method and device for smart robots
CN108288080A (zh) * 2017-12-01 2018-07-17 国政通科技股份有限公司 身份信息核验方法、装置、介质及计算设备
CN109003190A (zh) * 2018-06-11 2018-12-14 中国平安人寿保险股份有限公司 一种核保方法、计算机可读存储介质及终端设备
CN109274845A (zh) * 2018-08-31 2019-01-25 平安科技(深圳)有限公司 智能语音自动回访方法、装置、计算机设备及存储介质
CN111276148A (zh) * 2020-01-14 2020-06-12 中国平安人寿保险股份有限公司 基于卷积神经网络的回访方法、系统及存储介质
CN112541174A (zh) * 2020-12-15 2021-03-23 平安科技(深圳)有限公司 业务数据核验方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN112541174A (zh) 2021-03-23

Similar Documents

Publication Publication Date Title
WO2022126964A1 (zh) 业务数据核验方法、装置、设备及存储介质
Lavrentyeva et al. STC antispoofing systems for the ASVspoof2019 challenge
KR102339594B1 (ko) 객체 인식 방법, 컴퓨터 디바이스 및 컴퓨터 판독 가능 저장 매체
US20210050020A1 (en) Voiceprint recognition method, model training method, and server
US20200035247A1 (en) Machine learning for authenticating voice
WO2020181824A1 (zh) 声纹识别方法、装置、设备以及计算机可读存储介质
WO2020177380A1 (zh) 基于短文本的声纹检测方法、装置、设备及存储介质
WO2018166187A1 (zh) 服务器、身份验证方法、系统及计算机可读存储介质
US20180261236A1 (en) Speaker recognition method and apparatus, computer device and computer-readable medium
WO2017113658A1 (zh) 基于人工智能的声纹认证方法以及装置
WO2020119448A1 (zh) 语音信息验证
WO2021082420A1 (zh) 声纹认证方法、装置、介质及电子设备
WO2019136801A1 (zh) 语音数据库创建方法、声纹注册方法、装置、设备及介质
Liu et al. A Spearman correlation coefficient ranking for matching-score fusion on speaker recognition
Moro-Velázquez et al. Modulation spectra morphological parameters: A new method to assess voice pathologies according to the grbas scale
CN103794207A (zh) 一种双模语音身份识别方法
WO2020073519A1 (zh) 声纹验证的方法、装置、计算机设备以及存储介质
CN109766419A (zh) 基于语音分析的产品推荐方法、装置、设备及存储介质
WO2020253065A1 (zh) 基于数据分析的资格评审方法、装置及服务器
CN113409771B (zh) 一种伪造音频的检测方法及其检测系统和存储介质
CN111063359B (zh) 电话回访有效性判别方法、装置、计算机设备和介质
TWM622203U (zh) 用於金融交易系統之聲紋辨識裝置
JP4440414B2 (ja) 話者照合装置及び方法
TWI778234B (zh) 語者驗證系統
WO2021257000A1 (en) Cross-modal speaker verification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21904903

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21904903

Country of ref document: EP

Kind code of ref document: A1