WO2020232894A1 - Real-time data verification method, device, server and medium - Google Patents


Info

Publication number
WO2020232894A1
Authority
WO
WIPO (PCT)
Prior art keywords
real, data, time, word, preset
Prior art date
Application number
PCT/CN2019/103300
Other languages
English (en)
Chinese (zh)
Inventor
王旭
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020232894A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0815 Network architectures or network communication protocols for network security for authentication of entities providing single-sign-on or federations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861 Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/12 Applying verification of the received information

Definitions

  • This application belongs to the field of data processing technology, and in particular relates to a method, device, server and medium for verifying real-time data.
  • The embodiments of the present application provide a real-time data verification method and server to solve the problem of inaccurate and incomplete real-time data analysis.
  • The first aspect of the embodiments of the present application provides a method for verifying real-time data, including: after receiving a user's login information, determining the user identity data corresponding to the login information, parsing the key code contained in the login information, and determining multiple verification keywords corresponding to the user based on the key code; receiving real-time data of the user and dividing the real-time data into a real-time video component and a real-time audio component, where the real-time video component includes face data; calculating the similarity between the face data contained in the real-time video component and the user identity data and, if that similarity is not less than a preset similarity threshold, calculating the number of verification keywords contained in the real-time audio component within a preset time period; and, if the number of verification keywords contained in the real-time audio component is not greater than a first number threshold but greater than a second number threshold, selecting one of multiple asynchronous processing servers as the selected server and sending the real-time data to the selected server to perform non-real-time verification on the real-time data through the selected server.
  • In the embodiments of the present application, user identity data and verification keywords are obtained through the login information, the user's real-time data is received and divided into real-time video and real-time audio components, and the similarity between the face data contained in the real-time video component and the user identity data is calculated to verify the user's identity. If the similarity is not less than the preset similarity threshold, the number of verification keywords contained in the real-time audio component within the preset time period is calculated to verify whether the real-time data is legal; if the number of verification keywords contained in the real-time audio component is greater than the first number threshold, it is determined that the real-time data has passed verification, so as to perform intelligent analysis on the real-time data and improve the security of remote business processing.
  • FIG. 1 is an implementation flowchart of a method for verifying real-time data provided by an embodiment of the present application
  • FIG. 2 is a specific implementation flowchart of a method S101 for verifying real-time data provided by an embodiment of the present application
  • FIG. 3 is a specific implementation flowchart of the method S103 for verifying real-time data provided by an embodiment of the present application
  • FIG. 4 is a specific implementation flowchart of the method S107 for verifying real-time data provided by an embodiment of the present application
  • Figure 5 is a structural block diagram of a real-time data verification device provided by an embodiment of the present application.
  • Fig. 6 is a schematic diagram of a server provided by an embodiment of the present application.
  • Fig. 1 shows an implementation process of a method for verifying real-time data provided by an embodiment of the present application.
  • The process of the method includes steps S101 to S108; the specific implementation principle of each step is as follows.
  • The server performs artificial intelligence analysis on the collected video of the user and the business personnel (that is, the real-time data) to determine whether there are hidden safety hazards in the entire business.
  • The embodiments of this application verify the real-time data mainly by automatically checking whether the user in the real-time data matches the user data corresponding to the login information, and whether the real-time data contains sufficient verification keywords, so as to ensure the safety of business processing.
  • The server first verifies the login name and password contained in the login information. Once the verification is passed, the user identity data is determined according to the login name, where the user identity data includes the user's facial photo.
  • When different businesses are handled, the key codes contained in the login information differ, so key codes can be used to distinguish businesses. Conversely, when different types of users log in on the same service interface, the key codes contained in the login information may also differ. For example, users can be divided into minor users, adult users, and elderly users; because their discrimination capabilities differ, the key code contained in the login information after logging in to the same business also differs.
  • One verification dimension of the subsequent real-time data in the embodiments of this application is to automatically check whether the real-time data contains sufficient verification keywords, so it is necessary to first determine, based on the key code, which verification keywords the real-time data needs to contain. Understandably, during online business processing, business personnel need to inform, for example, elderly users of business risks, and those users must confirm the risks; because the real-time data is a recording of the entire communication process, qualified real-time data must contain a certain number of verification keywords.
  • Multiple keywords corresponding to the key code contained in the login information can be determined through the preset correspondence between key codes and verification keywords.
  • Alternatively, the verification keywords corresponding to a key code can be determined by predicting the future occurrence probability of the multiple words corresponding to that key code.
  • The above S101 includes:
  • S1011 Retrieve a word set corresponding to the key code received in a preset period, and each word in the word set corresponds to a receiving moment.
  • The word set corresponding to a key code is generated as follows: the business personnel extract the real-time data recorded during business transactions with users within a preset period (for example, one month) as specimen data, determine the key code corresponding to each piece of specimen data, and manually screen out from the specimen data the words that can be used to ensure the security of business processing, thereby determining the multiple words corresponding to a key code.
  • The words extracted from a piece of specimen data each correspond to a receiving time. The business personnel can obtain more words corresponding to a key code by extracting multiple pieces of specimen data corresponding to that key code, thereby generating the word set corresponding to the key code. It is worth noting that the embodiments of the present application do not merge words, so the word set contains a large number of repeated words, each corresponding to a receiving time.
  • Next, the correspondence between receiving time periods and the number of occurrences of each word is established. A receiving time period includes multiple receiving times, and the number of occurrences of a word is the number of times that word appears in the word set within one receiving time period. Because the word set contains many repeated words, each corresponding to a receiving time, the number of occurrences of a given word during a receiving time period can be counted, generating the correspondence between each receiving time period and the occurrence count of each word.
  • For example, during the period from January 1 to January 5 (a receiving time period), the word "losing money" occurs 10 times, the word "risk" occurs 8 times, the word "not recommended" occurs 9 times, the word "clear" occurs 20 times, and so on.
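The per-period counting described above can be sketched as follows; the word set is modeled as (word, receiving time) pairs, and the dates and words here are hypothetical illustrations, not values from the application:

```python
from collections import Counter
from datetime import date

def count_occurrences(word_set, period_start, period_end):
    """Count how often each word appears within one receiving time
    period [period_start, period_end] (inclusive)."""
    counts = Counter()
    for word, received_at in word_set:
        if period_start <= received_at <= period_end:
            counts[word] += 1
    return counts

# Hypothetical word set: repeated words, each paired with its receiving time.
word_set = [
    ("losing money", date(2019, 1, 2)),
    ("risk", date(2019, 1, 3)),
    ("losing money", date(2019, 1, 4)),
    ("risk", date(2019, 1, 8)),  # falls in the next receiving time period
]

counts = count_occurrences(word_set, date(2019, 1, 1), date(2019, 1, 5))
```

Repeating this for each receiving time period yields the per-word, per-period occurrence table that the regression step below is fitted to.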
  • The regression model is used to fit a regression equation that characterizes the correspondence between the number of occurrences of a word and the receiving time period, generating a regression equation for each word in the word set, where num represents the number of occurrences of the word, time represents the serial number of the receiving time period, pre1 and pre2 are the coefficients of the nonlinear regression equation, and e is the natural constant. The coefficients of the regression model can be obtained by existing nonlinear regression equation solving methods, so they are not detailed here.
  • The serial number of a receiving time period indicates its position from front to back. For example, if there are 5 receiving time periods in total, namely January 1 to January 5, January 6 to January 10, January 11 to January 15, January 16 to January 20, and January 21 to January 25, then the serial number of the receiving time period from January 1 to January 5 is 1, the serial number of the receiving time period from January 6 to January 10 is 2, and so on.
  • In this way, each word corresponds to a regression equation.
  • S1014 Based on the regression equation corresponding to each word in the word set, calculate the number of occurrences of each word within a preset number of receiving time periods after the current moment as the predicted number of occurrences corresponding to each word.
  • For example, if ten receiving time periods have already elapsed and the preset number is 5, the regression equation corresponding to each word is evaluated at the independent variables 11, 12, 13, 14, and 15, and the sum of the corresponding dependent variables (i.e., the sum of the occurrence counts) is used as the predicted number of occurrences corresponding to each word.
  • S1015 Select the words whose predicted number of occurrences is not less than a preset number threshold as the verification keywords corresponding to the user.
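The prediction and selection steps above can be sketched as follows. The published regression formula is rendered as an image and not reproduced in this text, so the sketch assumes an exponential model num = pre1 * e^(pre2 * time), fitted by least squares on the logarithm; the history counts and threshold are hypothetical:

```python
import math

def fit_exponential(times, counts):
    """Least-squares fit of the assumed model num = pre1 * e**(pre2 * time)
    on its linearized form log(num) = log(pre1) + pre2 * time."""
    n = len(times)
    logs = [math.log(c) for c in counts]
    mean_t = sum(times) / n
    mean_y = sum(logs) / n
    pre2 = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times, logs))
            / sum((t - mean_t) ** 2 for t in times))
    pre1 = math.exp(mean_y - pre2 * mean_t)
    return pre1, pre2

def predicted_occurrences(pre1, pre2, future_periods):
    """Sum the regression equation over the receiving time periods
    after the current moment (S1014)."""
    return sum(pre1 * math.exp(pre2 * t) for t in future_periods)

# Hypothetical history: occurrence counts of one word over periods 1..10.
times = list(range(1, 11))
counts = [3, 4, 4, 5, 7, 8, 10, 13, 16, 20]

pre1, pre2 = fit_exponential(times, counts)
total = predicted_occurrences(pre1, pre2, range(11, 16))  # periods 11..15

threshold = 50  # hypothetical preset number threshold (S1015)
is_keyword = total >= threshold
```

A word whose predicted occurrence sum clears the threshold is retained as a verification keyword; the rest are discarded.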
  • The user's real-time data is received and divided into a real-time video component and a real-time audio component, where the real-time video component includes face data.
  • The real-time data of the user in the embodiments of the present application is the video collected while the user communicates with the business personnel. This video can be divided into a real-time video component and a real-time audio component: the real-time video component contains only image information and no voice information, while the real-time audio component contains only voice information and no image information.
  • Because the real-time data is the user-side video collected during the communication between the user and the business personnel, the real-time video component contains the user's face data, and the real-time audio component contains the voice data of both the user and the business personnel.
  • The foregoing S103 includes:
  • S1031 Intercept a frame of image from the real-time video component as a target image.
  • The real-time video component is actually composed of multiple frames of images, and the embodiment of the present application selects one frame as the target image for analysis.
  • S1032 Divide the target image into multiple image regions, read pixel point data of each pixel in the image region, and number the image regions according to a preset sequence.
  • For example, the target image can be divided into 100 equal parts along the horizontal direction and 100 equal parts along the vertical direction, producing 10,000 image regions, each containing multiple pixels. The pixel point data may be the RGB value of a pixel, that is, the RGB value of one pixel can be represented by one pixel point datum.
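The region division of S1032 can be sketched as follows; the image size and the row-major numbering are assumptions for illustration, since the application only requires some preset numbering sequence:

```python
def divide_into_regions(width, height, grid=100):
    """Divide a width x height target image into grid x grid equal image
    regions, numbered 1..grid*grid in a preset (here row-major) order."""
    rw, rh = width // grid, height // grid
    regions = {}
    number = 1
    for row in range(grid):
        for col in range(grid):
            # Each region is recorded as its pixel bounding box
            # (left, top, right, bottom).
            regions[number] = (col * rw, row * rh, (col + 1) * rw, (row + 1) * rh)
            number += 1
    return regions

regions = divide_into_regions(1000, 1000)  # hypothetical 1000x1000 target image
```

With a 100 by 100 grid this yields exactly 10,000 numbered regions, matching the example in the text; the pixel data of each region would then be read from the corresponding bounding box.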
  • S1033 Simultaneously input the pixel data of a preset number of image areas with adjacent numbers into a preset VGG neural network model, and output the segmentation coefficient corresponding to the image area with the middle number, where the preset number is an odd number greater than 1.
  • The pixel data of each image area is not input into the preset VGG neural network model individually; because a face covers image areas continuously, computing isolated image areas could cause accidental errors. Therefore, in the embodiments of the present application, the pixel data of a preset number of image areas with adjacent numbers is input into the neural network model simultaneously (for example, the pixel data of three consecutive image areas is input at the same time), and only the segmentation coefficient corresponding to the image area with the middle number is output. The segmentation coefficient is used to distinguish image areas covered by the face image from image areas not covered by it.
  • Specifically, a preset number of image areas with adjacent numbers form a region group, and the pixel data of every pixel in each image region is organized and combined into a vector to produce the feature vector of the region group. During training, the input parameters are set to the pixel data of the preset number of adjacent image areas and the output parameter is set to the segmentation coefficient corresponding to the middle-numbered image region, so the neural network model trained in this way naturally outputs the segmentation coefficient corresponding to the image region with the middle number, as described above.
  • The training process of the VGG neural network includes: obtaining the training feature vectors of multiple training region groups and the training segmentation coefficients of the image regions at the center of those training region groups; then repeating the following steps until the adjusted cross-entropy loss function value of the VGG neural network is less than a preset threshold: use the training feature vector as the input of the VGG neural network and the training segmentation coefficient as its output, update the parameters of each fully connected layer of the VGG neural network by stochastic gradient descent, and calculate the cross-entropy loss function value of the adjusted VGG neural network. The VGG neural network whose cross-entropy loss function value is less than the preset threshold is output as the preset neural network model.
  • For example, the pixel data of the image regions numbered 1 to 3 is input into the neural network model, and the segmentation coefficient of the image region numbered 2 is output; the pixel data of the image regions numbered 2 to 4 is then input, and the segmentation coefficient of the image region numbered 3 is output; and so on, until the segmentation coefficient of the image region numbered 99 is output.
  • The segmentation coefficients of the image regions numbered 1 and 100 cannot be obtained through the above method, but the ultimate purpose of obtaining segmentation coefficients is to segment the face image from the target image, and the probability that face pixels cover the edges of the target image is extremely small. Therefore, although the embodiments of the present application cannot calculate the segmentation coefficients of the image areas with the largest and smallest numbers, this does not affect the segmentation of the face image in practical applications.
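The sliding-window evaluation of S1033 can be sketched as follows; the trained VGG model is not reproducible from the text, so it is stubbed out here with a trivial scoring function, and the region data is hypothetical:

```python
def segment_coefficients(region_data, window=3, model=None):
    """Slide a window of `window` adjacent region numbers (odd, > 1) and
    emit a segmentation coefficient for the middle number only.  `model`
    stands in for the preset VGG network; the stub scores a region group
    by the mean pixel value of its regions."""
    if model is None:
        model = lambda group: sum(sum(r) / len(r) for r in group) / len(group)
    half = window // 2
    numbers = sorted(region_data)
    coeffs = {}
    for i in range(half, len(numbers) - half):
        group = [region_data[numbers[j]] for j in range(i - half, i + half + 1)]
        coeffs[numbers[i]] = model(group)  # coefficient of the middle region
    return coeffs

# 100 hypothetical image regions, each a flat list of pixel values.
data = {n: [n, n, n] for n in range(1, 101)}
coeffs = segment_coefficients(data)
# Regions 2..99 receive coefficients; 1 and 100 have no complete window.
```

As in the text, the first and last region numbers never sit at the middle of a full window, so they receive no coefficient, which is harmless because face pixels rarely reach the image edges.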
  • S1034 Use pixels corresponding to the segmentation coefficients that are less than a preset coefficient threshold as face pixels, and generate face data according to pixel point data of all the face pixels.
  • S1035 Calculate the similarity between the matrix corresponding to the face data and the matrix corresponding to the user identity data by a distance formula, as the similarity between the face data included in the video component and the user identity data.
  • In the distance formula, S is the similarity, X_i is the i-th element of the matrix corresponding to the face data, Y_i is the i-th element of the matrix corresponding to the user identity data, and K is the number of elements included in the matrix corresponding to the face data and in the matrix corresponding to the user identity data.
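The published distance formula is rendered as an image and not reproduced in this text, so the following sketch assumes a common choice: the similarity is derived from the Euclidean distance between the two flattened K-element matrices and mapped into (0, 1]:

```python
import math

def similarity(face, identity):
    """Similarity S between the flattened face-data matrix X and the
    flattened identity-data matrix Y, each with K elements.  Assumed
    form: S = 1 / (1 + d), where d = sqrt(sum_i (X_i - Y_i)**2)."""
    assert len(face) == len(identity)  # both matrices contain K elements
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(face, identity)))
    return 1.0 / (1.0 + d)

s = similarity([0.2, 0.4, 0.6], [0.2, 0.4, 0.6])  # identical matrices: S = 1.0
```

Under this assumed form, S is 1.0 for identical matrices and decays toward 0 as the distance grows, so comparing S against the preset similarity threshold behaves as the method requires.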
  • Then, the real-time audio component within the preset time period is converted into text data through a speech-to-text conversion algorithm; the text data is segmented into words to generate a speech word set containing multiple words; and the number of verification keywords included in the speech word set is calculated as the number of verification keywords contained in the real-time audio component.
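The keyword counting can be sketched as follows; the speech-to-text step is omitted, word segmentation is simplified to substring counting, and the transcript, keyword list, and thresholds are hypothetical:

```python
def count_verification_keywords(text, keywords):
    """Count occurrences of the verification keywords in the text
    transcribed from the real-time audio component.  Word segmentation
    is simplified here to substring counting."""
    return sum(text.count(kw) for kw in keywords)

# Hypothetical transcript and keyword list.
transcript = "there is a risk of losing money; the risk has been made clear"
keywords = ["risk", "losing money", "not recommended", "clear"]

n = count_verification_keywords(transcript, keywords)  # 2 + 1 + 0 + 1 = 4

# Hypothetical thresholds illustrating the method's three-way decision.
first_threshold, second_threshold = 5, 2
if n > first_threshold:
    verdict = "passed"
elif n > second_threshold:
    verdict = "forward to asynchronous server"
else:
    verdict = "rejected"
```

A count above the first threshold passes verification directly; a count in between is handed to an asynchronous processing server for non-real-time verification, as described next.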
  • In the embodiments of the present application, when the number of keywords contained in the real-time audio component is insufficient, it cannot be directly determined whether the real-time data has passed verification, and other asynchronous processing servers must be used for further automatic or manual analysis. Since the embodiments of this application describe the real-time data verification method only from the main server's side, only how the main server selects an asynchronous processing server as the selected server and sends the real-time data to it is explained; how the selected server performs non-real-time verification of the real-time data is not covered.
  • The foregoing S107 includes:
  • S1071 Query the number of threads contained in each of the multiple asynchronous processing servers, and count the number of abnormal tasks received by each asynchronous processing server within a unit time period before the current moment.
  • S1072 Calculate sending parameters corresponding to each asynchronous processing server through a segmented formula.
  • In the segmented formula, K(i) represents the sending parameter corresponding to asynchronous processing server i, Z(i) represents the number of threads corresponding to asynchronous processing server i, and D(i) represents the number of abnormal tasks corresponding to asynchronous processing server i.
  • S1073 Calculate the sending ratio corresponding to each asynchronous processing server through a ratio calculation formula.
  • In the ratio calculation formula, Par_i represents the sending ratio corresponding to asynchronous processing server i, K(i) represents the sending parameter corresponding to asynchronous processing server i, and n is the number of asynchronous processing servers.
  • S1074 Select the asynchronous processing server corresponding to the highest sending ratio as the selected server.
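Steps S1071 to S1074 can be sketched as follows. The published segmented formula for the sending parameter K(i) is not reproduced in the text, so the sketch assumes a form that rewards thread count Z(i) and penalizes recent abnormal tasks D(i); the server names and numbers are hypothetical:

```python
def select_server(servers):
    """Pick the asynchronous processing server with the highest sending
    ratio Par_i = K(i) / sum_j K(j).  Assumed segmented formula:
    K(i) = Z(i) if D(i) == 0 else Z(i) / D(i)."""
    params = {name: (z if d == 0 else z / d) for name, (z, d) in servers.items()}
    total = sum(params.values())
    ratios = {name: k / total for name, k in params.items()}  # Par_i
    chosen = max(ratios, key=ratios.get)  # S1074: highest sending ratio wins
    return chosen, ratios

# Hypothetical fleet: name -> (thread count Z(i), recent abnormal tasks D(i)).
fleet = {"async-1": (16, 4), "async-2": (8, 0), "async-3": (32, 16)}
chosen, ratios = select_server(fleet)  # async-2: K = 8 vs 4 and 2
```

Normalizing the sending parameters into ratios means the choice depends only on relative capacity and reliability, so a smaller but error-free server can win over a larger one with many abnormal tasks.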
  • In summary, the user identity data and verification keywords are obtained through the login information, the user's real-time data is received and divided into real-time video and real-time audio components, and the similarity between the face data contained in the real-time video component and the user identity data is calculated to verify the user's identity. If the similarity is not less than the preset similarity threshold, the number of verification keywords contained in the real-time audio component within the preset time period is calculated to verify whether the real-time data is legal; if the number of verification keywords contained in the real-time audio component is greater than the first number threshold, it is determined that the real-time data has passed verification, so as to perform intelligent analysis on the real-time data and improve the security of remote business processing.
  • FIG. 5 shows a structural block diagram of the real-time data verification device provided in the embodiments of the present application. For ease of description, only the parts related to the embodiments of the present application are shown.
  • The device includes:
  • The parsing module 501 is configured to, after receiving the user's login information, determine the user identity data corresponding to the login information, parse the key code contained in the login information, and determine multiple verification keywords corresponding to the user based on the key code;
  • The decomposition module 502 is configured to receive the user's real-time data and divide it into a real-time video component and a real-time audio component, where the real-time video component includes face data;
  • The calculation module 503 is configured to calculate the similarity between the face data contained in the real-time video component and the user identity data and, if the similarity between the face data and the user identity data is not less than a preset similarity threshold, calculate the number of verification keywords included in the real-time audio component within a preset time period;
  • The first execution module 504 is configured to, if the number of verification keywords contained in the real-time audio component is not greater than the first number threshold but greater than the second number threshold, select one of multiple asynchronous processing servers as the selected server and send the real-time data to the selected server to perform non-real-time verification on the real-time data through the selected server;
  • The second execution module 505 is configured to determine that the real-time data passes verification if the number of verification keywords contained in the real-time audio component is greater than the first number threshold.
  • The parsing module is specifically configured to: establish the correspondence between receiving time periods and the number of occurrences of each word, where a receiving time period includes multiple receiving times and the number of occurrences of a word is the number of times the word appears in the word set within one receiving time period; and select the words whose predicted number of occurrences is not less than a preset threshold as the verification keywords corresponding to the user.
  • The calculation module is specifically configured to calculate the number of verification keywords contained in the real-time audio component within the preset time period by: converting the real-time audio component within the preset time period into text data through a speech-to-text conversion algorithm; segmenting the text data into words to generate a speech word set containing multiple words; and calculating the number of verification keywords included in the speech word set as the number of verification keywords contained in the real-time audio component.
  • The selection of one of the multiple asynchronous processing servers as the selected server is performed in the manner described above in steps S1071 to S1074.
  • Fig. 6 is a schematic diagram of a server provided by an embodiment of the present application.
  • The server 6 of this embodiment includes: a processor 60, a memory 61, and computer-readable instructions 62 stored in the memory 61 and executable on the processor 60, such as a real-time data verification program.
  • When the processor 60 executes the computer-readable instructions 62, the steps in the foregoing embodiments of the real-time data verification method are implemented, such as steps S101 to S108 shown in FIG. 1; alternatively, the functions of the modules/units in the foregoing device embodiments are implemented, such as the functions of units 501 to 505 shown in FIG. 5.
  • The computer-readable instructions 62 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 61 and executed by the processor 60 to complete this application.
  • The one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 62 in the server 6.
  • The server 6 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The server may include, but is not limited to, the processor 60 and the memory 61.
  • FIG. 6 is only an example of the server 6 and does not constitute a limitation on the server 6; it may include more or fewer components than shown, combine certain components, or use different components. For example, the server may also include input and output devices, network access devices, buses, and the like.
  • The so-called processor 60 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The memory 61 may be an internal storage unit of the server 6, such as a hard disk or memory of the server 6. The memory 61 may also be an external storage device of the server 6, for example, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the server 6. Further, the memory 61 may include both an internal storage unit of the server 6 and an external storage device. The memory 61 is used to store the computer-readable instructions and other programs and data required by the server, and may also be used to temporarily store data that has been output or will be output.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, this application implements all or part of the processes in the above-mentioned embodiment methods, which can also be completed by instructing relevant hardware through computer-readable instructions; the computer-readable instructions can be stored in a computer-readable storage medium.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Collating Specific Patterns (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention is applicable to the technical field of data processing and relates to a real-time data verification method and device, a server, and a medium. The method comprises: obtaining user identity data and verification keywords via login information; receiving real-time data of the user; splitting the real-time data into real-time video data and real-time audio data; and computing the similarity between face data contained in the real-time video component and the user identity data in order to verify the user's identity. If the similarity is greater than or equal to a preset similarity threshold, the number of verification keywords contained in the real-time audio component within a preset duration is counted in order to verify whether the real-time data is legitimate; if the number of verification keywords contained in the real-time audio component exceeds a first quantity threshold, the real-time data is determined to have passed verification, so that intelligent analysis can be performed on the real-time data and the security of remote service management is improved.
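The two-stage check described in the abstract can be sketched as follows. This is a minimal illustration only: the helper names, the default threshold values, and the assumption that a face-similarity score and an audio transcript are already available (rather than raw video and audio streams) are all illustrative choices, not details taken from the patent.

```python
def keyword_hits(transcript: str, keywords: list[str]) -> int:
    """Count total occurrences of the verification keywords in the transcript."""
    return sum(transcript.count(kw) for kw in keywords)


def verify_real_time_data(face_similarity: float,
                          transcript: str,
                          keywords: list[str],
                          similarity_threshold: float = 0.8,
                          count_threshold: int = 1) -> bool:
    """Two-stage verification sketch (illustrative thresholds).

    face_similarity is assumed to be precomputed from the real-time video
    component and the stored user identity data; transcript is assumed to
    cover the preset duration of the real-time audio component.
    """
    # Stage 1: the face similarity must reach the preset similarity threshold.
    if face_similarity < similarity_threshold:
        return False
    # Stage 2: the keyword count must exceed the first quantity threshold.
    return keyword_hits(transcript, keywords) > count_threshold
```

A call such as `verify_real_time_data(0.92, "please confirm the terms, then confirm receipt", ["confirm"])` passes both stages, while any similarity below the threshold fails immediately without the audio being examined.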
PCT/CN2019/103300 2019-05-21 2019-08-29 Real-time data verification method, device, server and medium WO2020232894A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910424123.0A CN110266645A (zh) 2019-05-21 2019-05-21 Real-time data verification method, device, server and medium
CN201910424123.0 2019-05-21

Publications (1)

Publication Number Publication Date
WO2020232894A1 true WO2020232894A1 (fr) 2020-11-26

Family

ID=67914974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103300 WO2020232894A1 (fr) 2019-05-21 2019-08-29 Real-time data verification method, device, server and medium

Country Status (2)

Country Link
CN (1) CN110266645A (fr)
WO (1) WO2020232894A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036370B (zh) * 2020-09-22 2023-05-12 济南博观智能科技有限公司 Face feature comparison method, system, device and computer storage medium
US11688106B2 (en) * 2021-03-29 2023-06-27 International Business Machines Corporation Graphical adjustment recommendations for vocalization
CN117726307B (zh) * 2024-02-18 2024-04-30 成都汇智捷成科技有限公司 Data governance method based on a business middle platform

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580650A (zh) * 2014-12-25 2015-04-29 广东欧珀移动通信有限公司 Method for prompting fraudulent calls and communication terminal
WO2017194978A1 (fr) * 2016-05-13 2017-11-16 Lucozade Ribena Suntory Limited Method of controlling a state of a display of a device
CN109389279A (zh) * 2018-08-17 2019-02-26 深圳壹账通智能科技有限公司 Compliance determination method, apparatus, device and storage medium for insurance sales
CN109729383A (zh) * 2019-01-04 2019-05-07 深圳壹账通智能科技有限公司 Dual-recording video quality detection method, apparatus, computer device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9967619B2 (en) * 2014-12-01 2018-05-08 Google Llc System and method for associating search queries with remote content display
CN114464186A (zh) * 2016-07-28 2022-05-10 北京小米移动软件有限公司 Keyword determination method and device
CN108512869B (zh) * 2017-02-24 2020-02-11 北京数安鑫云信息技术有限公司 Method and system for processing concurrent data in an asynchronous manner
CN109376344A (zh) * 2018-09-03 2019-02-22 平安普惠企业管理有限公司 Form generation method and terminal device
CN109190775A (zh) * 2018-09-05 2019-01-11 南方电网科学研究院有限责任公司 Intelligent operation and maintenance management device and operation and maintenance management method
CN109377500B (zh) * 2018-09-18 2023-07-25 平安科技(深圳)有限公司 Neural network-based image segmentation method and terminal device


Also Published As

Publication number Publication date
CN110266645A (zh) 2019-09-20

Similar Documents

Publication Publication Date Title
US11475143B2 (en) Sensitive data classification
US20210257066A1 (en) Machine learning based medical data classification method, computer device, and non-transitory computer-readable storage medium
WO2019119505A1 Face recognition method and apparatus, computer device, and storage medium
CN110162593A Search result processing and similarity model training method and device
CN111680159B Data processing method, apparatus and electronic device
WO2020232894A1 Real-time data verification method, device, server and medium
CN111797214A FAQ database-based question screening method, apparatus, computer device and medium
CN110569350B Legal provision recommendation method, device and storage medium
CN111695338A Artificial intelligence-based interview content refinement method, apparatus, device and medium
US20230032728A1 (en) Method and apparatus for recognizing multimedia content
US11734360B2 (en) Methods and systems for facilitating classification of documents
CN110489747A Image processing method, apparatus, storage medium and electronic device
Alshehri et al. Iterative keystroke continuous authentication: A time series based approach
US20230237252A1 (en) Digital posting match recommendation apparatus and methods
CN112468658A Voice quality detection method, apparatus, computer device and storage medium
CN111488501A Cloud platform-based e-commerce statistics system
US20220156489A1 (en) Machine learning techniques for identifying logical sections in unstructured data
CN111460139B Engineering supervision knowledge service system and method based on intelligent management
WO2020253353A1 Method for generating resource acquisition qualification for a predefined user and related device
CN112364136B Keyword generation method, apparatus, device and storage medium
CN115146589B Text processing method, apparatus, medium and electronic device
CN109961801A Intelligent service evaluation method, computer-readable storage medium and terminal device
CN114724072A Intelligent question recommendation method, apparatus, device and storage medium
Li et al. A deep learning approach of financial distress recognition combining text
CN112733645A Handwritten signature verification method, apparatus, computer device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19929289

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19929289

Country of ref document: EP

Kind code of ref document: A1