WO2022100452A1 - Evaluation method, apparatus and device for OCR system, and readable storage medium - Google Patents
Evaluation method, apparatus and device for OCR system, and readable storage medium
- Publication number
- WO2022100452A1 (PCT/CN2021/127185)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- ocr system
- text
- image
- training
- deep learning
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5846—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/55—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
Definitions
- the present application relates to the technical field of optical character recognition, and in particular, to an evaluation method, apparatus, device and readable storage medium of an OCR system.
- OCR (Optical Character Recognition) technology can convert the printed text in an image into a text format that can be processed by a computer.
- Data entry and verification based on OCR are widely used in data comparison and similar scenarios, and have become key aspects of informatization and digitization applications across the industries of the national economy.
- With the continuous development of big data and deep learning technology, OCR technology has made breakthrough progress and is now widely applied to the recognition of scanned copies of printed documents.
- At present, the evaluation of the recognition accuracy of an OCR system usually involves two stages: text detection and text recognition. In the prior art, text detection is typically scored by matching detection boxes against annotation boxes at an IOU threshold of 0.5, while text recognition is scored by character accuracy or field accuracy.
- In practice, text recognition depends on the text detection and localization results, and a higher detection score can sometimes lead to a drop in the recognition score, so existing OCR evaluation techniques struggle to reflect the overall performance of an OCR system.
- The main purpose of the present application is to provide an evaluation method, apparatus and device for an OCR system and a readable storage medium, aiming to solve the technical problem that existing OCR system evaluation techniques cannot reflect the overall performance of an OCR system.
- A first aspect of the embodiments of the present application provides a method for evaluating an OCR system, the method comprising the following steps:
- acquiring a training image and inputting the training image into an initial OCR system so as to train the initial OCR system, and obtaining the OCR system corresponding to the trained initial OCR system;
- inputting an image to be recognized into the OCR system so as to determine, based on the OCR system, the text recognition result corresponding to the image to be recognized;
- determining, based on the text recognition result and the actual annotation data corresponding to the image to be recognized, the text recall rate corresponding to the OCR system and the text recognition precision rate corresponding to the OCR system, and calculating the evaluation index of the OCR system based on the text recall rate and the text recognition precision rate, so as to evaluate the performance of the OCR system based on the evaluation index.
- a second aspect of the embodiments of the present application provides an evaluation device for an OCR system, and the evaluation device for the OCR system includes:
- a training module is used to obtain a training image, and input the training image into the initial OCR system, so as to train the initial OCR system, and obtain the corresponding OCR system after completing the training of the initial OCR system;
- a recognition module for inputting the image to be recognized into the OCR system, so as to determine the text recognition result corresponding to the image to be recognized based on the OCR system;
- an evaluation module, configured to determine, based on the text recognition result and the actual annotation data corresponding to the image to be recognized, the text recall rate corresponding to the OCR system and the text recognition precision rate corresponding to the OCR system, and to calculate the evaluation index of the OCR system based on the text recall rate and the text recognition precision rate, so as to evaluate the performance of the OCR system based on the evaluation index.
- A third aspect of the embodiments of the present application provides an OCR system evaluation device, where the OCR system evaluation device includes a memory, a processor, and an evaluation program of the OCR system that is stored in the memory and executable on the processor; when the evaluation program of the OCR system is executed by the processor, the steps of the OCR system evaluation method described above are implemented.
- A fourth aspect of the embodiments of the present application provides a readable storage medium on which an evaluation program of an OCR system is stored; when the evaluation program of the OCR system is executed by a processor, the steps of the OCR system evaluation method described above are implemented.
- the evaluation method of the OCR system of the present application can effectively avoid misjudgment, and make the evaluation of the model more objective and fair.
- FIG. 1 is a schematic structural diagram of an evaluation device of the OCR system of the hardware operating environment involved in the solution of the embodiment of the present application;
- FIG. 2 is a schematic flowchart of the first embodiment of the evaluation method of the OCR system of the application
- FIG. 3 is a schematic flowchart of the second embodiment of the evaluation method of the OCR system of the present application.
- FIG. 1 is a schematic structural diagram of a terminal of a hardware operating environment involved in the solution of the embodiment of the present application.
- The evaluation device of the OCR system in the embodiments of the present application may be a PC, or a portable terminal device with a display function such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a portable computer.
- the evaluation device of the OCR system may include: a processor 1001 , such as a CPU, a network interface 1004 , a user interface 1003 , a memory 1005 , and a communication bus 1002 .
- the communication bus 1002 is used to realize the connection and communication between these components.
- the user interface 1003 may include a display screen (Display), an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
- the network interface 1004 may include a standard wired interface and a wireless interface (eg, a WI-FI interface).
- the memory 1005 may be high-speed RAM memory, or may be non-volatile memory, such as disk memory.
- the memory 1005 may also be a storage device independent of the aforementioned processor 1001 .
- the evaluation equipment of the OCR system may further include a camera, an RF (Radio Frequency, radio frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like.
- sensors such as light sensors, motion sensors, and other sensors.
- The light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display screen according to the ambient light, and the proximity sensor can turn off the display screen and/or the backlight when the evaluation device of the OCR system is moved close to the ear.
- As one kind of motion sensor, the gravitational acceleration sensor can detect the magnitude of acceleration in all directions (generally along three axes) and, when stationary, the magnitude and direction of gravity; it can be used for applications that recognize the attitude of the mobile terminal (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection).
- The structure of the evaluation device of the OCR system shown in FIG. 1 does not constitute a limitation on the evaluation device of the OCR system, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different arrangement of components.
- the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module and an evaluation program of the OCR system.
- the network interface 1004 is mainly used to connect to the background server and perform data communication with the background server;
- The user interface 1003 is mainly used to connect to the client (user side) and perform data communication with the client; the processor 1001 can be used to call the evaluation program of the OCR system stored in the memory 1005.
- In this embodiment, the evaluation device of the OCR system includes: a memory 1005, a processor 1001, and an evaluation program of the OCR system that is stored in the memory 1005 and executable on the processor 1001; when the processor 1001 calls the evaluation program of the OCR system stored in the memory 1005, the following operations are performed:
- acquiring a training image and inputting the training image into an initial OCR system so as to train the initial OCR system, and obtaining the OCR system corresponding to the trained initial OCR system;
- inputting an image to be recognized into the OCR system so as to determine, based on the OCR system, the text recognition result corresponding to the image to be recognized;
- determining, based on the text recognition result and the actual annotation data corresponding to the image to be recognized, the text recall rate corresponding to the OCR system and the text recognition precision rate corresponding to the OCR system, and calculating the evaluation index of the OCR system based on the text recall rate and the text recognition precision rate, so as to evaluate the performance of the OCR system based on the evaluation index.
- Further, the processor 1001 may call the evaluation program of the OCR system stored in the memory 1005 to further perform the following operations:
- inputting the training image into a first deep learning model to obtain the text detection model corresponding to the trained first deep learning model;
- training a second deep learning model based on the text detection model to obtain the text recognition model corresponding to the trained second deep learning model.
- Further, the processor 1001 may call the evaluation program of the OCR system stored in the memory 1005 to further perform the following operations:
- annotating the training image to determine the preset annotation positions of the text boxes in the training image, inputting the training image containing the preset annotation positions into the first deep learning model, and determining the learned annotation positions corresponding to the training image;
- determining first gradient information corresponding to the first deep learning model based on the preset annotation positions and the learned annotation positions;
- optimizing the first deep learning model based on the first gradient information to determine the text detection model, where the text detection model is the optimized first deep learning model.
- Further, the processor 1001 may call the evaluation program of the OCR system stored in the memory 1005 to further perform the following operations:
- acquiring text-strip images and inputting the text-strip images into the text detection model to obtain text-strip images annotated with the text positions;
- inputting the text-strip images annotated with the text positions into a second deep learning model to obtain the learned text content corresponding to the text-strip images;
- determining second gradient information corresponding to the second deep learning model based on the learned text content, and optimizing the second deep learning model based on the second gradient information to determine the text recognition model, where the text recognition model is the optimized second deep learning model.
- Further, the processor 1001 may call the evaluation program of the OCR system stored in the memory 1005 to further perform the following operations:
- when an image storage request is received, acquiring the image to be stored corresponding to the image storage request;
- when the performance of the OCR system meets the standard, inputting the image to be stored into the OCR system and determining, based on the OCR system, whether ID card information exists in the image to be stored;
- if no ID card information exists in the image to be stored, executing the image storage operation corresponding to the image storage request.
- Further, the processor 1001 may call the evaluation program of the OCR system stored in the memory 1005 to further perform the following operations:
- if ID card information exists in the image to be stored, querying the first target account associated with the ID card information and matching the first target account against the second target account corresponding to the local device;
- if the first target account matches the second target account, executing the image storage operation corresponding to the image storage request;
- if the first target account does not match the second target account, sending verification information to the first target account, and executing the image storage operation corresponding to the image storage request when feedback information corresponding to the verification information is received.
- Further, the processor 1001 may call the evaluation program of the OCR system stored in the memory 1005 to further perform the following operation:
- if the evaluation index of the OCR system reaches a preset threshold, determining that the performance of the OCR system meets the standard.
- FIG. 2 is a schematic flowchart of the first embodiment of the evaluation method for the OCR system of the present application.
- Step S10 obtaining a training image, and inputting the training image into the initial OCR system, so as to train the initial OCR system, and obtain an OCR system corresponding to the initial OCR system after the training is completed;
- the evaluation method of the OCR system proposed in this application is applied to the OCR system.
- the OCR system is an optical character recognition system, which can recognize the text in the image, and extract the text in the image to convert the text in the image into a computer-processable format.
- the OCR system includes a text detection model and a text recognition model.
- the text detection model and the text recognition model are both deep learning models, and the deep learning model may be a network model such as a convolutional neural network or a recurrent neural network, and the network type to which the deep learning model belongs is not limited in this embodiment.
- The text detection model is used to locate the positions of the text in the image, and the text recognition model is used to identify the text content at each detected text position, i.e., the text content contained at each text position.
- In the process of training the OCR system, the initial OCR system and the training images are first obtained, and the training images are then input into the initial OCR system so as to train it on the training images; after the training of the initial OCR system is completed, the corresponding OCR system is obtained.
- the initial OCR system is the initial state before training the OCR system.
- the initial OCR system includes a first deep learning model and a second deep learning model.
- The first deep learning model is used for training the text detection model, and the second deep learning model is used for training the text recognition model; that is, the first deep learning model is the initial text detection model and the second deep learning model is the initial text recognition model.
- Step S20 inputting the to-be-recognized image into the OCR system, to determine the text recognition result corresponding to the to-be-recognized image based on the OCR system;
- In this embodiment, after the initial OCR system has been trained, the resulting OCR system includes a text detection model and a text recognition model, and the evaluation of the OCR system is then carried out. First, the image to be recognized is acquired and input into the OCR system, so that the text recognition result corresponding to the image to be recognized is determined based on the text detection model and the text recognition model of the trained OCR system.
- Specifically, the image to be recognized is input into the text detection model, and an intermediate recognition result corresponding to the image to be recognized is determined based on the first model parameters of the text detection model, where the intermediate recognition result is the set of text boxes obtained by recognizing the image to be recognized; after the intermediate recognition result is obtained, the image to be recognized containing the intermediate recognition result is input into the text recognition model to obtain the text recognition result.
- To improve the accuracy of the evaluation, the image to be recognized is an image whose text content differs from that of the training images, and it is used to evaluate the OCR system.
- It should be noted that the image to be recognized is input into the text detection model so that the text detection model determines the text positions in the image; that is, the text boxes determined by the text detection model indicate the text positions in the image to be recognized. The image to be recognized, containing the intermediate recognition result, is then input into the trained text recognition model, so that the text recognition model identifies the text content in the image based on the intermediate recognition result and the image information; the text recognition result is thus the text content obtained after the image to be recognized has been recognized by the text recognition model.
- Step S30: determining, based on the text recognition result and the actual annotation data corresponding to the image to be recognized, the text recall rate corresponding to the OCR system and the text recognition precision rate corresponding to the OCR system, and calculating the evaluation index of the OCR system based on the text recall rate and the text recognition precision rate, so as to evaluate the performance of the OCR system based on the evaluation index.
- In this embodiment, the text recall rate is the ratio of the number of correctly recognized characters to the actual number of characters, and the text recognition precision rate is the ratio of the number of correct characters in the text recognition result to the total number of characters in the text recognition result. Writing recall for the text recall rate and precision for the text recognition precision rate, the two are calculated as recall = N_gp / N_gt and precision = N_pp / N_pred, where N_gt denotes the total number of characters in the annotated answer of the image to be recognized, N_gp denotes the number of characters in the annotated answer that were correctly recognized, N_pred denotes the total number of characters in the text recognition result of the image to be recognized, and N_pp denotes the number of correct characters in the text recognition result of the image to be recognized.
- After the text recall rate and the text recognition precision rate are obtained, the OCR system is evaluated: the evaluation index of the OCR system is calculated from the recall and precision measured on the image to be recognized, for subsequent evaluation of the performance of the OCR system based on the evaluation index. This scheme uses the f1 score as the evaluation index of the OCR system.
- The evaluation index f1 is calculated as the harmonic mean of precision and recall: f1 = 2 × precision × recall / (precision + recall).
- Further, if the evaluation index of the OCR system reaches a preset threshold, the performance of the OCR system meets the standard.
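- To make the calculation concrete, the following minimal Python sketch computes recall, precision and the f1 evaluation index from the four character counts defined above; the function name, the example counts and the 0.9 threshold are illustrative assumptions, since the present application only states that a preset threshold is used:

```python
def evaluate_ocr(n_gt: int, n_gp: int, n_pred: int, n_pp: int, threshold: float = 0.9) -> dict:
    """Compute recall, precision and the f1 evaluation index from character counts.

    n_gt:   total number of characters in the annotated answer
    n_gp:   annotated characters that were correctly recognized
    n_pred: total number of characters in the text recognition result
    n_pp:   correct characters in the text recognition result
    """
    recall = n_gp / n_gt if n_gt else 0.0
    precision = n_pp / n_pred if n_pred else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"recall": recall, "precision": precision, "f1": f1, "meets_standard": f1 >= threshold}

# Example: 100 annotated characters, 92 of them correctly recognized;
# 105 characters in the recognition result, of which 92 are correct.
print(evaluate_ocr(n_gt=100, n_gp=92, n_pred=105, n_pp=92))
```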
- The evaluation method of the OCR system proposed in this embodiment obtains a training image and inputs it into the initial OCR system so as to train the initial OCR system, and obtains the OCR system corresponding to the trained initial OCR system; it then inputs the image to be recognized into the OCR system to determine, based on the OCR system, the text recognition result corresponding to the image to be recognized; finally, based on the text recognition result and the actual annotation data corresponding to the image to be recognized, it determines the text recall rate and the text recognition precision rate corresponding to the OCR system and calculates the evaluation index of the OCR system from them, so as to evaluate the performance of the OCR system based on the evaluation index. By calculating the text recall rate and the text recognition precision rate of the OCR system and deriving a single evaluation index from them, the OCR system is evaluated as a whole, which solves the problem in the prior art that evaluating text detection and text recognition separately and independently yields evaluation indexes that cannot objectively reflect the overall performance of the OCR system.
- Further, the evaluation method of the OCR system of the present application uses a single index to evaluate the quality and performance of an OCR system, which can help users select better OCR services and promotes the development of informatization and digitization in various industries.
- Moreover, because text boxes in an image may be broken or stuck together, the answer boxes and detection boxes may be matched one-to-many, many-to-one or many-to-many; the IOU-based evaluation methods in the prior art misjudge such complex cases, whereas the evaluation method of the OCR system of the present application can effectively avoid such misjudgment and makes the evaluation of the model more objective and fair.
- Based on the first embodiment, a second embodiment of the evaluation method of the OCR system of the present application is proposed; referring to FIG. 3, in this embodiment, step S10 includes:
- Step S11 inputting the training image into a first deep learning model to obtain a text detection model corresponding to the first deep learning model after training is completed;
- Step S12 training a second deep learning model based on the text detection model, to obtain the text recognition model corresponding to the second deep learning model after the training is completed.
- In this embodiment, the first deep learning model in the initial OCR system is trained first; after the training of the first deep learning model is completed, the text detection model is obtained, and the text detection model and the second deep learning model are then trained jointly. The OCR system includes the text detection model and the text recognition model.
- Specifically, the training image is first input into the first deep learning model of the initial OCR system for training, and the text detection model is obtained once the first deep learning model has been trained; the text detection model is then trained together with the second deep learning model, and the text recognition model is obtained once the second deep learning model has been trained.
- The condition for completing the training of the first deep learning model or the second deep learning model may be that the number of training steps reaches the maximum number of iterations, or that the gradient used by the gradient descent method reaches the minimum gradient value.
- the step of inputting the training image into the first deep learning model to obtain a text detection model corresponding to the first deep learning model after training includes:
- annotating the training image to determine the preset annotation positions of the text boxes in the training image, inputting the training image containing the preset annotation positions into the first deep learning model, and determining the learned annotation positions corresponding to the training image;
- determining first gradient information corresponding to the first deep learning model based on the preset annotation positions and the learned annotation positions;
- optimizing the first deep learning model based on the first gradient information to determine the text detection model, where the text detection model is the optimized first deep learning model.
- In this embodiment, the process of training the text detection model is as follows: the training image is first annotated and the text boxes in the training image are marked out, so as to determine the preset annotation positions of the text boxes in the training image; the training image containing the preset annotation positions is then input into the first deep learning model for training and learning, and the first deep learning model outputs the learned annotation positions corresponding to the training image.
- During the training of the text detection model, the first deep learning model is optimized using the gradient descent method: after the learned annotation positions output by the first deep learning model are obtained, the first model parameters of the model are optimized based on the first gradient information. This optimization based on the first gradient information is repeated until the first gradient information satisfies a first preset condition, at which point the training of the first deep learning model is complete and the text detection model is obtained.
- the first preset condition may be that the first gradient information reaches the first minimum gradient value, and the first minimum gradient value may be set as required.
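- As a concrete illustration of this gradient-descent training loop, the following PyTorch-style sketch trains the first deep learning model until the gradient norm falls below a minimum value or a maximum number of iterations is reached; the model, data loader, loss function, learning rate and stopping threshold are all illustrative assumptions rather than details specified in the present application:

```python
import torch

def train_detection_model(model: torch.nn.Module,
                          loader,            # yields (images, preset_annotation_positions)
                          loss_fn,           # e.g. a box-regression loss
                          lr: float = 1e-3,
                          min_grad_norm: float = 1e-4,
                          max_steps: int = 100_000) -> torch.nn.Module:
    """Gradient-descent training of the first deep learning model (text detection).

    Stops when the gradient norm drops below `min_grad_norm` (standing in for the
    'first preset condition') or when the maximum number of iterations is reached."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    step = 0
    for images, preset_positions in loader:
        optimizer.zero_grad()
        learned_positions = model(images)                 # learned annotation positions
        loss = loss_fn(learned_positions, preset_positions)
        loss.backward()                                   # first gradient information
        grad_norm = float(torch.nn.utils.clip_grad_norm_(model.parameters(), float("inf")))
        optimizer.step()
        step += 1
        if grad_norm < min_grad_norm or step >= max_steps:
            break                                         # training of the detection model is complete
    return model                                          # the optimized model is the text detection model
```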
- the step of training a second deep learning model based on the text detection model, and obtaining the text recognition model corresponding to the second deep learning model after training includes:
- acquiring text-strip images and inputting the text-strip images into the text detection model to obtain text-strip images annotated with the text positions;
- inputting the text-strip images annotated with the text positions into a second deep learning model to obtain the learned text content corresponding to the text-strip images;
- determining second gradient information corresponding to the second deep learning model based on the learned text content, and optimizing the second deep learning model based on the second gradient information to determine the text recognition model, where the text recognition model is the optimized second deep learning model.
- In this embodiment, the process of training the text recognition model is as follows: a large number of single text-strip images are collected and their preset text content is determined; the text-strip images are input into the already optimized text detection model, which outputs text-strip images annotated with the detected text positions. The annotated text-strip images are then input into the second deep learning model for training, and the second deep learning model outputs the text content corresponding to each text-strip image, i.e., the recognized text content in the image. The second deep learning model is then optimized with the gradient descent method based on the second gradient information, until the second gradient information corresponding to the second deep learning model satisfies a second preset condition; at that point the optimization of the second deep learning model is complete and the text recognition model is finally obtained.
- the second preset condition may be that the second gradient information reaches a second minimum gradient value, and the second minimum gradient value may be set as required.
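- The preparation of training samples for the second deep learning model described above can be sketched as follows; the data structures and the predict() call are illustrative assumptions, the point being only that the already optimized detection model annotates the text position in each text-strip image before the strip is paired with its preset text content:

```python
def build_recognition_training_data(text_strip_images, detection_model):
    """Prepare (annotated strip, preset text) training pairs for the second deep learning model.

    `text_strip_images` yields (strip_image, preset_text) pairs for single text strips;
    the trained text detection model marks the text position inside each strip."""
    samples = []
    for strip_image, preset_text in text_strip_images:
        text_position = detection_model.predict(strip_image)   # detected text position in the strip
        samples.append(((strip_image, text_position), preset_text))
    return samples
```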
- Further, after the step of calculating the evaluation index of the OCR system based on the text recall rate and the text recognition precision rate so as to evaluate the performance of the OCR system based on the evaluation index, the method further includes:
- when an image storage request is received, acquiring the image to be stored corresponding to the image storage request;
- when the performance of the OCR system meets the standard, inputting the image to be stored into the OCR system and determining, based on the OCR system, whether ID card information exists in the image to be stored;
- if no ID card information exists in the image to be stored, executing the image storage operation corresponding to the image storage request.
- The ID card information includes the client's ID card number, address, gender, place of origin, and the like. It should be noted that, when using certain terminal devices or platform systems, clients sometimes need to upload their ID card information, so these terminal devices or platform systems can obtain and store the client's ID card information when it is uploaded; this creates an opportunity for the client's ID card information to be stolen, leading to leakage of the client's personal information and privacy. There is therefore an urgent need to protect the client's private data.
- In this embodiment, when the terminal receives an image storage request to store a client's image, it obtains the image to be stored corresponding to the image storage request and can perform an image recognition operation on it based on the trained OCR system, so as to recognize the image that is about to be stored and determine whether it contains the client's private data. Therefore, whenever the terminal initiates an image storage request for any image, the image to be stored corresponding to the request is obtained and recognized in order to determine whether it contains ID card information; in this way the terminal's current image storage operations can be monitored in real time to detect whether a storage operation is suspected of leaking the client's private data. If the OCR system recognizes that the image to be stored does not contain ID card information, the image storage operation corresponding to the image storage request is executed.
- Further, after the step of determining, based on the OCR system, whether ID card information exists in the image to be stored, the method further includes:
- if ID card information exists in the image to be stored, querying the first target account associated with the ID card information and matching the first target account against the second target account corresponding to the local device;
- if the first target account matches the second target account, executing the image storage operation corresponding to the image storage request;
- if the first target account does not match the second target account, sending verification information to the first target account, and executing the image storage operation corresponding to the image storage request when feedback information corresponding to the verification information is received.
- The target account may be the subscriber identity module (SIM) card of a digital cellular phone, the system account of a digital cellular phone, the computer system account of a PC, or the like; the type of target account is not limited in this embodiment. Further, the account information of the target account may include an international mobile subscriber identity (IMSI), a personal account number, and the like.
- In this embodiment, if the OCR system recognizes that the image to be stored contains ID card information, it means that the current image storage operation is suspected of leaking the client's private data, so the image storage operation is controlled and prevented.
- Specifically, when ID card information is detected in the image to be stored, the ID card number in the ID card information is obtained based on the OCR system, and the first target account associated with the ID card information is determined from the ID card number, for example the mobile phone number associated with the ID card number; then the second target account bound to the local device (the client terminal) is obtained, for example, when the local device is a mobile phone, the phone number of the SIM card on the device. After that, the first target account and the second target account are matched.
- When the first target account matches the second target account, it indicates that the local terminal is a secure device held by the client, and the local terminal is allowed to perform the image storage operation corresponding to the image storage request.
- Conversely, if the first target account and the second target account do not match, it means that the local terminal is not the client's device and is an unsafe device; in this case the local terminal is not allowed to perform the image storage operation corresponding to the image storage request directly. Instead, verification information is sent to the first target account, and the image storage operation corresponding to the image storage request is performed only when feedback information from the first target account is received.
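- The privacy-protection flow described above can be summarized in the following Python sketch; every callable passed in is an illustrative placeholder, since the present application does not define these interfaces:

```python
from typing import Callable, Optional

def handle_image_storage_request(
    image: bytes,
    extract_id_number: Callable[[bytes], Optional[str]],  # OCR system: ID card number found, or None
    lookup_account: Callable[[str], str],                  # ID card number -> first target account
    local_account: str,                                    # second target account bound to this device
    request_confirmation: Callable[[str], bool],           # send verification info, return feedback
    store_image: Callable[[bytes], None],
) -> bool:
    """Decide whether an image storage request may be executed. Returns True if stored."""
    id_number = extract_id_number(image)
    if id_number is None:                    # no ID card information: store directly
        store_image(image)
        return True
    first_account = lookup_account(id_number)
    if first_account == local_account:       # local terminal is the client's own, secure device
        store_image(image)
        return True
    if request_confirmation(first_account):  # otherwise require verification feedback
        store_image(image)
        return True
    return False                             # block the suspicious storage operation
```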
- The evaluation method of the OCR system proposed in this embodiment inputs the training image into the first deep learning model to obtain the text detection model corresponding to the trained first deep learning model, and then trains the second deep learning model based on the text detection model to obtain the text recognition model corresponding to the trained second deep learning model.
- In this embodiment, the first deep learning model in the initial OCR system is trained first to obtain the text detection model, and the text recognition model is then obtained by training jointly with the text detection model; this improves how well the text detection model and the text recognition model work together, which in turn improves the accuracy of the OCR system.
- an embodiment of the present application also proposes an evaluation device for an OCR system, and the evaluation device for the OCR system includes:
- a training module is used to obtain a training image, and input the training image into the initial OCR system, so as to train the initial OCR system, and obtain the corresponding OCR system after completing the training of the initial OCR system;
- a recognition module for inputting the image to be recognized into the OCR system, so as to determine the text recognition result corresponding to the image to be recognized based on the OCR system;
- an evaluation module, configured to determine, based on the text recognition result and the actual annotation data corresponding to the image to be recognized, the text recall rate corresponding to the OCR system and the text recognition precision rate corresponding to the OCR system, and to calculate the evaluation index of the OCR system based on the text recall rate and the text recognition precision rate, so as to evaluate the performance of the OCR system based on the evaluation index.
- Further, the training module is further configured to: input the training image into a first deep learning model to obtain the text detection model corresponding to the trained first deep learning model; and train a second deep learning model based on the text detection model to obtain the text recognition model corresponding to the trained second deep learning model.
- Further, the training module is further configured to: annotate the training image to determine the preset annotation positions of the text boxes in the training image, input the training image containing the preset annotation positions into the first deep learning model, and determine the learned annotation positions corresponding to the training image; determine first gradient information corresponding to the first deep learning model based on the preset annotation positions and the learned annotation positions; and optimize the first deep learning model based on the first gradient information to determine the text detection model, where the text detection model is the optimized first deep learning model.
- Further, the training module is further configured to: acquire text-strip images and input them into the text detection model to obtain text-strip images annotated with the text positions; input the annotated text-strip images into a second deep learning model to obtain the learned text content corresponding to the text-strip images; and determine second gradient information corresponding to the second deep learning model based on the learned text content, and optimize the second deep learning model based on the second gradient information to determine the text recognition model, where the text recognition model is the optimized second deep learning model.
- Further, the evaluation module is further configured to: when an image storage request is received, obtain the image to be stored corresponding to the image storage request; when the performance of the OCR system meets the standard, input the image to be stored into the OCR system and determine, based on the OCR system, whether ID card information exists in the image to be stored; and if no ID card information exists in the image to be stored, execute the image storage operation corresponding to the image storage request.
- Further, the evaluation module is further configured to: if ID card information exists in the image to be stored, query the first target account associated with the ID card information and match the first target account against the second target account corresponding to the local device; if the first target account matches the second target account, execute the image storage operation corresponding to the image storage request; and if the first target account does not match the second target account, send verification information to the first target account and execute the image storage operation corresponding to the image storage request when feedback information corresponding to the verification information is received.
- Further, the evaluation module is further configured to: if the evaluation index of the OCR system reaches a preset threshold, determine that the performance of the OCR system meets the standard.
- In addition, an embodiment of the present application further provides a readable storage medium on which an evaluation program of an OCR system is stored; when the evaluation program of the OCR system is executed by a processor, the steps of the evaluation method of the OCR system described in any of the above embodiments are implemented.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Library & Information Science (AREA)
- Character Discrimination (AREA)
Abstract
Provided are an evaluation method, apparatus and device for an OCR system, and a readable storage medium. The method comprises: acquiring a training image and inputting the training image into an initial OCR system so as to train the initial OCR system, and obtaining the OCR system corresponding to the trained initial OCR system (S10); inputting an image to be recognized into the OCR system so as to determine, based on the OCR system, a text recognition result corresponding to the image to be recognized (S20); and determining, based on the text recognition result and actual annotation data corresponding to the image to be recognized, a text recall rate corresponding to the OCR system and a text recognition precision rate corresponding to the OCR system, and calculating an evaluation index of the OCR system based on the text recall rate and the text recognition precision rate, so as to evaluate the performance of the OCR system based on the evaluation index (S30). This solves the problem in the prior art that text detection and text recognition are evaluated separately and independently, so that the evaluation index cannot objectively reflect the overall performance of the OCR system.
Description
本申请要求于2020年11月16日在中国专利局提交的、申请号为202011275415.1、申请名称为“OCR系统的评估方法、装置、设备及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
本申请涉及光学字符识别技术领域,尤其涉及一种OCR系统的评估方法、装置、设备及可读存储介质。
OCR(光学字符识别)技术能够将图像中印刷文字转换为计算机可处理的文本格式,OCR技术中的录入、校验被广泛应用在数据比对等场景中,成为国民经济各行业信息化和数字化应用的关键环节。随着大数据和深度学习技术的不断发展,OCR技术取得了突破性的进展,OCR技术被广泛地应用在印刷文档扫描件识别的应用上。
目前,对OCR系统识别准确率的评价,通常包含:文本检测、文本识别两个环节。现有技术中,文本检测主要以IOU=0.5为阈值时检测框与标注框的得分作为评价指标,而文本识别则使用字符准确率或者字段准确率作为评价指标。实际上,在OCR系统中,文本识别对文本检测定位结果有依赖性,有时较高的检测指标反而会带来识别指标的下降,因此导致现有的对OCR系统的评估技术难以反映OCR系统整体性能的问题。
上述内容仅用于辅助理解本申请的技术方案,并不代表承认上述内容是现有技术。
本申请的主要目的在于提供一种OCR系统的评估方法、装置、设备及可读存储介质,旨在解决现有的对OCR系统的评估技术难以反映OCR系统整体性能的技术问题。
为解决上述技术问题,本申请实施例采用的技术方案是:
本申请实施例的第一方面提供了一种OCR系统的评估方法,所述OCR系统的评估方法包括以下步骤:
获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;
将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;
基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
本申请实施例的第二方面提供了一种OCR系统的评估装置,所述OCR系统的评估装置包括:
训练模块,用于获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;
识别模块,用于将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;
评估模块,用于基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
本申请实施例的第三方面提供了一种OCR系统的评估设备,所述OCR系统的评估设备包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的OCR系统的评估 程序,所述OCR系统的评估程序被所述处理器执行时实现如下步骤:
获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;
将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;
基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
本申请实施例的第四方面提供了一种可读存储介质,所述可读存储介质上存储有OCR系统的评估程序,所述OCR系统的评估程序被处理器执行时实现如下步骤:
获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;
将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;
基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
本申请的有益效果在于:
在本申请实施例提出的技术方案中,本申请的OCR系统的评估方法可以有效避免误判,使模型的评估更加客观公正。
图1是本申请实施例方案涉及的硬件运行环境的OCR系统的评估设备结构示意图;
图2为本申请OCR系统的评估方法第一实施例的流程示意图;
图3为本申请OCR系统的评估方法第二实施例的流程示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
如图1所示,图1是本申请实施例方案涉及的硬件运行环境的终端结构示意图。
本申请实施例OCR系统的评估设备可以是PC,也可以是智能手机、平板电脑、电子书阅读器、MP3(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)播放器、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、便携计算机等具有显示功能的可移动式终端设备。
如图1所示,该OCR系统的评估设备可以包括:处理器1001,例如CPU,网络接口1004,用户接口1003,存储器1005,通信总线1002。其中,通信总线1002用于实现这些组件之间的连接通信。用户接口1003可以包括显示屏(Display)、输入单元比如键盘(Keyboard),可选用户接口1003还可以包括标准的有线接口、无线接口。网络接口1004可选的可以包括标准的有线接口、无线接口(如WI-FI接口)。存储器1005可以是高速RAM存储器,也可以是稳定的存储器(non-volatile memory),例如磁盘存储器。存储器1005可选的还可以是独立于前述处理器1001的存储装置。
可选地,OCR系统的评估设备还可以包括摄像头、RF(Radio Frequency,射频)电路,传感器、音频电路、WiFi模块等等。其中,传感器比如光传感器、运动传感器以及其他传 感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示屏的亮度,接近传感器可在OCR系统的评估设备移动到耳边时,关闭显示屏和/或背光。作为运动传感器的一种,重力加速度传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别移动终端姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)。
本领域技术人员可以理解,图1中示出的OCR系统的评估设备结构并不构成对OCR系统的评估设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
如图1所示,作为一种计算机存储介质的存储器1005中可以包括操作系统、网络通信模块、用户接口模块以及OCR系统的评估程序。
在图1所示的OCR系统的评估设备中,网络接口1004主要用于连接后台服务器,与后台服务器进行数据通信;用户接口1003主要用于连接客户端(用户端),与客户端进行数据通信;而处理器1001可以用于调用存储器1005中存储的OCR系统的评估程序。
在本实施例中,OCR系统的评估设备包括:存储器1005、处理器1001及存储在所述存储器1005上并可在所述处理器1001上运行的OCR系统的评估程序,其中,处理器1001调用存储器1005中存储的OCR系统的评估程序时,并执行以下操作:
获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;
将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;
基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
进一步地,处理器1001可以调用存储器1005中存储的OCR系统的评估程序,还执行以下操作:
将所述训练图像输入至第一深度学习模型,得到训练完成所述第一深度学习模型后对应的文字检测模型;
基于所述文字检测模型进行训练第二深度学习模型,得到训练完成所述第二深度学习模型后对应的所述文字识别模型。
进一步地,处理器1001可以调用存储器1005中存储的OCR系统的评估程序,还执行以下操作:
标注所述训练图像,以确定所述训练图像中文本框的预设标注位置,并将包含所述预设标注位置的所述训练图像输入至第一深度学习模型,确定所述训练图像对应的学习标注位置;
基于所述预设标注位置和所述学习标注位置,确定所述第一深度学习模型对应的第一梯度信息;
基于所述第一梯度信息优化所述第一深度学习模型,以确定所述文字检测模型,其中,所述文字检测模型为优化完成的所述第一深度学习模型。
进一步地,处理器1001可以调用存储器1005中存储的OCR系统的评估程序,还执行以下操作:
获取文本条图像,并将所述文本条图像输入至所述文字检测模型,得到标注文字位置 的所述文本条图像;
将标注文字位置的所述文本条图像输入至第二深度学习模型,得到所述文本条图像对应的学习文字内容;
基于所述学习文字内容,确定所述第二深度学习模型对应的第二梯度信息,并基于所述第二梯度信息优化所述第二深度学习模型,以确定所述文字识别模型,其中,所述文字识别检测模型为优化完成的所述第二深度学习模型。
进一步地,处理器1001可以调用存储器1005中存储的OCR系统的评估程序,还执行以下操作:
当接收到图像存储请求时,获取所述图像存储请求对应的待存储图像;
在所述OCR系统的性能达标时,将所述待存储图像输入至所述OCR系统,基于所述OCR系统确定所述待存储图像中是否存在身份证信息;
若所述待存储图像中不存在身份证信息,则执行所述图像存储请求对应的图像存储操作。
进一步地,处理器1001可以调用存储器1005中存储的OCR系统的评估程序,还执行以下操作:
若所述待存储图像中存在身份证信息,则查询与所述身份证信息关联的第一目标账户,将所述第一目标账户与本地设备对应的第二目标账户进行匹配;
若所述第一目标账户与所述第二目标账户匹配,则执行所述图像存储请求对应的图像存储操作;
若所述第一目标账户与所述第二目标账户不匹配,则向所述第一目标账户发送验证信息,并在接收到所述验证信息对应的反馈信息时,执行所述图像存储请求对应的图像存储操作。
进一步地,处理器1001可以调用存储器1005中存储的OCR系统的评估程序,还执行以下操作:
若所述OCR系统的评估指标达预设阈值,则所述OCR系统的性能达标。
本申请还提供一种OCR系统的评估方法,参照图2,图2为本申请OCR系统的评估方法第一实施例的流程示意图。
步骤S10,获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;
本申请所提出的OCR系统的评估方法应用于OCR系统,OCR系统为光学字符识别系统,能够识别图像中的文字,并提取图像中的文字,以将图像中的文字转换成计算机可处理的格式,其中,该OCR系统包括文字检测模型和文字识别模型。其中,文字检测模型和文字识别模型均为深度学习模型,深度学习模型可以是卷积神经网络或者循环神经网络等网络模型,深度学习模型所属的网络类型在本实施例中不作限定。其中,文字检测模型用于将图片中文字的位置识别出来,文字识别模型则是用于对识别出来的各个文字的位置进行识别其中的文字内容,即识别各个文字位置所包含的文字内容。
在本实施例中,训练OCR系统的过程,先获取初始OCR系统,以及获取训练图像,之后将训练图像输入至初始OCR模型中,以基于训练图像对该初始OCR模型进行训练;在训练完成初始OCR系统之后,得到训练完成初始OCR系统后对应的OCR系统。其中,初始OCR系统为训练OCR系统之前的初始状态,初始OCR系统包括第一深度学习模型和第二深度学习模型,第一深度学习模型用于训练文字检测模型,第二深度学习模型用于训练文字识别模型,也就是说,第一深度学习模型为初始文字检测模型,第二深度学习模型为初始 文字识别模型。
步骤S20,将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;
在本实施例中,在训练完成初始OCR系统得到OCR系统后,OCR系统包括文字检测模型和文字识别模型,之后,进行对OCR系统的评估过程。首先获取待识别图像,并将待识别图像输入该OCR系统中,以基于训练完成后OCR系统中的文字检测模型和文字识别模型确定待识别图像对应的文字识别结果。具体地,将待识别图像输入至文字检测模型中,基于文字检测模型的第一模型参数,确定待识别图像对应的中间识别结果,其中,中间识别结果为识别待识别图像得到的文本框;得到中间识别结果之后,将包含中间识别结果的待识别图像输入至文字识别模型中,得到文字识别结果。其中,为了提升评估OCR系统的准确度,待识别图像为与训练图像不一致的包含文字内容的图像,待识别图像用于评估OCR系统。
需要说明的是,将待识别图像输入至文字检测模型中,以使文字检测模型确定待识别图像的文字位置,也就是说,文字检测模型所确定待识别图像对应的文本框为待识别图像中的文字位置。将包含中间识别结果的待识别图像输入至训练完成的文字识别模型中,以使文字识别模型基于中间识别结果以及待识别图像中的图像信息,识别待识别图像的图像信息中的文字内容,从而得到文字识别结果,也就是说,文字识别结果为文字识别模型对待识别图像进行识别后得到的文字内容。
步骤S30,基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
在本实施例中，文字召回率为被正确识别的字符数与实际字符数之间的比值，文字识别精确率为文字识别结果中正确的字符数与文字识别结果中所有字符数之间的比值。recall是文字召回率，precision是文字识别精确率，文字召回率recall和文字识别精确率precision的计算公式分别如下：recall = N_gp / N_gt，precision = N_pp / N_pred。其中，N_gt代表待识别图像的标注答案中所有字符数，N_gp代表待识别图像的标注答案被正确识别的字符数，N_pred代表待识别图像的文字识别结果中所有字符数，N_pp代表待识别图像的文字识别结果中正确的字符数。
得到文字召回率和文字识别的精确率后，进行对OCR系统的评估，具体地，基于OCR系统对待识别图像进行识别的文字召回率和文字识别的精确率，进行计算OCR系统的评估指标，以供后续基于评估指标评估OCR系统的性能。本方案以f1得分作为OCR系统的评估指标，评估指标f1的计算公式如下：f1 = 2 × precision × recall / (precision + recall)。
进一步地,若OCR系统的评估指标达预设阈值,则OCR系统的性能达标。
本实施例提出的OCR系统的评估方法，通过获取训练图像，并将所述训练图像输入至初始OCR系统中，以对所述初始OCR系统进行训练，得到训练完成所述初始OCR系统后对应的OCR系统；然后，将待识别图像输入至所述OCR系统中，以基于所述OCR系统确定所述待识别图像对应的文字识别结果；最后，基于所述文字识别结果以及所述待识别图像对应的实际标注数据，确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率，并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标，以基于所述评估指标评估所述OCR系统的性能。本实施例中通过计算OCR系统对应的文字召回率和文字识别精确率，基于文字召回率和文字识别精确率计算OCR系统的评估指标，从而对OCR系统进行整体的评价，解决了现有技术中将文字检测和文字识别分开独立评价而导致评估指标不能客观反映OCR系统整体性能的问题。进一步地，本申请的OCR系统的评估方法使用单一指标来评价OCR系统的优劣或者性能，并且可以帮助用户选择更优质的OCR服务，促进各行业信息化和数字化的发展。并且，由于图像存在文本框断裂、粘连导致答案框与检测框有一对多、多对一、多对多匹配的复杂情况，现有技术基于IOU的评价方式会对此类情况进行误判，而本申请的OCR系统的评估方法可以有效避免误判，使模型的评估更加客观公正。
基于第一实施例,提出本申请OCR系统的评估方法的第二实施例,参照图3,在本实施例中,步骤S10包括:
步骤S11,将所述训练图像输入至第一深度学习模型,得到训练完成所述第一深度学习模型后对应的文字检测模型;
步骤S12,基于所述文字检测模型进行训练第二深度学习模型,得到训练完成所述第二深度学习模型后对应的所述文字识别模型。
在本实施例中,先对初始OCR系统中的第一深度学习模型进行训练,完成第一深度学习模型后得到文字检测模型,之后联合文字检测模型和第二深度学习模型一起进行训练。其中,OCR系统包括文字检测模型和文字识别模型。具体地,先将训练图像输入至初始OCR系统中的第一深度学习模型进行训练,训练完成第一深度学习模型后,得到文字检测模型;得到文字检测模型之后,联合文字检测模型和第二深度学习模型一起进行训练,训练完成第二深度学习模型之后,得到文字识别模型。其中,训练完成第一深度学习模型或第二深度学习模型的条件可以是训练步骤达到最大迭代步骤或梯度下降法对应的梯度达到最小梯度值。
进一步地,所述将所述训练图像输入至第一深度学习模型,得到训练完成所述第一深度学习模型后对应的文字检测模型的步骤包括:
标注所述训练图像,以确定所述训练图像中文本框的预设标注位置,并将包含所述预设标注位置的所述训练图像输入至第一深度学习模型,确定所述训练图像对应的学习标注位置;
基于所述预设标注位置和所述学习标注位置,确定所述第一深度学习模型对应的第一梯度信息;
基于所述第一梯度信息优化所述第一深度学习模型,以确定所述文字检测模型,其中,所述文字检测模型为优化完成的所述第一深度学习模型。
在本实施例中,训练文字检测模型的过程具体如下:先对训练图像进行标注,将训练图像中的文本框标注出来,从而确定训练图像中文本框的预设标注位置;之后,将包含预设标注位置的训练图像输入至第一深度学习模型进行训练和学习,第一深度学习模型输出训练图像对应的学习标注位置。在训练文字检测模型的过程中,基于梯度下降法对第一深度学习模型进行优化,即在得到第一深度学习模型输出的学习标注位置之后,基于第一梯度信息对深度学习模型的第一模型参数进行优化;基于第一梯度信息对该第一深度学习模型进行优化,直至第一梯度信息满足第一预设条件时,训练第一深度学习模型完成,得到文字检测模型。其中,第一预设条件可以是第一梯度信息达到第一最小梯度值,第一最小 梯度值可以按照需要进行设置。
进一步地,所述基于所述文字检测模型进行训练第二深度学习模型,得到训练完成所述第二深度学习模型后对应的所述文字识别模型的步骤包括:
获取文本条图像,并将所述文本条图像输入至所述文字检测模型,得到标注文字位置的所述文本条图像;
将标注文字位置的所述文本条图像输入至第二深度学习模型,得到所述文本条图像对应的学习文字内容;
基于所述学习文字内容,确定所述第二深度学习模型对应的第二梯度信息,并基于所述第二梯度信息优化所述第二深度学习模型,以确定所述文字识别模型,其中,所述文字识别检测模型为优化完成的所述第二深度学习模型。
在本实施例中,训练文字识别模型的过程如下:收集大量单条文本条图像,并确定文本条图像的预设文字内容,将文本条图像输入至已优化完成的文字检测模型,以供文字检测模型输出标注了所检测到的文字位置的文本条图像;之后,将标注文字位置的文本条图像输入第二深度学习模型进行训练,第二深度学习模型输出文本条图像对应的文字内容,即输出识别文本条图像中的文字内容;之后,使用梯度下降法对第二深度学习模型进行优化,基于第二梯度信息对第一深度学习模型进行优化,直至第二深度学习模型对应的第二梯度信息满足第二预设条件时,优化该第二深度学习模型完成,最终得到文字识别模型。其中,第二预设条件可以是第二梯度信息达到第二最小梯度值,第二最小梯度值可以按照需要进行设置。
进一步地,所述基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能的步骤之后,还包括:
当接收到图像存储请求时,获取所述图像存储请求对应的待存储图像;
在所述OCR系统的性能达标时,将所述待存储图像输入至所述OCR系统,基于所述OCR系统确定所述待存储图像中是否存在身份证信息;
若所述待存储图像中不存在身份证信息,则执行所述图像存储请求对应的图像存储操作。
其中,身份证信息包括客户的身份证号码、地址、性别或籍贯等。需要说明的是,客户在使用一些终端设备或者平台系统时在一些情况下需要上传客户的身份证信息,因此这些终端设备或者平台系统在有客户上传身份证信息时可以获取并存储客户的身份证信息,从而有机会窃取客户的身份证信息,导致客户的个人信息和隐私泄露,因此亟需对客户的隐私数据进行保护。
在本实施例中,当终端接收到图像存储请求以存储客户的待存储图像时,获取该图像存储请求对应的待存储图像,可以基于训练完成的OCR系统对待存储图像执行图像识别操作,以对准备进行存储的待存储图像进行识别,目的是识别待存储图像是否包含客户的隐私数据。因此,无论终端发起任何图像的图像存储请求时,均获取该图像存储请求对应的待存储图像进行识别,以识别待存储图像中是否包含身份证信息,从而可以实现实时监控终端当前的图像存储操作,以监测当前的存储操作是否涉嫌泄露客户的隐私数据。若OCR系统识别到待存储图像中未包含身份证信息,则执行该图像存储请求对应的图像存储操作。
进一步地,所述基于所述OCR系统确定所述待存储图像中是否存在身份证信息的步骤之后,还包括:
若所述待存储图像中存在身份证信息,则查询与所述身份证信息关联的第一目标账户,将所述第一目标账户与本地设备对应的第二目标账户进行匹配;
若所述第一目标账户与所述第二目标账户匹配,则执行所述图像存储请求对应的图像存储操作;
若所述第一目标账户与所述第二目标账户不匹配,则向所述第一目标账户发送验证信息,并在接收到所述验证信息对应的反馈信息时,执行所述图像存储请求对应的图像存储 操作。
其中,目标账户可以是数字蜂窝移动电话的用户识别卡、数字蜂窝移动电话的系统账户或PC端的电脑系统账户等,在本实施例中,目标账户不作限定,进一步地,目标账户的账户信息可以包括国际移动用户识别号码或者个人账号等。
在本实施例中,若OCR系统识别到待存储图像中包含身份证信息,则说明当前的图像存储操作涉嫌泄露客户的隐私数据,因此对图像存储操作进行防控。具体地,当识别到待存储图像中存在身份证信息时,基于OCR系统获取身份证信息中的身份证号码,通过身份证号码确定与身份证信息关联的第一目标账户,如通过身份证号码确定与身份证号码关联的手机号码等;接着,获取本地设备(客户终端)所绑定的第二目标账户,例如当本地设备为手机时,可以获取本地设备上的SIM卡的电话号码。
之后,将第一目标账户和第二目标账户进行匹配。当第一目标账户与第二目标账户匹配时,说明本地终端为客户持有的安全设备,则允许本地终端执行图像存储请求操作对应的图像存储操作。相反地,若第一目标账户和第二目标账户不匹配,说明本地终端并非客户的设备,属于不安全设备,此时则不能允许本地终端执行图像存储请求操作对应的图像存储操作。取代的是,向第一目标账户发送验证信息,若接收到第一目标账户反馈的反馈信息时,则执行图像存储请求操作对应的图像存储操作。
本实施例提出的OCR系统的评估方法,通过将所述训练图像输入至第一深度学习模型,得到训练完成所述第一深度学习模型后对应的文字检测模型;基于所述文字检测模型进行训练第二深度学习模型,得到训练完成所述第二深度学习模型后对应的所述文字识别模型。本实施例中,先对初始OCR系统中的第一深度学习模型进行训练,训练完成第一深度学习模型后得到文字检测模型,之后联合文字检测模型进行训练得到文字识别模型,从而可以提高文字检测模型和文字识别模型之间的配合度,进一步可以提高OCR系统的精确度。
此外,本申请实施例还提出一种OCR系统的评估装置,所述OCR系统的评估装置包括:
训练模块,用于获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;
识别模块,用于将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;
评估模块,用于基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
进一步地,所述训练模块,还用于:
将所述训练图像输入至第一深度学习模型,得到训练完成所述第一深度学习模型后对应的文字检测模型;
基于所述文字检测模型进行训练第二深度学习模型,得到训练完成所述第二深度学习模型后对应的所述文字识别模型。
进一步地,所述训练模块,还用于:
标注所述训练图像,以确定所述训练图像中文本框的预设标注位置,并将包含所述预设标注位置的所述训练图像输入至第一深度学习模型,确定所述训练图像对应的学习标注位置;
基于所述预设标注位置和所述学习标注位置,确定所述第一深度学习模型对应的第一梯度信息;
基于所述第一梯度信息优化所述第一深度学习模型,以确定所述文字检测模型,其中,所述文字检测模型为优化完成的所述第一深度学习模型。
进一步地,所述训练模块,还用于:
获取文本条图像,并将所述文本条图像输入至所述文字检测模型,得到标注文字位置的所述文本条图像;
将标注文字位置的所述文本条图像输入至第二深度学习模型,得到所述文本条图像对应的学习文字内容;
基于所述学习文字内容,确定所述第二深度学习模型对应的第二梯度信息,并基于所述第二梯度信息优化所述第二深度学习模型,以确定所述文字识别模型,其中,所述文字识别检测模型为优化完成的所述第二深度学习模型。
进一步地,所述评估模块,还用于:
当接收到图像存储请求时,获取所述图像存储请求对应的待存储图像;
在所述OCR系统的性能达标时,将所述待存储图像输入至所述OCR系统,基于所述OCR系统确定所述待存储图像中是否存在身份证信息;
若所述待存储图像中不存在身份证信息,则执行所述图像存储请求对应的图像存储操作。
进一步地,所述评估模块,还用于:
若所述待存储图像中存在身份证信息,则查询与所述身份证信息关联的第一目标账户,将所述第一目标账户与本地设备对应的第二目标账户进行匹配;
若所述第一目标账户与所述第二目标账户匹配,则执行所述图像存储请求对应的图像存储操作;
若所述第一目标账户与所述第二目标账户不匹配,则向所述第一目标账户发送验证信息,并在接收到所述验证信息对应的反馈信息时,执行所述图像存储请求对应的图像存储操作。
进一步地,所述评估模块,还用于:
若所述OCR系统的评估指标达预设阈值,则所述OCR系统的性能达标。
此外,本申请实施例还提出一种可读存储介质,所述可读存储介质上存储有OCR系统的评估程序,所述OCR系统的评估程序被处理器执行时实现如上述中任一项所述的OCR系统的评估方法的步骤。
本申请可读存储介质具体实施例与上述OCR系统的评估方法的各实施例基本相同,在此不再详细赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者系统不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者系统所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者系统中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上所述的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。
Claims (20)
- 一种OCR系统的评估方法,其中,所述OCR系统的评估方法包括以下步骤:获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
- 如权利要求1所述的OCR系统的评估方法,其中,所述OCR系统包括文字检测模型和文字识别模型,所述将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统的步骤包括:将所述训练图像输入至第一深度学习模型,得到训练完成所述第一深度学习模型后对应的文字检测模型;基于所述文字检测模型进行训练第二深度学习模型,得到训练完成所述第二深度学习模型后对应的所述文字识别模型。
- 如权利要求2所述的OCR系统的评估方法,其中,所述将所述训练图像输入至第一深度学习模型,得到训练完成所述第一深度学习模型后对应的文字检测模型的步骤包括:标注所述训练图像,以确定所述训练图像中文本框的预设标注位置,并将包含所述预设标注位置的所述训练图像输入至第一深度学习模型,确定所述训练图像对应的学习标注位置;基于所述预设标注位置和所述学习标注位置,确定所述第一深度学习模型对应的第一梯度信息;基于所述第一梯度信息优化所述第一深度学习模型,以确定所述文字检测模型,其中,所述文字检测模型为优化完成的所述第一深度学习模型。
- 如权利要求2所述的OCR系统的评估方法,其中,所述基于所述文字检测模型进行训练第二深度学习模型,得到训练完成所述第二深度学习模型后对应的所述文字识别模型的步骤包括:获取文本条图像,并将所述文本条图像输入至所述文字检测模型,得到标注文字位置的所述文本条图像;将标注文字位置的所述文本条图像输入至第二深度学习模型,得到所述文本条图像对应的学习文字内容;基于所述学习文字内容,确定所述第二深度学习模型对应的第二梯度信息,并基于所述第二梯度信息优化所述第二深度学习模型,以确定所述文字识别模型,其中,所述文字识别检测模型为优化完成的所述第二深度学习模型。
- 如权利要求1所述的OCR系统的评估方法,其中,所述基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能的步骤之后,还包括:当接收到图像存储请求时,获取所述图像存储请求对应的待存储图像;在所述OCR系统的性能达标时,将所述待存储图像输入至所述OCR系统,基于所述 OCR系统确定所述待存储图像中是否存在身份证信息;若所述待存储图像中不存在身份证信息,则执行所述图像存储请求对应的图像存储操作。
- 如权利要求5所述的OCR系统的评估方法,其中,所述基于所述OCR系统确定所述待存储图像中是否存在身份证信息的步骤之后,还包括:若所述待存储图像中存在身份证信息,则查询与所述身份证信息关联的第一目标账户,将所述第一目标账户与本地设备对应的第二目标账户进行匹配;若所述第一目标账户与所述第二目标账户匹配,则执行所述图像存储请求对应的图像存储操作;若所述第一目标账户与所述第二目标账户不匹配,则向所述第一目标账户发送验证信息,并在接收到所述验证信息对应的反馈信息时,执行所述图像存储请求对应的图像存储操作。
- 如权利要求1至6任一项所述的OCR系统的评估方法,其中,所述基于所述评估指标评估所述OCR系统的性能的步骤包括:若所述OCR系统的评估指标达预设阈值,则所述OCR系统的性能达标。
- 一种OCR系统的评估装置,其中,所述OCR系统的评估装置包括:训练模块,用于获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;识别模块,用于将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;评估模块,用于基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
- 一种OCR系统的评估设备,其中,所述OCR系统的评估设备包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的OCR系统的评估程序,所述OCR系统的评估程序被所述处理器执行时实现如下步骤:获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进行训练,得到训练完成所述初始OCR系统后对应的OCR系统;将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
- 如权利要求9所述的OCR系统的评估设备,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:将所述训练图像输入至第一深度学习模型,得到训练完成所述第一深度学习模型后对应的文字检测模型;基于所述文字检测模型进行训练第二深度学习模型,得到训练完成所述第二深度学习模型后对应的所述文字识别模型。
- 如权利要求10所述的OCR系统的评估设备,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:标注所述训练图像,以确定所述训练图像中文本框的预设标注位置,并将包含所述预设标注位置的所述训练图像输入至第一深度学习模型,确定所述训练图像对应的学习标注位置;基于所述预设标注位置和所述学习标注位置,确定所述第一深度学习模型对应的第一梯度信息;基于所述第一梯度信息优化所述第一深度学习模型,以确定所述文字检测模型,其中,所述文字检测模型为优化完成的所述第一深度学习模型。
- 如权利要求10所述的OCR系统的评估设备,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:获取文本条图像,并将所述文本条图像输入至所述文字检测模型,得到标注文字位置的所述文本条图像;将标注文字位置的所述文本条图像输入至第二深度学习模型,得到所述文本条图像对应的学习文字内容;基于所述学习文字内容,确定所述第二深度学习模型对应的第二梯度信息,并基于所述第二梯度信息优化所述第二深度学习模型,以确定所述文字识别模型,其中,所述文字识别检测模型为优化完成的所述第二深度学习模型。
- 如权利要求9所述的OCR系统的评估设备,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:当接收到图像存储请求时,获取所述图像存储请求对应的待存储图像;在所述OCR系统的性能达标时,将所述待存储图像输入至所述OCR系统,基于所述OCR系统确定所述待存储图像中是否存在身份证信息;若所述待存储图像中不存在身份证信息,则执行所述图像存储请求对应的图像存储操作。
- 如权利要求13所述的OCR系统的评估设备,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:若所述待存储图像中存在身份证信息,则查询与所述身份证信息关联的第一目标账户,将所述第一目标账户与本地设备对应的第二目标账户进行匹配;若所述第一目标账户与所述第二目标账户匹配,则执行所述图像存储请求对应的图像存储操作;若所述第一目标账户与所述第二目标账户不匹配,则向所述第一目标账户发送验证信息,并在接收到所述验证信息对应的反馈信息时,执行所述图像存储请求对应的图像存储操作。
- 如权利要求9-14任意一项所述的OCR系统的评估设备,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:若所述OCR系统的评估指标达预设阈值,则所述OCR系统的性能达标。
- 一种可读存储介质,其中,所述可读存储介质上存储有OCR系统的评估程序,所述OCR系统的评估程序被处理器执行时实现如下步骤:获取训练图像,并将所述训练图像输入至初始OCR系统中,以对所述初始OCR系统进 行训练,得到训练完成所述初始OCR系统后对应的OCR系统;将待识别图像输入至所述OCR系统中,以基于所述OCR系统确定所述待识别图像对应的文字识别结果;基于所述文字识别结果以及所述待识别图像对应的实际标注数据,确定所述OCR系统对应的文字召回率以及所述OCR系统对应的文字识别精确率,并基于所述文字召回率和所述文字识别精确率计算所述OCR系统的评估指标,以基于所述评估指标评估所述OCR系统的性能。
- 如权利要求16所述的可读存储介质,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:将所述训练图像输入至第一深度学习模型,得到训练完成所述第一深度学习模型后对应的文字检测模型;基于所述文字检测模型进行训练第二深度学习模型,得到训练完成所述第二深度学习模型后对应的所述文字识别模型。
- 如权利要求17所述的可读存储介质,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:标注所述训练图像,以确定所述训练图像中文本框的预设标注位置,并将包含所述预设标注位置的所述训练图像输入至第一深度学习模型,确定所述训练图像对应的学习标注位置;基于所述预设标注位置和所述学习标注位置,确定所述第一深度学习模型对应的第一梯度信息;基于所述第一梯度信息优化所述第一深度学习模型,以确定所述文字检测模型,其中,所述文字检测模型为优化完成的所述第一深度学习模型。
- 如权利要求18所述的可读存储介质,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:获取文本条图像,并将所述文本条图像输入至所述文字检测模型,得到标注文字位置的所述文本条图像;将标注文字位置的所述文本条图像输入至第二深度学习模型,得到所述文本条图像对应的学习文字内容;基于所述学习文字内容,确定所述第二深度学习模型对应的第二梯度信息,并基于所述第二梯度信息优化所述第二深度学习模型,以确定所述文字识别模型,其中,所述文字识别检测模型为优化完成的所述第二深度学习模型。
- 如权利要求16所述的可读存储介质,其中,所述OCR系统的评估程序被所述处理器执行时实现的步骤还包括:当接收到图像存储请求时,获取所述图像存储请求对应的待存储图像;在所述OCR系统的性能达标时,将所述待存储图像输入至所述OCR系统,基于所述OCR系统确定所述待存储图像中是否存在身份证信息;若所述待存储图像中不存在身份证信息,则执行所述图像存储请求对应的图像存储操作。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011275415.1A CN112100431B (zh) | 2020-11-16 | 2020-11-16 | Ocr系统的评估方法、装置、设备及可读存储介质 |
CN202011275415.1 | 2020-11-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022100452A1 | 2022-05-19
Family
ID=73785570
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/127185 WO2022100452A1 (zh) | 2020-11-16 | 2021-10-28 | Ocr系统的评估方法、装置、设备及可读存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112100431B (zh) |
WO (1) | WO2022100452A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115061590A (zh) * | 2022-08-17 | 2022-09-16 | 芯见(广州)科技有限公司 | 基于视频识别的kvm坐席系统控制方法及kvm坐席系统 |
CN116612483A (zh) * | 2023-07-19 | 2023-08-18 | 广州宏途数字科技有限公司 | 一种智能笔手写矢量的识别方法及装置 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112100431B (zh) * | 2020-11-16 | 2021-02-26 | 深圳壹账通智能科技有限公司 | Ocr系统的评估方法、装置、设备及可读存储介质 |
CN113220557B (zh) * | 2021-06-01 | 2024-01-26 | 上海明略人工智能(集团)有限公司 | 冷启动推荐模型评估方法、系统、计算机设备及存储介质 |
CN115512348B (zh) * | 2022-11-08 | 2023-03-28 | 浪潮金融信息技术有限公司 | 一种基于双识别技术的物体识别方法、系统、设备及介质 |
CN117217876B (zh) * | 2023-11-08 | 2024-03-26 | 深圳市明心数智科技有限公司 | 基于ocr技术的订单预处理方法、装置、设备及介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108764226A (zh) * | 2018-04-13 | 2018-11-06 | 顺丰科技有限公司 | 图像文本识别方法、装置、设备及其存储介质 |
CN109934227A (zh) * | 2019-03-12 | 2019-06-25 | 上海兑观信息科技技术有限公司 | 图像文字识别系统和方法 |
CN110097049A (zh) * | 2019-04-03 | 2019-08-06 | 中国科学院计算技术研究所 | 一种自然场景文本检测方法及系统 |
CN110399871A (zh) * | 2019-06-14 | 2019-11-01 | 华南理工大学 | 一种场景文本检测结果的评估方法 |
US20200226400A1 (en) * | 2019-01-11 | 2020-07-16 | Microsoft Technology Licensing, Llc | Compositional model for text recognition |
CN112100431A (zh) * | 2020-11-16 | 2020-12-18 | 深圳壹账通智能科技有限公司 | Ocr系统的评估方法、装置、设备及可读存储介质 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9177210B2 (en) * | 2007-10-30 | 2015-11-03 | Hki Systems And Service Llc | Processing container images and identifiers using optical character recognition and geolocation |
US10510131B2 (en) * | 2018-03-07 | 2019-12-17 | Ricoh Company, Ltd. | Return mail services |
CN109919014B (zh) * | 2019-01-28 | 2023-11-03 | 平安科技(深圳)有限公司 | Ocr识别方法及其电子设备 |
CN111191198A (zh) * | 2019-11-25 | 2020-05-22 | 京东数字科技控股有限公司 | 账户信息处理方法、装置、计算机可读介质及电子设备 |
- 2020-11-16: CN application CN202011275415.1A filed (granted as CN112100431B), status: Active
- 2021-10-28: PCT application PCT/CN2021/127185 filed (published as WO2022100452A1), status: Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108764226A (zh) * | 2018-04-13 | 2018-11-06 | 顺丰科技有限公司 | 图像文本识别方法、装置、设备及其存储介质 |
US20200226400A1 (en) * | 2019-01-11 | 2020-07-16 | Microsoft Technology Licensing, Llc | Compositional model for text recognition |
CN109934227A (zh) * | 2019-03-12 | 2019-06-25 | 上海兑观信息科技技术有限公司 | 图像文字识别系统和方法 |
CN110097049A (zh) * | 2019-04-03 | 2019-08-06 | 中国科学院计算技术研究所 | 一种自然场景文本检测方法及系统 |
CN110399871A (zh) * | 2019-06-14 | 2019-11-01 | 华南理工大学 | 一种场景文本检测结果的评估方法 |
CN112100431A (zh) * | 2020-11-16 | 2020-12-18 | 深圳壹账通智能科技有限公司 | Ocr系统的评估方法、装置、设备及可读存储介质 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115061590A (zh) * | 2022-08-17 | 2022-09-16 | 芯见(广州)科技有限公司 | 基于视频识别的kvm坐席系统控制方法及kvm坐席系统 |
CN116612483A (zh) * | 2023-07-19 | 2023-08-18 | 广州宏途数字科技有限公司 | 一种智能笔手写矢量的识别方法及装置 |
CN116612483B (zh) * | 2023-07-19 | 2023-09-29 | 广州宏途数字科技有限公司 | 一种智能笔手写矢量的识别方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN112100431B (zh) | 2021-02-26 |
CN112100431A (zh) | 2020-12-18 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 21890979; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/08/2023)
| 122 | EP: PCT application non-entry in European phase | Ref document number: 21890979; Country of ref document: EP; Kind code of ref document: A1