CN114926831A - Text-based recognition method and device, electronic equipment and readable storage medium - Google Patents

Text-based recognition method and device, electronic equipment and readable storage medium

Info

Publication number
CN114926831A
CN114926831A (application number CN202210609347.0A)
Authority
CN
China
Prior art keywords
character
text
recognition
image
recognition result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210609347.0A
Other languages
Chinese (zh)
Inventor
李书涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Puhui Enterprise Management Co Ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd filed Critical Ping An Puhui Enterprise Management Co Ltd
Priority to CN202210609347.0A
Publication of CN114926831A
Legal status: Pending

Classifications

    All classifications fall under G (PHYSICS), G06 (COMPUTING; CALCULATING OR COUNTING) and G06V (IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING):
    • G06V 20/00 Scenes; Scene-specific elements
        • G06V 20/60 Type of objects
            • G06V 20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
        • G06V 30/10 Character recognition
            • G06V 30/14 Image acquisition
                • G06V 30/1444 Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
                • G06V 30/148 Segmentation of character regions
                    • G06V 30/153 Segmentation of character regions using recognition of characters or words
            • G06V 30/16 Image preprocessing
                • G06V 30/162 Quantising the image signal
                • G06V 30/164 Noise filtering
            • G06V 30/19 Recognition using electronic means
                • G06V 30/19007 Matching; Proximity measures
        • G06V 30/40 Document-oriented image-based pattern recognition
            • G06V 30/42 Document-oriented image-based pattern recognition based on the type of document

Abstract

The invention relates to artificial intelligence technology and discloses a text-based recognition method, which includes: recognizing each character in a user's basic information image, or recognizing each character according to a character recognition rule, to obtain recognition results and the confidence of each recognition result, and selecting the characters whose confidence is smaller than a threshold as a character set; acquiring the business semantics corresponding to the user information and identifying the actual business scene corresponding to the business semantics; obtaining related characters corresponding to each character in the character set according to the confidence, and obtaining character recognition results from the related characters fed back by an operator; and aggregating the recognition results whose confidence meets the corresponding thresholds into the recognition text of the basic information image. In addition, the invention also relates to blockchain technology, and the basic information image can be stored in a node of the blockchain. The invention further provides a text recognition device, an electronic device and a storage medium. The invention can improve the accuracy of text recognition.

Description

Text-based recognition method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a text-based recognition method and device, electronic equipment and a computer-readable storage medium.
Background
In daily life, when people transact business with banks, insurance companies and government agencies, they are often required to submit materials and to enter personal information. The materials are usually submitted as scanned documents, photocopies and the like, and the business recipient performs image recognition on the submitted materials in order to extract their content quickly.
Existing text recognition technology generally inspects characters with electronic equipment, determines character shapes by detecting patterns of dark and light, and then translates those shapes into computer characters by a character recognition method. However, in some actual business scenes rarely used characters and near-form (visually similar) characters may appear, and existing text recognition technology has difficulty recognizing them correctly, so the accuracy of text recognition is low.
Disclosure of Invention
The invention provides a text-based recognition method and device, electronic equipment and a computer-readable storage medium, and mainly aims to solve the problem of low accuracy in text recognition.
In order to achieve the above object, the present invention provides a text-based recognition method, which includes:
acquiring a basic information image of a user, and identifying each character in the basic information image to obtain a first identification result and a first confidence coefficient of each first identification result;
selecting the characters with the first confidence degrees smaller than a first threshold value as a first character set;
acquiring business semantics corresponding to the user information, and identifying an actual business scene corresponding to the business semantics;
acquiring a character recognition rule of the actual service scene, recognizing each character in the first character set according to the character recognition rule to obtain a second recognition result and a second confidence coefficient of each second recognition result, and selecting the character with the second confidence coefficient smaller than a second threshold value as a second character set;
acquiring a relevant character corresponding to each character in the second character set according to the second confidence, pushing the relevant character to a preset operator, acquiring a feedback text returned by the preset operator, and determining a third recognition result of each character in the second character set according to the feedback text;
and aggregating a first recognition result with the first confidence degree being greater than or equal to the first threshold value, a second recognition result with the second confidence degree being greater than or equal to the second threshold value and the third recognition result into a recognition text of the basic information image.
Optionally, the recognizing each character in the basic information image to obtain a first recognition result and a first confidence of each first recognition result includes:
carrying out image enhancement on the basic information image to obtain an enhanced image;
detecting a text region in the enhanced image;
identifying text content in the text region;
selecting one character from the text content one by one as information to be identified;
and matching the information to be recognized with a preset character library to obtain a first recognition result and a first confidence coefficient of the information to be recognized.
Optionally, the image enhancing the basic information image to obtain an enhanced image includes:
carrying out graying processing on the basic information image to obtain a grayscale image;
carrying out binarization processing on the gray level image to obtain a binarized image;
performing image noise reduction on the binary image to obtain a noise-reduced image;
and carrying out inclination correction processing on the noise-reduced image to obtain an enhanced image.
Optionally, the matching the information to be recognized with a preset character library to obtain a first recognition result and a first confidence of the information to be recognized includes:
calculating a matching value between the information to be identified and each character in a preset character library;
selecting the character with the maximum matching value as a first recognition result of the information to be recognized, and determining the matching value between the information to be recognized and the first recognition result as the first confidence coefficient.
Optionally, the calculating a matching value between the information to be recognized and each character in a preset character library includes:
calculating a matching value between the information to be recognized and each character in a preset character library by using a matching value algorithm as follows:
(The matching-value formula is rendered as an image, BDA0003671463160000021, in the original publication.)
wherein P is the matching value, W_1K is the weight of the K-th feature item of the information to be recognized, and W_2K is the weight of the K-th feature item of the corresponding character in the preset character library.
Optionally, the identifying an actual service scenario corresponding to the service semantic includes:
acquiring scene labels corresponding to different actual service scenes and acquiring matching characteristics of the user information;
calculating a mapping value between the matching features and scene labels corresponding to each different scene;
and determining the actual service scene corresponding to the scene label with the maximum mapping value as the actual service scene of the user information.
Optionally, the calculating a mapping value between the matching features and the scene label corresponding to each different scene includes:
calculating a mapping value between the matching features and the scene label corresponding to each different scene by using the following algorithm:
S = (1/n) × Σ(i=1..n) X_i
wherein S is the mapping value, X_i is the i-th matching feature quantity corresponding to the user information, and n is the number of feature quantities.
In order to solve the above problem, the present invention further provides a text-based recognition apparatus, including:
the first character recognition module is used for acquiring a basic information image of a user, recognizing each character in the basic information image to obtain a first recognition result and a first confidence coefficient of each first recognition result, and selecting the character with the first confidence coefficient smaller than a first threshold value as a first character set;
the scene identification module is used for acquiring the business semantics corresponding to the user information and identifying the actual business scene corresponding to the business semantics;
the second character recognition module is used for acquiring a character recognition rule of the actual service scene, recognizing each character in the first character set according to the character recognition rule to obtain a second recognition result and a second confidence coefficient of each second recognition result, and selecting the character with the second confidence coefficient smaller than a second threshold value as a second character set;
the related character screening module is used for acquiring related characters corresponding to each character in the second character set according to the second confidence degree, pushing the related characters to a preset operator, acquiring a feedback text returned by the preset operator, and determining a third recognition result of each character in the second character set according to the feedback text;
and the character result collection module is used for collecting a first recognition result of which the first confidence coefficient is greater than or equal to the first threshold value, a second recognition result of which the second confidence coefficient is greater than or equal to the second threshold value and a third recognition result into a recognition text of the basic information image.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the text recognition method described above.
In order to solve the above problem, the present invention also provides a computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being executed by a processor in an electronic device to implement the text recognition method described above.
The embodiment of the invention performs multiple rounds of screening according to the confidence values in different business scenes and selects the method that recognizes the text correctly, so that errors in document recognition can be corrected without affecting the business, greatly reducing the probability of error. Even when rare characters or near-form characters appear, the text can still be recognized correctly with the help of auxiliary functions, thereby improving the accuracy of text recognition. Therefore, the text recognition method provided by the invention can solve the problem of low accuracy in recognizing user text.
Drawings
Fig. 1 is a schematic flowchart of a text recognition method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating a process of obtaining confidence in character recognition according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of identifying an actual service scenario corresponding to a service semantic according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of a text recognition apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device implementing the text recognition method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The embodiment of the application provides a text-based recognition method. The execution subject of the text recognition method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the present application. In other words, the text recognition method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Fig. 1 is a schematic flow chart of a text-based recognition method according to an embodiment of the present invention. In this embodiment, the text-based recognition method includes:
s1, acquiring a basic information image of a user, and recognizing each character in the basic information image to obtain a first recognition result and a first confidence coefficient of each first recognition result;
in an embodiment of the present invention, the basic information image of the user includes document information and material information, such as an identification card, a driving license, a business license, a property certificate, a bank flow, a credit investigation report, and other images related to the user, such as a copy, a scanned copy, and other images.
In detail, computer statements with data-crawling functionality (such as Java or Python statements) can be used to crawl the stored basic information image from a predetermined storage area, where the storage area includes, but is not limited to, a database, a blockchain node and a network cache.
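As a minimal illustration of this retrieval step, the following Python sketch pulls a stored basic information image from a database; the SQLite storage, table name and column names are assumptions of the sketch rather than requirements of the method.

import sqlite3

def fetch_basic_info_image(user_id: str, db_path: str = "materials.db") -> bytes:
    # Crawl the stored basic information image for one user from the storage area.
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT image_blob FROM basic_info_images WHERE user_id = ?",
            (user_id,),
        ).fetchone()
    if row is None:
        raise LookupError(f"no basic information image stored for user {user_id}")
    return row[0]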
In one practical application scenario of the invention, related workers sometimes encounter rare characters, special symbols and the like when uploading and entering client information, so each character in the basic information image can be recognized in order to improve the accuracy of text recognition.
In the embodiment of the present invention, referring to fig. 2, the recognizing each character in the basic information image to obtain a first recognition result and a first confidence of each first recognition result includes:
S21, carrying out image enhancement on the basic information image to obtain an enhanced image;
S22, detecting a text region in the enhanced image;
S23, recognizing text content in the text region;
S24, selecting one character from the text content one by one as the information to be recognized;
S25, matching the information to be recognized with a preset character library to obtain a first recognition result and a first confidence of the information to be recognized.
Optionally, the image enhancing the basic information image to obtain an enhanced image includes:
carrying out graying processing on the basic information image to obtain a grayscale image;
carrying out binarization processing on the gray level image to obtain a binarized image;
performing image noise reduction on the binary image to obtain a noise-reduced image;
and carrying out tilt correction processing on the noise-reduced image to obtain an enhanced image.
In detail, the binarization includes, but is not limited to, local-threshold binarization and global-threshold binarization; the tilt detection methods for the text image include, but are not limited to, the projection profile method and the Hough transform method.
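A minimal Python/OpenCV sketch of the enhancement pipeline described above (graying, binarization, noise reduction, tilt correction) follows; the specific choices of Otsu global thresholding, median filtering and minAreaRect-based deskewing are illustrative assumptions, and any of the binarization or tilt-detection methods listed above could be substituted.

import cv2
import numpy as np

def enhance(basic_info_image: np.ndarray) -> np.ndarray:
    # Graying
    gray = cv2.cvtColor(basic_info_image, cv2.COLOR_BGR2GRAY)
    # Global-threshold (Otsu) binarization; a local threshold would also fit the text above
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Image noise reduction
    denoised = cv2.medianBlur(binary, 3)
    # Tilt correction: estimate the skew angle from the minimum-area rectangle
    # around the dark (text) pixels, then rotate to compensate
    coords = np.column_stack(np.where(denoised < 255)).astype(np.float32)
    if len(coords) == 0:
        return denoised
    angle = cv2.minAreaRect(coords)[-1]
    if angle > 45:  # newer OpenCV versions report angles in [0, 90)
        angle -= 90
    h, w = denoised.shape
    rotation = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(denoised, rotation, (w, h), flags=cv2.INTER_CUBIC, borderValue=255)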
Further, the detecting text regions in the enhanced image includes:
performing character detection on the image recognition result by using a text segmentation method based on image segmentation, classifying text at the pixel level, judging whether each pixel belongs to a text target to obtain a probability map of the text region, and selecting, by post-processing, the part of the probability map that is larger than a preset probability threshold as the text region in the enhanced image.
In detail, the post-processing modes include, but are not limited to, OpenCV-based algorithms and polygon-based algorithms.
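The post-processing of the probability map can be sketched as follows in Python with OpenCV; the segmentation network producing the probability map is assumed to exist elsewhere, and the 0.5 threshold and the rotated-box approximation are assumptions of this sketch.

import cv2
import numpy as np

def text_regions(prob_map: np.ndarray, prob_threshold: float = 0.5):
    # Keep the pixels whose text probability exceeds the preset probability threshold
    mask = (prob_map > prob_threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        # Approximate each connected text blob by a rotated rectangle (polygon)
        rect = cv2.minAreaRect(contour)
        boxes.append(cv2.boxPoints(rect))
    return boxes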
Further, the recognizing text content in the text region includes:
extracting character features of the text region to obtain character features;
and collecting each character feature into the text content.
In detail, the Attention model can be used for extracting character features of the text region to obtain character features.
In the embodiment of the present invention, the first confidence of the first recognition result refers to a matching degree of the first recognition result obtained by recognizing each character in the basic information image.
Optionally, the matching the information to be recognized with a preset character library to obtain a first recognition result and a first confidence of the information to be recognized includes:
calculating a matching value between the information to be identified and each character in a preset character library;
selecting the character with the maximum matching value as a first recognition result of the information to be recognized, and determining the matching value between the information to be recognized and the first recognition result as the first confidence coefficient.
In the embodiment of the present invention, the calculating a matching value between the information to be recognized and each character in a preset character library includes:
calculating a matching value between the information to be recognized and each character in a preset character library by using a matching value algorithm as follows:
(The matching-value formula is rendered as an image, BDA0003671463160000071, in the original publication.)
wherein P is the matching value, W_1K is the weight of the K-th feature item of the information to be recognized, and W_2K is the weight of the K-th feature item of the corresponding character in the preset character library.
For example, if a Chinese character is recognized correctly and matches a character existing in the Chinese character library exactly, the confidence is 100%; if the recognition result does not match exactly, perhaps because a component or radical of a near-form character differs, the confidence may be 98%; if a rare character occurs and only 70% of the glyph matches, the confidence of the recognition result for that character may be 70%.
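A minimal Python sketch of this matching step follows; because the exact matching-value formula is only shown as an image in the original publication, the cosine-style combination of the feature-item weights W_1K and W_2K used here, and the example library, are assumptions for illustration.

import math

def matching_value(w1: list[float], w2: list[float]) -> float:
    # Assumed combination of the feature-item weights: normalized dot product
    dot = sum(a * b for a, b in zip(w1, w2))
    norm = math.sqrt(sum(a * a for a in w1)) * math.sqrt(sum(b * b for b in w2))
    return dot / norm if norm else 0.0

def first_recognition(w_info: list[float], char_library: dict[str, list[float]]):
    # Match the information to be recognized against every library character and keep
    # the best match as the first recognition result; its matching value is the first confidence.
    return max(
        ((char, matching_value(w_info, w_feat)) for char, w_feat in char_library.items()),
        key=lambda item: item[1],
    )

library = {"回": [1.0, 0.9, 0.8], "口": [1.0, 0.2, 0.1]}
print(first_recognition([1.0, 0.9, 0.8], library))  # best match "回" with confidence close to 1.0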
S2, selecting the characters with the first confidence coefficient smaller than a first threshold value as a first character set;
In the embodiment of the present invention, the characters whose first confidence is smaller than the first threshold may be selected as the first character set, i.e. the set of characters whose recognition results do not completely match the Chinese characters existing in the Chinese character library.
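A tiny Python sketch of this screening step, with an illustrative first threshold of 0.95:

def first_character_set(first_results, first_threshold=0.95):
    # first_results: list of (position, character, first_confidence) tuples
    return [(pos, ch) for pos, ch, conf in first_results if conf < first_threshold]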
S3, acquiring the service semantics corresponding to the user information, and identifying the actual service scene corresponding to the service semantics;
In the embodiment of the present invention, the service semantics corresponding to the user information include a metadata description at the service level (including check logic, display logic, editability logic, and triggers at the object level and in various different service scenarios).
In detail, the step of obtaining the service semantic corresponding to the user information is consistent with the step of obtaining the basic information image of the user in S1, and is not repeated here.
In one practical application scenario of the invention, text recognition errors sometimes occur when personal information is entered while people transact business with banks, insurance companies, government agencies and the like in daily life. In order to reduce the workload of business personnel in such business scenarios, analysis can be carried out according to the actual business scenario so as to ensure the correctness of the data.
In the embodiment of the present invention, referring to fig. 3, the identifying an actual service scenario corresponding to the service semantics includes:
S31, obtaining scene labels corresponding to different actual service scenes, and obtaining the matching features of the user information;
S32, calculating a mapping value between the matching features and the scene label corresponding to each different scene;
S33, determining the actual service scene corresponding to the scene label with the largest mapping value as the actual service scene of the user information.
In detail, the step of obtaining the scene labels corresponding to the different actual service scenes and the matching features of the user information to be matched is consistent with the step of obtaining the basic information image of the user in S1, and is not repeated here.
In this embodiment of the present invention, the calculating a mapping value between the matching feature and a scene tag corresponding to each different scene includes:
calculating a mapping value between the matching feature and a scene label corresponding to each different scene by using the following algorithm:
S = (1/n) × Σ(i=1..n) X_i
wherein S is the mapping value, X_i is the i-th matching feature quantity corresponding to the user information, and n is the number of feature quantities.
For example, if the user information to be matched has matching features {X1, X2, X3}, the features corresponding to X1, X2 and X3 in the user information may be quantified, summed and averaged to obtain the mapping value of the user information for that scene.
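A minimal Python sketch of this scene-matching step follows, with the mapping value taken as the average of the quantified features and the per-scene quantification assumed to happen elsewhere; the scene labels in the example are illustrative.

def mapping_value(features: list[float]) -> float:
    # S = (1/n) * sum(X_i)
    return sum(features) / len(features) if features else 0.0

def actual_scene(quantified: dict[str, list[float]]) -> str:
    # quantified maps each scene label to the feature quantities X1..Xn of the
    # user information as quantified against that label
    return max(quantified, key=lambda label: mapping_value(quantified[label]))

scores = {"bank_loan": [0.9, 0.7, 0.8], "insurance_claim": [0.4, 0.5, 0.3]}
print(actual_scene(scores))  # "bank_loan"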
S4, acquiring a character recognition rule of the actual service scene, recognizing each character in the first character set according to the character recognition rule to obtain a second recognition result and a second confidence coefficient of each second recognition result, and selecting the character with the second confidence coefficient smaller than a second threshold value as a second character set;
In the embodiment of the present invention, the character recognition rule of the actual service scene applies different processing according to the service line. For example, if an identity card is currently being recognized, processing follows the business-scene semantics of the identity card: if the field currently being recognized is the nationality field, the system can embed the relevant nationality design, since it knows which of the country's 56 ethnic groups exist. When the recognition result for the nationality is correct, the system does not interfere; when the recognition result wrongly turns 'Hui' into 'Kou', the system judges that no 'Kou' ethnic group currently exists and automatically corrects the character to 'Hui' according to the features of the Chinese character.
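For illustration, the following Python sketch applies such a rule for the nationality field; the (truncated) ethnic-group list and the glyph-similarity function are placeholder assumptions, not the patent's prescribed implementation.

ETHNIC_GROUPS = ["汉", "回", "壮", "满", "维吾尔"]  # illustrative subset of the 56 groups

def glyph_similarity(a: str, b: str) -> float:
    # Placeholder similarity; a real system would compare glyph features
    return 1.0 if a == b else (0.8 if {a, b} == {"口", "回"} else 0.0)

def correct_nationality(recognized: str):
    if recognized in ETHNIC_GROUPS:
        return recognized, 1.0  # legal value: the system does not interfere
    best = max(ETHNIC_GROUPS, key=lambda g: glyph_similarity(recognized, g))
    return best, glyph_similarity(recognized, best)  # second recognition result and its confidence

print(correct_nationality("口"))  # ("回", 0.8)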
In detail, the step of obtaining the character recognition rule of the actual service scene is consistent with the step of obtaining the basic information image of the user in S1, which is not repeated here.
In this embodiment of the present invention, the second confidence refers to the confidence of the second recognition result obtained by recognizing each character in the first character set in the actual service scenario.
Further, the step of recognizing each character in the first character set according to the character recognition rule to obtain the second recognition results and the second confidence of each second recognition result is consistent with the step of recognizing each character in the basic information image to obtain the first recognition results and the first confidence of each first recognition result, and is not repeated here.
S5, obtaining relevant characters corresponding to each character in the second character set according to the second confidence degree, pushing the relevant characters to a preset operator, obtaining a feedback text returned by the preset operator, and determining a third recognition result of each character in the second character set according to the feedback text;
In the embodiment of the invention, the related characters corresponding to each character in the second character set include rare characters, near-form characters such as the 'Tian' and 'Tian' pair of similar glyphs, and Chinese characters that do not exist in the current Chinese character library, such as the 'Ching' character.
In the embodiment of the invention, the related characters are pushed to a preset operator, and in the feedback text returned by the preset operator, characters that do not exist in the current character library can be disassembled for input, or older variant and traditional characters can be annotated with pinyin, comments and the like.
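A minimal Python sketch of assembling and applying this manual-review step follows; the data shapes and the 0.9 second threshold are illustrative assumptions, and the channel that actually pushes tasks to the operator is assumed to exist elsewhere.

def build_review_tasks(second_results, second_threshold=0.9):
    # second_results: list of (position, character, second_confidence, related_characters)
    return [
        {"position": pos, "candidates": related}
        for pos, char, conf, related in second_results
        if conf < second_threshold
    ]

def apply_feedback(tasks, feedback_text: dict[int, str]) -> dict[int, str]:
    # feedback_text maps a character position to the character confirmed by the operator
    return {task["position"]: feedback_text[task["position"]] for task in tasks}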
Further, the step of obtaining the related characters corresponding to each character in the second character set according to the second confidence, pushing the related characters to a preset operator, obtaining the feedback text returned by the preset operator, and determining the third recognition result of each character in the second character set according to the feedback text is consistent with the step of recognizing each character in the basic information image to obtain the first recognition results and the first confidence of each first recognition result, and is not repeated here.
And S6, collecting a first recognition result with the first confidence coefficient being greater than or equal to the first threshold value, a second recognition result with the second confidence coefficient being greater than or equal to the second threshold value and the third recognition result into a recognition text of the basic information image.
In the embodiment of the present invention, the first recognition results whose first confidence is greater than or equal to the first threshold are the characters in the basic information image that are recognized correctly with a confidence of 100% and match the Chinese characters existing in the Chinese character library exactly.
In the embodiment of the invention, the second recognition results whose second confidence is greater than or equal to the second threshold are the results whose Chinese-character features are recognized correctly in the actual service scene.
In the embodiment of the invention, the third recognition result is obtained according to the feedback text returned by the preset operator in the rare cases of rare characters and near-form characters.
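A minimal Python sketch of the aggregation in step S6 follows; the data shapes and the threshold values are illustrative assumptions.

def aggregate(first, second, third, first_threshold=0.95, second_threshold=0.9):
    # first and second map position -> (character, confidence); third maps position -> character
    chars = {}
    chars.update({p: c for p, (c, conf) in first.items() if conf >= first_threshold})
    chars.update({p: c for p, (c, conf) in second.items() if conf >= second_threshold})
    chars.update(third)
    return "".join(chars[p] for p in sorted(chars))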
The embodiment of the invention performs multiple rounds of screening according to the confidence values in different business scenes and selects the method that recognizes the text correctly, so that errors in document recognition can be corrected without affecting the business, greatly reducing the probability of error. Even when rare characters or near-form characters appear, the text can still be recognized correctly with the help of auxiliary functions, thereby improving the accuracy of text recognition. Therefore, the text recognition method provided by the invention can solve the problem of low accuracy in recognizing user text.
Fig. 4 is a functional block diagram of a text recognition apparatus according to an embodiment of the present invention.
The text recognition apparatus 100 according to the present invention may be installed in an electronic device. According to the implemented functions, the text-based recognition apparatus 100 may include a first character recognition module 101, a scene recognition module 102, a second character recognition module 103, a related character screening module 104, and a character result aggregation module 105. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and can perform a fixed function, and are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the first character recognition module 101 is configured to obtain a basic information image of a user, recognize each character in the basic information image to obtain a first recognition result and a first confidence of each first recognition result, and select a character with the first confidence smaller than a first threshold as a first character set;
the scene recognition module 102 is configured to obtain a service semantic corresponding to the user information, and recognize an actual service scene corresponding to the service semantic;
the second character recognition module 103 is configured to obtain a character recognition rule of the actual service scenario, recognize each character in the first character set according to the character recognition rule, obtain a second recognition result and a second confidence of each second recognition result, and select a character with the second confidence smaller than a second threshold as a second character set;
the related character screening module 104 is configured to obtain a related character corresponding to each character in the second character set according to the second confidence level, push the related character to a preset operator, obtain a feedback text returned by the preset operator, and determine a third recognition result of each character in the second character set according to the feedback text;
the character result aggregation module 105 is configured to aggregate a first recognition result with the first confidence degree being greater than or equal to the first threshold, a second recognition result with the second confidence degree being greater than or equal to the second threshold, and a third recognition result into a recognition text of the basic information image.
In detail, in the embodiment of the present invention, when the modules in the text recognition apparatus 100 are used, the same technical means as the text recognition method described in fig. 1 to fig. 3 are used, and the same technical effect can be produced, which is not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device implementing a text recognition method according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a text recognition program, stored in the memory 11 and executable on the processor 10.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, and includes one or more Central Processing Units (CPUs), a microprocessor, a digital Processing chip, a graphics processor, a combination of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions of the electronic device and processes data by running or executing programs or modules (for example, executing a text recognition program and the like) stored in the memory 11 and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of a text recognition program, etc., but also to temporarily store data that has been output or is to be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit, such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Only an electronic device with certain components is shown; those skilled in the art will appreciate that the structure shown in the figure does not constitute a limitation of the electronic device, and that it may include fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the embodiments described are illustrative only and are not to be construed as limiting the scope of the claims.
The text recognition program stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, implement:
acquiring a basic information image of a user, and identifying each character in the basic information image to obtain a first identification result and a first confidence coefficient of each first identification result;
selecting the characters with the first confidence degrees smaller than a first threshold value as a first character set;
acquiring business semantics corresponding to the user information, and identifying an actual business scene corresponding to the business semantics;
acquiring a character recognition rule of the actual service scene, recognizing each character in the first character set according to the character recognition rule to obtain a second recognition result and a second confidence coefficient of each second recognition result, and selecting the character with the second confidence coefficient smaller than a second threshold value as a second character set;
acquiring a relevant character corresponding to each character in the second character set according to the second confidence, pushing the relevant character to a preset operator, acquiring a feedback text returned by the preset operator, and determining a third recognition result of each character in the second character set according to the feedback text;
and aggregating a first recognition result with the first confidence degree being greater than or equal to the first threshold value, a second recognition result with the second confidence degree being greater than or equal to the second threshold value and the third recognition result into a recognition text of the basic information image.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to the drawings, which is not described herein again.
Further, the integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a read-only memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
acquiring a basic information image of a user, and identifying each character in the basic information image to obtain a first identification result and a first confidence coefficient of each first identification result;
selecting the characters with the first confidence degrees smaller than a first threshold value as a first character set;
acquiring business semantics corresponding to the user information, and identifying an actual business scene corresponding to the business semantics;
acquiring a character recognition rule of the actual service scene, recognizing each character in the first character set according to the character recognition rule to obtain a second recognition result and a second confidence coefficient of each second recognition result, and selecting the character with the second confidence coefficient smaller than a second threshold value as a second character set;
acquiring a relevant character corresponding to each character in the second character set according to the second confidence, pushing the relevant character to a preset operator, acquiring a feedback text returned by the preset operator, and determining a third recognition result of each character in the second character set according to the feedback text;
and aggregating a first recognition result with the first confidence degree being greater than or equal to the first threshold value, a second recognition result with the second confidence degree being greater than or equal to the second threshold value and the third recognition result into a recognition text of the basic information image.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated with each other by cryptographic methods, where each data block contains information on a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the same, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A text-based recognition method, the method comprising:
acquiring a basic information image of a user, and identifying each character in the basic information image to obtain a first identification result and a first confidence coefficient of each first identification result;
selecting the characters with the first confidence degrees smaller than a first threshold value as a first character set;
acquiring business semantics corresponding to the user information, and identifying an actual business scene corresponding to the business semantics;
acquiring a character recognition rule of the actual service scene, recognizing each character in the first character set according to the character recognition rule to obtain a second recognition result and a second confidence coefficient of each second recognition result, and selecting the character with the second confidence coefficient smaller than a second threshold value as a second character set;
acquiring a related character corresponding to each character in the second character set according to the second confidence degree, pushing the related character to a preset operator, acquiring a feedback text returned by the preset operator, and determining a third recognition result of each character in the second character set according to the feedback text;
and aggregating a first recognition result with the first confidence degree being greater than or equal to the first threshold value, a second recognition result with the second confidence degree being greater than or equal to the second threshold value and the third recognition result into a recognition text of the basic information image.
2. The text-based recognition method of claim 1, wherein said recognizing each character in the basic information image to obtain first recognition results and a first confidence of each of the first recognition results comprises:
carrying out image enhancement on the basic information image to obtain an enhanced image;
detecting a text region in the enhanced image;
identifying text content in the text region;
selecting one character from the text content one by one as information to be identified;
and matching the information to be recognized with a preset character library to obtain a first recognition result and a first confidence coefficient of the information to be recognized.
3. The text-based recognition method of claim 2, wherein the image enhancing the basic information image to obtain an enhanced image comprises:
carrying out gray processing on the basic information image to obtain a gray image;
carrying out binarization processing on the gray level image to obtain a binarization image;
performing image noise reduction on the binary image to obtain a noise-reduced image;
and carrying out inclination correction processing on the noise-reduced image to obtain an enhanced image.
4. The text-based recognition method of claim 2, wherein the matching the information to be recognized with a preset character library to obtain a first recognition result and a first confidence level of the information to be recognized comprises:
calculating a matching value between the information to be recognized and each character in a preset character library;
selecting the character with the maximum matching value as a first recognition result of the information to be recognized, and determining the matching value between the information to be recognized and the first recognition result as the first confidence coefficient.
5. The text-based recognition method of claim 4, wherein the calculating a matching value between the information to be recognized and each character in a preset character library comprises:
calculating a matching value between the information to be recognized and each character in a preset character library by using a matching value algorithm as follows:
(The matching-value formula is rendered as an image, FDA0003671463150000021, in the original publication.)
wherein P is the matching value, W_1K is the weight of the K-th feature item of the information to be recognized, and W_2K is the weight of the K-th feature item of the corresponding character in the preset character library.
6. The text-based recognition method of claim 1, wherein the recognizing the actual service scenario corresponding to the service semantic comprises:
acquiring scene labels corresponding to different actual service scenes and acquiring matching characteristics of the user information;
calculating a mapping value between the matching feature and a scene label corresponding to each different scene;
and determining the actual service scene corresponding to the scene label with the maximum mapping value as the actual service scene of the user information.
7. The text-based recognition method of any one of claims 1 to 6, wherein the calculating a mapping value between the matching features and the scene label corresponding to each different scene comprises:
calculating a mapping value between the matching features and the scene label corresponding to each different scene by using the following algorithm:
S = (1/n) × Σ(i=1..n) X_i
wherein S is the mapping value, X_i is the i-th matching feature quantity corresponding to the user information, and n is the number of feature quantities.
8. A text-based recognition apparatus, the apparatus comprising:
the first character recognition module is used for acquiring a basic information image of a user, recognizing each character in the basic information image to obtain a first recognition result and a first confidence coefficient of each first recognition result, and selecting the character with the first confidence coefficient smaller than a first threshold value as a first character set;
the scene recognition module is used for acquiring the service semantics corresponding to the user information and recognizing the actual service scene corresponding to the service semantics;
the second character recognition module is used for acquiring a character recognition rule of the actual service scene, recognizing each character in the first character set according to the character recognition rule to obtain a second recognition result and a second confidence coefficient of each second recognition result, and selecting the character with the second confidence coefficient smaller than a second threshold value as a second character set;
the related character screening module is used for acquiring related characters corresponding to each character in the second character set according to the second confidence degree, pushing the related characters to a preset operator, acquiring a feedback text returned by the preset operator, and determining a third recognition result of each character in the second character set according to the feedback text;
and the character result collection module is used for collecting a first recognition result of which the first confidence coefficient is greater than or equal to the first threshold value, a second recognition result of which the second confidence coefficient is greater than or equal to the second threshold value and a third recognition result into a recognition text of the basic information image.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the text recognition method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out a text recognition method according to any one of claims 1 to 7.
CN202210609347.0A 2022-05-31 2022-05-31 Text-based recognition method and device, electronic equipment and readable storage medium Pending CN114926831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210609347.0A CN114926831A (en) 2022-05-31 2022-05-31 Text-based recognition method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210609347.0A CN114926831A (en) 2022-05-31 2022-05-31 Text-based recognition method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114926831A (en) 2022-08-19

Family

ID=82812193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210609347.0A Pending CN114926831A (en) 2022-05-31 2022-05-31 Text-based recognition method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114926831A (en)

Similar Documents

Publication Publication Date Title
US10943105B2 (en) Document field detection and parsing
CN112861648B (en) Character recognition method, character recognition device, electronic equipment and storage medium
CN112507936B (en) Image information auditing method and device, electronic equipment and readable storage medium
CN112508145B (en) Electronic seal generation and verification method and device, electronic equipment and storage medium
CN112699775A (en) Certificate identification method, device and equipment based on deep learning and storage medium
CN113157927A (en) Text classification method and device, electronic equipment and readable storage medium
CN114898373A (en) File desensitization method and device, electronic equipment and storage medium
CN112560855A (en) Image information extraction method and device, electronic equipment and storage medium
CN114120347A (en) Form verification method and device, electronic equipment and storage medium
CN113536782B (en) Sensitive word recognition method and device, electronic equipment and storage medium
CN115294593A (en) Image information extraction method and device, computer equipment and storage medium
CN114926831A (en) Text-based recognition method and device, electronic equipment and readable storage medium
CN114943306A (en) Intention classification method, device, equipment and storage medium
CN113704474A (en) Bank outlet equipment operation guide generation method, device, equipment and storage medium
CN113221888B (en) License plate number management system test method and device, electronic equipment and storage medium
CN113793121A (en) Automatic litigation method and device for litigation cases, electronic device and storage medium
CN115546814A (en) Key contract field extraction method and device, electronic equipment and storage medium
CN114153972A (en) Accessory classification method, device, equipment and medium based on optical character recognition
CN116542221A (en) PDF file analysis preview method, device, equipment and storage medium
CN114385815A (en) News screening method, device, equipment and storage medium based on business requirements
CN115203364A (en) Software fault feedback processing method, device, equipment and readable storage medium
CN116225416A (en) Webpage code creation method, device, equipment and storage medium
CN111784499A (en) Service integration method and device based on cloud platform, electronic equipment and storage medium
CN114840438A (en) Text code detection and evaluation method, device, equipment and storage medium
CN113486266A (en) Page label adding method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination