CN111382810A - Character string recognition method and device and storage medium - Google Patents

Character string recognition method and device and storage medium

Info

Publication number
CN111382810A
CN111382810A (application CN201811644895.7A)
Authority
CN
China
Prior art keywords
character string
sub
target image
result corresponding
character
Prior art date
Legal status
Pending
Application number
CN201811644895.7A
Other languages
Chinese (zh)
Inventor
王超 (Wang Chao)
赵锟 (Zhao Kun)
姜帆 (Jiang Fan)
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201811644895.7A priority Critical patent/CN111382810A/en
Publication of CN111382810A publication Critical patent/CN111382810A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/196Recognition using electronic means using sequential comparisons of the image signals with a plurality of references
    • G06V30/1983Syntactic or structural pattern recognition, e.g. symbolic string recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition


Abstract

The disclosure relates to a character string recognition method and device and a storage medium. The method comprises the following steps: extracting a feature map of a target image through a convolutional neural network; segmenting the feature map into a plurality of subgraphs; inputting the plurality of subgraphs into a recurrent neural network in a specified order to obtain a character recognition result corresponding to each of the plurality of subgraphs; and determining a character string recognition result corresponding to the target image according to the character recognition result corresponding to each of the plurality of subgraphs. The method and device can avoid the cumulative errors caused by segmenting the target image before recognition, and can thereby improve the accuracy of character string recognition.

Description

Character string recognition method and device and storage medium
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to a method and an apparatus for recognizing a character string, and a storage medium.
Background
In the related art, character recognition is generally performed by OCR (Optical Character Recognition) techniques or by segment-then-recognize methods based on deep learning. Taking telephone number recognition as an example, the character recognition process in the related art is roughly as follows: first, segment subgraphs according to the positions of the digits in the telephone number image; then, recognize the digit in each subgraph; finally, concatenate the digits in left-to-right, top-to-bottom order to obtain the number string.
The character recognition methods in the related art require two steps, segmentation and recognition, and errors accumulate across these two processes: inaccurate segmentation degrades the subsequent recognition. In addition, if the accuracy of a single-digit recognition model is 99%, the overall accuracy for an 11-digit phone number drops to 0.99^11 ≈ 89.5%, so recognition accuracy cannot be guaranteed.
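The accuracy degradation from independent per-digit recognition can be checked with a quick calculation (a sketch; the 99% per-digit figure is the one assumed above):

```python
# Overall accuracy of an 11-digit number when each digit is
# recognized independently with 99% per-digit accuracy.
per_digit_accuracy = 0.99
num_digits = 11
overall_accuracy = per_digit_accuracy ** num_digits
print(f"{overall_accuracy:.1%}")  # roughly 89.5%
```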
Disclosure of Invention
In view of the above, the present disclosure provides a method, an apparatus, and a storage medium for recognizing a character string.
According to an aspect of the present disclosure, there is provided a character string recognition method, including:
extracting a feature map of a target image through a convolutional neural network;
segmenting the feature map into a plurality of sub-maps;
inputting the multiple subgraphs into a recurrent neural network according to a specified sequence to obtain character recognition results corresponding to each subgraph in the multiple subgraphs;
and determining a character string recognition result corresponding to the target image according to the character recognition result corresponding to each of the multiple sub-images.
In one possible implementation, segmenting the feature map into a plurality of sub-maps includes:
and dividing the characteristic graph according to columns to obtain a plurality of subgraphs.
In one possible implementation manner, the number of columns of the feature map segmentation is greater than the expected length of the character string corresponding to the target image.
In a possible implementation manner, determining a character string recognition result corresponding to the target image according to a character recognition result corresponding to each of the multiple sub-images includes:
determining a character string with the maximum probability corresponding to the sub-images through a Connectionist Temporal Classification network according to characters in the character recognition result corresponding to each sub-image in the sub-images and the probability corresponding to the characters;
and determining a character string identification result corresponding to the target image according to the character string with the maximum probability.
In a possible implementation manner, determining a character string recognition result corresponding to the target image according to the character string with the maximum probability includes:
and correcting the character string with the maximum probability according to the expected structure information of the character string corresponding to the target image to obtain the character string identification result corresponding to the target image.
According to another aspect of the present disclosure, there is provided a character string recognition apparatus including:
the extraction module is used for extracting a feature map of the target image through a convolutional neural network;
a segmentation module for segmenting the feature map into a plurality of sub-maps;
the recognition module is used for inputting the multiple subgraphs into a recurrent neural network according to a specified sequence to obtain character recognition results corresponding to each subgraph in the multiple subgraphs;
and the determining module is used for determining a character string recognition result corresponding to the target image according to the character recognition result corresponding to each sub-image in the plurality of sub-images.
In one possible implementation, the segmentation module is configured to:
and dividing the characteristic graph according to columns to obtain a plurality of subgraphs.
In one possible implementation manner, the number of columns of the feature map segmentation is greater than the expected length of the character string corresponding to the target image.
In one possible implementation, the determining module includes:
the first determining sub-module is used for determining a character string with the maximum probability corresponding to the sub-images through a Connectionist Temporal Classification network according to characters in the character recognition result corresponding to each sub-image in the sub-images and the probability corresponding to the characters;
and the second determining submodule is used for determining a character string recognition result corresponding to the target image according to the character string with the maximum probability.
In one possible implementation, the second determining submodule is configured to:
and correcting the character string with the maximum probability according to the expected structure information of the character string corresponding to the target image to obtain the character string identification result corresponding to the target image.
According to another aspect of the present disclosure, there is provided a character string recognition apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
The character string recognition method and device of the above aspects of the disclosure extract the feature map of the target image through a convolutional neural network, segment the feature map into a plurality of subgraphs, input the plurality of subgraphs into a recurrent neural network in a specified order to obtain the character recognition result corresponding to each subgraph, and determine the character string recognition result corresponding to the target image from those per-subgraph results. This avoids the cumulative errors produced by segmenting the target image and then recognizing it, and thereby improves the accuracy of character string recognition.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a method of recognizing a character string according to an embodiment of the present disclosure.
Fig. 2 illustrates a recurrent neural network in a character string recognition method according to an embodiment of the present disclosure and an expanded schematic diagram thereof.
Fig. 3 illustrates a schematic diagram of an LRCN-CTC network in a character string recognition method according to an embodiment of the present disclosure.
Fig. 4 shows an exemplary flowchart of step S14 of the character string recognition method according to an embodiment of the present disclosure.
Fig. 5 illustrates a schematic diagram of a target image, RNN results, and CTC results in a character string recognition method according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram illustrating the size of output data after being processed by a corresponding layer of an LRCN-CTC network in the character string recognition method according to an embodiment of the present disclosure.
Fig. 7 is a schematic diagram illustrating a test sample in a character string recognition method according to an embodiment of the present disclosure.
Fig. 8 shows a block diagram of a recognition apparatus of a character string according to an embodiment of the present disclosure.
Fig. 9 is a block diagram illustrating an apparatus 800 for recognition of a character string, according to an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a method of recognizing a character string according to an embodiment of the present disclosure. As shown in fig. 1, the method includes steps S11 through S14.
In step S11, a feature map of the target image is extracted by the convolutional neural network.
The target image may be an image that needs to be subjected to character string recognition.
In this embodiment, the Convolutional Neural Network (CNN) may be ZFNet, CaffeNet, AlexNet, VGGNet, or the like, and is not limited herein.
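As a minimal illustration of feature-map extraction, the following is a toy single-filter 2D convolution in NumPy (a sketch only; the actual embodiment uses a trained backbone such as ZFNet, and the 65×321 input size is taken from the application example later in this document):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Toy 'valid' 2D convolution producing one feature map channel."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Each output cell summarizes a local patch of the target image.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(65, 321)   # grayscale stand-in for the target image
kernel = np.random.rand(3, 3)     # one learned filter, here random
feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)          # (63, 319)
```

A real CNN stacks many such filters with pooling, so the final feature map is much smaller in height and width but has many channels.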
In step S12, the feature map is divided into a plurality of sub-maps.
In one possible implementation, the feature map may be partitioned into multiple sub-maps by a feature map partitioning layer.
In this embodiment, the feature map is divided into a plurality of sub-maps, and the plurality of sub-maps are used as an input sequence of the recurrent neural network, thereby realizing serialization of a single target image and constructing a time-series input signal required by the recurrent neural network.
In one possible implementation, the segmenting the feature map into a plurality of sub-maps may include: and dividing the characteristic diagram according to columns to obtain a plurality of subgraphs. In this implementation, it is considered that the character strings in the target image are generally arranged from left to right in the horizontal direction, and therefore, the feature map may be divided into columns, and then, the plurality of sub-maps may be used as the input sequence of the recurrent neural network from left to right.
In this implementation, since each column of the feature map has a relatively large receptive field in the target image and can describe the information of multiple columns of the target image, segmenting the feature map by columns is essentially equivalent to segmenting the original image into blocks (of multiple columns).
As an example of this implementation, the column-wise partitioning of the feature map may be implemented using the Reshape method and the Transpose method.
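The column-wise split via reshape and transpose can be sketched as follows (a NumPy sketch; the 256×5×21 feature map size and the 2M−1 column count are taken from the phone-number example in this document):

```python
import numpy as np

M = 11                   # expected string length (11-digit phone number)
T = 2 * M - 1            # number of columns, as suggested: 2M - 1 = 21
C, H = 256, 5            # channels and height of the ZFNet feature map

feature_map = np.random.rand(C, H, T)   # (channels, height, width)
# Transpose so the width axis comes first, then flatten each column:
# each of the T column subgraphs becomes one time step for the RNN.
subgraphs = feature_map.transpose(2, 0, 1).reshape(T, C * H)
print(subgraphs.shape)   # (21, 1280)
```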
As an example of this implementation, the number of columns of the feature map segmentation is greater than the expected length of the character string corresponding to the target image. For example, the number of columns of feature map segmentation may be set to 2M-1, where M represents the expected length of the string and M is greater than 1; such as: in the case where the character string is a telephone number, the expected length of the telephone number is 11 digits, and the number of columns into which the feature map is divided is set to 21.
In this example, the background interval between the respective characters is taken into consideration, and therefore, by setting the number of columns into which the feature map is divided to be larger than the expected length of the character string corresponding to the target image, it is advantageous to improve the recognition effect of the character string.
As an example of this implementation, the width of the input image (i.e., the target image) of the convolutional neural network may be derived by inverse calculation from the number of columns of the feature map. The size of the input image may then be derived from its width and its aspect ratio.
In step S13, the multiple sub-images are input into the recurrent neural network in a designated order, and a character recognition result corresponding to each of the multiple sub-images is obtained.
Fig. 2 illustrates a recurrent neural network in a character string recognition method according to an embodiment of the present disclosure and an expanded schematic diagram thereof. A Recurrent Neural Network (RNN) can be used to process sequence data, with the idea that the output at the current time is not only dependent on the input at the current time, but is also influenced by the state at the previous time. The recurrent neural network takes into account context information, which is advantageous for character string recognition of telephone numbers and the like having a specified structure.
In one possible implementation, the recurrent neural network may employ an LSTM (Long Short-Term Memory) neural network. The LSTM neural network is a kind of recurrent neural network that was proposed to solve the "long-term dependency" problem, and it performs better on character recognition tasks than a standard recurrent neural network. In the recurrent neural network structure shown in Fig. 2, LSTM redesigns the neural network module A of the recurrent neural network. In another possible implementation, the recurrent neural network may employ a BLSTM (Bidirectional Long Short-Term Memory) neural network.
In another possible implementation, the recurrent neural network may employ a cascaded LSTM neural network.
In the present exemplary embodiment, each of the plurality of subgraphs forms the input signal of the recurrent neural network at one time step. The input signal at each time step is fed into the recurrent neural network to obtain the output signal at that time step. In this embodiment, the length of the output sequence of the recurrent neural network is equal to the length of the input sequence, i.e., to the number of subgraphs.
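A minimal vanilla-RNN loop over the subgraph sequence looks like this (a NumPy sketch with random weights standing in for trained parameters; the embodiment itself uses LSTM or BLSTM cells, and the dimensions follow the phone-number example):

```python
import numpy as np

rng = np.random.default_rng(0)
T, input_dim, hidden_dim, num_classes = 21, 1280, 64, 11  # 0-9 + background

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(0, 0.01, (input_dim, hidden_dim))
W_hh = rng.normal(0, 0.01, (hidden_dim, hidden_dim))
W_hy = rng.normal(0, 0.01, (hidden_dim, num_classes))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

subgraphs = rng.random((T, input_dim))
h = np.zeros(hidden_dim)
outputs = []
for x_t in subgraphs:                    # feed subgraphs left to right
    h = np.tanh(x_t @ W_xh + h @ W_hh)   # hidden state carries context forward
    outputs.append(softmax(h @ W_hy))    # per-step character distribution
outputs = np.array(outputs)
print(outputs.shape)                     # (21, 11): one distribution per subgraph
```

The output sequence has exactly one character distribution per subgraph, matching the statement that the output length equals the number of subgraphs.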
In step S14, a character string recognition result corresponding to the target image is determined based on the character recognition result corresponding to each of the plurality of sub-images.
In one possible implementation manner, a character string recognition result corresponding to the target image may be determined from the character recognition result corresponding to each of the plurality of sub-images through a Connectionist Temporal Classification (CTC) network.
In other possible implementation manners, the character string recognition result corresponding to the target image may be determined from the character recognition result corresponding to each of the multiple sub-images using a Hidden Markov Model (HMM), Conditional Random Fields (CRF), or the like.
In one possible implementation, this embodiment may be implemented by an LRCN-CTC (Long-term Recurrent Convolutional Networks - Connectionist Temporal Classification) network. Fig. 3 illustrates a schematic diagram of an LRCN-CTC network in a character string recognition method according to an embodiment of the present disclosure. As shown in Fig. 3, the LRCN-CTC network may include an LRCN and a Connectionist Temporal Classification network, and the LRCN may include a convolutional neural network, a feature map segmentation layer, and a recurrent neural network. The convolutional neural network may be used to perform step S11, the feature map segmentation layer to perform step S12, the recurrent neural network to perform step S13, and the Connectionist Temporal Classification network to perform step S14. This implementation provides an end-to-end character string recognition method: after the target image is input into the LRCN-CTC network, the network directly outputs the character string recognition result corresponding to the target image. Integrating the LRCN and the Connectionist Temporal Classification network into one model makes model training simpler, realizes end-to-end training and recognition, and improves the accuracy of character string recognition.
In the embodiment, the feature graph of the target image is extracted through the convolutional neural network, the feature graph is divided into a plurality of sub-graphs, the sub-graphs are input into the cyclic neural network according to the designated sequence, character recognition results corresponding to the sub-graphs in the sub-graphs are obtained, and the character string recognition result corresponding to the target image is determined according to the character recognition results corresponding to the sub-graphs in the sub-graphs, so that the accumulative error caused by recognition after the target image is divided can be avoided, and the accuracy of character string recognition can be improved.
Fig. 4 shows an exemplary flowchart of step S14 of the character string recognition method according to an embodiment of the present disclosure. As shown in fig. 4, step S14 may include step S141 and step S142.
In step S141, the character string with the maximum probability corresponding to the plurality of sub-images is determined through the Connectionist Temporal Classification network, according to the characters in the character recognition result corresponding to each of the plurality of sub-images and the probabilities corresponding to those characters.
In this embodiment, the character recognition result corresponding to each sub-image output by the recurrent neural network includes the probability that each sub-image belongs to each character category. Taking a telephone number as an example, the character categories may include 11 categories of 0-9 and background. Fig. 5 illustrates a schematic diagram of a target image, RNN results, and CTC results in a character string recognition method according to an embodiment of the present disclosure. In fig. 5, "_" indicates that the character category is the background. In the present embodiment, the probability of each character string can be expressed as
$$p(Y) = \prod_{t=1}^{T} p(y_t)$$

where Y represents the character string formed by y_1 to y_T, and p(y_t) is the probability that the character output at time t belongs to character category y_t, i.e., the probability that the character corresponding to the t-th subgraph belongs to category y_t; y_t ranges over the digits 0-9 and the background, and T represents the number of subgraphs. The Connectionist Temporal Classification network may calculate the probability of each character string and may determine the character string with the highest probability among them.
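The per-step probabilities and the product over time steps can be sketched as follows (a toy example with three categories and four time steps; the specific numbers are illustrative, not from the patent):

```python
import numpy as np

# Per-time-step probabilities over three categories: '1', '2', background '_'.
labels = ['1', '2', '_']
probs = np.array([
    [0.7, 0.1, 0.2],   # t = 0
    [0.2, 0.1, 0.7],   # t = 1
    [0.6, 0.3, 0.1],   # t = 2
    [0.1, 0.8, 0.1],   # t = 3
])

def path_probability(path):
    """p(Y) = product over t of p(y_t) for a specific label path."""
    return float(np.prod([probs[t][labels.index(c)] for t, c in enumerate(path)]))

# Greedy (best-path) decoding: take the most probable label at each step.
best_path = ''.join(labels[int(i)] for i in probs.argmax(axis=1))
print(best_path, path_probability(best_path))  # '1_12' is the most probable path
```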
In a possible implementation manner, considering that several consecutive time steps may correspond to different parts of the same digit and that there are background intervals between adjacent digits in the target image, adjacent identical characters in the maximum-probability character string may be merged into one character, and the background (blank) characters in that string deleted, to obtain a corrected sequence. For example, the corrected sequences corresponding to the character strings "1 _ 12" and "_ 11_ 122" are both 112.
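The merge-then-delete rule described above can be sketched as a small helper (using '_' for the background category, as in Fig. 5):

```python
def collapse(path, blank='_'):
    """Merge adjacent duplicate characters, then drop blanks (CTC decoding rule)."""
    merged = []
    for ch in path:
        if not merged or ch != merged[-1]:
            merged.append(ch)             # keep only the first of a run of repeats
    return ''.join(ch for ch in merged if ch != blank)

print(collapse('1_12'))     # 112
print(collapse('_11_122'))  # 112
```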
In step S142, a character string recognition result corresponding to the target image is determined according to the character string with the maximum probability.
In a possible implementation manner, determining a character string recognition result corresponding to the target image according to the character string with the maximum probability may include: and correcting the character string with the maximum probability according to the expected structure information of the character string corresponding to the target image to obtain the character string identification result corresponding to the target image.
As an example of this implementation, in the case where the character string is a telephone number, the character string of the maximum probability may be corrected according to the expected structure information of the telephone number, so that the character string recognition result can be optimized. For example, the expected structure information of the telephone number includes a mobile access code, an identification code, and a mobile subscriber number. The mobile access code is 3 bits, the identification code is 4 bits, the mobile user number is 4 bits, and the first bit of the mobile access code is 1. If the first bit of the string with the highest probability is not 1, the first bit may be corrected to 1.
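The structure-based correction described for phone numbers can be sketched as follows (a sketch of just the first-digit rule stated above; the sample numbers are hypothetical):

```python
def correct_phone_number(s):
    """Correct a decoded string using expected structure information:
    an 11-digit mobile number whose first digit must be '1'."""
    if len(s) == 11 and s[0] != '1':
        s = '1' + s[1:]   # force the first bit of the mobile access code to 1
    return s

print(correct_phone_number('73812345678'))  # corrected to 13812345678
print(correct_phone_number('13812345678'))  # already valid, unchanged
```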
In one possible implementation, the Connectionist Temporal Classification network may calculate loss values during training, and the parameters of the entire LRCN-CTC network may be optimized by back-propagating these loss values.
This embodiment can be applied to character string recognition where the categories are limited and the length varies little. A limited category set means that the number of character categories in the character string is within a certain range: for example, all-English words involve only 26 letters, and all-numeric telephone numbers involve only the ten digits 0-9. A small variation in length means that the length of the character string is within a certain range, for example, a telephone number of 7-12 digits. When the character spacing is substantially uniform, the variation in string length is reflected only in the width of the picture, which is equivalent to the aspect ratio of the picture lying within a certain range.
Application example:
In an application of phone number recognition in natural scene images, the average aspect ratio of the phone number images is 4.85. In this example, the convolutional neural network may employ ZFNet; the input image has 3 channels, a height of 65, and a width of 321, and the feature map obtained by ZFNet has a size of 256 × 5 × 21 (channels × height × width). Fig. 6 shows a schematic diagram of the size of the output data after processing by each corresponding layer of the LRCN-CTC network in the character string recognition method according to an embodiment of the present disclosure. In Fig. 6, size denotes the size of the output data after processing by the corresponding layer, N denotes the number of pictures processed per layer, and T denotes the number of bits of the phone number recognition result.
The LRCN-CTC network model can be obtained by training on a large amount of manually labeled telephone number data. Fig. 7 is a schematic diagram of test samples in the character string recognition method according to an embodiment of the present disclosure. When the test samples are input into the trained network model, the correct recognition results are output with high probability in all cases. This shows that the method has good recognition capability even when the images contain complex background interference, tilt, and the like.
Fig. 8 shows a block diagram of a recognition apparatus of a character string according to an embodiment of the present disclosure. As shown in fig. 8, the apparatus includes: an extraction module 81, configured to extract a feature map of the target image through a convolutional neural network; a segmentation module 82 configured to segment the feature map into a plurality of sub-maps; the recognition module 83 is configured to input the multiple sub-images into a recurrent neural network according to a specified order, so as to obtain a character recognition result corresponding to each sub-image in the multiple sub-images; the determining module 84 is configured to determine a character string recognition result corresponding to the target image according to the character recognition result corresponding to each of the multiple sub-images.
In one possible implementation, the segmentation module 82 is configured to: and dividing the characteristic graph according to columns to obtain a plurality of subgraphs.
In one possible implementation manner, the number of columns of the feature map segmentation is greater than the expected length of the character string corresponding to the target image.
In one possible implementation, the determining module 84 includes: the first determining sub-module is used for determining a character string with the maximum probability corresponding to the sub-images through a Connectionist Temporal Classification network according to characters in the character recognition result corresponding to each sub-image in the sub-images and the probability corresponding to the characters; and the second determining submodule is used for determining a character string recognition result corresponding to the target image according to the character string with the maximum probability.
In one possible implementation, the second determining submodule is configured to: and correcting the character string with the maximum probability according to the expected structure information of the character string corresponding to the target image to obtain the character string identification result corresponding to the target image.
In the embodiment, the feature graph of the target image is extracted through the convolutional neural network, the feature graph is divided into a plurality of sub-graphs, the sub-graphs are input into the cyclic neural network according to the designated sequence, character recognition results corresponding to the sub-graphs in the sub-graphs are obtained, and the character string recognition result corresponding to the target image is determined according to the character recognition results corresponding to the sub-graphs in the sub-graphs, so that the accumulative error caused by recognition after the target image is divided can be avoided, and the accuracy of character string recognition can be improved.
Fig. 9 is a block diagram illustrating an apparatus 800 for recognition of a character string, according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 9, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor assembly 814 may also detect a change in the position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method for recognizing a character string, comprising:
extracting a feature map of a target image through a convolutional neural network;
segmenting the feature map into a plurality of sub-maps;
inputting the plurality of sub-maps into a recurrent neural network in a specified order to obtain a character recognition result corresponding to each of the plurality of sub-maps;
and determining a character string recognition result corresponding to the target image according to the character recognition result corresponding to each of the plurality of sub-maps.
2. The method of claim 1, wherein segmenting the feature map into a plurality of sub-maps comprises:
dividing the feature map by columns to obtain the plurality of sub-maps.
3. The method of claim 2, wherein the number of columns into which the feature map is segmented is greater than the expected length of the character string corresponding to the target image.
4. The method of claim 1, wherein determining the character string recognition result corresponding to the target image according to the character recognition result corresponding to each of the plurality of sub-maps comprises:
determining, through a connectionist temporal classification network, a character string with the maximum probability corresponding to the plurality of sub-maps, according to characters in the character recognition result corresponding to each of the plurality of sub-maps and probabilities corresponding to the characters;
and determining the character string recognition result corresponding to the target image according to the character string with the maximum probability.
5. The method according to claim 4, wherein determining the character string recognition result corresponding to the target image according to the character string with the maximum probability comprises:
correcting the character string with the maximum probability according to expected structure information of the character string corresponding to the target image, to obtain the character string recognition result corresponding to the target image.
6. An apparatus for recognizing a character string, comprising:
an extraction module, configured to extract a feature map of a target image through a convolutional neural network;
a segmentation module, configured to segment the feature map into a plurality of sub-maps;
a recognition module, configured to input the plurality of sub-maps into a recurrent neural network in a specified order to obtain a character recognition result corresponding to each of the plurality of sub-maps;
and a determining module, configured to determine a character string recognition result corresponding to the target image according to the character recognition result corresponding to each of the plurality of sub-maps.
7. The apparatus of claim 6, wherein the segmentation module is configured to:
divide the feature map by columns to obtain the plurality of sub-maps.
8. The apparatus of claim 6, wherein the determining module comprises:
a first determining sub-module, configured to determine, through a connectionist temporal classification network, a character string with the maximum probability corresponding to the plurality of sub-maps, according to characters in the character recognition result corresponding to each of the plurality of sub-maps and probabilities corresponding to the characters;
and a second determining sub-module, configured to determine the character string recognition result corresponding to the target image according to the character string with the maximum probability.
9. An apparatus for recognizing a character string, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 5.
10. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 5.
CN201811644895.7A 2018-12-29 2018-12-29 Character string recognition method and device and storage medium Pending CN111382810A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811644895.7A CN111382810A (en) 2018-12-29 2018-12-29 Character string recognition method and device and storage medium


Publications (1)

Publication Number Publication Date
CN111382810A true CN111382810A (en) 2020-07-07


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678293A (en) * 2015-12-30 2016-06-15 成都数联铭品科技有限公司 Complex image and text sequence identification method based on CNN-RNN
CN107798327A (en) * 2017-10-31 2018-03-13 北京小米移动软件有限公司 Character identifying method and device
CN108288078A (en) * 2017-12-07 2018-07-17 腾讯科技(深圳)有限公司 Character identifying method, device and medium in a kind of image



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200707