WO2023130925A1 - Font recognition method and apparatus, readable medium, and electronic device - Google Patents


Info

Publication number
WO2023130925A1
WO2023130925A1 (PCT/CN2022/138914)
Authority
WO
WIPO (PCT)
Prior art keywords
image, font, recognized, target, images
Application number
PCT/CN2022/138914
Other languages
English (en)
French (fr)
Inventor
叶勇杰
黄灿
Original Assignee
北京有竹居网络技术有限公司
Application filed by 北京有竹居网络技术有限公司
Publication of WO2023130925A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Definitions

  • The present disclosure relates to the field of computer vision processing, and in particular to a font recognition method, apparatus, readable medium, and electronic device.
  • Font recognition is difficult because fonts are extremely varied (there are more than 12,000 fonts for Chinese characters alone) and each font has multiple characteristics that may not all be reflected in a single character. As a result, different characters of the same font may exhibit identical or different features, while different fonts may share many identical or similar features on the same character, all of which makes fonts hard to recognize.
  • Font recognition methods in the related art therefore generally suffer from a low font recognition rate and poor accuracy of the recognition results.
  • the disclosure provides a font recognition method, device, readable medium and electronic equipment.
  • the present disclosure provides a font recognition method, the method comprising:
  • The preset font recognition model is used to divide the image to be recognized into a plurality of sub-images and obtain a first image feature corresponding to each sub-image, and to determine, from the first image features of the sub-images in the image to be recognized, a second image feature corresponding to the image to be recognized;
  • the second image feature includes context-association features between each sub-image and the other sub-images in the image to be recognized, and the font type corresponding to the target text is determined according to the second image feature.
  • the present disclosure provides a font recognition device, the device comprising:
  • An acquisition module configured to acquire an image to be recognized, where the image to be recognized includes target text
  • a determining module configured to input the image to be recognized into a preset font recognition model, so that the preset font recognition model outputs a font type corresponding to the target text;
  • The preset font recognition model is used to divide the image to be recognized into a plurality of sub-images and obtain a first image feature corresponding to each sub-image, and to determine, from the first image features of the sub-images in the image to be recognized, a second image feature corresponding to the image to be recognized;
  • the second image feature includes context-association features between each sub-image and the other sub-images in the image to be recognized, and the font type corresponding to the target text is determined according to the second image feature.
  • The present disclosure provides a computer-readable medium storing a computer program which, when executed by a processing device, implements the steps of the method described in the first aspect above.
  • an electronic device including:
  • a processing device configured to execute the computer program in the storage device, so as to implement the steps of the method described in the first aspect above.
  • The preset font recognition model outputs the font type corresponding to the target text. The model divides the image to be recognized into multiple sub-images, acquires the first image feature corresponding to each sub-image, and determines the second image feature corresponding to the image to be recognized from the first image features of the sub-images; the second image feature includes the context-association features between each sub-image and the other sub-images, and the font type of the target text is determined according to the second image feature.
  • Because the second image feature is determined from the first image features of all sub-images in the image to be recognized, the image can be described comprehensively and accurately on the basis of the correlation between each character image and the other sub-images, so that both the accuracy of the font recognition result and the font recognition rate are effectively improved.
  • Fig. 1 is a flowchart of a font recognition method shown in an exemplary embodiment of the present disclosure;
  • Fig. 2 is a schematic diagram of the working principle of a preset font recognition model shown in an exemplary embodiment of the present disclosure
  • Fig. 3 is a schematic diagram of the working principle of a TPA module shown in an exemplary embodiment of the present disclosure
  • Fig. 4 is a flowchart of a model training method shown in an exemplary embodiment of the present disclosure
  • Fig. 5 is a block diagram of a font recognition device shown in an exemplary embodiment of the present disclosure.
  • Fig. 6 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, ie “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • the present disclosure can be used in the process of identifying text fonts in images and documents.
  • Font recognition methods in the related art roughly fall into two categories.
  • The first category is font recognition based on hand-designed features, that is, feature extractors designed manually from human experience are used to extract the features of each character, and the extracted features are then used to classify fonts.
  • Because this type of method represents fonts with fixed hand-crafted features, it easily loses useful font information; moreover, designing a highly accurate feature descriptor generally requires careful engineering and considerable domain expertise, so an accurate descriptor is difficult to obtain. This hinders the extraction of comprehensive and accurate font features and in turn limits the accuracy of the final font recognition results.
  • The second category is font recognition based on deep learning, in which a deep neural network automatically extracts font features and the extracted features are used for classification. This type of scheme does not overcome the limitation imposed by the large number of font categories: the trained deep learning model usually still suffers from a low font recognition rate and poor accuracy of the recognition results.
  • the present disclosure provides a font recognition method, device, readable medium and electronic equipment.
  • In the font recognition method, the image to be recognized is input into a preset font recognition model, so that the preset font recognition model outputs the font type corresponding to the target text. The preset font recognition model divides the image to be recognized into a plurality of sub-images and obtains the first image feature corresponding to each sub-image,
  • then determines the second image feature corresponding to the image to be recognized from the first image features of the sub-images; the second image feature includes context-association features between each sub-image and the other sub-images, and the font type of the target text is determined according to the second image feature. Because the second image feature is determined from the first image features of all sub-images, the image to be recognized can be described comprehensively and accurately on the basis of the correlation between each character image and the other sub-images.
  • Fig. 1 is a flowchart of a font recognition method shown in an exemplary embodiment of the present disclosure; as shown in Fig. 1, the method may include the following steps:
  • Step 101: an image to be recognized is acquired, and the image to be recognized includes target text.
  • the image to be recognized may also include an image background, and the image to be recognized is formed by the image background and the target text.
  • Step 102: the image to be recognized is input into a preset font recognition model, so that the preset font recognition model outputs a font type corresponding to the target text.
  • The preset font recognition model is used to divide the image to be recognized into a plurality of sub-images and obtain the first image feature corresponding to each sub-image,
  • to determine, from the first image features of the sub-images, the second image feature corresponding to the image to be recognized,
  • where the second image feature includes the context-association features between each sub-image and the other sub-images in the image to be recognized,
  • and to determine the font type corresponding to the target text according to the second image feature.
  • The preset font recognition model may include an image segmentation module and a multi-head attention module.
  • The image segmentation module is used to divide the image to be recognized into a plurality of sub-images, obtain the first image feature corresponding to each sub-image, and input the first image features of the sub-images into the multi-head attention module, so that the multi-head attention module obtains the context-association features between each sub-image and the other sub-images in the image to be recognized and thereby obtains the second image feature corresponding to the image to be recognized.
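The patent does not fix a concrete division rule, but splitting an image into sub-images of a preset pixel size can be sketched as follows; the function name and the grid layout are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

def split_into_subimages(image, patch_h, patch_w):
    """Divide an image array into non-overlapping sub-images (patches),
    scanning row by row; any remainder at the edges is discarded."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - h % patch_h, patch_h):
        for left in range(0, w - w % patch_w, patch_w):
            patches.append(image[top:top + patch_h, left:left + patch_w])
    return patches

# A 32x64 image split into 16x16 sub-images yields a 2x4 grid of patches.
image = np.arange(32 * 64).reshape(32, 64)
patches = split_into_subimages(image, 16, 16)
```

Each patch would then be passed through a feature extractor to obtain its first image feature.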
  • When the image segmentation module divides the image to be recognized, it may do so according to a preset image division rule:
  • for example, the image may be divided into a plurality of sub-images of a preset pixel size, or divided according to the image texture, so that a plurality of sub-images is obtained.
  • For the multi-head attention module, reference may be made to descriptions of the multi-head attention mechanism in the prior art; since the mechanism is widely used and the related techniques are readily available, it is not described again here.
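As a concrete illustration of that widely used mechanism (a generic sketch, not the patent's exact module), multi-head scaled dot-product self-attention over the sub-image features can be written as:

```python
import numpy as np

def multi_head_self_attention(x, wq, wk, wv, num_heads):
    """Scaled dot-product self-attention with num_heads heads.
    x: (seq_len, d_model) matrix, one row per sub-image's first image feature.
    Returns a (seq_len, d_model) matrix of context-associated features."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    q, k, v = x @ wq, x @ wk, x @ wv
    heads = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d_head)   # (seq_len, seq_len)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        heads.append(weights @ v[:, s])                  # attend over all sub-images
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))                 # 8 sub-images, 64-d features
wq, wk, wv = (rng.standard_normal((64, 64)) * 0.1 for _ in range(3))
out = multi_head_self_attention(x, wq, wk, wv, num_heads=4)
```

Each output row mixes information from all sub-images, which is exactly the "context association between each sub-image and other sub-images" that the second image feature is said to capture.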
  • FIG. 2 is a schematic diagram of the working principle of a preset font recognition model shown in an exemplary embodiment of the present disclosure.
  • Each data set is used as a training data set and input into the Backbone (backbone network) to obtain the image features of each font recognition image. A TPA (Temporal Pyramid Attention, a self-attention mechanism) module then segments each font recognition image according to its image features, yielding the first image feature of each sub-image after segmentation; the first image features of the sub-images are concatenated and sent to the multi-head attention module, so that the multi-head attention module acquires the contextual image features describing the mutual influence between different sub-images (the structure of the TPA module may be as shown in Fig. 3,
  • which is a diagram of an exemplary embodiment of the present disclosure).
  • An MLP (Multilayer Perceptron) then performs dimension reduction
  • to obtain the low-dimensional feature corresponding to each sub-image; for example, the 512-dimensional feature corresponding to a font recognition image is reduced to 64 dimensions, i.e., a 64-dimensional vector represents the font recognition image (an embedding operation is performed on each font recognition image), thereby obtaining the second image feature of each font recognition image. From the embedding results (the second image features) of the font recognition images in each training data set,
  • the representative feature of the font is then obtained: the mean of the second image features of the font recognition images in the training data set is computed via the embedding space network, and this mean is used as the representative feature of the corresponding font.
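The dimension-reduction (embedding) step, e.g. 512-d down to 64-d, can be sketched with a small two-layer MLP; the layer sizes, weights, and function name here are illustrative assumptions:

```python
import numpy as np

def embed(feature, w1, b1, w2, b2):
    """Two-layer MLP that projects a high-dimensional image feature
    (e.g. 512-d) down to a low-dimensional embedding (e.g. 64-d)."""
    hidden = np.maximum(0.0, feature @ w1 + b1)   # ReLU non-linearity
    return hidden @ w2 + b2

rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((512, 128)) * 0.05, np.zeros(128)
w2, b2 = rng.standard_normal((128, 64)) * 0.05, np.zeros(64)
feature = rng.standard_normal(512)                # one font recognition image
embedding = embed(feature, w1, b1, w2, b2)        # 64-d second image feature
```

In practice the weights would be learned during training; the 64-d vector is the embedding used downstream for the representative-feature mean and distance comparisons.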
  • The preset font recognition model can recognize N fonts and stores the representative feature of each of the N fonts. For example, suppose the representative feature of font A is [a1 a2 ... an], the representative feature of font B is [b1 b2 ... bn], and the second image feature of the current image to be recognized is [x1 x2 ... xn].
  • Then the Euclidean distance between the current image to be recognized and the representative feature of font A is sqrt((x1 - a1)^2 + (x2 - a2)^2 + ... + (xn - an)^2), and the Euclidean distance between the current image to be recognized and the representative feature of font B is sqrt((x1 - b1)^2 + (x2 - b2)^2 + ... + (xn - bn)^2). Similarly, the Euclidean distance between the image to be recognized and each font recognizable by the preset font recognition model can be obtained, the model containing the representative features of all recognizable fonts.
  • For example, suppose the preset font recognition model includes representative features of six recognizable fonts, font A to font F, and the Euclidean distance between the second image feature of the image to be recognized and the representative feature of font A is smaller than
  • its Euclidean distances to the representative features of font B, font C, font D, font E, and font F; then the Euclidean distance between the second image feature of the image to be recognized and the representative feature of font A is the target distance.
  • One possible implementation is as follows: when the target distance is less than a preset distance threshold, the target font type corresponding to the target representative feature used to calculate the target distance is taken as the font type of the target text.
  • For example, if the Euclidean distance between the second image feature of the image to be recognized and the representative feature of font A is the target distance,
  • and the target distance is less than the preset distance threshold,
  • then font A is taken as the font type corresponding to the target text in the image to be recognized.
  • Conversely, if the Euclidean distance between the second image feature of the image to be recognized and the representative feature of font A is the target distance,
  • and the target distance is greater than or equal to the preset distance threshold,
  • then the font type corresponding to the target text in the image to be recognized is a newly added font; that is, the preset font recognition model cannot recognize the specific font type corresponding to the target text in the image to be recognized.
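The distance comparison and threshold decision described above can be sketched as follows; the font names, feature values, and threshold are illustrative:

```python
import math

def classify_font(second_image_feature, representative_features, threshold):
    """Compare the second image feature against each stored representative
    feature by Euclidean distance; the smallest distance is the target
    distance. Below the threshold, report the matched font; otherwise the
    text is treated as a newly added (unrecognized) font."""
    def euclidean(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    distances = {name: euclidean(second_image_feature, rep)
                 for name, rep in representative_features.items()}
    target_font = min(distances, key=distances.get)
    target_distance = distances[target_font]
    if target_distance < threshold:
        return target_font, target_distance
    return "newly added font", target_distance

reps = {"font A": [1.0, 0.0], "font B": [0.0, 1.0]}
result = classify_font([0.9, 0.1], reps, threshold=0.5)
```

Here `[0.9, 0.1]` lies closest to font A's representative feature and within the threshold, so font A would be reported; a feature far from every stored representative feature falls through to the "newly added font" branch.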
  • The preset font recognition model outputs the font type corresponding to the target text. The model divides the image to be recognized into a plurality of sub-images, obtains the first image features corresponding to the sub-images, and determines the second image feature corresponding to the image to be recognized from those first image features;
  • the second image feature includes the context-association features between each sub-image and the other sub-images in the image to be recognized, and the font type corresponding to the target text is determined according to the second image feature.
  • On the basis of the correlation between each character image and the other sub-images, the image to be recognized can thus be described comprehensively and accurately, so that both the accuracy of the font recognition result and the font recognition rate are effectively improved.
  • The preset font recognition model is also used to expand the recognizable font types through steps S4 to S6, as follows:
  • the first font recognition sample image includes a specified text sample of the target newly added font.
  • One possible implementation may include: acquiring the specified text sample of the target newly added font from a preset font corpus; acquiring a target background image from a preset background library; and synthesizing the specified text sample and the target background image into the first font recognition sample image.
  • Because the characters in the specified text sample of the first font recognition sample image are obtained from the preset font corpus and correspond to the target newly added font,
  • the specified text samples in the obtained first font recognition sample images are of the font type of the target newly added font.
  • The first font recognition sample image may be input into the image segmentation module, so that the image segmentation module divides the first font recognition sample image into a plurality of sub-images and obtains the first image feature corresponding to each sub-image, and inputs the first image features of the sub-images into the multi-head attention module, so that the multi-head attention module obtains the context-association features between each sub-image and the other sub-images in the first font recognition sample image and thereby obtains the second image feature corresponding to the first font recognition sample image.
  • In this way, the second image feature corresponding to each of the 50 first font recognition sample images can be obtained, yielding 50 second image features, and the mean of the 50 second image features is used as the target representative feature corresponding to the newly added font.
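Computing a representative feature as the mean of the sample embeddings, as described above, reduces to an element-wise average; this is a minimal sketch with hypothetical names:

```python
def representative_feature(second_image_features):
    """Element-wise mean of the second image features of all sample images
    of one font; stored as that font's representative feature."""
    n = len(second_image_features)
    dim = len(second_image_features[0])
    return [sum(f[d] for f in second_image_features) / n for d in range(dim)]

# e.g. two 3-d embeddings of sample images of the same newly added font
rep = representative_feature([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]])
```

The resulting vector is stored in the model alongside the representative features of the existing fonts, which is all that is needed to make the new font recognizable.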
  • The above technical solution not only effectively avoids the difficulty of obtaining training data during font recognition model training in the related art, but also enables recognition of multi-category, multi-lingual font types without real annotated data, and supports rapid expansion to new font types that appear in the future.
  • Fig. 4 is a flow chart of a model training method shown in an exemplary embodiment of the present disclosure; as shown in Fig. 4, the preset font recognition model can be obtained by training in the following manner:
  • Step 401: a plurality of second font recognition sample images are obtained, the plurality of second font recognition sample images including annotation data of multiple first font types.
  • The second font recognition sample image may be generated by obtaining characters of the first font type from a preset font corpus, obtaining a specified background image from a preset background library, and synthesizing the characters and the specified background image into the second font recognition sample image.
  • The preset font corpus may be a simplified Chinese corpus, a traditional Chinese corpus, or an English font corpus.
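The synthesis step can be sketched under the assumption that the corpus supplies a rasterized binary mask of the characters; compositing that mask onto a background image from the background library might then look like this (function and variable names are hypothetical):

```python
import numpy as np

def synthesize_sample(text_mask, background, ink=0):
    """Composite a binary text mask (1 = ink pixel) from the font corpus
    onto a background image from the background library, producing one
    grayscale font recognition sample image of the same shape."""
    sample = background.copy()
    sample[text_mask.astype(bool)] = ink
    return sample

background = np.full((4, 4), 255, dtype=np.uint8)   # plain white background
text_mask = np.zeros((4, 4), dtype=np.uint8)
text_mask[1:3, 1:3] = 1                              # a tiny stand-in "glyph"
sample = synthesize_sample(text_mask, background)
```

A real pipeline would render corpus characters with the target font file and add augmentations, but the core idea, pairing corpus text with library backgrounds to avoid collecting real annotated data, is the same.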
  • Step 402: the multiple second font recognition sample images are used as a first training data set, and a preset initial model is pre-trained to obtain a first undetermined model.
  • the preset initial model may include an image segmentation initial module and a multi-head attention initial module.
  • multiple font category recognition tasks may be established, and the models of the multiple font category recognition tasks are trained through the first training data set to obtain the first undetermined model.
  • Step 403: a plurality of third font recognition sample images are obtained, the plurality of third font recognition sample images including annotation data of multiple second font types.
  • the first font type is the same as or different from the second font type.
  • The third font recognition sample image can be generated by the following steps: obtaining characters of the second font type from a preset font corpus; obtaining a required background image from a preset background library; and synthesizing the characters of the second font type
  • and the required background image into the third font recognition sample image. The difficulty of obtaining training data during training of the font recognition model is thereby effectively reduced.
  • Step 404: the plurality of third font recognition sample images are used as a second training data set, and the first undetermined model is trained to obtain the preset font recognition model.
  • The above model training process may follow the meta-learning process in the prior art. Steps 401 to 402 serve as the meta-training stage: multiple different classification tasks are constructed from synthetic data sets, and the model is trained with MAML (Model-Agnostic Meta-Learning) to obtain the first undetermined model. Steps 403 to 404 then serve as the meta-testing stage: the second training data set is likewise constructed from synthetic data, and the first undetermined model continues to be trained on it. This effectively increases the convergence rate of the preset font recognition model and improves its training efficiency.
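The meta-update at the heart of MAML can be illustrated with a toy 1-D least-squares model; this is a simplified first-order approximation for illustration only, not the patent's training code:

```python
import numpy as np

def grad(theta, x, y):
    """Gradient of mean squared error for the toy model y_hat = theta * x."""
    return float(np.mean(2.0 * (theta * x - y) * x))

def maml_outer_step(theta, tasks, inner_lr=0.1, outer_lr=0.05):
    """One meta-update in the style of first-order MAML: adapt to each task
    with one inner gradient step, then average the post-adaptation gradients
    for the outer update."""
    meta_grad = 0.0
    for x, y in tasks:
        adapted = theta - inner_lr * grad(theta, x, y)   # inner (per-task) step
        meta_grad += grad(adapted, x, y)                 # first-order outer gradient
    return theta - outer_lr * meta_grad / len(tasks)

# Two toy "classification tasks" whose optimal parameters are 2.0 and 3.0
x = np.array([1.0, 2.0, 3.0])
tasks = [(x, 2.0 * x), (x, 3.0 * x)]
theta = 0.0
for _ in range(200):
    theta = maml_outer_step(theta, tasks)
```

The learned initialization settles between the task optima, which is the MAML property the meta-training stage relies on: a starting point from which each new font-classification task can be adapted to in very few steps.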
  • The above model training process can be carried out offline or online. After the preset font recognition model is obtained, if a new font type appears and the preset font recognition model needs to be able to recognize it, it is only necessary to obtain the representative feature of the newly added font type and save it in the preset font recognition model, so that the preset font recognition model acquires the capability of recognizing the newly added font type.
  • The above technical solution effectively avoids the difficulty of obtaining training data during font recognition model training in the related art, enables recognition of multi-category, multi-lingual font types without real annotated data, and, by adopting
  • the meta-learning training mechanism, yields a preset font recognition model that can be rapidly expanded to new font types appearing in the future.
  • Fig. 5 is a block diagram of a font recognition device shown in an exemplary embodiment of the present disclosure; as shown in Fig. 5, the device may include:
  • the acquiring module 501 is configured to acquire an image to be recognized, where the image to be recognized includes target text;
  • the determining module 502 is configured to input the image to be recognized into a preset font recognition model, so that the preset font recognition model outputs a font type corresponding to the target text;
  • The preset font recognition model is used to divide the image to be recognized into a plurality of sub-images and obtain the first image feature corresponding to each sub-image,
  • to determine, from the first image features of the sub-images, the second image feature corresponding to the image to be recognized,
  • where the second image feature includes the context-association features between each sub-image and the other sub-images in the image to be recognized,
  • and to determine the font type corresponding to the target text according to the second image feature.
  • Because the second image feature corresponding to the image to be recognized is determined from the first image features of the sub-images, the image can be described comprehensively and accurately on the basis of the correlation between each character image and the other sub-images;
  • determining the font type corresponding to the target text according to the second image feature therefore effectively improves both the accuracy of the font recognition result and the font recognition rate.
  • the preset font recognition model is used for:
  • the font type corresponding to the target text in the image to be recognized is determined according to the target distance.
  • the preset font recognition model is used for:
  • the target font type corresponding to the target representative feature used to calculate the target distance is used as the font type of the target text.
  • the preset font recognition model is used for:
  • the font type corresponding to the target text is a newly added font.
  • the preset font recognition model is also used for:
  • Obtain the target mean value of the plurality of second image features corresponding to the plurality of first font recognition sample images; use the target mean value as the target representative feature corresponding to the target newly added font, and store the target representative feature.
  • the preset font recognition model is used for:
  • the specified text sample and the target background image are synthesized into the first font recognition sample image.
  • the device also includes a model training module 503 configured to:
  • the multiple second font recognition sample images are used as a first training data set, and a preset initial model is pre-trained to obtain a first undetermined model, the preset initial model including an image segmentation initial module and a multi-head attention initial module;
  • the plurality of third font identification sample images include annotation data of multiple second font types, the first font type is the same as or different from the second font type;
  • the multiple third font recognition sample images are used as a second training data set, and the first undetermined model is trained to obtain the preset font recognition model.
  • The above technical solution effectively avoids the difficulty of obtaining training data during font recognition model training in the related art, enables recognition of multi-category, multi-lingual font types without real annotated data, and, by adopting
  • the meta-learning training mechanism, yields a preset font recognition model that can be rapidly expanded to new font types appearing in the future.
  • FIG. 6 it shows a schematic structural diagram of an electronic device 600 suitable for implementing an embodiment of the present disclosure.
  • The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 6 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • An electronic device 600 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608
  • into a random access memory (RAM) 603.
  • The RAM 603
  • also stores various programs and data necessary for the operation of the electronic device 600.
  • the processing device 601, ROM 602, and RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604 .
  • The following devices can be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609.
  • the communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While FIG. 6 shows electronic device 600 having various means, it should be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602.
  • When the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • communication may be performed using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and interconnection with digital data communication in any form or medium (for example, a communication network) is possible.
  • Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any network currently known or developed in the future.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: acquires an image to be recognized, where the image to be recognized includes target text; and inputs the image to be recognized into a preset font recognition model, so that the preset font recognition model outputs the font type corresponding to the target text; wherein the preset font recognition model is used to divide the image to be recognized into multiple sub-images, obtain the first image feature corresponding to each of the sub-images, determine the second image feature corresponding to the image to be recognized according to the first image feature corresponding to each of the sub-images in the image to be recognized, the second image feature including context association features between each of the sub-images and the other sub-images in the image to be recognized, and determine the font type corresponding to the target text according to the second image feature.
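The pipeline the bullet above describes — split the image into sub-images, extract a per-patch first image feature, then let multi-head attention mix patches into a context-aware second image feature — can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual model: the random projection matrices stand in for learned weights, and the patch size, head count, and feature dimensions are assumptions.

```python
import numpy as np

def split_into_patches(img, patch):
    # img: (H, W) grayscale; patch must divide H and W evenly.
    H, W = img.shape
    return (img.reshape(H // patch, patch, W // patch, patch)
               .transpose(0, 2, 1, 3)
               .reshape(-1, patch * patch))  # (num_patches, patch*patch)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, heads, rng):
    # X: (n_patches, d) "first image features"; random projections stand
    # in for the learned Q/K/V weights of a trained model.
    n, d = X.shape
    dh = d // heads
    out = []
    for _ in range(heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(dh))  # context weights between every pair of patches
        out.append(A @ V)
    # Concatenated heads: each row is now a context-aware "second image feature".
    return np.concatenate(out, axis=1)
```

With a 32×32 image and 8×8 patches this yields 16 patch features of dimension 64; the attention output keeps that shape, but every row now depends on all other patches.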
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, connected via the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a module does not, in some cases, constitute a limitation on the module itself.
  • For example, the acquisition module may also be described as "a module that acquires an image to be recognized, where the image to be recognized includes target text".
  • By way of example, and not limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides a font recognition method, the method comprising:
  • the preset font recognition model is used to divide the image to be recognized into a plurality of sub-images, and obtain the first image feature corresponding to each of the sub-images, and according to each of the sub-images in the image to be recognized
  • the first image feature corresponding to the sub-image determines the second image feature corresponding to the image to be recognized
  • the second image feature includes a context association feature between each of the sub-images and other sub-images in the image to be recognized, determining a font type corresponding to the target text according to the second image feature.
  • Example 2 provides the method described in Example 1, and determining the font type corresponding to the target text according to the second image feature includes:
  • a font type corresponding to the target text in the image to be recognized is determined according to the target distance.
  • Example 3 provides the method described in Example 2, and determining the font type corresponding to the target text in the image to be recognized according to the target distance includes:
  • the target font type corresponding to the target representative feature used to calculate the target distance is used as the font type of the target text.
  • Example 4 provides the method described in Example 2, and determining the font type corresponding to the target text in the image to be recognized according to the target distance includes:
  • the font type corresponding to the target text is a newly added font.
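Examples 2-4 together describe a nearest-representative decision: compute the Euclidean distance from the image's second image feature to each recognizable font's representative feature, take the minimum as the target distance, and either return that font (distance below the threshold) or report a newly added font (distance at or above it). A small sketch of that decision rule, with the threshold value and the dictionary layout as assumptions:

```python
import numpy as np

def classify_font(embedding, representatives, threshold):
    # representatives: {font_name: representative feature vector}
    dists = {name: float(np.linalg.norm(embedding - rep))
             for name, rep in representatives.items()}
    best = min(dists, key=dists.get)          # smallest Euclidean distance = target distance
    if dists[best] < threshold:
        return best, dists[best]              # Example 3: close enough -> that font type
    return "new_font", dists[best]            # Example 4: too far -> treat as a new font
```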
  • Example 5 provides the method described in Example 1, and the preset font recognition model is further used for:
  • Example 6 provides the method described in Example 5, the acquisition of a plurality of first font identification sample images of the target new font includes:
  • the specified text sample and the target background image are synthesized into the first font recognition sample image.
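Example 6's synthesis step — compose a specified text sample of the target new font with a background image drawn from a background library — can be illustrated with a simple mask-compositing sketch. Rendering real glyphs would need a font rasterizer; here the glyph is assumed to already be a boolean mask:

```python
import numpy as np

def synthesize_sample(glyph_mask, background, ink=0):
    # glyph_mask: bool (h, w) -- True where the text sample has ink.
    # background: uint8 grayscale image at least as large as the mask.
    img = background.copy()
    h, w = glyph_mask.shape
    y0 = (img.shape[0] - h) // 2              # center the text on the background
    x0 = (img.shape[1] - w) // 2
    region = img[y0:y0 + h, x0:x0 + w]
    region[glyph_mask] = ink                  # stamp the ink pixels onto the background
    return img
```

In practice the mask would come from rendering characters of the target font pulled from the preset font corpus; compositing then just stamps the ink pixels onto the chosen background image.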
  • Example 7 provides the method described in any one of Examples 1-6, and the preset font recognition model is obtained by training in the following manner:
  • the plurality of second font recognition sample images are used as the first training data set, and the preset initial model is pre-trained to obtain the first undetermined model.
  • the preset initial model includes an image segmentation initial module and a multi-head attention initial module;
  • acquiring a plurality of third font recognition sample images, the plurality of third font recognition sample images including annotation data of multiple second font types, the first font type being the same as or different from the second font type;
  • the multiple third font recognition sample images are used as a second training data set, and the first undetermined model is trained to obtain the preset font recognition model.
  • Example 8 provides a font recognition device, the device comprising:
  • An acquisition module configured to acquire an image to be recognized, where the image to be recognized includes target text
  • a determining module configured to input the image to be recognized into a preset font recognition model, so that the preset font recognition model outputs a font type corresponding to the target text;
  • the preset font recognition model is used to divide the image to be recognized into a plurality of sub-images, and obtain the first image feature corresponding to each of the sub-images, and according to each of the sub-images in the image to be recognized
  • the first image feature corresponding to the sub-image determines the second image feature corresponding to the image to be recognized
  • the second image feature includes a context association feature between each of the sub-images and other sub-images in the image to be recognized, determining a font type corresponding to the target text according to the second image feature.
  • Example 9 provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processing device, the steps of the method described in any one of Examples 1-7 above are implemented.
  • Example 10 provides an electronic device, comprising:
  • a processing device configured to execute the computer program in the storage device to implement the steps of any one of the methods in Examples 1-7 above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Character Discrimination (AREA)

Abstract

一种字体识别方法、装置、可读介质及电子设备,该字体识别方法通过该预设字体识别模型将该待识别图像划分为多个子图像,并获取每个该子图像对应的第一图像特征,根据该待识别图像中每个该子图像对应的该第一图像特征确定该待识别图像对应的第二图像特征,该第二图像特征包括该待识别图像中每个该子图像与其他子图像的上下文关联特征,根据该第二图像特征确定该目标文本对应的字体类型,这样,能够根据每个子图像与其他子图像的相关性更全面、更准确地描述该待识别图像,从而能够有效提升字体识别结果的准确性,也能够有效提高字体识别率。

Description

字体识别方法、装置、可读介质及电子设备
相关申请的交叉引用
本申请要求于2022年01月10日提交的,申请号为202210023481.2、发明名称为“字体识别方法、装置、可读介质及电子设备”的中国专利申请的优先权,该申请的全部内容通过引用结合在本申请中。
技术领域
本公开涉及计算机视觉处理领域,具体地,涉及一种字体识别方法、装置、可读介质及电子设备。
背景技术
在字体识别过程中,由于字体种类繁多(仅仅是中文汉字就存在着12000多种字体),且每个字体本身会存在多种特征,同一种字体的多种特征可能无法在一个字符上体现,因此会在不同的字符上体现相同或者不同的特征,并且不同字体在同一个字符上所表现的特征也可能存在很多相同或者相似特征,因此会导致字体的识别难度较大。相关技术中的字体识别方法,通常存在字体识别率较低,字体识别结果准确性较差的问题。
发明内容
提供该发明内容部分以便以简要的形式介绍构思,这些构思将在后面的具体实施方式部分被详细描述。该发明内容部分并不旨在标识要求保护的技术方案的关键特征或必要特征,也不旨在用于限制所要求的保护的技术方案的范围。
本公开提供一种字体识别方法、装置、可读介质及电子设备。
第一方面,本公开提供一种字体识别方法,所述方法包括:
获取待识别图像,所述待识别图像中包括目标文本;
将所述待识别图像输入预设字体识别模型,以使所述预设字体识别模型输出所述目标文本对应的字体类型;
其中,所述预设字体识别模型,用于将所述待识别图像划分为多个子图像,并获取每个所述子图像对应的第一图像特征,根据所述待识别图像中每个所述子图像对应的所述第一图像特征确定所述待识别图像对应的第二图像特征,所述第二图像特征包括所述待识别图像中每个所述子图像与其他子图像的上下文关联特征,根据所述第二图像特征确定所述目标文本对应的字体类型。
第二方面,本公开提供一种字体识别装置,所述装置包括:
获取模块,被配置为获取待识别图像,所述待识别图像中包括目标文本;
确定模块,被配置为将所述待识别图像输入预设字体识别模型,以使所述预设字体识别模型输出所述目标文本对应的字体类型;
其中,所述预设字体识别模型,用于将所述待识别图像划分为多个子图像,并获取每个所述子图像对应的第一图像特征,根据所述待识别图像中每个所述子图像对应的所述第一图像特征确定所述待识别图像对应的第二图像特征,所述第二图像特征包括所述待识别图像中每个所述子图像与其他子图像的上下文关联特征,根据所述第二图像特征确定所述目标文本对应的字体类型。
第三方面,本公开提供一种计算机可读介质,其上存储有计算机程序,该程序被处理装置执行时实现以上第一方面所述方法的步骤。
第四方面,本公开提供一种电子设备,包括:
存储装置,其上存储有计算机程序;
处理装置,用于执行所述存储装置中的所述计算机程序,以实现以上第一方面所述方法的步骤。
上述技术方案,通过将所述待识别图像输入预设字体识别模型,以使所述预设字体识别模型输出所述目标文本对应的字体类型;其中,所述预设字体识别模型,用于将所述待识别图像划分为多个子图像,并获取每个所述子图像对应的第一图像特征,根据所述待识别图像中每个所述子图像对应的所述第一图像特征确定所述待识别图像对应的第二图像特征,所述第二图像特征包括所述待识别图像中每个所述子图像与其他子图像的上下文关联特征,根据所述第二图像特征确定所述目标文本对应的字体类型。这样,根据所述待识别图像中每个所述子图像对应的所述第一图像特征确定所述待识别图像对应的第二图像特征,能够根据每个子图像与其他子图像的相关性更全面、更准确地描述该待识别图像,从而能够有效提升字体识别结果的准确性,也能够有效提高字体识别率。
本公开的其他特征和优点将在随后的具体实施方式部分予以详细说明。
附图说明
结合附图并参考以下具体实施方式,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,元件和元素不一定按照比例绘制。在附图中:
图1是本公开一示例性实施例示出的一种字体识别方法的流程图;
图2是本公开一示例性实施例示出的一种预设字体识别模型工作原理示意图;
图3是本公开一示例性实施例示出的一种TPA模块的工作原理示意图;
图4是本公开一示例性实施例示出的一种模型训练方法的流程图;
图5是本公开一示例性实施例示出的一种字体识别装置的框图;
图6是本公开一示例性实施例示出的一种电子设备的框图。
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
需要注意,本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
在详细介绍本公开的具体实施方式之前,首先,对本公开的应用场景进行以下说明,本公开可以用于识别图像,文档中文本字体的过程中,相关技术的字体识别方法大致包括两类:一类是基于手工设计特征的字体识别,即利用人们的经验手工设计特征提取器来提取每个字符的特征,并以此进行字体分类,这类方法由于使用固定的手工特征表示字体特征,因此很容易丢失一些有用的字体信息,并且要想设计高准确率的特征描述子,一般需要仔细的工程设计和大量的领域专业知识,因此通常得到较为准确的特征描述子的难度较大,不利于获取到全面且准确的字体特征,从而也不利于提升最终字体识别结果准确性;另一类是基于深度学习的字体识别,即使用深度神经网络自动提取字体特征,并利用提取到的字体特征进行字体分类,这类基于深度学习的字体识别方案并未突破字体类别繁多的 局限性,通常训练出来的深度学习模型,依然会存在字体识别率较低,识别结果准确性较差的问题。
为了解决以上技术问题,本公开提供了一种字体识别方法、装置、可读介质及电子设备,该字体识别方法通过将该待识别图像输入预设字体识别模型,以使该预设字体识别模型输出该目标文本对应的字体类型;其中,该预设字体识别模型,用于将该待识别图像划分为多个子图像,并获取每个该子图像对应的第一图像特征,根据该待识别图像中每个该子图像对应的该第一图像特征确定该待识别图像对应的第二图像特征,该第二图像特征包括该待识别图像中每个该子图像与其他子图像的上下文关联特征,根据该第二图像特征确定该目标文本对应的字体类型,由于根据该待识别图像中每个该子图像对应的该第一图像特征确定该待识别图像对应的第二图像特征,能够根据每个子图像与其他子图像的相关性更全面、更准确地描述该待识别图像,因此根据该第二图像特征确定该目标文本对应的字体类型能够有效保证字体识别结果的准确性,也能够有效提高字体识别率。
下面结合具体实施例对本公开的技术方案进行详细阐述。
图1是本公开一示例性实施例示出的一种字体识别方法的流程图;如图1所示,该方法可以包括以下步骤:
步骤101,获取待识别图像,该待识别图像中包括目标文本。
其中,该待识别图像中除包括该目标文本以外,还可以包括图像背景,由该图像背景与该目标文本形成该待识别图像。
步骤102,将该待识别图像输入预设字体识别模型,以使该预设字体识别模型输出该目标文本对应的字体类型。
其中,该预设字体识别模型,用于将该待识别图像划分为多个子图像,并获取每个该子图像对应的第一图像特征,根据该待识别图像中每个该子图像对应的该第一图像特征确定该待识别图像对应的第二图像特征,该第二图像特征包括该待识别图像中每个该子图像与其他子图像的上下文关联特征,根据该第二图像特征确定该目标文本对应的字体类型。
需要说明的是,该预设字体识别模型可以包括图像分割模块和多头注意力模块,该图像分割模块,用于将该待识别图像划分为多个子图像,并获取每个该子图像对应的第一图像特征,并将每个该子图像对应的该第一图像特征输入该多头注意力模块,以使该多头注意力模块获取该待识别图像中每个该子图像与其他子图像的上下文关联特征,从而得到该待识别图像对应的第二图像特征。其中,该图像分割模块在将该待识别图像划分为多个子图像时,可以按照预设的图像划分规则将该待识别图像划分为多个子图像,该图像划分规则可以是按照像素均分为多个子图像,也可以是将该待识别图像划分为多个预设像素比的子图像,还可以是按照图像纹理,将连续地,图像纹理较为相似的部分划分为一个子图像, 从而得到多个子图像。该多头注意力模块可以参考现有技术中对多头注意力机制的相关描述,由于该多头注意力机制在现有技术中应用的比较多,相关技术也比较容易获取,因此本公开这里对此不再赘述。
示例地,图2是本公开一示例性实施例示出的一种预设字体识别模型工作原理示意图,如图2所示,先通过Batch模块创建多个数据集(每个数据集中包括多个字体识别图像),每个数据集作为一个训练数据集,然后将得到的每个训练数据集输入Backbone(主干网络),以获取每个字体识别图像的图像特征,然后通过该TPA(Temporal Pyramid Attention,自注意力机制)根据该图像特征对该字体识别图像进行图像分割,以得到分割后每个子图像的第一图像特征,并将分割得到的子图像的第一图像特征进行拼接后送入该多头注意力模块,以使该多头注意力模块获取不同的子图像之间相互影响的上下文图像特征,(其中,该TPA的结构示意图可以如图3所示,图3是本公开一示例性实施例示出的一种TPA模块的工作原理示意图;)从而得到通过该上下文图像特征全面且准确的描述该字体识别图像,之后再通过MLP(Multilayer Perceptron,多层感知机)对该第二图像特征进行降维处理,以得到每个子图像对应的低维特征,例如将一个字体识别图像对应512维的特征变为64维的数据,即用一个64维的向量表示一个字体识别图像(即对每个字体识别图像进行embedding操作),从而得到每个字体识别图像的第二图像特征,根据每个训练数据集中字体识别图像对应的embedding结果(即每个字体识别图像的第二图像特征)计算每种可识别字体的代表特征(即通过该embedding space(嵌入间隔网络)获取该训练数据集中每个字体识别图像的第二图像特征的均值),将该第二图像特征的均值作为该训练数据集对应字体类型的代表特征,同理,可以得到多个字体类型对应的代表特征,在得到多个字体类型对应的代表特征之后,可以获取待识别图像,使该待识别图像通过该backbone(主干网络),该TPA模块以及MLP模块之后,得到该待识别图像对应的第二图像特征,并在该sampler(采样模块)分别计算多个字体类型对应的代表特征与该待识别图像对应的第二图像特征之间的欧式距离,根据该欧式距离确定该待识别图像中目标文本对应的字体类型。
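The MLP dimensionality-reduction step described above (e.g. mapping a 512-dimensional second image feature to a 64-dimensional embedding) is, structurally, just a small feed-forward projection. A sketch with assumed layer sizes and randomly initialized weights standing in for trained ones:

```python
import numpy as np

def make_mlp(d_in=512, d_hidden=256, d_out=64, seed=0):
    # Randomly initialized parameters; a real model would learn these.
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.standard_normal((d_in, d_hidden)) / np.sqrt(d_in),
        "b1": np.zeros(d_hidden),
        "W2": rng.standard_normal((d_hidden, d_out)) / np.sqrt(d_hidden),
        "b2": np.zeros(d_out),
    }

def embed(features, p):
    # features: (n, 512) second image features -> (n, 64) embeddings
    h = np.maximum(features @ p["W1"] + p["b1"], 0.0)  # ReLU hidden layer
    return h @ p["W2"] + p["b2"]
```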
另外,还需说明的是,以上所述的根据该第二图像特征确定该目标文本对应的字体类型的实施方式可以包括以下S1至S3所示的步骤:
S1,获取该第二图像特征与该预设字体识别模型对应的多种可识别字体中每种该可识别字体对应的代表特征之间的欧式距离,以得到该待识别图像与多种该可识别字体的代表特征的多个该欧式距离。
示例地,该预设字体识别模型能够识别N种字体,则该预设字体识别模型中包括这N种字体中每种字体的代表特征,例如,若字体A对应的代表特征为[a₁, a₂, …, aₙ],若字体B对应的代表特征为[b₁, b₂, …, bₙ],当前待识别图像对应的第二图像特征为[x₁, x₂, …, xₙ],则当前该待识别图像与该字体A的代表特征之间的欧式距离为
dis_A = √((x₁−a₁)² + (x₂−a₂)² + … + (xₙ−aₙ)²)
当前该待识别图像与该字体B的代表特征之间的欧式距离为
dis_B = √((x₁−b₁)² + (x₂−b₂)² + … + (xₙ−bₙ)²)
同理可以得到该待识别图像与预设字体识别模型中每种可识别字体的欧式距离,其中,在该预设字体识别模型中包括可识别字体的代表特征。
S2,从多个该欧式距离中确定最小的目标距离。
示例地,若该预设字体识别模型中包括6种可识别字体的代表特征,分别为字体A至字体F,其中,该待识别图像的第二图像特征与字体A的代表特征的欧式距离小于该待识别图像分别与字体B,字体C,字体D,字体E,以及字体F的代表特征的欧式距离,即该待识别图像的第二图像特征与字体A的代表特征的欧式距离为该目标距离。
S3,根据该目标距离确定该待识别图像中目标文本对应的字体类型。
本步骤中,一种可能的实施方式为:在该目标距离小于预设距离阈值的情况下,将计算该目标距离所用目标代表特征对应的目标字体类型作为该目标文本的字体类型。
仍以上述S2中所示步骤为例进行说明,在该待识别图像的第二图像特征与字体A的代表特征的欧式距离为该目标距离时,若该目标距离小于预设距离阈值,则将该字体A作为该待识别图像中目标文本对应的字体类型。
本步骤中,另一种可能的实施方式中,在确定该目标距离大于或者等于预设距离阈值的情况下,确定该目标文本对应的字体类型为新增字体。
仍以上述S2中所示步骤为例进行说明,在该待识别图像的第二图像特征与字体A的代表特征的欧式距离为该目标距离时,若该目标距离大于或者等于预设距离阈值,则确定该待识别图像中目标文本对应的字体类型为新增字体,即该预设字体识别模型无法识别到该待识别图像中目标文本对应的具体的字体类型。
以上技术方案,通过将该待识别图像输入预设字体识别模型,以使该预设字体识别模型输出该目标文本对应的字体类型;其中,该预设字体识别模型,用于将该待识别图像划分为多个子图像,并获取每个该子图像对应的第一图像特征,根据该待识别图像中每个该子图像对应的该第一图像特征确定该待识别图像对应的第二图像特征,该第二图像特征包括该待识别图像中每个该子图像与其他子图像的上下文关联特征,根据该第二图像特征确定该目标文本对应的字体类型,能够根据每个子图像与其他子图像的相关性更全面、更准确地描述该待识别图像,从而能够有效提升字体识别结果的准确性,也能够有效提高字体识别率。
可选地,该预设字体识别模型还用于通过以下S4至S6所示步骤实现对可识别字体类型的扩充,如下所示:
S4,获取目标新增字体的多个第一字体识别样本图像。
其中,该第一字体识别样本图像包括该目标新增字体的指定文本样本。
本步骤中,一种可能的实施方式可以包括:从预设字体语料库中获取该目标新增字体的指定文本样本;从预设背景库中获取目标背景图像;将该指定文本样本和该目标背景图像合成该第一字体识别样本图像。
需要说明的是,由于该第一字体识别样本图像中的指定文本样本中的字符均是从预设字体语料库中获取到该目标新增字体对应的字符,因此得到的该第一字体识别样本图像中的指定文本样本均为字体类型为该目标新增字体。
S5,获取每个该第一字体识别样本图像对应的第二图像特征。
本步骤中,可以将该第一字体识别样本图像输入该图像分割模型,使该图像分割模块将该第一字体识别样本图像划分为多个子图像,并获取每个该子图像对应的第一图像特征,并将每个该子图像对应的该第一图像特征输入该多头注意力模块,以使该多头注意力模块获取该第一字体识别样本图像中每个该子图像与其他子图像的上下文关联特征,从而得到该第一字体识别样本图像对应的第二图像特征。
S6,获取该多个第一字体识别样本图像对应的多个该第二图像特征的目标均值,将该目标均值作为该目标新增字体对应的目标代表特征,并存储该目标代表特征。
示例地,可以获取50个第一字体识别样本图像中每个第一字体识别样本图像对应的第二图像特征,从而得到50个第二图像特征,将该50个第二图像特征的均值作为该新增字体对应的目标代表特征。
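The representative feature of a font is simply the mean of the second image features of its sample images (50 in the example above); storing that mean vector is what registers the new font. A minimal sketch, with the library dictionary as an assumed storage format:

```python
import numpy as np

def representative_feature(embeddings):
    # embeddings: (n_samples, d) -- e.g. 50 second-image-feature vectors of one font
    return np.asarray(embeddings, dtype=float).mean(axis=0)

def register_font(library, name, embeddings):
    # Storing the mean vector is all that is needed to make the font recognizable.
    library[name] = representative_feature(embeddings)
    return library
```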
以上技术方案,不仅能够有效避免相关技术中,字体识别模型训练过程中训练数据获取难度大的问题,能够在无需真实标注数据的情况下,实现对多类别、多语种的字体类型的识别,并且能够针对未来出现的新增字体类型,实现快速扩展。
图4是本公开一示例性实施例示出的一种模型训练方法的流程图;如图4所示,该预设字体识别模型可以通过以下方式训练得到:
步骤401,获取多个第二字体识别样本图像,多个该第二字体识别样本图像包括多种第一字体类型的标注数据。
本步骤中,该第二字体识别样本图像的生成方式可以是,从预设字体语料库中获取该第一字体类型的字符;从预设背景库中获取指定背景图像;将该第一字体类型的字符和该指定背景图像合成该第二字体识别样本图像。其中,该预设字体语料库可以是中文简体语料库,中文繁体语料库,也可以是英文字体语料库。
步骤402,将该多个第二字体识别样本图像为第一训练数据集,对预设初始模型进行预训练,以得到第一待定模型。
其中,该预设初始模型可以包括图像分割初始模块和多头注意力初始模块。
本步骤中,可以建立多个字体类别识别任务,通过该第一训练数据集对该多个字体类别识别任务的模型进行训练,以得到该第一待定模型。
步骤403,获取多个第三字体识别样本图像,多个该第三字体识别样本图像包括多种第二字体类型的标注数据。
其中,该第一字体类型与该第二字体类型相同或不同。
需要说明的是,该第三字体识别样本图像的生成方式可以是,从预设字体语料库中获取该第二字体类型的字符;从预设背景库中获取需要的背景图像;将该第二字体类型的字符和需要的背景图像合成该第三字体识别样本图像,这样,能够有效降低字体识别模型训练过程中训练数据的获取难度。
步骤404,将该多个第三字体识别样本图像为第二训练数据集,对该第一待定模型进行训练,以得到该预设字体识别模型。
需要说明的是,以上模型训练的过程可以参考现有技术中元学习的过程,即通过以上步骤401至402作为meta-training阶段,使用合成数据集构造多个不同的分类任务,采用MAML(Model-Agnostic Meta-Learning)方式进行模型训练,以得到该第一待定模型,然后通过以上步骤403至404,作为meta-testing阶段,同样使用合成数据构造第二训练数据集,基于该第二训练数据集继续对该第一待定模型进行训练,能够有效提升预设字体识别模型的收敛速率,提高该预设字体识别模型的训练效率。
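The two-stage training above follows a meta-learning recipe: meta-train on many synthetic classification tasks, then continue training on a second synthetic data set. The sketch below uses a Reptile-style first-order update on toy one-parameter regression tasks to show the mechanic of learning an initialization that adapts quickly; it is an illustration of the idea only, not the MAML procedure or the font tasks described in the text:

```python
import numpy as np

def sgd_steps(w, xs, ys, lr=0.1, steps=5):
    # Inner loop: a few gradient steps of mean squared error on one task.
    for _ in range(steps):
        grad = 2 * ((xs * w - ys) * xs).mean()
        w -= lr * grad
    return w

def reptile(meta_w, tasks, meta_lr=0.5, inner_steps=5):
    # Outer loop: move the shared initialization toward each task's adapted weights.
    for a in tasks:                 # each toy task: fit y = a * x
        xs = np.linspace(-1, 1, 20)
        ys = a * xs
        adapted = sgd_steps(float(meta_w), xs, ys, steps=inner_steps)
        meta_w += meta_lr * (adapted - meta_w)
    return meta_w
```

After meta-training, a handful of inner steps on an unseen task starts much closer to the solution than a cold start, which is the property the described scheme exploits for fast extension to newly added fonts.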
另外,还需说明的是,以上模型训练过程可以在线下进行,也可以在线上进行,在得到该预设字体识别模型之后,若出现新增字体类型,需要使该预设字体识别模型具备对该新增字体类型的识别能力时,只需要获取该新增字体类型的代表特征,并将其保存到该预设字体识别模型中,即可使该预设字体识别模型具备识别该新增字体类型的能力。
以上技术方案,能够有效避免相关技术中,字体识别模型训练过程中训练数据获取难度大的问题,能够在无需真实标注数据的情况下,实现对多类别、多语种的字体类型的识别,并且采用元学习的训练机制能够得到可以针对未来出现的新增字体类型,进行快速扩展的预设字体识别模型。
图5是本公开一示例性实施例示出的一种字体识别装置的框图;如图5所示,该装置可以包括:
获取模块501,被配置为获取待识别图像,该待识别图像中包括目标文本;
确定模块502,被配置为将该待识别图像输入预设字体识别模型,以使该预设字体识别模型输出该目标文本对应的字体类型;
其中,该预设字体识别模型,用于将该待识别图像划分为多个子图像,并获取每个该子图像对应的第一图像特征,根据该待识别图像中每个该子图像对应的该第一图像特征确定该待识别图像对应的第二图像特征,该第二图像特征包括该待识别图像中每个该子图像与其他子图像的上下文关联特征,根据该第二图像特征确定该目标文本对应的字体类型。
根据该待识别图像中每个该子图像对应的该第一图像特征确定该待识别图像对应的第二图像特征,能够根据每个子图像与其他子图像的相关性更全面、更准确地描述该待识别图像,因此根据该第二图像特征确定该目标文本对应的字体类型能够有效提升字体识别结果的准确性,也能够有效提高字体识别率。
可选地,该预设字体识别模型,用于:
获取该第二图像特征与该预设字体识别模型对应的多种可识别字体中每种该可识别字体对应的代表特征之间的欧式距离,以得到该待识别图像与多种该可识别字体的代表特征的多个该欧式距离;
从多个该欧式距离中确定最小的目标距离;
根据该目标距离确定该待识别图像中目标文本对应的字体类型。
可选地,该预设字体识别模型,用于:
在该目标距离小于预设距离阈值的情况下,将计算该目标距离所用目标代表特征对应的目标字体类型作为该目标文本的字体类型。
可选地,该预设字体识别模型,用于:
在确定该目标距离大于或者等于预设距离阈值的情况下,确定该目标文本对应的字体类型为新增字体。
可选地,该预设字体识别模型还用于:
获取目标新增字体的多个第一字体识别样本图像,该第一字体识别样本图像包括该目标新增字体的指定文本样本;
获取每个该第一字体识别样本图像对应的第二图像特征;
获取该多个第一字体识别样本图像对应的多个该第二图像特征的目标均值,将该目标均值作为该目标新增字体对应的目标代表特征,并存储该目标代表特征。
可选地,该预设字体识别模型,用于:
从预设字体语料库中获取该目标新增字体的指定文本样本;
从预设背景库中获取目标背景图像;
将该指定文本样本和该目标背景图像合成该第一字体识别样本图像。
可选地,该装置还包括模型训练模块503,被配置为:
获取多个第二字体识别样本图像,多个该第二字体识别样本图像包括多种第一字体类型的标注数据;
将该多个第二字体识别样本图像为第一训练数据集,对预设初始模型进行预训练,以得到第一待定模型,该预设初始模型包括图像分割初始模块和多头注意力初始模块;
获取多个第三字体识别样本图像,多个该第三字体识别样本图像包括多种第二字体类型的标注数据,该第一字体类型与该第二字体类型相同或不同;
将该多个第三字体识别样本图像为第二训练数据集,对该第一待定模型进行训练,以得到该预设字体识别模型。
以上技术方案,能够有效避免相关技术中,字体识别模型训练过程中训练数据获取难度大的问题,能够在无需真实标注数据的情况下,实现对多类别、多语种的字体类型的识别,并且采用元学习的训练机制能够得到可以针对未来出现的新增字体类型,进行快速扩展的预设字体识别模型。
下面参考图6,其示出了适于用来实现本公开实施例的电子设备600的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图6示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图6所示,电子设备600可以包括处理装置(例如中央处理器、图形处理器等)601,其可以根据存储在只读存储器(ROM)602中的程序或者从存储装置608加载到随机访问存储器(RAM)603中的程序而执行各种适当的动作和处理。在RAM 603中,还存储有电子设备600操作所需的各种程序和数据。处理装置601、ROM 602以及RAM 603通过总线604彼此相连。输入/输出(I/O)接口605也连接至总线604。
通常,以下装置可以连接至I/O接口605:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置606;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置607;包括例如磁带、硬盘等的存储装置608;以及通信装置609。通信装置609可以允许电子设备600与其他设备进行无线或有线通信以交换数据。虽然图6示出了具有各种装置的电子设备600,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置609从网络上被下载和安装,或者从存储装置608被安装,或者从ROM 602被安装。在该计算机程序被处理装置601执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机 可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:获取待识别图像,所述待识别图像中包括目标文本;将所述待识别图像输入预设字体识别模型,以使所述预设字体识别模型输出所述目标文本对应的字体类型;其中,所述预设字体识别模型,用于将所述待识别图像划分为多个子图像,并获取每个所述子图像对应的第一图像特征,根据所述待识别图像中每个所述子图像对应的所述第一图像特征确定所述待识别图像对应的第二图像特征,所述第二图像特征包括所述待识别图像中每个所述子图像与其他子图像的上下文关联特征,根据所述第二图像特征确定所述目标文本对应的字体类型。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言——诸如“C”语言或类似的程序设计语言。程序 代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)——连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的模块可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,模块的名称在某种情况下并不构成对该模块本身的限定,例如,获取模块还可以被描述为“获取待识别图像,所述待识别图像中包括目标文本”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,示例1提供了一种字体识别方法,所述方法包括:
获取待识别图像,所述待识别图像中包括目标文本;
将所述待识别图像输入预设字体识别模型,以使所述预设字体识别模型输出所述目标文本对应的字体类型;
其中,所述预设字体识别模型,用于将所述待识别图像划分为多个子图像,并获取每个所述子图像对应的第一图像特征,根据所述待识别图像中每个所述子图像对应的所述第一图像特征确定所述待识别图像对应的第二图像特征,所述第二图像特征包括所述待识别图像中每个所述子图像与其他子图像的上下文关联特征,根据所述第二图像特征确定所述目标文本对应的字体类型。
根据本公开的一个或多个实施例,示例2提供了示例1所述的方法,所述根据所述第二图像特征确定所述目标文本对应的字体类型,包括:
获取所述第二图像特征与所述预设字体识别模型对应的多种可识别字体中每种所述可识别字体对应的代表特征之间的欧式距离,以得到所述待识别图像与多种所述可识别字体的代表特征的多个所述欧式距离;
从多个所述欧式距离中确定最小的目标距离;
根据所述目标距离确定所述待识别图像中目标文本对应的字体类型。
根据本公开的一个或多个实施例,示例3提供了示例2所述的方法,所述根据所述目标距离确定所述待识别图像中目标文本对应的字体类型,包括:
在所述目标距离小于预设距离阈值的情况下,将计算所述目标距离所用目标代表特征对应的目标字体类型作为所述目标文本的字体类型。
根据本公开的一个或多个实施例,示例4提供了示例2所述的方法,所述根据所述目标距离确定所述待识别图像中目标文本对应的字体类型,包括:
在确定所述目标距离大于或者等于预设距离阈值的情况下,确定所述目标文本对应的字体类型为新增字体。
根据本公开的一个或多个实施例,示例5提供了示例1所述的方法,所述预设字体识别模型还用于:
获取目标新增字体的多个第一字体识别样本图像,所述第一字体识别样本图像包括所述目标新增字体的指定文本样本;
获取每个所述第一字体识别样本图像对应的第二图像特征;
获取所述多个第一字体识别样本图像对应的多个所述第二图像特征的目标均值,将所述目标均值作为所述目标新增字体对应的目标代表特征,并存储所述目标代表特征。
根据本公开的一个或多个实施例,示例6提供了示例5所述的方法,所述获取目标新增字体的多个第一字体识别样本图像,包括:
从预设字体语料库中获取所述目标新增字体的指定文本样本;
从预设背景库中获取目标背景图像;
将所述指定文本样本和所述目标背景图像合成所述第一字体识别样本图像。
根据本公开的一个或多个实施例,示例7提供了示例1-6任一项所述的方法,所述预设字体识别模型通过以下方式训练得到:
获取多个第二字体识别样本图像,多个所述第二字体识别样本图像包括多种第一字体类型的标注数据;
将所述多个第二字体识别样本图像为第一训练数据集,对预设初始模型进行预训练,以得到第一待定模型,所述预设初始模型包括图像分割初始模块和多头注意力初始模块;
获取多个第三字体识别样本图像,多个所述第三字体识别样本图像包括多种第二字体类型的标注数据,所述第一字体类型与所述第二字体类型相同或不同;
将所述多个第三字体识别样本图像为第二训练数据集,对所述第一待定模型进行训练,以得到所述预设字体识别模型。
根据本公开的一个或多个实施例,示例8提供了一种字体识别装置,所述装置包括:
获取模块,被配置为获取待识别图像,所述待识别图像中包括目标文本;
确定模块,被配置为将所述待识别图像输入预设字体识别模型,以使所述预设字体识别模型输出所述目标文本对应的字体类型;
其中,所述预设字体识别模型,用于将所述待识别图像划分为多个子图像,并获取每个所述子图像对应的第一图像特征,根据所述待识别图像中每个所述子图像对应的所述第一图像特征确定所述待识别图像对应的第二图像特征,所述第二图像特征包括所述待识别图像中每个所述子图像与其他子图像的上下文关联特征,根据所述第二图像特征确定所述目标文本对应的字体类型。
根据本公开的一个或多个实施例,示例9提供了一种计算机可读介质,其上存储有计算机程序,该程序被处理装置执行时实现以上示例1-7中任一项所述方法的步骤。
根据本公开的一个或多个实施例,示例10提供了一种电子设备,包括:
存储装置,其上存储有计算机程序;
处理装置,用于执行所述存储装置中的所述计算机程序,以实现以上示例1-7中任一项所述方法的步骤。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。

Claims (10)

  1. 一种字体识别方法,其特征在于,所述方法包括:
    获取待识别图像,所述待识别图像中包括目标文本;
    将所述待识别图像输入预设字体识别模型,以使所述预设字体识别模型输出所述目标文本对应的字体类型;
    其中,所述预设字体识别模型,用于将所述待识别图像划分为多个子图像,并获取每个所述子图像对应的第一图像特征,根据所述待识别图像中每个所述子图像对应的所述第一图像特征确定所述待识别图像对应的第二图像特征,所述第二图像特征包括所述待识别图像中每个所述子图像与其他子图像的上下文关联特征,根据所述第二图像特征确定所述目标文本对应的字体类型。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第二图像特征确定所述目标文本对应的字体类型,包括:
    获取所述第二图像特征与所述预设字体识别模型对应的多种可识别字体中每种所述可识别字体对应的代表特征之间的欧式距离,以得到所述待识别图像与多种所述可识别字体的代表特征的多个所述欧式距离;
    从多个所述欧式距离中确定最小的目标距离;
    根据所述目标距离确定所述待识别图像中目标文本对应的字体类型。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述目标距离确定所述待识别图像中目标文本对应的字体类型,包括:
    在所述目标距离小于预设距离阈值的情况下,将计算所述目标距离所用目标代表特征对应的目标字体类型作为所述目标文本的字体类型。
  4. 根据权利要求2所述的方法,其特征在于,所述根据所述目标距离确定所述待识别图像中目标文本对应的字体类型,包括:
    在确定所述目标距离大于或者等于预设距离阈值的情况下,确定所述目标文本对应的字体类型为新增字体。
  5. 根据权利要求1所述的方法,其特征在于,所述预设字体识别模型还用于:
    获取目标新增字体的多个第一字体识别样本图像,所述第一字体识别样本图像包括所述目标新增字体的指定文本样本;
    获取每个所述第一字体识别样本图像对应的第二图像特征;
    获取所述多个第一字体识别样本图像对应的多个所述第二图像特征的目标均值,将所述目标均值作为所述目标新增字体对应的目标代表特征,并存储所述目标代表特征。
  6. 根据权利要求5所述的方法,其特征在于,所述获取目标新增字体的多个第一字体识别样本图像,包括:
    从预设字体语料库中获取所述目标新增字体的指定文本样本;
    从预设背景库中获取目标背景图像;
    将所述指定文本样本和所述目标背景图像合成所述第一字体识别样本图像。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述预设字体识别模型通过以下方式训练得到:
    获取多个第二字体识别样本图像,多个所述第二字体识别样本图像包括多种第一字体类型的标注数据;
    将所述多个第二字体识别样本图像为第一训练数据集,对预设初始模型进行预训练,以得到第一待定模型,所述预设初始模型包括图像分割初始模块和多头注意力初始模块;
    获取多个第三字体识别样本图像,多个所述第三字体识别样本图像包括多种第二字体类型的标注数据,所述第一字体类型与所述第二字体类型相同或不同;
    将所述多个第三字体识别样本图像为第二训练数据集,对所述第一待定模型进行训练,以得到所述预设字体识别模型。
  8. 一种字体识别装置,其特征在于,所述装置包括:
    获取模块,被配置为获取待识别图像,所述待识别图像中包括目标文本;
    确定模块,被配置为将所述待识别图像输入预设字体识别模型,以使所述预设字体识别模型输出所述目标文本对应的字体类型;
    其中,所述预设字体识别模型,用于将所述待识别图像划分为多个子图像,并获取每个所述子图像对应的第一图像特征,根据所述待识别图像中每个所述子图像对应的所述第一图像特征确定所述待识别图像对应的第二图像特征,所述第二图像特征包括所述待识别图像中每个所述子图像与其他子图像的上下文关联特征,根据所述第二图像特征确定所述目标文本对应的字体类型。
  9. 一种计算机可读介质,其上存储有计算机程序,其特征在于,该程序被处理装置执行时实现权利要求1-7中任一项所述方法的步骤。
  10. 一种电子设备,其特征在于,包括:
    存储装置,其上存储有计算机程序;
    处理装置,用于执行所述存储装置中的所述计算机程序,以实现权利要求1-7中任一项所述方法的步骤。
PCT/CN2022/138914 2022-01-10 2022-12-14 字体识别方法、装置、可读介质及电子设备 WO2023130925A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210023481.2 2022-01-10
CN202210023481.2A CN114495080A (zh) 2022-01-10 2022-01-10 字体识别方法、装置、可读介质及电子设备

Publications (1)

Publication Number Publication Date
WO2023130925A1 true WO2023130925A1 (zh) 2023-07-13

Family

ID=81509720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/138914 WO2023130925A1 (zh) 2022-01-10 2022-12-14 字体识别方法、装置、可读介质及电子设备

Country Status (2)

Country Link
CN (1) CN114495080A (zh)
WO (1) WO2023130925A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114495080A (zh) * 2022-01-10 2022-05-13 北京有竹居网络技术有限公司 字体识别方法、装置、可读介质及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978078A (zh) * 2019-04-10 2019-07-05 厦门元印信息科技有限公司 字体版权检测方法、介质、计算机设备及装置
CN113111871A (zh) * 2021-04-21 2021-07-13 北京金山数字娱乐科技有限公司 文本识别模型的训练方法及装置、文本识别方法及装置
CN113128442A (zh) * 2021-04-28 2021-07-16 华南师范大学 基于卷积神经网络的汉字书法风格识别方法和评分方法
US20210326655A1 (en) * 2018-12-29 2021-10-21 Huawei Technologies Co., Ltd. Text Recognition Method and Terminal Device
CN113591831A (zh) * 2021-07-26 2021-11-02 西南大学 一种基于深度学习的字体识别方法、系统及存储介质
CN114495080A (zh) * 2022-01-10 2022-05-13 北京有竹居网络技术有限公司 字体识别方法、装置、可读介质及电子设备


Also Published As

Publication number Publication date
CN114495080A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
US20230394671A1 (en) Image segmentation method and apparatus, and device, and storage medium
KR102576344B1 (ko) 비디오를 처리하기 위한 방법, 장치, 전자기기, 매체 및 컴퓨터 프로그램
WO2023077995A1 (zh) 信息提取方法、装置、设备、介质及产品
WO2022247562A1 (zh) 多模态数据检索方法、装置、介质及电子设备
WO2022252881A1 (zh) 图像处理方法、装置、可读介质和电子设备
CN113313064A (zh) 字符识别方法、装置、可读介质及电子设备
WO2023143016A1 (zh) 特征提取模型的生成方法、图像特征提取方法和装置
CN112883968B (zh) 图像字符识别方法、装置、介质及电子设备
CN112364829B (zh) 一种人脸识别方法、装置、设备及存储介质
CN112766284B (zh) 图像识别方法和装置、存储介质和电子设备
CN113033580B (zh) 图像处理方法、装置、存储介质及电子设备
WO2023016111A1 (zh) 键值匹配方法、装置、可读介质及电子设备
WO2023078070A1 (zh) 一种字符识别方法、装置、设备、介质及产品
WO2023103653A1 (zh) 键值匹配方法、装置、可读介质及电子设备
WO2023030427A1 (zh) 生成模型的训练方法、息肉识别方法、装置、介质及设备
WO2023142914A1 (zh) 日期识别方法、装置、可读介质及电子设备
WO2023185516A1 (zh) 图像识别模型的训练方法、识别方法、装置、介质和设备
WO2023130925A1 (zh) 字体识别方法、装置、可读介质及电子设备
CN113610034B (zh) 识别视频中人物实体的方法、装置、存储介质及电子设备
CN113033682B (zh) 视频分类方法、装置、可读介质、电子设备
CN110414450A (zh) 关键词检测方法、装置、存储介质及电子设备
WO2024061311A1 (zh) 模型训练方法、图像分类方法和装置
CN110674813B (zh) 汉字识别方法、装置、计算机可读介质及电子设备
WO2023065895A1 (zh) 文本识别方法、装置、可读介质及电子设备
WO2023143107A1 (zh) 一种字符识别方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22918373

Country of ref document: EP

Kind code of ref document: A1