CN112434620B - Scene text recognition method, device, equipment and computer readable medium


Info

Publication number
CN112434620B
CN112434620B
Authority
CN
China
Prior art keywords
model, parameters, equipment, training, target
Prior art date
Legal status
Active
Application number
CN202011357602.4A
Other languages
Chinese (zh)
Other versions
CN112434620A (en)
Inventor
李振飞
张金义
王瑞杨
刘伟赫
Current Assignee
Xinao Xinzhi Technology Co ltd
Original Assignee
Xinao Xinzhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xinao Xinzhi Technology Co ltd filed Critical Xinao Xinzhi Technology Co ltd
Priority to CN202011357602.4A priority Critical patent/CN112434620B/en
Publication of CN112434620A publication Critical patent/CN112434620A/en
Application granted granted Critical
Publication of CN112434620B publication Critical patent/CN112434620B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/94 - Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95 - Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63 - Scene text, e.g. street names

Abstract

The embodiments of the invention disclose a scene text recognition method, device, equipment and computer readable medium. The method comprises the following steps: preprocessing an acquired input image to obtain a processed input image; inputting the processed input image into a pre-trained scene text recognition model and outputting a recognition result; and transmitting the recognition result to a target display device and controlling the target display device to display the recognition result. The embodiments realize scene text recognition for images and meet the user's need for text recognition. In addition, because the scene text recognition model is generated from parameters trained on selected devices, transmission of large volumes of data is avoided, communication cost is reduced, and the utilization of data on edge devices is improved.

Description

Scene text recognition method, device, equipment and computer readable medium
Technical Field
The embodiments of the invention relate to the field of computers, and in particular to a scene text recognition method, device, equipment and computer readable medium.
Background
Scene text recognition technology is currently widely applied in industry and commerce, but existing techniques offer low security and consume a large amount of network bandwidth for data transmission, which seriously hinders the rapid progress and development of scene text recognition technology.
Disclosure of Invention
The disclosure of the present invention is intended in part to introduce concepts in a simplified form that are further described below in the detailed description. The disclosure of the present invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiments of the invention disclose a scene text recognition method, device, equipment and computer readable medium to solve the technical problems mentioned in the background section.
In a first aspect, an embodiment of the present disclosure provides a scene text recognition method, the method comprising: preprocessing an acquired input image to obtain a processed input image; inputting the processed input image into a pre-trained scene text recognition model and outputting a recognition result, wherein the scene text recognition model is obtained through joint learning training, and the training of the scene text recognition model comprises the following steps: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target device to start training; in response to determining that the training is completed, acquiring the parameters of the device model obtained by training; and generating the scene text recognition model based on the parameters of the device model; and transmitting the recognition result to a target display device and controlling the target display device to display the recognition result.
In a second aspect, an embodiment of the present disclosure provides a scene text recognition device, comprising: a processing unit configured to preprocess an acquired input image to obtain a processed input image; a generating unit configured to input the processed input image into a pre-trained scene text recognition model and output a recognition result, wherein the scene text recognition model is obtained through joint learning training, and the training of the scene text recognition model comprises the following steps: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target device to start training; in response to determining that the training is completed, acquiring the parameters of the device model obtained by training; and generating the scene text recognition model based on the parameters of the device model; and a display unit configured to transmit the recognition result to a target display device and control the target display device to display the recognition result.
In a third aspect, embodiments of the present disclosure provide an electronic device, comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method as described in the first aspect.
One of the above embodiments of the present disclosure has the following beneficial effects: the input image is preprocessed to obtain the image fed to the scene text recognition model, from which a recognition result is obtained. This realizes scene text recognition for images and meets the user's need for text recognition. In addition, because the scene text recognition model is generated from parameters trained on selected devices, transmission of large volumes of data is avoided, communication cost is reduced, and the utilization of data on edge devices is improved.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of an application scenario of a scene text recognition method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of an embodiment of a scene text recognition method according to the present disclosure;
FIG. 3 is a flow chart of an embodiment of generating a scene text recognition model according to the scene text recognition method of the present disclosure;
FIG. 4 is a schematic diagram of an embodiment of a scene text recognition device according to the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and examples of the present disclosure are for illustrative purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used for distinguishing between different devices, modules, or units and not for limiting the order or interdependence of the functions performed by these devices, modules, or units.
It should be noted that the modifiers "one" and "a plurality" mentioned in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the devices in the disclosed embodiments are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of a scenario text recognition method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may pre-process the acquired input image 102, as indicated by reference numeral 103, resulting in a processed input image 104. The computing device 101 may then input the processed input image 104 to a pre-trained scene word recognition model 105, outputting a recognition result 106. Finally, the computing device 101 may transmit the recognition result 106 to the target display device 107, and control the target display device 107 to display the recognition result 106.
The computing device 101 may be hardware or software. When the computing device is hardware, the computing device may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices listed above. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
It should be understood that the number of computing devices in fig. 1 is merely illustrative. There may be any number of computing devices, as desired for an implementation.
With continued reference to FIG. 2, a flow 200 of an embodiment of a scene text recognition method according to the present disclosure is shown. The method may be performed by the computing device 101 in FIG. 1. The scene text recognition method comprises the following steps:
step 201, preprocessing the acquired input image to obtain a processed input image.
In an embodiment, the execution subject of the scene text recognition method (such as the computing device 101 shown in FIG. 1) may preprocess the acquired input image to obtain a processed input image. Here, the preprocessing may be image binarization of the input image. The input image may be an image containing the text to be recognized.
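As a concrete illustration, a minimal preprocessing sketch follows, assuming OpenCV's Otsu thresholding stands in for the unspecified binarization method; the function name and image path are illustrative only and are not taken from the embodiment.

```python
# Hedged sketch: Otsu binarization as one possible form of the
# preprocessing described above (the embodiment does not fix a method).
import cv2

def preprocess(image_path: str):
    """Load an image in grayscale and binarize it."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu's method selects the binarization threshold automatically.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```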
In an optional implementation of the embodiment, the execution subject may acquire the input image through a wired connection or a wireless connection.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, ZigBee and UWB (ultra wideband) connections, as well as other now known or later developed wireless connection means.
Step 202, inputting the processed input image to a pre-trained scene character recognition model, and outputting a recognition result.
In an embodiment, the execution subject may input the processed input image into the pre-trained scene text recognition model and output a recognition result. Here, the scene text recognition model may be a deep neural network (e.g., a FedTR model) trained to extract text from images. The recognition result may be the text contained in the input image.
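For illustration only, a sketch of this inference step is given below, assuming a PyTorch model whose output is a batch-first sequence of character logits with a CTC-style blank at index 0; `CHARSET`, the output shape, and the greedy decoding are all assumptions, since the embodiment does not specify the network interface.

```python
import torch

CHARSET = "0123456789abcdefghijklmnopqrstuvwxyz"  # hypothetical alphabet

def recognize(model: torch.nn.Module, binary_image) -> str:
    """Run the recognition model on a binarized HxW image."""
    # HxW uint8 array -> 1x1xHxW float tensor in [0, 1]
    x = torch.from_numpy(binary_image).float().div(255.0)[None, None]
    with torch.no_grad():
        logits = model(x)  # assumed shape: (1, T, 1 + len(CHARSET))
    ids = logits.squeeze(0).argmax(dim=-1)  # greedy per-step choice
    # Collapse repeats and drop blanks (CTC-style greedy decoding).
    text, prev = [], 0
    for i in ids.tolist():
        if i != 0 and i != prev:
            text.append(CHARSET[i - 1])
        prev = i
    return "".join(text)
```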
In an optional implementation of the embodiment, the training of the scene text recognition model includes: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target device to start training; in response to determining that the training is completed, acquiring the parameters of the device model obtained by training; and generating the scene text recognition model based on the parameters of the device model.
And 203, transmitting the identification result to a target display device, and controlling the target display device to display the identification result.
In an embodiment, the execution subject may transmit the recognition result to a target display device and control the target display device to display the recognition result.
One of the above embodiments of the present disclosure has the following beneficial effects: the input image is preprocessed to obtain the image fed to the scene text recognition model, from which a recognition result is obtained. This realizes scene text recognition for images and meets the user's need for text recognition. In addition, because the scene text recognition model is generated from parameters trained on selected devices, transmission of large volumes of data is avoided, communication cost is reduced, and the utilization of data on edge devices is improved.
With continued reference to FIG. 3, a flow 300 of an embodiment of generating a scene text recognition model according to the scene text recognition method of the present disclosure is shown. The method may be performed by the computing device 101 in FIG. 1. The training method comprises the following steps:
in step 301, at least one device is selected as a target device in response to receiving a training request of a target user.
In an embodiment, in response to receiving a training request of a target user, the execution subject of the scene text recognition method (e.g., the computing device 101 shown in FIG. 1) may select (e.g., randomly select) at least one device from a device library as the target device. The device library stores at least one device available to participate in training.
And step 302, controlling the target equipment to start training.
In an embodiment, the execution subject may control the target device to start training as follows. First, the execution subject may acquire an initial model and the model parameters of the initial model. Second, the execution subject may transmit the model parameters to the target device. Third, the execution subject may control the target device to start training based on the target device's local data. Here, the initial model may be a model that has not been trained, or that has not reached a preset condition after training. The initial model may be a model having a deep neural network structure. The storage location of the initial model is likewise not limited in this disclosure.
Optionally, the execution subject may determine that the training is completed in response to determining that a training end condition is reached. Here, the training end condition may be the completion of a preset number of training tasks. The training iteratively updates the model using a gradient descent algorithm.
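A sketch of this device-side step is given below, assuming plain SGD in PyTorch with a generic classification loss standing in for the unspecified recognition loss; the data loader, epoch count, and learning rate are illustrative assumptions.

```python
import torch

def local_train(model: torch.nn.Module, loader, epochs: int = 5,
                lr: float = 0.01) -> dict:
    """Train on the device's local data; only parameters leave the device."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()  # stand-in recognition loss
    model.train()
    for _ in range(epochs):  # "a preset number of training tasks"
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()  # gradient-descent update of the parameters
    # Return a copy of the trained parameters for upload.
    return {k: v.detach().clone() for k, v in model.state_dict().items()}
```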
And step 303, in response to determining that the training is completed, acquiring parameters of the device model obtained by training.
In an embodiment, in response to determining that the training is completed, the execution subject may obtain parameters of a training-derived device model.
Step 304, generating the scene text recognition model based on the parameters of the equipment model.
In an embodiment, the execution subject may generate the scene text recognition model based on parameters of the device model by:
in the first step, the execution body transmits parameters of the equipment model to a central server.
In an optional implementation of the embodiment, the execution subject may encrypt the parameters of the device model based on a preset encryption method (for example, a symmetric encryption method). Then, in response to determining that the encryption is complete, the execution subject may transmit the encrypted parameters of the device model to the central server.
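As a sketch of this optional step, the snippet below serializes the parameters and encrypts them with the `cryptography` package's Fernet scheme, which is one symmetric method among many; the embodiment does not name a cipher, and key distribution is out of scope here.

```python
import io
import torch
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # assumed to be shared with the central server
cipher = Fernet(key)

def encrypt_params(state_dict: dict) -> bytes:
    """Serialize model parameters and encrypt them for transmission."""
    buffer = io.BytesIO()
    torch.save(state_dict, buffer)            # serialize the parameters
    return cipher.encrypt(buffer.getvalue())  # ciphertext sent to the server
```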
Second, the execution subject may control the central server to aggregate the parameters of the device models to obtain aggregated parameters.
In an optional implementation of the embodiment, the execution subject may control the central server to average the parameters of the device models based on a preset aggregation algorithm (for example, an averaging aggregation algorithm), taking the averaged result as the aggregated parameters.
Third, the execution subject may generate the scene text recognition model based on the aggregated parameters.
In an optional implementation of the embodiment, the execution subject may set the aggregated parameters as the parameters of the initial model, thereby obtaining the scene text recognition model.
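Taken together, a sketch of the server-side aggregation and model generation might look as follows, assuming element-wise federated averaging (FedAvg-style) as the preset aggregation algorithm and PyTorch state dicts as the parameter format; both choices are assumptions, not the patent's fixed implementation.

```python
import torch

def aggregate(device_states: list) -> dict:
    """Element-wise average of the parameters from the selected devices."""
    keys = device_states[0].keys()
    # Integer buffers are averaged as floats here, a simplification.
    return {
        k: torch.stack([s[k].float() for s in device_states]).mean(dim=0)
        for k in keys
    }

def build_recognition_model(initial_model: torch.nn.Module,
                            device_states: list) -> torch.nn.Module:
    """Set the aggregated parameters as the initial model's parameters."""
    initial_model.load_state_dict(aggregate(device_states))
    return initial_model
```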
As can be seen from FIG. 3, compared with the description of the embodiment corresponding to FIG. 2, the flow 300 expands on how the scene text recognition model is obtained. These embodiments thus describe a scheme in which a target device is selected, the target device is controlled to train, and the trained parameters are then acquired; the acquired trained parameters are used to generate the scene text recognition model. In addition, encrypting the parameters of the device model can prevent information leakage and improve security.
With further reference to FIG. 4, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a scene text recognition device. These device embodiments correspond to the method embodiments described above with reference to FIG. 2, and the device is particularly applicable to various electronic devices.
As shown in FIG. 4, the scene text recognition device 400 of the embodiment includes: a processing unit 401, a generating unit 402, and a display unit 403. The processing unit 401 is configured to preprocess the acquired input image to obtain a processed input image. The generating unit 402 is configured to input the processed input image into a pre-trained scene text recognition model and output a recognition result, wherein the scene text recognition model is obtained through joint learning training, and the training of the scene text recognition model includes: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target device to start training; in response to determining that the training is completed, acquiring the parameters of the device model obtained by training; and generating the scene text recognition model based on the parameters of the device model. The display unit 403 is configured to transmit the recognition result to a target display device and control the target display device to display the recognition result.
In an optional implementation of the embodiment, controlling the target device to start training includes: acquiring an initial model and the model parameters of the initial model; transmitting the model parameters to the target device; and controlling the target device to start training based on the local data of the target device.
In an optional implementation of the embodiment, generating the scene text recognition model based on the parameters of the device model includes: transmitting the parameters of the device model to a central server; controlling the central server to aggregate the parameters of the device model to obtain aggregated parameters; and generating the scene text recognition model based on the aggregated parameters.
In an optional implementation of the embodiment, transmitting the parameters of the device model to the central server includes: encrypting the parameters of the device model based on a preset encryption method; and transmitting the encrypted parameters of the device model to the central server in response to determining that the encryption is complete.
In an optional implementation of the embodiment, controlling the central server to aggregate the parameters of the device model to obtain aggregated parameters includes: controlling the central server, based on a preset aggregation algorithm, to aggregate the parameters of the device model to obtain the aggregated parameters.
In an optional implementation of the embodiment, generating the scene text recognition model based on the aggregated parameters includes: determining the aggregated parameters as the parameters of the initial model to obtain the scene text recognition model.
It will be appreciated that the elements described in the apparatus 400 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting benefits described above with respect to the method are equally applicable to the apparatus 400 and the units contained therein, and are not described in detail herein.
Referring now to FIG. 5, a schematic diagram of an electronic device 500 (e.g., the computing device 101 of FIG. 1) suitable for implementing some embodiments of the present disclosure is shown. The electronic device illustrated in FIG. 5 is merely an example and should not limit the functionality or scope of use of the disclosed embodiments.
As shown in FIG. 5, the electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 5 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communications device 509, or from the storage device 508, or from the ROM 502. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium according to some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the above. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be embodied in the apparatus; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: preprocessing the acquired input image to obtain a processed input image; inputting the processed input image into a pre-trained scene character recognition model, and outputting a recognition result, wherein the scene character recognition model is obtained through joint learning training, and the training of the scene character recognition model comprises the following steps: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target equipment to start training; acquiring parameters of a device model obtained by training in response to determining that the training is completed; generating the scene text recognition model based on the parameters of the equipment model; transmitting the identification result to a target display device, and controlling the target display device to display the identification result.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented in software or in hardware. The described units may also be provided in a processor, for example, described as: a processor includes a processing unit, a generating unit, and a display unit. The names of these units do not in any way limit the unit itself, and for example, the processing unit may also be described as "a unit that performs preprocessing on an acquired input image to obtain a processed input image".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The above description is only illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments disclosed herein is not limited to the specific combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present invention.

Claims (7)

1. A scene text recognition method, comprising:
preprocessing the acquired input image to obtain a processed input image;
inputting the processed input image into a pre-trained scene text recognition model, and outputting a recognition result, wherein the scene text recognition model is obtained through joint learning training, and the training of the scene text recognition model comprises the following steps:
in response to receiving a training request of a target user, selecting at least one device as a target device;
acquiring an initial model and model parameters of the initial model, transmitting the model parameters to the target device, and controlling the target device to start training the model parameters using a gradient descent algorithm based on local data of the target device;
acquiring parameters of a device model obtained by training in response to determining that the training is completed;
transmitting the parameters of the device model to a central server, controlling the central server to aggregate the parameters of the device models of the selected at least one target device to obtain aggregated parameters, and generating the scene text recognition model based on the aggregated parameters;
and transmitting the recognition result to a target display device, and controlling the target display device to display the recognition result.
2. The scene text recognition method according to claim 1, wherein said transmitting the parameters of said device model to a central server comprises:
encrypting the parameters of the device model based on a preset encryption method;
and transmitting the encrypted parameters of the device model to the central server in response to determining that the encryption is complete.
3. The scene text recognition method according to claim 1, wherein said controlling the central server to aggregate the parameters of the device models of the selected at least one target device to obtain aggregated parameters comprises:
controlling the central server, based on a preset aggregation algorithm, to aggregate the parameters of the device models of the selected at least one target device to obtain the aggregated parameters.
4. The method of claim 1, wherein generating the scene text recognition model based on the aggregated parameters comprises:
determining the aggregated parameters as the model parameters of the initial model to obtain the scene text recognition model.
5. A scene text recognition device, comprising:
a processing unit configured to preprocess the acquired input image to obtain a processed input image;
a generating unit configured to input the processed input image into a pre-trained scene text recognition model and output a recognition result, wherein the scene text recognition model is obtained through joint learning training, and the training of the scene text recognition model comprises the following steps:
in response to receiving a training request of a target user, selecting at least one device as a target device;
acquiring an initial model and model parameters of the initial model, transmitting the model parameters to the target device, and controlling the target device to start training the model parameters using a gradient descent algorithm based on local data of the target device;
acquiring parameters of a device model obtained by training in response to determining that the training is completed;
transmitting the parameters of the device model to a central server, controlling the central server to aggregate the parameters of the device models of the selected at least one target device to obtain aggregated parameters, and generating the scene text recognition model based on the aggregated parameters;
and a display unit configured to transmit the recognition result to a target display device, and control the target display device to display the recognition result.
6. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
7. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-4.
CN202011357602.4A 2020-11-26 2020-11-26 Scene text recognition method, device, equipment and computer readable medium Active CN112434620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011357602.4A CN112434620B (en) 2020-11-26 2020-11-26 Scene text recognition method, device, equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011357602.4A CN112434620B (en) 2020-11-26 2020-11-26 Scene text recognition method, device, equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN112434620A CN112434620A (en) 2021-03-02
CN112434620B (en) 2024-03-01

Family

ID=74699335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011357602.4A Active CN112434620B (en) 2020-11-26 2020-11-26 Scene text recognition method, device, equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN112434620B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762163B (en) * 2021-09-09 2022-06-07 杭州澳亚生物技术股份有限公司 GMP workshop intelligent monitoring management method and system
CN114222181B (en) * 2021-11-11 2024-03-12 北京达佳互联信息技术有限公司 Image processing method, device, equipment and medium
CN114429629A (en) * 2022-01-21 2022-05-03 北京有竹居网络技术有限公司 Image processing method and device, readable storage medium and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR940024622A (en) * 1993-04-08 1994-11-18 김진형 Hangul handwritten character recognition device and method
CN109902678A (en) * 2019-02-12 2019-06-18 北京奇艺世纪科技有限公司 Model training method, character recognition method, device, electronic equipment and computer-readable medium
CN110795477A (en) * 2019-09-20 2020-02-14 平安科技(深圳)有限公司 Data training method, device and system
CN111210022A (en) * 2020-01-09 2020-05-29 深圳前海微众银行股份有限公司 Backward model selection method, device and readable storage medium
CN111598254A (en) * 2020-05-22 2020-08-28 深圳前海微众银行股份有限公司 Federal learning modeling method, device and readable storage medium
CN111898424A (en) * 2020-06-19 2020-11-06 贝壳技术有限公司 Character recognition model training method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN112434620A (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN112434620B (en) Scene text recognition method, device, equipment and computer readable medium
CN109800732B (en) Method and device for generating cartoon head portrait generation model
CN109981787B (en) Method and device for displaying information
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN111354345B (en) Method, apparatus, device and medium for generating speech model and speech recognition
CN114006769B (en) Model training method and device based on transverse federal learning
CN111915689B (en) Method, apparatus, electronic device, and computer-readable medium for generating an objective function
CN112434619B (en) Case information extraction method, apparatus, device and computer readable medium
CN111612434B (en) Method, apparatus, electronic device and medium for generating processing flow
CN110046670B (en) Feature vector dimension reduction method and device
CN111797822A (en) Character object evaluation method and device and electronic equipment
CN111787041A (en) Method and apparatus for processing data
CN115022328A (en) Server cluster, server cluster testing method and device and electronic equipment
CN111784567B (en) Method, apparatus, electronic device, and computer-readable medium for converting image
CN111726476B (en) Image processing method, device, equipment and computer readable medium
CN111062995B (en) Method, apparatus, electronic device and computer readable medium for generating face image
CN112543228A (en) Data transmission method and device, electronic equipment and computer readable medium
CN113222050A (en) Image classification method and device, readable medium and electronic equipment
CN111797931A (en) Image processing method, image processing network training method, device and equipment
CN111680754A (en) Image classification method and device, electronic equipment and computer-readable storage medium
CN111709784A (en) Method, apparatus, device and medium for generating user retention time
CN111580890A (en) Method, apparatus, electronic device, and computer-readable medium for processing features
CN112781581B (en) Method and device for generating path from moving to child cart applied to sweeper
CN113077353B (en) Method, device, electronic equipment and medium for generating nuclear insurance conclusion
CN114697206B (en) Method, device, equipment and computer readable medium for managing nodes of Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240105

Address after: 065001 China (Hebei) Pilot Free Trade Zone Daxing Airport Area Langfang Airport Economic Zone Hangyidao Free Trade Zone Science and Technology Innovation Base 2101, Langfang City, Hebei Province

Applicant after: Xinao Xinzhi Technology Co.,Ltd.

Address before: 100020 10th floor, Motorola building, 1 Wangjing East Road, Chaoyang District, Beijing

Applicant before: ENNEW DIGITAL TECHNOLOGY Co.,Ltd.

GR01 Patent grant