CN112434620A - Scene character recognition method, device, equipment and computer readable medium - Google Patents
- Publication number
- CN112434620A (application CN202011357602.4A)
- Authority
- CN
- China
- Prior art keywords
- model
- parameters
- character recognition
- training
- scene character
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
Abstract
The embodiment of the invention discloses a scene character recognition method, device, equipment and computer readable medium. The method comprises the following steps: preprocessing the acquired input image to obtain a processed input image; inputting the processed input image into a pre-trained scene character recognition model and outputting a recognition result; and transmitting the recognition result to a target display device, and controlling the target display device to display the recognition result. The method realizes scene character recognition for images and meets the user's need for character recognition. In addition, because devices are selected for training and the scene character recognition model is generated from the parameters obtained by that training, large-volume data transmission is avoided, communication cost is reduced, and the utilization rate of data on edge devices is improved.
Description
Technical Field
The embodiment of the invention relates to the field of computers, in particular to a scene character recognition method, a scene character recognition device, scene character recognition equipment and a computer readable medium.
Background
Scene character recognition technology is now widely applied in industry and commerce, but its security is low and it consumes substantial network bandwidth for data transmission, which seriously hinders the rapid progress and development of the technology.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary of the disclosure is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiment of the invention provides a scene character recognition method, device, equipment and computer readable medium to solve the technical problems mentioned in the background section.
In a first aspect, an embodiment of the present disclosure provides a scene character recognition method, where the method includes: preprocessing the acquired input image to obtain a processed input image; inputting the processed input image into a pre-trained scene character recognition model, and outputting a recognition result, wherein the scene character recognition model is obtained through joint learning training, and the training of the scene character recognition model comprises the following steps: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target device to start training; in response to determining that the training is complete, obtaining parameters of the trained device model; generating the scene character recognition model based on the parameters of the device model; and transmitting the recognition result to a target display device, and controlling the target display device to display the recognition result.
In a second aspect, an embodiment of the present disclosure provides a scene character recognition apparatus, where the apparatus includes: a processing unit configured to pre-process the acquired input image to obtain a processed input image; a generating unit configured to input the processed input image to a pre-trained scene character recognition model, and output a recognition result, wherein the scene character recognition model is obtained by joint learning training, and the training of the scene character recognition model comprises: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target device to start training; in response to determining that the training is complete, obtaining parameters of the trained device model; generating the scene character recognition model based on the parameters of the device model; and a display unit configured to transmit the recognition result to a target display device and control the target display device to display the recognition result.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement the method as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method as described in the first aspect.
One of the above embodiments of the invention has the following beneficial effects: the input image is preprocessed into a form suitable for the scene character recognition model, from which a recognition result is obtained. This realizes scene character recognition for images and meets the user's need for character recognition. In addition, because devices are selected for training and the scene character recognition model is generated from the parameters obtained by that training, large-volume data transmission is avoided, communication cost is reduced, and the utilization rate of data on edge devices is improved.
Drawings
The above and other features, advantages and aspects of the disclosed embodiments will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of an application scenario of a scene text recognition method according to a disclosed embodiment of the invention;
FIG. 2 is a flow chart of an embodiment of a scene text recognition method according to the present disclosure;
FIG. 3 is a flowchart of an embodiment of generating a scene text recognition model according to the scene text recognition method disclosed in the present invention;
FIG. 4 is a schematic structural diagram of an embodiment of a scene text recognition apparatus according to the present disclosure;
FIG. 5 is a schematic block diagram of an electronic device suitable for use in implementing disclosed embodiments of the invention.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments disclosed in the present invention may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules, or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules, or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting, and those skilled in the art will understand them to mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the disclosed embodiments are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of a scenario text recognition method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may pre-process an acquired input image 102, as indicated by reference numeral 103, resulting in a processed input image 104. The computing device 101 may then input the processed input image 104 to the pre-trained scene text recognition model 105, outputting a recognition result 106. Finally, the computing device 101 may transmit the recognition result 106 to the target display device 107, and control the target display device 107 to display the recognition result 106.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above. It may be implemented, for example, as multiple software or software modules to provide distributed services, or as a single software or software module. And is not particularly limited herein.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to FIG. 2, a flow 200 of an embodiment of a scene text recognition method in accordance with the present disclosure is shown. The method may be performed by the computing device 101 of fig. 1. The scene character recognition method comprises the following steps:
In an embodiment, an executing entity (e.g., the computing device 101 shown in fig. 1) of the scene text recognition method may perform preprocessing on the acquired input image to obtain a processed input image. Here, the preprocessing may be image binarization processing on the input image. The input image may be an image containing text to be recognized.
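The patent names image binarization as one possible preprocessing step but does not fix the algorithm. A minimal sketch, assuming a fixed global threshold applied to a grayscale NumPy array (the threshold value and the `binarize` helper are illustrative choices, not from the patent):

```python
import numpy as np

def binarize(image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Map each pixel of a grayscale image to 0 or 255 by a fixed threshold.

    A fixed threshold keeps the sketch simple; adaptive schemes (e.g.,
    Otsu's method) are more common for scene images with uneven lighting.
    """
    return np.where(image >= threshold, 255, 0).astype(np.uint8)

# A tiny 2x3 "image": dark text pixels on a bright background.
img = np.array([[200, 30, 210],
                [40, 220, 50]], dtype=np.uint8)
binarized = binarize(img)
```

The binarized image, rather than the raw capture, is what would be fed to the recognition model.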
In an alternative implementation of the embodiment, the executing entity may acquire the input image through a wired or wireless connection.
It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (ultra-wideband) connection, and other wireless connections now known or developed in the future.
In an embodiment, the execution subject may input the processed input image to a pre-trained scene character recognition model, and output a recognition result. Here, the scene text recognition model may be a deep neural network (e.g., feocr model) that has been trained to extract text in an image. The recognition result may be a character contained in the input image.
In an optional implementation manner of the embodiment, the training of the scene text recognition model includes: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target device to start training; in response to determining that the training is complete, obtaining parameters of the trained device model; and generating the scene character recognition model based on the parameters of the equipment model.
In an alternative implementation of the embodiment, the executing entity may transmit the recognition result to a target display device and control the target display device to display the recognition result.
One of the above embodiments of the invention has the following beneficial effects: the input image is preprocessed into a form suitable for the scene character recognition model, from which a recognition result is obtained. This realizes scene character recognition for images and meets the user's need for character recognition. In addition, because devices are selected for training and the scene character recognition model is generated from the parameters obtained by that training, large-volume data transmission is avoided, communication cost is reduced, and the utilization rate of data on edge devices is improved.
With continued reference to FIG. 3, a flow 300 of an embodiment of generating a scene text recognition model in accordance with the scene text recognition method disclosed herein is shown. The method may be performed by the computing device 101 of fig. 1. The training method comprises the following steps:
Step 301, in response to receiving a training request from a target user, selecting at least one device as a target device.
In an embodiment, in response to receiving a training request of a target user, the executing entity of the scene text recognition method (e.g., computing device 101 shown in fig. 1) may select (e.g., randomly select) at least one device from a device library as the target device. The device library stores at least one device available to participate in training.
Step 302, controlling the target device to start training.
In an embodiment, the executing entity may control the target device to start training as follows: first, the executing entity may obtain an initial model and the model parameters of the initial model; second, the executing entity may transmit the model parameters to the target device; third, the executing entity may control the target device to start training on the target device's local data. Here, the initial model may be a model that has not been trained or that has not reached a preset condition after training. The initial model may be a model with a deep neural network structure. The storage location of the initial model is likewise not limited in this disclosure.
Optionally, in response to determining that an end-of-training condition is reached, the executing entity may complete the training. Here, the end-of-training condition may be the completion of a preset number of training tasks. Training iteratively updates the model using a gradient descent algorithm.
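The local-training step above (receive the current parameters, run gradient descent on the device's own data for a preset number of iterations, return the updated parameters) can be sketched with a stand-in linear model; the `local_train` name, the learning rate, and the mean-squared-error objective are illustrative assumptions, since the patent leaves the device model unspecified beyond a deep neural network:

```python
import numpy as np

def local_train(params: np.ndarray, local_x: np.ndarray, local_y: np.ndarray,
                lr: float = 0.1, epochs: int = 100) -> np.ndarray:
    """Gradient-descent update of a linear model on the device's own data.

    Stands in for the scene-text network: the device receives the current
    parameters, trains only on local data, and returns updated parameters,
    so raw images never leave the device.
    """
    w = params.copy()
    for _ in range(epochs):
        # Gradient of mean squared error over the local batch.
        grad = 2 * local_x.T @ (local_x @ w - local_y) / len(local_y)
        w -= lr * grad
    return w

# Server-side initial parameters transmitted to the target device.
w0 = np.zeros(2)
x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = x @ np.array([2.0, -1.0])  # local data generated by a known ground truth
w_trained = local_train(w0, x, y)
```

After the preset number of iterations, `w_trained` approximates the parameters that fit the local data and is what the device reports back.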
Step 303, in response to determining that the training is complete, obtaining the parameters of the trained device model.
In an embodiment, in response to determining that the training is complete, the executing entity may obtain the parameters of the trained device model.
Step 304, generating the scene character recognition model based on the parameters of the device model.
In an embodiment, the executing entity may generate the scene character recognition model from the parameters of the device model as follows:
First, the executing entity transmits the parameters of the device model to a central server.
In an optional implementation of the embodiment, the executing entity may encrypt the parameters of the device model with a preset encryption method (e.g., a symmetric encryption method). Then, in response to determining that the encryption is complete, the executing entity may transmit the encrypted parameters of the device model to the central server.
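The patent specifies only "a preset encryption method (e.g., a symmetric encryption method)". The round trip of serializing, encrypting, transmitting, and decrypting parameters can be illustrated with a deliberately toy XOR keystream; this is not secure and is not the patent's method — a real deployment would use an authenticated cipher such as AES-GCM. All names here are hypothetical:

```python
import itertools
import pickle

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR the data with a repeating key.

    Illustrative only; XOR with a repeating key is trivially breakable.
    It is used here solely because encryption and decryption share one
    key and one operation, which keeps the sketch short.
    """
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

# Hypothetical trained parameters reported by a device.
params = {"layer1.weight": [0.12, -0.7], "layer1.bias": [0.05]}
key = b"shared-secret"  # key known to both the device and the server

ciphertext = xor_encrypt(pickle.dumps(params), key)
# The server applies the same operation: XOR is its own inverse.
restored = pickle.loads(xor_encrypt(ciphertext, key))
```

Only `ciphertext` crosses the network; the server recovers `restored == params` with the shared key.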
Second, the executing entity may control the central server to aggregate the parameters of the device models to obtain aggregated parameters.
In an optional implementation of the embodiment, the executing entity may control the central server to average the parameters of the device models according to a preset aggregation algorithm (e.g., an average aggregation algorithm) and take the averaged result as the aggregated parameters.
Third, the executing entity may generate the scene character recognition model based on the aggregated parameters.
In an optional implementation of the embodiment, the executing entity may set the aggregated parameters as the parameters of the initial model to obtain the scene character recognition model.
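The average aggregation and the assignment of the aggregated parameters back into the initial model can be sketched as follows; the parameter shapes and the dict-based model representation are illustrative assumptions, not details from the patent:

```python
import numpy as np

def aggregate(device_params: list) -> np.ndarray:
    """Average aggregation: element-wise mean of the reported parameters."""
    return np.mean(device_params, axis=0)

# Parameters reported by three target devices after local training.
reports = [np.array([1.0, 4.0]),
           np.array([3.0, 2.0]),
           np.array([2.0, 0.0])]
global_params = aggregate(reports)

# "Setting the aggregated parameters as the parameters of the initial
# model": copy the mean back into the model that will serve recognition.
initial_model = {"weights": np.zeros(2)}
initial_model["weights"] = global_params
```

Because only parameter vectors travel to the server, the devices' local training data stays where it was collected, which is the communication saving the description claims.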
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the process 300 in fig. 3 details how the scene text recognition model is obtained: a target device is selected and controlled to train, the trained parameters are obtained, and the scene character recognition model is generated from those parameters. In addition, because the parameters of the device model are encrypted, information leakage can be prevented and security is improved.
With further reference to fig. 4, as an implementation of the foregoing method for the above-mentioned figures, the present disclosure provides some embodiments of a scene text recognition apparatus, which correspond to the method embodiments described above in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 4, the scene text recognition apparatus 400 of the embodiment includes: a processing unit 401, a generating unit 402 and a display unit 403. The processing unit 401 is configured to perform preprocessing on the acquired input image to obtain a processed input image; a generating unit 402, configured to input the processed input image to a pre-trained scene character recognition model obtained by joint learning training, and output a recognition result, where the training of the scene character recognition model includes: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target device to start training; in response to determining that the training is complete, obtaining parameters of the trained device model; generating the scene character recognition model based on the parameters of the equipment model; a display unit 403 configured to transmit the recognition result to a target display device and control the target display device to display the recognition result.
In an optional implementation manner of the embodiment, the controlling the target device to start training includes: obtaining an initial model and model parameters of the initial model; transmitting the model parameters to the target device; controlling the target device to start training based on the local data of the target device.
In an optional implementation manner of the embodiment, the generating the scene character recognition model based on the parameters of the device model includes: transmitting the parameters of the device model to a central server; controlling the central server to aggregate the parameters of the device model to obtain aggregated parameters; and generating the scene character recognition model based on the aggregated parameters.
In an optional implementation manner of the embodiment, the transmitting the parameters of the device model to the central server includes: encrypting the parameters of the device model based on a preset encryption method; and, in response to determining that the encryption is complete, transmitting the encrypted parameters of the device model to the central server.
In an optional implementation manner of the embodiment, controlling the central server to aggregate the parameters of the device model to obtain aggregated parameters includes: controlling the central server to aggregate the parameters of the device model based on a preset aggregation algorithm to obtain the aggregated parameters.
In an optional implementation manner of the embodiment, generating the scene text recognition model based on the aggregation parameter includes: and determining the aggregation parameters as the parameters of the initial model to obtain the scene character recognition model.
It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.
Referring now to FIG. 5, a block diagram of an electronic device 500 (e.g., computing device 101 of FIG. 1) suitable for implementing some embodiments of the present disclosure is shown. The server shown in fig. 5 is only an example and should not limit the function or scope of use of the disclosed embodiments.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. Which when executed by the processing means 501 performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium mentioned above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: preprocess the acquired input image to obtain a processed input image; input the processed input image into a pre-trained scene character recognition model and output a recognition result, wherein the scene character recognition model is obtained through joint learning training, and the training of the scene character recognition model comprises the following steps: in response to receiving a training request of a target user, selecting at least one device as a target device; controlling the target device to start training; in response to determining that the training is complete, obtaining parameters of the trained device model; generating the scene character recognition model based on the parameters of the device model; and transmit the recognition result to a target display device and control the target display device to display the recognition result.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including a processing unit, a generating unit, and a display unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the processing unit may also be described as a "unit that preprocesses the acquired input image to obtain a processed input image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the present disclosure and is provided for the purpose of illustrating the general principles of the technology. It will be appreciated by those skilled in the art that the scope of the invention disclosed in the present application is not limited to embodiments with the specific combinations of features described above, but also covers other embodiments formed by any combination of those features or their equivalents without departing from the inventive concept. For example, technical solutions may be formed by substituting the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.
Claims (10)
1. A scene character recognition method is characterized by comprising the following steps:
preprocessing the acquired input image to obtain a processed input image;
inputting the processed input image into a pre-trained scene character recognition model, and outputting a recognition result, wherein the scene character recognition model is obtained through joint learning training, and the training of the scene character recognition model comprises the following steps:
in response to receiving a training request of a target user, selecting at least one device as a target device;
controlling the target device to start training;
in response to determining that the training is complete, obtaining parameters of the trained device model;
generating the scene character recognition model based on the parameters of the device model;
and transmitting the recognition result to a target display device, and controlling the target display device to display the recognition result.
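Taken together, the four training steps of claim 1 can be sketched as a short coordinator loop. This is a minimal illustrative sketch, not the patented implementation: the names (`Device`, `select_target_devices`, `train_on_device`, `build_recognition_model`) and the toy "training" rule are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """A participating device holding local data (illustrative stand-in)."""
    name: str
    local_data: list
    model_params: dict = field(default_factory=dict)

def select_target_devices(devices, request):
    # Step 1: in response to a training request, select at least one device.
    return [d for d in devices if d.local_data]

def train_on_device(device, init_params):
    # Steps 2-3: the device "trains" locally; here each parameter is shifted
    # by the mean of the device's local data -- a toy stand-in for training.
    shift = sum(device.local_data) / len(device.local_data)
    device.model_params = {k: v + shift for k, v in init_params.items()}
    return device.model_params

def build_recognition_model(all_params):
    # Step 4: generate the recognition model from the device-model
    # parameters (a plain average, as one possible combination rule).
    keys = all_params[0].keys()
    return {k: sum(p[k] for p in all_params) / len(all_params) for k in keys}

devices = [Device("a", [1.0, 3.0]), Device("b", [2.0])]
targets = select_target_devices(devices, request={"user": "target-user"})
params = [train_on_device(d, {"w": 0.0}) for d in targets]
model = build_recognition_model(params)
print(model)  # {'w': 2.0}
```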
2. The method of claim 1, wherein the controlling the target device to start training comprises:
obtaining an initial model and model parameters of the initial model;
transmitting the model parameters to the target device;
controlling the target device to start training based on the local data of the target device.
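One way to read claim 2: the server transmits the initial model parameters to the target device, which then updates its own copy using only its local data. The sketch below uses a toy least-squares objective; `distribute_and_train`, the learning rate, and the step count are illustrative assumptions, not the patent's method.

```python
def distribute_and_train(init_params, local_data, lr=0.1, steps=5):
    # The device receives a copy of the broadcast model parameters ...
    w = dict(init_params)
    # ... and runs gradient steps on its own (x, y) pairs only; no raw
    # data leaves the device, only the updated parameters are returned.
    for _ in range(steps):
        grad = sum(2 * (w["w"] * x - y) * x for x, y in local_data) / len(local_data)
        w["w"] -= lr * grad
    return w

local = [(1.0, 2.0), (2.0, 4.0)]          # local data consistent with w = 2
updated = distribute_and_train({"w": 0.0}, local)
print(updated["w"])  # 1.9375 -- converging toward 2.0
```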
3. The method of claim 2, wherein the generating the scene character recognition model based on the parameters of the device model comprises:
transmitting the parameters of the device model to a central server;
controlling the central server to aggregate the parameters of the device model to obtain aggregated parameters;
and generating the scene character recognition model based on the aggregated parameters.
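The claims do not fix a particular aggregation rule. A common choice in joint/federated learning is FedAvg, a weighted element-wise average of the device parameters; the sketch below assumes that rule, with weights standing in for relative local dataset sizes.

```python
def federated_average(device_params, weights=None):
    # Weighted element-wise average of per-device parameter dicts.
    # With weights proportional to local dataset sizes this is FedAvg;
    # the claims only require some "preset aggregation algorithm".
    n = len(device_params)
    weights = weights if weights is not None else [1.0 / n] * n
    keys = device_params[0].keys()
    return {k: sum(w * p[k] for w, p in zip(weights, device_params)) for k in keys}

# Two devices; the second holds three times as much local data.
agg = federated_average([{"w": 1.0}, {"w": 3.0}], weights=[0.25, 0.75])
print(agg)  # {'w': 2.5}
```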
4. The method of claim 3, wherein the transmitting the parameters of the device model to a central server comprises:
encrypting the parameters of the device model based on a preset encryption method;
and in response to determining that the encryption is complete, transmitting the encrypted parameters of the device model to the central server.
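Claim 4 leaves the "preset encryption method" open. The round trip below only shows the shape of the step, using a toy SHA-256 counter-mode keystream; this is NOT a vetted cipher, and a real deployment would use an established scheme (e.g. AES-GCM, or secure-aggregation protocols common in federated learning).

```python
import hashlib
import json
import secrets

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 over key || counter. A stand-in only --
    # do not use this in place of a real authenticated cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_params(params: dict, key: bytes) -> bytes:
    # Serialize the device-model parameters, then XOR with the keystream.
    plain = json.dumps(params, sort_keys=True).encode()
    ks = _keystream(key, len(plain))
    return bytes(a ^ b for a, b in zip(plain, ks))

def decrypt_params(blob: bytes, key: bytes) -> dict:
    ks = _keystream(key, len(blob))
    return json.loads(bytes(a ^ b for a, b in zip(blob, ks)).decode())

key = secrets.token_bytes(32)           # shared with the central server
blob = encrypt_params({"w": 2.0}, key)  # what actually goes over the wire
restored = decrypt_params(blob, key)    # server-side recovery
```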
5. The method of claim 3, wherein the controlling the central server to aggregate the parameters of the device model to obtain aggregated parameters comprises:
controlling the central server to aggregate the parameters of the device model based on a preset aggregation algorithm to obtain the aggregated parameters.
6. The method according to claim 3, wherein the generating the scene character recognition model based on the aggregated parameters comprises:
determining the aggregated parameters as the parameters of the initial model to obtain the scene character recognition model.
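Claim 6 then amounts to loading the aggregated parameters back into the initial model's structure. A one-function sketch, with all names illustrative:

```python
def finalize_model(initial_model: dict, aggregated_params: dict) -> dict:
    # Determine the aggregated parameters as the initial model's
    # parameters; the result is the scene character recognition model.
    model = dict(initial_model)
    model.update(aggregated_params)
    return model

final = finalize_model({"w": 0.0, "b": 0.0}, {"w": 2.5})
print(final)  # {'w': 2.5, 'b': 0.0}
```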
7. A scene character recognition apparatus, comprising:
a processing unit configured to pre-process the acquired input image to obtain a processed input image;
a generating unit configured to input the processed input image to a pre-trained scene character recognition model, and output a recognition result, wherein the scene character recognition model is obtained by joint learning training, and the training of the scene character recognition model comprises:
in response to receiving a training request of a target user, selecting at least one device as a target device;
controlling the target device to start training;
in response to determining that the training is complete, obtaining parameters of the trained device model;
generating the scene character recognition model based on the parameters of the device model;
a display unit configured to transmit the recognition result to a target display device and control the target display device to display the recognition result.
8. The apparatus according to claim 7, wherein said controlling the target device to start training comprises:
obtaining an initial model and model parameters of the initial model;
transmitting the model parameters to the target device;
controlling the target device to start training based on the local data of the target device.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011357602.4A CN112434620B (en) | 2020-11-26 | 2020-11-26 | Scene text recognition method, device, equipment and computer readable medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112434620A true CN112434620A (en) | 2021-03-02 |
CN112434620B CN112434620B (en) | 2024-03-01 |
Family
ID=74699335
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011357602.4A Active CN112434620B (en) | 2020-11-26 | 2020-11-26 | Scene text recognition method, device, equipment and computer readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112434620B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR940024622A (en) * | 1993-04-08 | 1994-11-18 | 김진형 | Hangul handwritten character recognition device and method |
CN109902678A (en) * | 2019-02-12 | 2019-06-18 | 北京奇艺世纪科技有限公司 | Model training method, character recognition method, device, electronic equipment and computer-readable medium |
CN110795477A (en) * | 2019-09-20 | 2020-02-14 | 平安科技(深圳)有限公司 | Data training method, device and system |
CN111210022A (en) * | 2020-01-09 | 2020-05-29 | 深圳前海微众银行股份有限公司 | Backward model selection method, device and readable storage medium |
CN111598254A (en) * | 2020-05-22 | 2020-08-28 | 深圳前海微众银行股份有限公司 | Federal learning modeling method, device and readable storage medium |
CN111898424A (en) * | 2020-06-19 | 2020-11-06 | 贝壳技术有限公司 | Character recognition model training method and device, electronic equipment and storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113762163A (en) * | 2021-09-09 | 2021-12-07 | 杭州澳亚生物技术股份有限公司 | GMP workshop intelligent monitoring management method and system |
CN114222181A (en) * | 2021-11-11 | 2022-03-22 | 北京达佳互联信息技术有限公司 | Image processing method, device, equipment and medium |
CN114222181B (en) * | 2021-11-11 | 2024-03-12 | 北京达佳互联信息技术有限公司 | Image processing method, device, equipment and medium |
WO2023138361A1 (en) * | 2022-01-21 | 2023-07-27 | 北京有竹居网络技术有限公司 | Image processing method and apparatus, and readable storage medium and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN112434620B (en) | 2024-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112434620B (en) | Scene text recognition method, device, equipment and computer readable medium | |
CN109981787B (en) | Method and device for displaying information | |
CN111199037B (en) | Login method, system and device | |
CN111784712B (en) | Image processing method, device, equipment and computer readable medium | |
CN114006769B (en) | Model training method and device based on transverse federal learning | |
CN112434619B (en) | Case information extraction method, apparatus, device and computer readable medium | |
CN116150249B (en) | Table data export method, apparatus, electronic device and computer readable medium | |
CN110046670B (en) | Feature vector dimension reduction method and device | |
CN112559898A (en) | Item information sending method, item information sending device, electronic equipment and computer readable medium | |
CN111787041A (en) | Method and apparatus for processing data | |
CN112507676B (en) | Method and device for generating energy report, electronic equipment and computer readable medium | |
CN115022328A (en) | Server cluster, server cluster testing method and device and electronic equipment | |
CN112543228A (en) | Data transmission method and device, electronic equipment and computer readable medium | |
CN111797931A (en) | Image processing method, image processing network training method, device and equipment | |
CN112488947A (en) | Model training and image processing method, device, equipment and computer readable medium | |
CN111709784A (en) | Method, apparatus, device and medium for generating user retention time | |
CN111580890A (en) | Method, apparatus, electronic device, and computer-readable medium for processing features | |
CN112781581B (en) | Method and device for generating path from moving to child cart applied to sweeper | |
CN113077353B (en) | Method, device, electronic equipment and medium for generating nuclear insurance conclusion | |
CN110909382B (en) | Data security control method and device, electronic equipment and computer readable medium | |
CN117633848B (en) | User information joint processing method, device, equipment and computer readable medium | |
CN114697206B (en) | Method, device, equipment and computer readable medium for managing nodes of Internet of things | |
CN112488943B (en) | Model training and image defogging method, device and equipment | |
CN111835846B (en) | Information updating method and device and electronic equipment | |
CN116453197A (en) | Face recognition method, device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | | Effective date of registration: 2024-01-05. Applicant after: Xinao Xinzhi Technology Co.,Ltd., Science and Technology Innovation Base 2101, Hangyidao Free Trade Zone, Langfang Airport Economic Zone, Daxing Airport Area, China (Hebei) Pilot Free Trade Zone, Langfang City, Hebei Province, 065001. Applicant before: ENNEW DIGITAL TECHNOLOGY Co.,Ltd., 10th floor, Motorola Building, 1 Wangjing East Road, Chaoyang District, Beijing, 100020. |
GR01 | Patent grant | ||