CN110070081A - Automatic information input method, device, storage medium and electronic equipment - Google Patents
- Publication number
- CN110070081A (application CN201910189500.7A)
- Authority
- CN
- China
- Prior art keywords
- scan image
- information
- image
- text
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Quality & Reliability (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Character Input (AREA)
Abstract
The present disclosure relates to an automatic information input method, device, storage medium, and electronic equipment, belonging to the technical field of applied machine learning. The method comprises: on receiving a target-information input instruction for a document of a preset type, obtaining a scan image of the document and identifying the clarity category of the scan image; when the clarity category is recognizable, inputting the document-type data, the scan image data of the document, and the target-information data together into a pre-trained machine learning model to obtain the location of the target information within the scan image; cropping the partial region image at that location and recognizing the text in it; and entering the recognized text at the target position. By training a machine learning model, the disclosure automatically and accurately locates the target input information in the scan image of the input file, effectively improving the efficiency and accuracy of target-file data entry while reducing wasted labor cost.
Description
Technical field
The present disclosure relates to the technical field of applied machine learning, and in particular to an automatic information input method, device, storage medium, and electronic equipment.
Background technique
Generally, data entry means searching an information collection containing many items, such as a file, for target information, then extracting the target information and entering it at a specified position.
At present, in enterprises such as banks, a corporate client fills in forms and provides various corporate certificate documents when handling operations such as opening an account. A teller first completes the account-opening procedure, then scans all of the information into the system, and dedicated entry staff manually read the scanned pictures and type the information into the system. This is time-consuming and laborious; moreover, different business types and different form types require different information, so manual searching is inefficient, labor costs are high, and the error rate is high. At present, enterprises such as banks have no method for automatically and accurately finding and entering the required information according to the business type and form type.
Therefore, a new automatic information input method, device, storage medium, and electronic equipment are needed.
It should be noted that the information disclosed in the above background section is only intended to enhance understanding of the background of the disclosure, and may therefore include information that does not constitute prior art known to a person of ordinary skill in the art.
Summary of the invention
The purpose of the disclosure is to provide an automatic information input scheme that, at least to some extent, automatically, accurately, and efficiently locates and obtains the target information and enters it at the target position, while reducing labor cost and labor intensity.
According to one aspect of the disclosure, an automatic information input method is provided, comprising:
on receiving a target-information input instruction for a target document of a preset type, obtaining a scan image of the target document;
identifying the clarity category of the scan image, the clarity category being either recognizable or unrecognizable;
when the clarity category of the scan image is recognizable, inputting the type data of the target document, the scan image data of the target document, and the target-information data together into a pre-trained machine learning model to obtain the location information of the target information within the scan image;
obtaining, from the scan image, the region image at the position corresponding to the location information, recognizing the text in the region image by optical character recognition, and converting the text in the region image into a text string;
entering the text string converted from the text in the region image at the target input position.
In an exemplary embodiment of the disclosure, after receiving the target-information input instruction for the target document of the preset type and obtaining the scan image of the target document, the method further comprises:
identifying the text points at the upper-left, lower-left, upper-right, and lower-right corners in the scan image of the document;
cropping, using the four identified corner positions, the document image containing all of the text, and using it as the source image of the scan image data input into the pre-trained machine learning model.
In an exemplary embodiment of the disclosure, identifying the clarity category of the scan image, the clarity category being either recognizable or unrecognizable, comprises:
according to the preset type of the scan image, looking up in a database the standard scan image corresponding to the preset type;
subtracting the pixel value of the standard scan image from the pixel value of the pixels of the scan image to obtain a pixel-value difference;
if the difference is positive, the clarity of the scan image is recognizable;
if the difference is negative, the clarity of the scan image is unrecognizable.
In an exemplary embodiment of the disclosure, subtracting the pixel value of the standard scan image from the pixel value of the pixels of the scan image to obtain the pixel-value difference comprises:
obtaining the sum of the pixel values of each pixel of the scan image as a first pixel-value sum;
obtaining the sum of the pixel values of each pixel of the standard scan image as a second pixel-value sum;
subtracting the second pixel-value sum from the first pixel-value sum to obtain the total pixel-value difference, which serves as the pixel-value difference.
In an exemplary embodiment, the scanned image has the same size as the standard scan image of the preset type in the database; both images establish a rectangular coordinate system with the lower-left pixel as the origin, and each pixel in an image is a coordinate point. Identifying the clarity category of the scan image, the clarity category being either recognizable or unrecognizable, comprises:
subtracting, from the pixel value of each pixel of the scanned document image, the pixel value of the pixel at the same coordinate of the standard scan image of the preset type in the database, obtaining a difference for each pixel;
if the number of positive differences exceeds a predetermined threshold, judging the image clarity to be recognizable;
if the number of negative differences exceeds a predetermined threshold, judging the image clarity to be unrecognizable.
In an exemplary embodiment of the disclosure, a placeholder associated with the location information is set according to the location information corresponding to the partial region image, and the placeholder is placed at the target input position. Entering the text string converted from the text in the region image at the target input position comprises:
looking up the placeholder associated with the location information according to the location information corresponding to the partial region image;
replacing the placeholder associated with the location information with the text string converted from the text in the region image, thereby entering the text string at the target input position.
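The placeholder lookup and replacement described above can be sketched as follows. The disclosure does not specify a concrete representation, so the template format, the `{loc_n}` token syntax, and all names here are illustrative assumptions:

```python
def fill_placeholders(template: str, placeholder_map: dict, recognized: dict) -> str:
    """Replace each location's placeholder with the text recognized there.

    template        -- target record containing placeholder tokens, e.g. "Name: {loc_1}"
    placeholder_map -- location id -> placeholder token planted at the target position
    recognized      -- location id -> OCR text string for that region
    """
    for loc_id, token in placeholder_map.items():
        # Substituting the token enters the text at the target input position.
        template = template.replace(token, recognized.get(loc_id, ""))
    return template
```

For example, `fill_placeholders("Name: {loc_1}", {"loc_1": "{loc_1}"}, {"loc_1": "ACME Co"})` yields `"Name: ACME Co"`.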
In an exemplary embodiment of the disclosure, the machine learning model is trained as follows:
collecting a set of sample groups of document-type data, document scan image data, and target-information data, where each sample group has the location of the target information in the scan image calibrated in advance;
inputting each sample group's document-type data, document scan image data, and target-information data into the machine learning model, obtaining for each sample group the location information of the target information in the scan image of that document type;
if, for any sample group, the location information of the target information output by the machine learning model is inconsistent with the location information calibrated in advance for that sample group, adjusting the coefficients of the machine learning model until the output location of the target information in the image is consistent with the advance calibration for all sample groups, at which point training ends.
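The coefficient-adjustment loop described above can be sketched with a simple model. The disclosure fixes no architecture, so the linear bounding-box regressor, the feature encoding, and the stopping tolerance below are all assumptions:

```python
import numpy as np

def train_locator(features, boxes, lr=0.01, epochs=5000, tol=1e-3):
    """Adjust coefficients until predicted locations match the calibrated ones.

    features -- (n, d) encoded sample groups (document type, image data, target id)
    boxes    -- (n, 4) pre-calibrated locations, e.g. (x, y, width, height)
    """
    X = np.asarray(features, float)
    Y = np.asarray(boxes, float)
    W = np.zeros((X.shape[1], 4))
    for _ in range(epochs):
        pred = X @ W
        err = pred - Y
        if np.max(np.abs(err)) < tol:     # consistent with every sample group: done
            break
        W -= lr * X.T @ err / len(X)      # adjust coefficients (squared-error gradient)
    return W
```

The inconsistency check and coefficient update mirror the patent's loop; in practice the model would be a far richer detector than a linear map.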
In an exemplary embodiment of the disclosure, after identifying the clarity category of the scan image, the clarity category being either recognizable or unrecognizable, the method further comprises:
if the clarity category of the scan image is recognizable, recognizing the text in the scan image and converting the text in the scan image into a text string;
matching the target information from the text string using a dedicated target-information template;
looking up the target position associated with the target information;
saving the target information at the target position.
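The dedicated target-information template in this alternative path could be realized as a pattern over the full-document OCR text. A regular expression with named groups is one plausible realization; the pattern, field names, and record layout below are illustrative assumptions:

```python
import re

def extract_and_enter(text: str, pattern: str, position_index: dict, record: dict) -> dict:
    """Match target information from OCR text and save it at its target position.

    pattern        -- regex with named groups, standing in for the dedicated template
    position_index -- matched field name -> target position key in the record
    """
    m = re.search(pattern, text)
    if m:
        for field, value in m.groupdict().items():
            record[position_index[field]] = value   # save at the associated position
    return record
```

For instance, `extract_and_enter("Credit Code: 91110108MA01", r"Credit Code:\s*(?P<code>\w+)", {"code": "field_code"}, {})` returns `{"field_code": "91110108MA01"}`.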
According to one aspect of the disclosure, an automatic information input device is provided, comprising:
an obtaining module, which, on receiving a target-information input instruction for a target document of a preset type, obtains a scan image of the target document;
an identification module, which identifies the clarity category of the scan image, the clarity category being either recognizable or unrecognizable;
a judgment module, which, when the clarity category of the scan image is recognizable, inputs the type data of the target document, the scan image data of the target document, and the target-information data together into a pre-trained machine learning model to obtain the location information of the target information within the scan image;
a conversion module, which obtains, from the scan image, the region image at the position corresponding to the location information, recognizes the text in the region image by optical character recognition, and converts the text in the region image into a text string;
an entry module, which enters the text string converted from the text in the region image at the target input position.
According to one aspect of the disclosure, a computer-readable storage medium is provided, on which an automatic information entry program is stored; when the automatic information entry program is executed by a processor, the method of any one of claims 1-7 is realized.
According to one aspect of the disclosure, an electronic equipment is provided, comprising:
a processor; and
a memory for storing an automatic information entry program for the processor, wherein the processor is configured to perform the method of any one of claims 1-7 by executing the automatic information entry program.
With the automatic information input method and device of the disclosure:
First, on receiving a target-information input instruction for a target document of a preset type, the scan image of the target document is obtained, and the clarity category of the scan image is identified as recognizable or unrecognizable. When the target information in a document needs to be entered, obtaining an image of the scanned version of the document allows it to be transmitted and viewed with various electronic equipment, and allows the text on it to be recognized by optical character recognition in a later step. Since optical character recognition can accurately recognize the text on an image only if the scan image is sufficiently clear, clarity identification is performed first.
Then, when the clarity category of the scan image is recognizable, the type data of the target document, the scan image data of the target document, and the target-information data are input together into a pre-trained machine learning model to obtain the location information of the target information within the scan image. A document usually contains a great deal of content, and manual searching is very inefficient; however, documents used for handling business usually have a predetermined format, that is, the target information to be entered usually sits at a fixed position. By training a machine learning model on image information such as scan image size, document type, and target information, the model can accurately identify and output the location of the target information in a document of a given type, and the target information is obtained in subsequent steps.
Then, the region image at the position corresponding to the location information is obtained from the scan image, the text in the region image is recognized by optical character recognition, and the text in the region image is converted into a text string. Using the obtained location of the target information to crop the partial region image at that position excludes all other regions of the document, avoids recognition interference from other parts, and improves recognition efficiency and accuracy.
Finally, the text string converted from the text in the partial region image is entered at the target input position. Entering the text of each position, that is, the target information, at the corresponding target position accurately realizes information entry.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the disclosure.
Detailed description of the invention
The drawings herein are incorporated into and form part of this specification, show embodiments consistent with the disclosure, and together with the specification serve to explain the principles of the disclosure. Evidently, the drawings in the following description are only some embodiments of the disclosure; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 schematically shows a flow chart of an automatic information input method.
Fig. 2 schematically shows an example application-scenario diagram of an automatic information input method.
Fig. 3 schematically shows a flow chart of a method for judging the clarity category of a scan image.
Fig. 4 schematically shows a block diagram of an automatic information input device.
Fig. 5 schematically shows a block diagram of an example electronic equipment for realizing the above automatic information input method.
Fig. 6 schematically shows a computer-readable storage medium for realizing the above automatic information input method.
Specific embodiment
Example embodiments will now be described more fully with reference to the drawings. However, example embodiments can be implemented in many forms and should not be understood as limited to the examples set forth here; rather, these embodiments are provided so that the disclosure will be more thorough and complete and will fully convey the ideas of the example embodiments to those skilled in the art. The described features, structures, or characteristics can be combined in any suitable manner in one or more embodiments. In the following description, many specific details are provided to give a full understanding of the embodiments of the disclosure. Those skilled in the art will appreciate, however, that the technical solution of the disclosure can be practiced while omitting one or more of the specific details, or with other methods, components, devices, steps, and so on. In other cases, well-known solutions are not shown or described in detail, to avoid distracting from and obscuring the aspects of the disclosure.
In addition, the drawings are only schematic illustrations of the disclosure and are not necessarily drawn to scale. The same reference numerals in the figures denote the same or similar parts, so their repeated description is omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities can be realized in software form, in one or more hardware modules or integrated circuits, or in different network and/or processor devices and/or microcontroller devices.
This example embodiment first provides an automatic information input method, which can run on a server, a server cluster, a cloud server, or the like; of course, those skilled in the art can also run the method of the invention on other platforms as required, and this example embodiment places no particular limitation on this. As shown in Fig. 1, the automatic information input method may comprise the following steps:
Step S110: on receiving a target-information input instruction for a target document of a preset type, obtain a scan image of the target document.
Step S120: identify the clarity category of the scan image, the clarity category being either recognizable or unrecognizable.
Step S130: when the clarity category of the scan image is recognizable, input the type data of the target document, the scan image data of the target document, and the target-information data together into a pre-trained machine learning model to obtain the location information of the target information within the scan image.
Step S140: obtain, from the scan image, the region image at the position corresponding to the location information, recognize the text in the region image by optical character recognition, and convert the text in the region image into a text string.
Step S150: enter the text string converted from the text in the region image at the target input position.
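Steps S110-S150 can be sketched as a single pipeline. The clarity check, locator model, OCR engine, and entry mechanism are left abstract by the disclosure, so they are passed in here as callables; the function signatures and the list-of-strings image representation are assumptions for illustration:

```python
def auto_enter(scan_image, doc_type, target_info,
               clarity_fn, locate_fn, ocr_fn, enter_fn):
    """Run the S120-S150 pipeline on an already-obtained scan image (S110)."""
    if not clarity_fn(scan_image):                             # S120: clarity gate
        return None
    x, y, w, h = locate_fn(doc_type, scan_image, target_info)  # S130: locate target
    region = [row[x:x + w] for row in scan_image[y:y + h]]     # S140: crop region
    text = ocr_fn(region)                                      # S140: OCR region only
    enter_fn(target_info, text)                                # S150: enter at target
    return text
```

With a toy 3x5 "image" and stub callables, locating the box (1, 1, 3, 1) in `["ABCDE", "FGHIJ", "KLMNO"]` crops and "recognizes" `"GHI"`.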
In the above automatic information input method: first, on receiving a target-information input instruction for a target document of a preset type, the scan image of the target document is obtained and its clarity category (recognizable or unrecognizable) is identified; obtaining an image of the scanned version of the document allows it to be transmitted and viewed with various electronic equipment and its text to be recognized by optical character recognition later, which requires sufficient clarity, hence the clarity identification. Then, when the clarity category is recognizable, the type data of the target document, the scan image data, and the target-information data are input together into a pre-trained machine learning model to obtain the location of the target information in the scan image; since business documents usually follow a predetermined format that places the target information at a fixed position, a model trained on image information such as scan image size, document type, and target information can accurately output that location. Then, the region image at that position is obtained from the scan image, its text is recognized by optical character recognition and converted into a text string; cropping to the located region excludes all other regions of the document, avoiding interference from other parts while improving recognition efficiency and accuracy. Finally, the text string converted from the partial region image is entered at the target input position, so that the target information of each position is entered at the corresponding target position and information entry is realized accurately.
Each step of the above automatic information input method in this example embodiment will now be explained in detail with reference to the drawings.
In step S110, on receiving a target-information input instruction for a target document of a preset type, a scan image of the target document is obtained.
In this example embodiment, as shown in Fig. 2, after the server 210 receives the target-information input instruction for the preset type, it controls the scanning device 220 to scan the paper document into a scanned-version image, which is then automatically fetched into the server 210. In one example, the server 210 can also directly obtain a previously scanned image from another server. The server 210 can be any of various terminal devices with processing capability, such as a computer or mobile phone, and the scanning device 220 can be any of various devices with scanning capability, such as a mobile phone or scanner; no particular limitation is placed on either here. When the target information in a document needs to be entered, obtaining an image of the scanned version of the document allows it to be transmitted and viewed with various electronic equipment, and allows the text on the document scan image to be recognized by optical character recognition in a later step.
In step S120, the clarity category of the scan image is identified, the clarity category being either recognizable or unrecognizable.
In this example embodiment, recognizing the text on an image by optical character recognition requires the scan image to be sufficiently clear, so clarity identification is performed after the scan image of the document is obtained. Ensuring sufficient clarity ensures that optical character recognition can accurately recognize the text, effectively guaranteeing recognition accuracy.
It is described to work as the target information record received to the destination document of preset kind in a kind of originally exemplary embodiment
Enter to instruct, after the scan image for obtaining the destination document, the method also includes:
Identify the text point in the upper left corner in the scan image of document, the lower left corner, the upper right corner, the lower right corner;
In the file and picture that the four positions interception identified includes all texts, as the preparatory trained machine of input
The source images of the scan image data of learning model.
The image that preliminary sweep obtains all be with and document used in normal size the proportional image of paper, but
To be in scanning process, which can not be, there are some unexpected situations, causes document irregular, by identifying using image recognition
The text point in the upper left corner, the lower left corner, the upper right corner, the lower right corner in image;Using the four positions interception identified comprising all
The file and picture of text.It can be navigated to using the position of the text in the upper left corner, the lower left corner, the upper right corner, the lower right corner in document
The image of the size of particular size comprising all texts, and then the position of target information is accurately obtained in subsequent step.
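Once the four corner text points are identified (by whatever detector is used, which the disclosure leaves open), the crop itself is a bounding-box slice. A minimal sketch, assuming the corners are given as `(column, row)` array indices rather than the lower-left-origin coordinates used elsewhere in the text:

```python
import numpy as np

def crop_to_text(image, corners):
    """Crop to the bounding box of the four corner text points.

    image   -- 2-D pixel array of the scan
    corners -- [(col, row), ...] for the upper-left, lower-left,
               upper-right, and lower-right text points (order irrelevant)
    """
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    # Slice out the region containing all of the text, inclusive of the corners.
    return image[min(ys):max(ys) + 1, min(xs):max(xs) + 1]
```

This yields the source image fed to the machine learning model regardless of where on the page the (possibly skewed or offset) document landed.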
In an exemplary embodiment, as shown in Fig. 3, identifying the clarity category of the scan image, the clarity category being either recognizable or unrecognizable, comprises:
Step S310: according to the preset type of the scan image, look up in a database the standard scan image corresponding to the preset type;
Step S320: subtract the pixel value of the standard scan image from the pixel value of the pixels of the scan image to obtain a pixel-value difference;
Step S330: if the difference is positive, the clarity of the scan image is recognizable;
Step S340: if the difference is negative, the clarity of the scan image is unrecognizable.
The standard scan image corresponding to a preset type in the database is an image of that document type, obtained through working practice and experimental comparison, with the minimum recognizability at which optical character recognition is still usable; its pixels characterize the pixel level of the minimum resolution. The clarity judgment can therefore be made from the difference between the pixel values of the scan image of the document and those of the minimally recognizable standard scan image of the corresponding document type. If the difference is positive, the clarity of the scan image is recognizable; if the difference is negative, the pixels of the scan image are lower than those of the standard scan image, and the clarity of the scan image is unrecognizable. This effectively guarantees the text-recognition accuracy of the subsequent optical character recognition, improving entry accuracy and efficiency.
In an exemplary embodiment, subtracting the pixel value of the standard scan image from the pixel value of the pixels of the scan image to obtain the pixel-value difference comprises:
obtaining the sum of the pixel values of each pixel of the scan image as a first pixel-value sum;
obtaining the sum of the pixel values of each pixel of the standard scan image as a second pixel-value sum;
subtracting the second pixel-value sum from the first pixel-value sum to obtain the total pixel-value difference, which serves as the pixel-value difference.
Judging the clarity category of the scan image from the difference of the total pixel values of the two images allows a whole-image comparison from the perspective of the total pixel magnitude of the two images, ignoring slight variations to some extent, and realizes efficient, accurate clarity judgment.
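The total-pixel-value comparison is only a few lines. This sketch follows the sign convention in the text (positive difference means recognizable, negative means unrecognizable); the disclosure does not say how a difference of exactly zero is classified, so this sketch groups it with unrecognizable:

```python
import numpy as np

def clarity_by_total(scan, standard):
    """Compare the scan's total pixel value against the minimum-resolution
    standard image for its document type."""
    # First pixel-value sum minus second pixel-value sum.
    diff = int(scan.sum()) - int(standard.sum())
    return "recognizable" if diff > 0 else "unrecognizable"
```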
In an exemplary embodiment, the scanned image has the same dimensions as the standard scan image of the preset type in the database. Both images establish a rectangular coordinate system with the lower-left pixel as the origin, so that every pixel in an image is a coordinate point. Identifying the clarity class of the scan image, the clarity class including recognizable and unrecognizable, comprises:
subtracting, from the pixel value of each pixel of the scanned document image, the pixel value of the pixel at the same coordinate in the standard scan image of the preset type in the database, obtaining a difference for each pixel;
if the number of positive differences exceeds a predetermined threshold, judging the image clarity recognizable;
if the number of negative differences exceeds the predetermined threshold, judging the image clarity unrecognizable.
A rectangular coordinate system is established with the lower-left pixel of the scan image as the origin (0, 0), and each pixel of the image is represented by a point in this system: (0, 1) represents the pixel with abscissa 0 and ordinate 1, (1, 0) represents the pixel with abscissa 1 and ordinate 0, and so on, with (m, n) representing the pixel with abscissa m and ordinate n, where m and n are non-negative integers. Images in the database are represented in the same way, so the pixel values at each coordinate can be compared accurately. When the number of positive differences exceeds the predetermined threshold, the image clarity is judged recognizable: for example, with 100 pixels in total and a predetermined threshold of 80, if 85 of the differences are positive, the image clarity is judged recognizable.
In step S130, when the clarity class of the scan image is recognizable, the type data of the target document, the scan image data of the target document, and the target information data are input together into a machine learning model trained in advance, obtaining the location information of the target information in the scan image.
In this exemplary embodiment, since the image has been identified as recognizable, the required information can be obtained accurately. The factors relevant to the target information, namely the document type data, the document's scan image data, and the target information data, are input into the trained machine learning model, which automatically and accurately yields, for the scan image of that document type, the location information of the target information corresponding to that business. The location information can be that of the smallest image region that contains the target information and no other information; that region, and in turn the target information, can then be obtained accurately. This effectively improves recognition accuracy and recognition efficiency and reduces labor cost.
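As a rough sketch of the inference step, the three inputs could be encoded as feature vectors and fed to a model that outputs the coordinates of the target information. Everything here, including the linear model structure, the dictionary of coefficient vectors, and the encoding, is a hypothetical illustration; the patent does not fix a model architecture:

```python
def predict_location(model, doc_type_vec, image_vec, target_info_vec):
    """Combine document type, scan image, and target information features,
    then apply a linear model to obtain the (x, y) location of the target
    information in the scan image."""
    features = doc_type_vec + image_vec + target_info_vec  # list concatenation
    x = sum(w * f for w, f in zip(model["wx"], features))
    y = sum(w * f for w, f in zip(model["wy"], features))
    return (x, y)
```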
In an exemplary embodiment, the training method of the machine learning model is:
collecting a set of sample groups of document type data, document scan image data, and target information data, each sample group having the location of the target information in the scan image calibrated in advance;
inputting the document type data, document scan image data, and target information data of each sample group into the machine learning model, obtaining, for each sample group, the location of the target information in the scan image of that document type;
if, for any sample group, the location of the target information in the scan image output by the machine learning model is inconsistent with the location calibrated in advance for that sample group, adjusting the coefficients of the machine learning model until the output locations of the target information in the images are consistent with the calibrations of all sample groups, at which point training ends.
The machine learning model is trained on sample groups of document type data, document scan image data, and target information data whose target-information locations in scan images of a given document type have been calibrated in advance. The position of the target information in a document image of each type is closely related to the document type and, to a certain proportion, to the original size of the scan image; these relationships between the target information and the document accurately guide its acquisition, so an accurate machine learning model can be trained from the document type data, document scan image data, and target information data of the sample groups.
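The train-until-consistent procedure could be sketched as below. The linear model, learning rate, and tolerance are assumptions made for illustration; the patent only specifies adjusting the coefficients until the output locations match all calibrations:

```python
def train_location_model(samples, lr=0.01, max_epochs=2000, tol=1e-3):
    """samples: list of (feature_vector, calibrated_location) pairs, the
    location being a single pre-calibrated coordinate for simplicity.
    Adjusts coefficients until every prediction matches its calibration."""
    n = len(samples[0][0])
    w = [0.0] * n  # model coefficients
    for _ in range(max_epochs):
        max_err = 0.0
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            max_err = max(max_err, abs(err))
            for i in range(n):            # coefficient adjustment step
                w[i] -= lr * err * x[i]
        if max_err < tol:                 # consistent with all calibrations
            break
    return w
```

The loop mirrors the described procedure: predict a location for each sample group, compare it with the pre-calibrated location, adjust coefficients on any inconsistency, and stop once all sample groups agree.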
In step S140, after the area image of the position corresponding to the location information is obtained from the scan image according to the location information, the text in the area image is recognized by means of optical character recognition, converting the text in the area image to text data.
In this exemplary embodiment, existing optical character recognition can accurately recognize the text in an image. By obtaining only the image of the partial region at the target position, only the text in that part of the image is recognized. This is not only convenient to obtain but also effectively avoids interference from other information.
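Extracting the area image before running OCR could be done as below. This is a pure-Python sketch with the image as a list of rows; the (left, top, width, height) location format is an assumption, and an OCR engine such as Tesseract would then be applied only to the returned region:

```python
def crop_region(image, location):
    """image: 2-D list of pixel rows; location: (left, top, width, height)
    of the smallest region that holds the target information.
    Returning only this region keeps surrounding text out of the OCR input."""
    left, top, width, height = location
    return [row[left:left + width] for row in image[top:top + height]]
```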
In step S150, the text converted from the text in the area image is entered into the target entry position.
In this exemplary embodiment, after the target information is obtained through optical character recognition, it is saved to the entry position corresponding to the target information; that position can be determined through a placeholder set in association with the target information. Accurate entry is then achieved by replacing the associated placeholder with the target information.
In an exemplary embodiment, entering the text converted from the text in the partial region image into the target entry position comprises:
obtaining the target position corresponding to the text converted from the text in the partial region image;
entering the converted text into the target position.
In an exemplary embodiment, a placeholder associated with the location information is set according to the location information corresponding to the partial region image, the placeholder being set at the target entry position. Entering the text converted from the text in the area image into the target entry position comprises:
looking up the placeholder associated with the location information according to the location information corresponding to the partial region image;
replacing the placeholder associated with the location information with the text converted from the text in the area image, thereby entering the text into the target entry position.
Since the position of the target information in the scan image of each type of document is uniquely fixed, a placeholder associated with the location information of the partial region image is set at the target position; the target position can thus be accurately associated with the position of the target information in the scan image, effectively improving the accuracy of information entry.
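The lookup-and-replace entry step can be sketched as follows. The dictionary keyed by location and the brace-style placeholder string are illustrative assumptions, not forms fixed by the source:

```python
def enter_target_info(document, placeholders, location, text):
    """placeholders: dict mapping a location key (the position of the
    target information in the scan image) to the placeholder string set
    at the corresponding target entry position in `document`."""
    placeholder = placeholders[location]        # look up the bound placeholder
    return document.replace(placeholder, text)  # substitute the OCR text

# Hypothetical usage: the name field is bound to location (120, 40).
filled = enter_target_info("Name: {NAME}", {(120, 40): "{NAME}"},
                           (120, 40), "Zhang San")
```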
In this exemplary embodiment, after identifying the clarity class of the scan image, the clarity class including recognizable and unrecognizable, the method further comprises:
if the clarity class of the scan image is recognizable, recognizing the text in the scan image and converting the text in the scan image to text data;
matching the target information from the text using a dedicated target-information template;
looking up the target position associated with the target information;
saving the target information to the target position.
The text in the scan image is converted to text data by means of optical character recognition, and a dedicated target-information matching template is then applied. For example, a template of the form name + *** + nationality can accurately match the content lying between "name" and "nationality" in the document; the target position uniquely associated with the target information template is then looked up, achieving accurate entry of the target information.
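The name + *** + nationality template can be realized with a regular expression that captures the content between the two anchor labels. This is a sketch; the exact label text and separator handling are assumptions:

```python
import re

def match_between(text, before, after):
    """Extract the content lying between two anchor labels in the OCR
    text, e.g. the value between 'Name' and 'Nationality'."""
    pattern = re.escape(before) + r"[:：\s]*(.*?)\s*" + re.escape(after)
    m = re.search(pattern, text, re.DOTALL)
    return m.group(1).strip() if m else None
```

The lazy group `(.*?)` stops at the first occurrence of the trailing label, so only the field between the two labels is captured.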
The present disclosure also provides an automatic information entry device. As shown in Fig. 4, the automatic information entry device may include an acquisition module 410, an identification module 420, a judgment module 430, a conversion module 440, and an entry module 450. Wherein:
the acquisition module 410 obtains, when a target information entry instruction for a target document of a preset type is received, the scan image of the target document;
the identification module 420 identifies the clarity class of the scan image, the clarity class including recognizable and unrecognizable;
the judgment module 430 inputs, when the clarity class of the scan image is recognizable, the type data of the target document, the scan image data of the target document, and the target information data together into a machine learning model trained in advance, obtaining the location information of the target information in the scan image;
the conversion module 440 recognizes, after the area image of the position corresponding to the location information is obtained from the scan image according to the location information, the text in the area image by means of optical character recognition, converting the text in the area image to text data;
the entry module 450 enters the text converted from the text in the area image into the target entry position.
The details of each module of the above automatic information entry device have been described in detail in the corresponding automatic information entry method and are therefore not repeated here.
It should be noted that although several modules or units of the device for performing actions are mentioned in the detailed description above, such division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
In addition, although the steps of the method of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be executed in that particular order, or that all of the steps shown must be executed to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step for execution, and/or one step may be decomposed into multiple steps for execution.
Through the description of the above embodiments, those skilled in the art will readily understand that the exemplary embodiments described here may be implemented in software, or in software combined with necessary hardware. Accordingly, the technical solution according to embodiments of the present disclosure may be embodied as a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, USB flash drive, removable hard disk, etc.) or on a network, and which includes instructions to cause a computing device (which may be a personal computer, a server, a mobile terminal, a network device, etc.) to execute the method according to embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that various aspects of the present invention may be implemented as a system, a method, or a program product. Accordingly, various aspects of the present invention may take the form of a complete hardware embodiment, a complete software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may collectively be referred to here as a "circuit", a "module", or a "system".
An electronic device 500 according to this embodiment of the present invention is described below with reference to Fig. 5. The electronic device 500 shown in Fig. 5 is only an example and should not impose any limitation on the functions and scope of use of embodiments of the present invention.
As shown in Fig. 5, the electronic device 500 takes the form of a general-purpose computing device. Components of the electronic device 500 may include, but are not limited to: at least one processing unit 510, at least one storage unit 520, and a bus 530 connecting the different system components (including the storage unit 520 and the processing unit 510).
The storage unit stores program code executable by the processing unit 510, so that the processing unit 510 executes the steps of the various exemplary embodiments of the present invention described in the "Exemplary Methods" section of this specification. For example, the processing unit 510 may execute step S110 as shown in Fig. 1: when a target information entry instruction for a target document of a preset type is received, obtaining the scan image of the target document; step S120: identifying the clarity class of the scan image, the clarity class including recognizable and unrecognizable; step S130: when the clarity class of the scan image is recognizable, inputting the type data of the target document, the scan image data of the target document, and the target information data together into a machine learning model trained in advance, obtaining the location information of the target information in the scan image; step S140: after obtaining, from the scan image according to the location information, the area image of the position corresponding to the location information, recognizing the text in the area image by means of optical character recognition and converting the text in the area image to text data; step S150: entering the text converted from the text in the area image into the target entry position.
The storage unit 520 may include readable media in the form of volatile memory units, such as a random access memory (RAM) unit 5201 and/or a cache memory unit 5202, and may further include a read-only memory (ROM) unit 5203.
The storage unit 520 may also include a program/utility 5204 having a set of (at least one) program modules 5205, such program modules 5205 including but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
The bus 530 may represent one or more of several classes of bus structures, including a storage-unit bus or storage-unit controller, a peripheral bus, an accelerated graphics port, the processing unit, or a local bus using any of a variety of bus structures.
The electronic device 500 may also communicate with one or more external devices 700 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication may take place through input/output (I/O) interfaces 550. Moreover, the electronic device 500 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 500 through the bus 530. It should be understood that, although not shown in the drawings, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
Through the description of the above embodiments, those skilled in the art will readily understand that the exemplary embodiments described here may be implemented in software, or in software combined with necessary hardware. Accordingly, the technical solution according to embodiments of the present disclosure may be embodied as a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, USB flash drive, removable hard disk, etc.) or on a network, and which includes instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to execute the method according to embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium is also provided, on which a program product capable of implementing the above method of this specification is stored. In some possible embodiments, various aspects of the present invention may also be implemented in the form of a program product comprising program code which, when the program product runs on a terminal device, causes the terminal device to execute the steps according to the various exemplary embodiments of the present invention described in the "Exemplary Methods" section of this specification.
Referring to Fig. 6, a program product 600 for implementing the above method according to an embodiment of the present invention is described; it may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in connection with, an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more conductors, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
Program code contained on a readable medium may be transmitted over any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In cases involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In addition, the drawings above are merely schematic illustrations of the processing included in the method according to exemplary embodiments of the present invention and are not intended to be limiting. It is readily understood that the processing shown in the drawings does not indicate or limit the temporal order of these processes; it is also readily understood that these processes may, for example, be executed synchronously or asynchronously in multiple modules.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily conceive of other embodiments of the present disclosure. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed by the present disclosure. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure indicated by the claims.
Claims (10)
1. An automatic information entry method, characterized by comprising:
when a target information entry instruction for a target document of a preset type is received, obtaining a scan image of the target document;
identifying a clarity class of the scan image, the clarity class including recognizable and unrecognizable;
when the clarity class of the scan image is recognizable, inputting type data of the target document, scan image data of the target document, and target information data together into a machine learning model trained in advance, obtaining location information of the target information in the scan image;
after obtaining, from the scan image according to the location information, an area image of the position corresponding to the location information, recognizing the text in the area image by means of optical character recognition, converting the text in the area image to text data;
entering the text converted from the text in the area image into a target entry position.
2. The method according to claim 1, characterized in that after the target information entry instruction for the target document of the preset type is received and the scan image of the target document is obtained, the method further comprises:
identifying text points at the upper-left, lower-left, upper-right, and lower-right corners in the scan image of the document;
intercepting, at the four identified positions, the document image containing all the text, as the source image of the scan image data input to the machine learning model trained in advance.
3. The method according to claim 1, characterized in that identifying the clarity class of the scan image, the clarity class including recognizable and unrecognizable, comprises:
according to the preset type of the scan image, looking up in a database the standard scan image corresponding to the preset type;
subtracting the pixel value of the standard scan image from the pixel value of the pixels of the scan image, obtaining a pixel-value difference;
if the difference is positive, the clarity of the scan image is recognizable;
if the difference is negative, the clarity of the scan image is unrecognizable.
4. The method according to claim 3, characterized in that subtracting the pixel value of the standard scan image from the pixel value of the pixels of the scan image to obtain the pixel-value difference comprises:
obtaining the sum of the pixel values of the pixels of the scan image as a first pixel-value sum;
obtaining the sum of the pixel values of the pixels of the standard scan image as a second pixel-value sum;
subtracting the second pixel-value sum from the first pixel-value sum to obtain the total pixel-value difference, which serves as the pixel-value difference.
5. The method according to claim 1, wherein a placeholder associated with the location information is set according to the location information corresponding to the partial region image, the placeholder being set at the target entry position, characterized in that entering the text converted from the text in the area image into the target entry position comprises:
looking up the placeholder associated with the location information according to the location information corresponding to the partial region image;
replacing the placeholder associated with the location information with the text converted from the text in the area image, so as to enter the text into the target entry position.
6. The method according to claim 1, characterized in that the training method of the machine learning model is:
collecting a set of sample groups of document type data, document scan image data, and target information data, each sample group having the location of the target information in the scan image calibrated in advance;
inputting the document type data, document scan image data, and target information data of each sample group into the machine learning model, obtaining, for each sample group, the location of the target information in the scan image of that document type;
if, for any sample group, the location of the target information in the scan image output by the machine learning model is inconsistent with the location calibrated in advance for that sample group, adjusting the coefficients of the machine learning model until the output locations of the target information in the images are consistent with the calibrations of all sample groups, at which point training ends.
7. The method according to claim 1, characterized in that after identifying the clarity class of the scan image, the clarity class including recognizable and unrecognizable, the method further comprises:
if the clarity class of the scan image is recognizable, recognizing the text in the scan image and converting the text in the scan image to text data;
matching the target information from the text using a dedicated target-information template;
looking up the target position associated with the target information;
saving the target information to the target position.
8. An automatic information entry device, characterized by comprising:
an acquisition module, which obtains, when a target information entry instruction for a target document of a preset type is received, the scan image of the target document;
an identification module, which identifies the clarity class of the scan image, the clarity class including recognizable and unrecognizable;
a judgment module, which inputs, when the clarity class of the scan image is recognizable, the type data of the target document, the scan image data of the target document, and the target information data together into a machine learning model trained in advance, obtaining the location information of the target information in the scan image;
a conversion module, which recognizes, after the area image of the position corresponding to the location information is obtained from the scan image according to the location information, the text in the area image by means of optical character recognition, converting the text in the area image to text data;
an entry module, which enters the text converted from the text in the area image into the target entry position.
9. A computer-readable storage medium on which an automatic information entry program is stored, characterized in that the automatic information entry program, when executed by a processor, implements the method of any one of claims 1-7.
10. An electronic device, characterized by comprising:
a processor; and
a memory for storing an automatic information entry program of the processor; wherein the processor is configured to perform the method of any one of claims 1-7 by executing the automatic information entry program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910189500.7A CN110070081A (en) | 2019-03-13 | 2019-03-13 | Automatic information input method, device, storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110070081A true CN110070081A (en) | 2019-07-30 |
Family
ID=67366175
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910189500.7A Pending CN110070081A (en) | 2019-03-13 | 2019-03-13 | Automatic information input method, device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110070081A (en) |
2019
- 2019-03-13 CN CN201910189500.7A patent/CN110070081A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976235A (en) * | 2010-09-21 | 2011-02-16 | 天津神舟通用数据技术有限公司 | Method for automatically generating extensible Word reports based on dynamic web pages |
CN105335707A (en) * | 2015-10-19 | 2016-02-17 | 广东欧珀移动通信有限公司 | Method and apparatus for acquiring fingerprint image to be identified, and mobile terminal |
CN106650718A (en) * | 2016-12-21 | 2017-05-10 | 远光软件股份有限公司 | Certificate image identification method and apparatus |
CN108346106A (en) * | 2018-02-23 | 2018-07-31 | 平安科技(深圳)有限公司 | Bill input method, system, optical character recognition server and storage medium |
CN109308476A (en) * | 2018-09-06 | 2019-02-05 | 邬国锐 | Billing information processing method, system and computer readable storage medium |
CN109344905A (en) * | 2018-10-22 | 2019-02-15 | 王子蕴 | Automatic fault recognition method for transmission equipment based on ensemble learning |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110555410A (en) * | 2019-09-04 | 2019-12-10 | 青岛大学 | Automatic paper document digitization method |
CN110555410B (en) * | 2019-09-04 | 2021-11-02 | 青岛大学 | Automatic paper document digitization method |
CN111768471A (en) * | 2019-09-29 | 2020-10-13 | 北京京东尚科信息技术有限公司 | Method and device for editing characters in picture |
CN111311197A (en) * | 2020-03-05 | 2020-06-19 | 中国工商银行股份有限公司 | Travel data processing method and device |
CN111598122A (en) * | 2020-04-01 | 2020-08-28 | 深圳壹账通智能科技有限公司 | Data verification method and device, electronic equipment and storage medium |
CN111832551A (en) * | 2020-07-15 | 2020-10-27 | 网易有道信息技术(北京)有限公司 | Text image processing method and device, electronic scanning equipment and storage medium |
CN112560411A (en) * | 2020-12-21 | 2021-03-26 | 深圳供电局有限公司 | Intelligent personnel information input method and system |
CN113449196A (en) * | 2021-07-16 | 2021-09-28 | 北京天眼查科技有限公司 | Information generation method and device, electronic equipment and readable storage medium |
CN113449196B (en) * | 2021-07-16 | 2024-04-19 | 北京金堤科技有限公司 | Information generation method and device, electronic equipment and readable storage medium |
CN114385849A (en) * | 2022-03-24 | 2022-04-22 | 北京惠朗时代科技有限公司 | Difference display method, device, equipment and storage medium |
WO2023182713A1 (en) * | 2022-03-24 | 2023-09-28 | (주)인포플라 | Method and system for generating event for object on screen by recognizing screen information including text and non-text images on basis of artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110070081A (en) | Automatic information input method, device, storage medium and electronic equipment | |
US10878372B2 (en) | Method, system and device for association of commodities and price tags | |
CN111931664A (en) | Mixed bill image processing method and device, computer equipment and storage medium |
CN109670494B (en) | Text detection method and system with recognition confidence | |
US20210073514A1 (en) | Automated signature extraction and verification | |
CN109766879A (en) | Character detection model generation method, character detection method, device, equipment and medium |
CN107454964A (en) | Commodity recognition method and device |
CN110110320A (en) | Automatic contract review method, apparatus, medium and electronic equipment |
US11170214B2 (en) | Method and system for leveraging OCR and machine learning to uncover reuse opportunities from collaboration boards | |
US11017498B2 (en) | Ground truth generation from scanned documents | |
CN111400426B (en) | Robot position deployment method, device, equipment and medium | |
EP4138050A1 (en) | Table generating method and apparatus, electronic device, storage medium and product | |
EP3979129A1 (en) | Object recognition method and apparatus, and electronic device and storage medium | |
CN110619252A (en) | Method, device and equipment for identifying form data in picture and storage medium | |
CN115205883A (en) | Data auditing method, device, equipment and storage medium based on OCR (optical character recognition) and NLP (natural language processing) |
CN112232354A (en) | Character recognition method, device, equipment and storage medium | |
CN114418124A (en) | Method, device, equipment and storage medium for generating graph neural network model | |
CN109388935A (en) | Document verification method and device, electronic equipment and readable storage medium |
CN111723799A (en) | Coordinate positioning method, device, equipment and storage medium | |
CN109009902A (en) | Blind-guiding stick and blind-guiding method | |
US20230048495A1 (en) | Method and platform of generating document, electronic device and storage medium | |
CN110348436A (en) | Method for recognizing text information in images and related device |
CN113592981B (en) | Picture labeling method and device, electronic equipment and storage medium | |
CN114359931A (en) | Express bill identification method and device, computer equipment and storage medium | |
CN111291758B (en) | Method and device for recognizing seal characters |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
CB02 | Change of applicant information |
Address after: Room 201, Building A, No. 1 Front Bay Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong, 518000 (Qianhai Business Secretary) Applicant after: Shenzhen one ledger Intelligent Technology Co., Ltd. Address before: Room 201, Building A, No. 1 Front Bay Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong, 518000 Applicant before: Shenzhen one ledger Intelligent Technology Co., Ltd. |
|
SE01 | Entry into force of request for substantive examination | ||