CN108921158A - Method for correcting image, device and computer readable storage medium - Google Patents
Method for correcting image, device and computer readable storage medium
- Publication number
- CN108921158A CN108921158A CN201810611500.7A CN201810611500A CN108921158A CN 108921158 A CN108921158 A CN 108921158A CN 201810611500 A CN201810611500 A CN 201810611500A CN 108921158 A CN108921158 A CN 108921158A
- Authority
- CN
- China
- Prior art keywords
- image
- seal
- carried out
- detection
- aligning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/273—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion removing elements interfering with the pattern to be recognised
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image correction method, an apparatus, and a computer-readable storage medium, belonging to the technical field of image processing. The method includes: for at least one of the multiple correction steps involved in image correction, constructing at least one corresponding neural network model; generating a flow configuration file based on the multiple correction steps and the at least one neural network model; configuring, in the flow configuration file, the circulation logic between the multiple correction steps to generate an image correction flow; and, when an image correction instruction is received, correcting a target image according to the image correction flow and outputting the corrected target image. Embodiments of the present invention can correct image information carrying document material under different scenes into image information that is standardized and easy to detect and recognize, and construct neural network models by means of deep learning and apply them in the correction steps of the image correction flow, so that a higher accuracy rate is obtained for image correction.
Description
Technical field
The invention belongs to the technical field of image processing, and more particularly relates to an image correction method, an apparatus, and a computer-readable storage medium.
Background technique
Optical character recognition (OCR) is usually applied to document processing and recognition. It converts the text of various bills, newspapers and periodicals, books, manuscripts, and other printed matter into image information through optical input means such as scanning and photographing, and then uses character recognition technology to convert the image information into a form usable by a computer. Optical character recognition has long been an important technical means for assisting people in image recognition and in document reading, parsing, and processing, and is widely applied in industries such as banking, insurance, taxation, auditing, and law.
When optical character recognition is applied to a document image, certain requirements are placed on that image. If, due to device properties or operating-condition limitations, the image information captured by the optical device through scanning or photographing contains a certain degree of noise or distortion, or the document background is complex (for example, superimposed shading, watermarks, baselines, table lines, or seal interference), then text detection and recognition are strongly affected, the detection and recognition results do not match the true semantic information, and difficulty is brought to processes or specific businesses based on document recognition. It is therefore necessary to perform image correction on a document image before optical character recognition.
Traditional image correction methods have many defects. In terms of process, they lack a general, systematic flow, making it difficult to correct image information carrying document material under real scenes into image information that is standardized and easy to detect and recognize. In terms of algorithms, they lack versatility and accuracy and cannot cover images with noise and distortion under all scenes; certain algorithm units therein, such as perspective transform correction, watermark removal, and seal removal, can hardly achieve the desired effect.
Summary of the invention
In order to solve the problems in the prior art, the present invention provides an image correction method, an apparatus, and a computer-readable storage medium, which can correct image information carrying document material under different scenes into image information that is standardized and easy to detect and recognize, and which construct neural network models by means of deep learning and apply them in the correction steps of the image correction flow, so that a higher accuracy rate is obtained for image correction.
The specific technical solutions provided by the embodiments of the present invention are as follows:
In a first aspect, the present invention provides an image correction method, the method including:
for at least one of the multiple correction steps involved in image correction, constructing at least one corresponding neural network model;
generating a flow configuration file based on the multiple correction steps and the at least one neural network model;
configuring, in the flow configuration file, the circulation logic between the multiple correction steps, to generate an image correction flow;
when an image correction instruction is received, correcting a target image according to the image correction flow, and outputting the corrected target image.
In some embodiments, constructing at least one neural network model for at least one of the multiple correction steps involved in image correction includes:
training a first neural network using image samples in which edge regions are annotated, to construct an edge detection model corresponding to the perspective transform correction step, so that image edge detection is performed using the edge detection model in the perspective transform correction step; and/or
training a second neural network using single-character training samples synthesized from single-character samples and random backgrounds, to construct a text detection model corresponding to the rotation correction step, so that single-character detection is performed using the text detection model in the rotation correction step; and/or
training a third neural network using single-character training samples that are generated by rotating the single-character samples in random directions and carry rotation-direction labels, to construct a text direction detection model corresponding to the rotation correction step, so that direction detection is performed using the text direction detection model in the rotation correction step; and/or
training a fourth neural network using image samples in which seal regions are annotated, to construct a seal detection model corresponding to the seal removal step, so that seal detection is performed using the seal detection model in the seal removal step.
In some embodiments, the image correction flow sequentially includes a perspective transform correction step, a rotation correction step, a shading removal step, a seal removal step, and a table line removal step.
In some embodiments, the perspective transform correction step includes:
performing image edge detection on an image to be perspective-transform corrected, and performing straight-line fitting and connected-domain analysis on the image edges, to determine the quadrilateral region where the document material is located in the image;
performing superpixel block segmentation on the image to be perspective-transform corrected, and performing saliency detection and foreground region clustering on the superpixel blocks obtained by segmentation, to determine the foreground region and the minimum circumscribed quadrilateral of the foreground region;
performing a perspective transform on the image to be perspective-transform corrected according to the union of the calculated minimum circumscribed quadrilateral and the quadrilateral region.
In some embodiments, the rotation correction step includes:
performing single-character detection on the characters in an image to be rotation corrected, to obtain a preset number of single-character regions with the largest area;
performing direction detection on each single-character region, and correcting the image to be rotation corrected according to the direction detection results.
In some embodiments, the shading removal step includes:
performing color space conversion on an image with shading to be removed;
binarizing the color-space-converted image with shading to be removed, to obtain a foreground text region and a background watermark region;
performing a foreground mask operation on the foreground text region and the original image with shading to be removed, to retain the foreground text region and remove the background watermark region.
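As a rough illustration of the foreground mask operation described above, the sketch below binarizes a grayscale page so that dark text pixels form the foreground mask, keeps those pixels from the original image, and whitens the lighter background watermark. The fixed threshold and the assumption that text is darker than the watermark are simplifications for illustration; the patent leaves the color space conversion and binarization method unspecified.

```python
import numpy as np

def remove_shading(gray, thresh=128):
    """Keep dark foreground text, whiten lighter background shading.

    gray: 2-D uint8 array (assumed already converted from color).
    thresh: hypothetical binarization threshold separating the dark
    text foreground from the lighter watermark background.
    """
    fg_mask = gray < thresh  # foreground text region
    # Foreground mask operation: retain original text pixels,
    # replace the background watermark region with white.
    return np.where(fg_mask, gray, 255).astype(np.uint8)
```

On a page where the watermark sits around gray level 200 and the ink around 30, only the ink survives.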
In some embodiments, the seal removal step includes:
performing seal detection on an image with seals to be removed, to detect all seal regions;
extracting each seal region, and performing color space conversion on the extracted seal region;
performing color clustering and binarization on the color-space-converted seal region, to obtain a foreground seal region and a background text region;
performing a background mask operation on the background text region and the original image with seals to be removed, to retain the background text region and remove the foreground seal region.
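The color clustering and background mask operation above can be caricatured with a simple color rule: inside a detected seal bounding box, strongly red pixels are treated as the foreground seal and whitened, while darker text pixels are kept. Treating "red dominance" as the cluster boundary is an assumption made here for illustration; a real implementation would cluster in the converted color space as the step describes.

```python
import numpy as np

def whiten_seal(rgb_roi, margin=50):
    """Background mask operation on one detected seal region.

    rgb_roi: H x W x 3 uint8 crop around a detected seal.
    margin: how much the red channel must exceed green/blue for a
    pixel to count as seal ink (hypothetical stand-in for clustering).
    """
    r = rgb_roi[..., 0].astype(int)
    gb = rgb_roi[..., 1:].astype(int).max(axis=-1)
    seal_mask = (r - gb) > margin  # foreground seal pixels
    out = rgb_roi.copy()
    out[seal_mask] = 255           # remove seal, keep background text
    return out
```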
In some embodiments, the table line removal step includes:
inverting the binary image with table lines to be removed, to obtain an inverted image;
constructing a vertical structuring element and a horizontal structuring element, and eroding and dilating the binary image with table lines to be removed using the vertical structuring element and the horizontal structuring element respectively, to obtain a binary map containing only the horizontal table lines and the vertical table lines;
performing a bitwise subtraction between the binary image with table lines to be removed and the obtained binary map, to obtain an image with the table lines removed, and inverting the obtained image with the table lines removed again.
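A minimal numpy sketch of the morphology in this step: a one-dimensional structuring element applied along each axis (an opening, i.e. erosion followed by dilation) keeps only foreground runs long enough to be horizontal or vertical table rules, which are then subtracted bitwise from the page. Boolean True plays the role of the inverted foreground pixels, so the explicit inversion steps collapse into the representation; the run length of 5 is an illustrative parameter.

```python
import numpy as np

def _opening_1d(img, axis, length):
    """Erode then dilate with a 1-D structuring element of `length`
    pixels along `axis`; only runs at least that long survive.
    Note np.roll wraps around, which is harmless only when lines
    span the full image width/height, as table rules typically do."""
    eroded = img.copy()
    for s in range(1, length):
        eroded &= np.roll(img, s, axis=axis)
    dilated = eroded.copy()
    for s in range(1, length):
        dilated |= np.roll(eroded, -s, axis=axis)
    return dilated

def remove_table_lines(binary, length=5):
    """binary: boolean page, True = foreground ink.
    Returns the page with horizontal and vertical rules removed."""
    lines = _opening_1d(binary, axis=1, length=length) | \
            _opening_1d(binary, axis=0, length=length)
    return binary & ~lines  # bitwise subtraction of the line map
```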
In some embodiments, after the step of generating the flow configuration file based on the multiple correction steps and the at least one neural network model, the method further includes:
dynamically configuring the circulation logic between the multiple correction steps based on the scene types of different images, to generate multiple image correction flows in correspondence with multiple scene types.
In some embodiments, correcting the target image according to the image correction flow defined by the configuration file when the image correction instruction is received includes:
obtaining the scene type of the target image;
determining, among the multiple image correction flows, a target image correction flow corresponding to the scene type of the target image;
correcting the target image according to the target image correction flow.
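The scene-dependent dispatch in this embodiment amounts to a lookup from scene type to a preconfigured flow. A toy sketch, where the scene names and step names are invented for illustration:

```python
# Hypothetical flows generated in advance, one per scene type.
SCENE_FLOWS = {
    "photo": ["perspective_correction", "rotation_correction",
              "shading_removal"],
    "scan":  ["rotation_correction", "seal_removal",
              "table_line_removal"],
}

def flow_for(scene_type):
    """Pick the target image correction flow for a target image's scene."""
    return SCENE_FLOWS[scene_type]
```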
Second aspect provides a kind of image correction apparatus based on any method of first aspect, described device
Including:
Module is constructed, at least one aligning step of multiple aligning steps for being related to for image rectification, corresponding structure
Build at least one neural network model;
First generation module is generated for being based on the multiple aligning step and at least one described neural network model
Procedure configuration files;
Second generation module, in the procedure configuration files to the circulation logic between the multiple aligning step
It is configured, generates image rectification process;
Correction module, for when receive image rectification instruction when, according to described image correcting process to target image into
Row correction, and the target image after output calibration.
In a third aspect, an image correction apparatus is provided, the apparatus including:
one or more processors; and
a storage device for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, the program implementing the method according to any of the first aspect when executed by a processor.
Compared with traditional image correction methods, the embodiments of the present invention have the following beneficial effects:
1. The multiple correction steps of image correction are combined to form a complete and general image correction flow, realizing end-to-end image correction, which can correct image information carrying document material under different scenes into image information that is standardized and easy to detect and recognize;
2. Neural network models are constructed by means of deep learning and applied in the correction steps of the image correction flow, so that a higher accuracy rate is obtained for image correction;
3. Compared with traditional image correction algorithms, the robustness is stronger and correction has a higher success rate.
Detailed description of the invention
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of an image correction method according to an exemplary embodiment;
Fig. 2 is a flowchart of constructing an edge detection model according to an exemplary embodiment;
Fig. 3 is a flowchart of constructing a text detection model according to an exemplary embodiment;
Fig. 4 is a flowchart of constructing a text direction detection model according to an exemplary embodiment;
Fig. 5 is a flowchart of constructing a seal detection model according to an exemplary embodiment;
Fig. 6 is a flowchart of a perspective transform correction step according to an exemplary embodiment;
Fig. 7 is a flowchart of a rotation correction step according to an exemplary embodiment;
Fig. 8 is a flowchart of a shading removal step according to an exemplary embodiment;
Fig. 9 is a flowchart of a seal removal step according to an exemplary embodiment;
Fig. 10 is a flowchart of a table line removal step according to an exemplary embodiment;
Fig. 11 is a flowchart of an image correction method according to an exemplary embodiment;
Fig. 12 is a block diagram of an image correction apparatus according to an exemplary embodiment.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
In the image correction method provided by the embodiments of the present invention, the multiple correction steps of image correction are combined to form a complete and general image correction flow, realizing end-to-end image correction, which can correct image information carrying document material under different scenes into image information that is standardized and easy to detect and recognize. In addition, the method constructs neural network models by means of deep learning and applies them in the correction steps of the image correction flow, so that a higher accuracy rate can be obtained for image correction. The executing subject of the image correction method may be a server configured with a graphics card supporting CUDA parallel computation, on which are provided a web service for image correction, a page for terminal display, and a database for storing image documents. The server may be communicatively connected with at least one client through a network; it may be a single server or a server group composed of multiple servers, and within the server group the multiple servers may be communicatively connected with each other. The server provides the user with the image correction function through data interaction with the client, where the client may be an electronic device such as a mobile phone, laptop computer, desktop computer, tablet computer, or smart television.
Fig. 1 is a flowchart of the image correction method according to an exemplary embodiment. Referring to Fig. 1, the method includes the following steps:
S1. For at least one of the multiple correction steps involved in image correction, construct at least one corresponding neural network model.
The multiple correction steps involved in image correction may include at least two of a perspective transform correction step, a rotation correction step, a shading removal step, a seal removal step, and a table line removal step.
The perspective transform correction step is used to repair document material images (mainly photos) with perspective distortion, removing the surrounding frame and restoring the text region to a rectangle. The rotation correction step is used to rotate document material images possibly in any of four orientations (0°, 90°, 180°, 270°) back to the normal orientation (0°). The shading removal step is used to remove shading and watermarks from document material images, retaining only the text. The seal removal step is used to locate and remove seals covering text regions in document material images. The table line removal step is used to remove table wire frames from binarized images, eliminating interference that may occur in subsequent detection and recognition.
For at least one of the above correction steps, a corresponding neural network model is constructed and applied, as configuration information of that correction step, in the corresponding correction step, so that a higher accuracy rate can be obtained for image correction.
S2. Generate a flow configuration file based on the multiple correction steps and the at least one neural network model.
Specifically, this process may include:
providing multiple selection items on a graphical configuration interface, so that the user makes selections among them;
determining, according to the items selected by the user, the processing steps needed to correct the image and their corresponding algorithm parameters, where the processing steps may include a combination of one or more of the perspective transform correction step, the rotation correction step, the shading removal step, the seal removal step, and the table line removal step;
configuring the algorithm parameters according to the compatibility between them, and auto-completing with default parameters the parameter information not set by the user; and
generating the flow configuration file, where the flow configuration file is in JSON string format and records the name, parameter list, and default parameters of each specific processing algorithm.
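A possible shape for such a JSON-string flow configuration file, with each unit recording an algorithm name, its parameter list, and its defaults. The field names and step names below are illustrative, not the patent's actual schema:

```python
import json

flow_config = json.dumps([
    {"name": "perspective_correction", "params": {},
     "defaults": {"model": "edge_detector"}},
    {"name": "rotation_correction", "params": {"top_k_chars": 30},
     "defaults": {"model": "char_detector"}},
    {"name": "shading_removal", "params": {},
     "defaults": {"threshold": 128}},
    {"name": "seal_removal", "params": {},
     "defaults": {"model": "seal_detector"}},
    {"name": "table_line_removal", "params": {},
     "defaults": {"kernel_length": 5}},
])
```

Parameters the user leaves unset would be auto-completed from the defaults, as the step above describes.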
S3. Configure, in the flow configuration file, the circulation logic between the multiple correction steps, to generate the image correction flow.
Specifically, the circulation logic between the multiple correction steps is optimally configured to generate a serial image correction flow, output as a JSON string in which each unit is an independent algorithm step, and the parameter items include the name of the preceding dependent step, the name of the subsequent step, the algorithm's default parameters, and the user-set parameters.
The image correction flow sequentially includes the perspective transform correction step, the rotation correction step, the shading removal step, the seal removal step, and the table line removal step.
S4. When an image correction instruction is received, correct the target image according to the image correction flow, and output the corrected target image.
Specifically, the user inputs the target image on the client, via the page displayed by the terminal for the web service that provides image correction.
Each correction step in the image correction flow is then executed in sequence to correct the image.
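Executing the correction steps in sequence is then a simple fold over the configured flow. In this sketch, `registry` maps step names to callables and stands in for the server's algorithm units; the names are hypothetical:

```python
def run_flow(image, steps, registry):
    """Apply each configured correction step to the image in order.

    steps: parsed flow configuration, a list of {"name": ..., "params": ...}.
    registry: dict mapping step names to functions image -> image.
    """
    for step in steps:
        fn = registry[step["name"]]
        image = fn(image, **step.get("params", {}))
    return image
```

With integers standing in for images, `run_flow(3, [{"name": "inc"}], {"inc": lambda x: x + 1})` applies the single step once.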
In the embodiments of the present invention, at least one neural network model is constructed for at least one of the multiple correction steps involved in image correction; by thus constructing neural network models by means of deep learning and applying them in the correction steps of the image correction flow, a higher accuracy rate is obtained for image correction. By generating the flow configuration file based on the multiple correction steps and the at least one neural network model, and configuring in the flow configuration file the circulation logic between the multiple correction steps to generate the image correction flow, the multiple correction steps of image correction are combined to form a complete and general image correction flow, realizing end-to-end image correction. When an image correction instruction is received, the target image can then be corrected according to the image correction flow defined by the configuration file, so that the method is applicable to image information carrying document material under different scenes, corrects it into image information that is standardized and easy to detect and recognize, has stronger robustness, and achieves a higher correction success rate.
Fig. 2 is a flowchart of constructing the edge detection model according to an exemplary embodiment. As shown in Fig. 2, constructing the edge detection model specifically includes:
S21. Collect image documents, including photos and scanned copies of various bills, certificates, and documents.
S22. Annotate the document material entity edges in the images.
Specifically, the document material entity edges in the images are annotated to form binarized result maps, in which non-edge pixels are black and edge pixels are white.
S23. Train the first neural network using the image samples with annotated edge regions.
The first neural network is preferably a VGG network with the last pooling layer and the fully connected layers removed, trained with a side output layer added after each layer.
S24. Generate the edge detection model corresponding to the perspective transform correction step, so that image edge detection is performed using the edge detection model in the perspective transform correction step.
Fig. 3 is a flowchart of constructing the text detection model according to an exemplary embodiment. As shown in Fig. 3, constructing the text detection model specifically includes:
S31. Obtain single-character samples, including a set of Chinese characters and English letters, and generate images of different characters in multiple fonts.
S32. Synthesize the single-character samples with random backgrounds to form training samples.
Specifically, the characters in the training sample images are outlined with rectangular boxes, and the vertex coordinates of the rectangles are recorded and stored in annotation files.
S33. Train the second neural network using the formed training samples.
The second neural network is preferably a Faster R-CNN (FRCNN).
S34. Generate the text detection model corresponding to the rotation correction step, so that single-character detection is performed using the text detection model in the rotation correction step.
Fig. 4 is a flowchart of constructing the text direction detection model according to an exemplary embodiment. As shown in Fig. 4, constructing the text direction detection model specifically includes:
S41. Obtain single-character samples, including a set of Chinese characters and English letters, and generate images of different characters in multiple fonts.
S42. Rotate each single-character sample randomly in one of four directions (0°, 90°, 180°, 270°), generating rotated single-character samples, and record the rotation direction.
Specifically, the randomly rotated single-character samples are labeled: a 0° rotation is labeled 0, a 90° rotation is labeled 1, a 180° rotation is labeled 2, and a 270° rotation is labeled 3, and the labels are saved.
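The labeling scheme in S42 pairs each quarter-turn with an integer class. A sketch of such a sample generator using `np.rot90` (whether rotation is clockwise or counterclockwise is a convention the text leaves open; here k counterclockwise quarter-turns carry label k):

```python
import numpy as np

def make_rotated_sample(char_img, rng):
    """Rotate a single-character image by a random multiple of 90 degrees
    and return (rotated_image, label), label k meaning k * 90 degrees."""
    label = int(rng.integers(0, 4))
    return np.rot90(char_img, k=label), label
```

The inverse rotation `np.rot90(rotated, k=-label)` recovers the original image, which is exactly what the rotation correction step later needs.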
S43. Train the third neural network using the rotated single-character samples.
The third neural network is preferably a 5-layer convolutional neural network.
S44. Generate the text direction detection model corresponding to the rotation correction step, so that direction detection is performed using the text direction detection model in the rotation correction step.
Fig. 5 is a flowchart of constructing the seal detection model according to an exemplary embodiment. As shown in Fig. 5, constructing the seal detection model specifically includes:
S51. Obtain seal image samples.
S52. Annotate the data by marking the seal regions in the image samples, generating training samples.
Specifically, each seal region is outlined with a rectangle, the rectangle's vertex coordinates are recorded, and they are stored in annotation files.
S53. Train the fourth neural network using the training samples.
The fourth neural network is preferably a Faster R-CNN (FRCNN).
S54. Generate the seal detection model corresponding to the seal removal step, so that seal detection is performed using the seal detection model in the seal removal step.
Fig. 6 is a flowchart of the perspective transform correction step according to an exemplary embodiment. As shown in Fig. 6, the perspective transform correction step specifically includes:
S61. Input an image, the input image being the image to be perspective-transform corrected.
S62. Perform image edge detection on the image to be perspective-transform corrected.
Specifically, the edge detection model may be used to perform the image edge detection.
S63. Perform straight-line fitting on the image edges.
Specifically, straight-line fitting is performed on the image edges to find the document material boundaries.
S64. Perform connected-domain analysis to determine the quadrilateral region where the document material is located in the image.
S65. Perform superpixel block segmentation on the image to be perspective-transform corrected.
S66. Perform saliency detection on the superpixel blocks obtained by segmentation.
Specifically, saliency detection is performed on the superpixel blocks, the saliency of each superpixel is calculated and expressed as a gray value, and a higher gray value indicates a higher confidence that the region is foreground.
S67. Cluster the foreground regions, determining the foreground region and its minimum circumscribed quadrilateral.
Specifically, according to the gray values of the superpixel blocks, blocks with a gray value greater than a preset threshold belong to the foreground region and blocks below the threshold belong to the background region, and the minimum circumscribed convex quadrilateral of the foreground region is computed. The preset threshold is preferably 70% of the average gray value.
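The foreground decision in S67 reduces to a single comparison against 70% of the mean gray value; as a sketch:

```python
import numpy as np

def foreground_mask(saliency):
    """Superpixels whose gray-value saliency exceeds 70% of the average
    are classified as foreground, the rest as background."""
    return saliency > 0.7 * saliency.mean()
```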
S68. Perform the perspective transform on the image to be perspective-transform corrected according to the union of the calculated minimum circumscribed quadrilateral and the quadrilateral region, obtaining the perspective-transformed image.
S69. Output the perspective-transformed image.
It should be noted that steps S62 to S64 realize the process of determining the quadrilateral region where the document material is located in the image, while steps S65 to S67 realize the process of determining the foreground region and its minimum circumscribed quadrilateral. The embodiments of the present invention do not limit the execution order of these two processes; preferably, they are performed simultaneously to improve the efficiency of the perspective transform correction.
In this way, by perspective transform aligning step, by the written historical materials image (mainly photo) with perspective distortion, into
Row is repaired, and removes frame and character area is reduced to rectangle, the image after obtaining perspective transform correction.
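The patent gives no code for the warp in step S68; assuming the four corner points of the fused quadrilateral are already known, the perspective transform can be sketched in plain NumPy as below (the function names and the sample coordinates are illustrative; a real pipeline would typically call OpenCV's `getPerspectiveTransform`/`warpPerspective` instead):

```python
import numpy as np

def homography_from_quad(src, dst):
    """Solve for the 3x3 perspective transform mapping the four source
    points onto the four destination points (direct linear transform,
    with the bottom-right entry fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply the homography to a single (x, y) point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical fused quadrilateral mapped onto an upright rectangle:
quad = [(12, 8), (290, 20), (300, 210), (5, 200)]
rect = [(0, 0), (300, 0), (300, 200), (0, 200)]
H = homography_from_quad(quad, rect)
```

Sampling every output pixel through the inverse of `H` then produces the rectified document image.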
Fig. 7 is a flowchart of the rotation correction step according to an exemplary embodiment. As shown in Fig. 7, the rotation correction step specifically includes:
S71, input an image; the input image is the image to be rotation-corrected.
S72, perform single-character detection on the characters in the image to be rotation-corrected, obtaining a preset number of character regions with the largest area. A text detection model may be used for the single-character detection.
S73, perform direction detection on each character region, and correct the image to be rotation-corrected according to the direction detection results.
Specifically, a character direction detection model may be used to perform direction detection on each character region and determine the orientation of the image to be rotation-corrected; the image is then rotated in the reverse direction, and the reverse rotation yields the corrected image.
S74, output the rotation-corrected image.
In this way, through the rotation correction step, a document image that may appear at any of four angles (0°, 90°, 180°, 270°) is rotated to the normal angle (0°), yielding the rotation-corrected image.
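The patent does not specify how the per-character direction results of step S73 are combined into one image orientation; a plausible reading is a majority vote over the detected character regions, sketched below (the voting rule is an assumption, not stated in the source):

```python
from collections import Counter

def image_rotation_angle(char_angles):
    """Vote over per-character direction predictions (each one of 0, 90,
    180, 270) and return the reverse rotation, in degrees, that restores
    the image to the normal 0-degree orientation."""
    dominant = Counter(char_angles).most_common(1)[0][0]
    return (360 - dominant) % 360
```

For example, if most of the largest character regions are predicted as 90°, the image is rotated by 270° to bring it back upright.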
Fig. 8 is a flowchart of the shading removal step according to an exemplary embodiment. As shown in Fig. 8, the shading removal step specifically includes:
S81, input an image; the input image is the image whose shading is to be removed.
S82, perform color space conversion on the image.
Specifically, the RGB color space is converted to the HSV color space.
S83, binarize the converted image, obtaining the foreground text region and the background watermark region.
Specifically, the V channel of the HSV color space is taken and OTSU binarization (the maximum between-class variance method) is applied, separating the foreground text region from the background watermark region.
S84, perform a foreground mask operation on the original image with the foreground text region, retaining the foreground text region and removing the background watermark region.
S85, output the image with the shading removed.
In this way, through the shading removal step, the shading and watermark of the rotation-corrected document image are removed and only the text is retained.
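Step S83 names OTSU binarization; as a self-contained illustration, the maximum between-class variance criterion can be written out as follows (a real pipeline would typically call `cv2.threshold` with the `THRESH_OTSU` flag on the V channel instead):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the gray-level histogram, as used on the
    HSV V-channel in step S83."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    cum_w = np.cumsum(hist)                       # class-0 weight per threshold
    cum_mu = np.cumsum(hist * np.arange(256))     # class-0 partial mean sum
    mu_total = cum_mu[-1]
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_w[t], 1.0 - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mu[t] / w0
        mu1 = (mu_total - cum_mu[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Pixels at or below the returned threshold are treated as dark foreground text; lighter pixels fall into the background watermark region.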
Fig. 9 is a flowchart of the seal removal step according to an exemplary embodiment. As shown in Fig. 9, the seal removal step specifically includes:
S91, input an image; the input image is the image whose seal is to be removed.
S92, perform seal detection on the image and detect all seal regions.
Specifically, a seal detection model may be used to find all seal regions (bounding boxes) in the image.
S93, extract each seal region, and perform color space conversion on the extracted regions.
Specifically, all image regions containing a seal are extracted, and the color channels are converted from RGB to HSV.
S94, perform color clustering and binarization on the converted seal regions, obtaining the foreground seal region and the background text region.
S95, perform a background mask operation on the original image with the background text region, retaining the background text region and removing the foreground seal region.
S96, output the image with the seal removed.
In this way, through the seal removal step, seals covering text regions in the document image are removed in a targeted manner.
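The patent does not detail the color clustering of step S94; one minimal stand-in is a 1-D 2-means split on a single color channel of the extracted seal crop, separating the seal's stamp color from the text color (the channel choice and k=2 are assumptions made for illustration):

```python
import numpy as np

def two_means(values, iters=20):
    """Minimal 1-D 2-means clustering, a stand-in for the color
    clustering of step S94 (e.g. run on the H channel values of the
    pixels in a detected seal crop). Returns per-value cluster labels
    and the two cluster centers."""
    c = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = values[labels == k].mean()
    return labels, c
```

The cluster whose center lies closest to the seal's hue would then be binarized as the foreground seal region, the other as the background text region.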
Figure 10 is a flowchart of the table line removal step according to an exemplary embodiment. As shown in Figure 10, the table line removal step specifically includes:
S101, input an image; the input image is the binarized image whose table lines are to be removed.
S102, invert the binarized image, obtaining the inverted image.
S103, construct structuring elements: a vertical structuring element and a horizontal structuring element.
Specifically, a vertical structuring element of width 1 pixel and height equal to the image height divided by 10, and a horizontal structuring element of height 1 pixel and width equal to the image width divided by 10, are constructed.
S104, erode and then dilate the binarized image using the vertical structuring element and the horizontal structuring element respectively, obtaining a binary map containing only the horizontal table lines and the vertical table lines.
S105, perform a bitwise subtraction between the binarized image and the obtained binary map, yielding an image with the table lines removed.
S106, invert the resulting image again.
S107, output the image with the table lines removed.
In this way, through the table line removal step, the table frame in the binarized image is removed, eliminating interference that might otherwise occur in subsequent detection and recognition.
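The morphology of steps S103 to S105 can be sketched for the horizontal direction as follows (pure NumPy on 0/1 arrays; the left-anchored erosion/dilation pair is one possible implementation, vertical lines are handled identically on the transposed image, and a real pipeline would use `cv2.erode`/`cv2.dilate` with line-shaped kernels):

```python
import numpy as np

def erode_h(img, k):
    """Left-anchored binary erosion with a 1 x k horizontal element:
    the output is 1 exactly where a horizontal run of k ones starts."""
    out = img.copy()
    for s in range(1, k):
        out[:, :-s] &= img[:, s:]
    out[:, img.shape[1] - k + 1:] = 0   # element does not fit at the border
    return out

def dilate_h(img, k):
    """Matching binary dilation; erode-then-dilate (an opening) keeps
    exactly the horizontal runs of length >= k, i.e. the table lines."""
    out = img.copy()
    for s in range(1, k):
        out[:, s:] |= img[:, :-s]
    return out

row = np.array([[0, 1, 1, 1, 1, 0, 1, 1, 0]], dtype=np.uint8)
lines = dilate_h(erode_h(row, 4), 4)   # only the run of length >= 4 survives
no_lines = row & ~lines                # bitwise subtraction of S105
```

Here the short run of two ones (ordinary text strokes) survives the subtraction, while the long run (a table line) is erased.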
Figure 11 is a flowchart of an image correction method according to an exemplary embodiment. As shown in Figure 11, the method includes the following steps:
S111, for at least one of the multiple correction steps involved in image correction, construct at least one corresponding neural network model.
This step is identical to step S1 and is not repeated here.
S112, generate a flow configuration file based on the multiple correction steps and the at least one neural network model.
This step is identical to step S2 and is not repeated here.
S113, based on the scene types of different images, dynamically configure the circulation logic between the correction steps in the flow configuration file, generating multiple image correction flows in correspondence with the multiple scene types.
Specifically, a scanned image has no lens distortion or perspective distortion, so the perspective transform correction step need not be executed; the image correction flow generated for scanned images therefore consists, in order, of the rotation correction step, the shading removal step, the seal removal step, and the table line removal step.
A photographed image has some perspective distortion or lens distortion, so the perspective transform correction step must be executed; the generated flow consists, in order, of the perspective transform correction step, the rotation correction step, the shading removal step, the seal removal step, and the table line removal step.
For an image without tables, the table line removal step need not be executed, so the generated flow does not include it.
For an image in which no seal covers the text, the seal removal step need not be executed, so the generated flow does not include it.
The specific processing of the perspective transform correction, rotation correction, shading removal, seal removal, and table line removal steps is described in the embodiments above and is not repeated here.
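A minimal sketch of the scene-type-to-flow mapping described in S113 might look like the following (the flow names, scene labels, and dict-based representation are all illustrative; the patent does not specify a configuration file format):

```python
# Each scene type maps to an ordered list of correction steps (S113).
FLOWS = {
    "photo":         ["perspective", "rotation", "shading", "seal", "table"],
    "scan":          ["rotation", "shading", "seal", "table"],
    "scan_no_table": ["rotation", "shading", "seal"],
}

def correct(image, scene_type, steps):
    """Apply, in order, the correction steps of the flow that matches
    the scene type; `steps` maps step names to callables."""
    for name in FLOWS[scene_type]:
        image = steps[name](image)
    return image
```

Adding a new scene type then only requires adding a new entry to the configuration, not changing the dispatch code.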
S114, when an image correction instruction is received, obtain the scene type of the target image.
The scene type of the target image may be input by the user through a user interface, or may be recognized by a pre-built classifier, which is constructed by training in advance on the image features of different images as training samples.
S115, among the multiple image correction flows, determine the target image correction flow corresponding to the scene type of the target image, and correct the target image according to the target image correction flow.
In the embodiment of the present invention, at least one neural network model is constructed for at least one of the multiple correction steps involved in image correction; by building the neural network models through deep learning and applying them in the correction steps of the image correction flow, higher accuracy is obtained in image correction. A flow configuration file is generated based on the multiple correction steps and the at least one neural network model, and based on the scene types of different images, the circulation logic between the correction steps is dynamically configured in the configuration file, generating multiple image correction flows in correspondence with the multiple scene types. By dynamically combining the correction steps according to the image scene type, multiple image correction flows are formed, so correction can target the image's scene type and is more efficient. When an image correction instruction is received, the target image can be corrected using the flow corresponding to its scene type. In this way, document images captured under different scenes can be handled and corrected into standardized images that are easy to detect and recognize, with stronger robustness and a higher correction success rate.
As an implementation of the image correction method in the above embodiments, the embodiment of the present invention also provides an image correction device based on the method, which can run on a server.
Figure 12 is a block diagram of an image correction device according to an exemplary embodiment. As shown in Figure 12, the device includes:
a construction module 121, configured to construct, for at least one of the multiple correction steps involved in image correction, at least one corresponding neural network model;
a first generation module 122, configured to generate a flow configuration file based on the multiple correction steps and the at least one neural network model;
a second generation module 123, configured to configure the circulation logic between the correction steps in the flow configuration file and generate an image correction flow;
a correction module 124, configured to, when an image correction instruction is received, correct the target image according to the image correction flow and output the corrected target image.
Through this image correction device, at least one neural network model is constructed for at least one of the correction steps involved in image correction; the models are built through deep learning and applied in the correction steps of the flow, so higher accuracy is obtained in image correction. A flow configuration file is generated based on the correction steps and the neural network models, and the circulation logic between the steps is configured in the file to generate the image correction flow; by combining the multiple correction steps in this way, a complete and general end-to-end image correction flow is formed. When an image correction instruction is received, the target image can be corrected according to the flow defined by the configuration file, so that document images captured under different scenes can be corrected into standardized images that are easy to detect and recognize, with stronger robustness and a higher correction success rate.
In addition, the embodiment of the present invention also provides an image correction device, comprising:
one or more processors; and
a storage device for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in the above embodiments.
In addition, the embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method described in the above embodiments is implemented.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the embodiments of the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device; the instruction device realizes the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the present invention have been described, once a person skilled in the art learns of the basic creative concept, additional changes and modifications may be made to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (13)
1. An image correction method, characterized in that the method comprises:
for at least one of multiple correction steps involved in image correction, constructing at least one corresponding neural network model;
generating a flow configuration file based on the multiple correction steps and the at least one neural network model;
configuring the circulation logic between the multiple correction steps in the flow configuration file, generating an image correction flow;
when an image correction instruction is received, correcting a target image according to the image correction flow, and outputting the corrected target image.
2. The method according to claim 1, characterized in that constructing, for at least one of the multiple correction steps involved in image correction, at least one corresponding neural network model comprises:
training a first neural network with image samples in which edge regions are annotated, constructing an edge detection model corresponding to the perspective transform correction step, so as to perform image edge detection with the edge detection model in the perspective transform correction step; and/or
training a second neural network with single-character training samples synthesized from single-character samples and random backgrounds, constructing a text detection model corresponding to the rotation correction step, so as to perform single-character detection with the text detection model in the rotation correction step; and/or
training a third neural network with single-character training samples carrying rotation direction labels, generated by rotating the single-character samples in random directions, constructing a character direction detection model corresponding to the rotation correction step, so as to perform direction detection with the character direction detection model in the rotation correction step; and/or
training a fourth neural network with image samples in which seal regions are annotated, constructing a seal detection model corresponding to the seal removal step, so as to perform seal detection with the seal detection model in the seal removal step.
3. The method according to claim 1 or 2, characterized in that the image correction flow comprises, in order, a perspective transform correction step, a rotation correction step, a shading removal step, a seal removal step, and a table line removal step.
4. The method according to claim 3, characterized in that the perspective transform correction step comprises:
performing image edge detection on the image to be perspective-corrected, performing straight-line fitting and connected-component analysis on the image edges, and determining the quadrilateral region in the image where the document content is located;
performing superpixel segmentation on the image to be perspective-corrected, performing saliency detection and foreground region clustering on the superpixel blocks obtained by segmentation, and determining the foreground region and the minimum bounding quadrilateral of the foreground region;
performing a perspective transform on the image to be perspective-corrected according to the union of the computed minimum bounding quadrilateral and the quadrilateral region.
5. The method according to claim 3, characterized in that the rotation correction step comprises:
performing single-character detection on the characters in the image to be rotation-corrected, obtaining a preset number of character regions with the largest area;
performing direction detection on each character region, and correcting the image to be rotation-corrected according to the direction detection results.
6. The method according to claim 3, characterized in that the shading removal step comprises:
performing color space conversion on the image whose shading is to be removed;
binarizing the converted image, obtaining a foreground text region and a background watermark region;
performing a foreground mask operation on the original image with the foreground text region, retaining the foreground text region and removing the background watermark region.
7. The method according to claim 3, characterized in that the seal removal step comprises:
performing seal detection on the image whose seal is to be removed, detecting all seal regions;
extracting each seal region, and performing color space conversion on the extracted seal regions;
performing color clustering and binarization on the converted seal regions, obtaining a foreground seal region and a background text region;
performing a background mask operation on the original image with the background text region, retaining the background text region and removing the foreground seal region.
8. The method according to claim 3, characterized in that the table line removal step comprises:
inverting the binarized image whose table lines are to be removed, obtaining an inverted image;
constructing a vertical structuring element and a horizontal structuring element, and eroding and dilating the binarized image with the vertical structuring element and the horizontal structuring element respectively, obtaining a binary map containing only the horizontal table lines and the vertical table lines;
performing a bitwise subtraction between the binarized image and the obtained binary map, obtaining an image with the table lines removed, and inverting the resulting image again.
9. The method according to claim 1, characterized in that after the step of generating the flow configuration file based on the multiple correction steps and the at least one neural network model, the method further comprises:
based on the scene types of different images, dynamically configuring the circulation logic between the multiple correction steps in the flow configuration file, generating multiple image correction flows in correspondence with multiple scene types.
10. The method according to claim 9, characterized in that correcting the target image according to the image correction flow defined by the configuration file when the image correction instruction is received comprises:
obtaining the scene type of the target image;
among the multiple image correction flows, determining the target image correction flow corresponding to the scene type of the target image;
correcting the target image according to the target image correction flow.
11. An image correction device based on the method of any one of claims 1 to 10, characterized in that the device comprises:
a construction module, configured to construct, for at least one of the multiple correction steps involved in image correction, at least one corresponding neural network model;
a first generation module, configured to generate a flow configuration file based on the multiple correction steps and the at least one neural network model;
a second generation module, configured to configure the circulation logic between the multiple correction steps in the flow configuration file and generate an image correction flow;
a correction module, configured to, when an image correction instruction is received, correct the target image according to the image correction flow and output the corrected target image.
12. An image correction device, characterized by comprising:
one or more processors; and
a storage device for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method of any one of claims 1 to 10.
13. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810611500.7A CN108921158A (en) | 2018-06-14 | 2018-06-14 | Method for correcting image, device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108921158A true CN108921158A (en) | 2018-11-30 |
Family
ID=64420856
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810611500.7A Pending CN108921158A (en) | 2018-06-14 | 2018-06-14 | Method for correcting image, device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108921158A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109886974A (en) * | 2019-01-28 | 2019-06-14 | 北京易道博识科技有限公司 | A kind of seal minimizing technology |
CN109902680A (en) * | 2019-03-04 | 2019-06-18 | 四川长虹电器股份有限公司 | The detection of picture rotation angle and bearing calibration based on convolutional neural networks |
CN110047049A (en) * | 2019-04-19 | 2019-07-23 | 西安航天恒星科技实业(集团)有限公司 | The compatible real-time quick radiation correction method of more stars |
CN110443773A (en) * | 2019-08-20 | 2019-11-12 | 江西博微新技术有限公司 | File and picture denoising method, server and storage medium based on seal identification |
CN111091532A (en) * | 2019-10-30 | 2020-05-01 | 中国资源卫星应用中心 | Remote sensing image color evaluation method and system based on multilayer perceptron |
CN111325214A (en) * | 2020-02-27 | 2020-06-23 | 珠海格力智能装备有限公司 | Jet printing character extraction processing method and device, storage medium and electronic equipment |
CN111402168A (en) * | 2020-03-19 | 2020-07-10 | 同盾控股有限公司 | Image target correction method and device, terminal and storage medium |
CN111402367A (en) * | 2020-03-27 | 2020-07-10 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN111445386A (en) * | 2020-04-15 | 2020-07-24 | 深源恒际科技有限公司 | Image correction method based on four-point detection of text content |
CN111767859A (en) * | 2020-06-30 | 2020-10-13 | 北京百度网讯科技有限公司 | Image correction method and device, electronic equipment and computer-readable storage medium |
WO2021000702A1 (en) * | 2019-06-29 | 2021-01-07 | 华为技术有限公司 | Image detection method, device, and system |
CN112927122A (en) * | 2021-04-14 | 2021-06-08 | 北京小米移动软件有限公司 | Watermark removing method, device and storage medium |
CN113033540A (en) * | 2021-04-14 | 2021-06-25 | 易视腾科技股份有限公司 | Contour fitting and correcting method for scene characters, electronic device and storage medium |
WO2021147631A1 (en) * | 2020-01-21 | 2021-07-29 | 杭州大拿科技股份有限公司 | Handwritten content removing method and device and storage medium |
CN114299528A (en) * | 2021-12-27 | 2022-04-08 | 万达信息股份有限公司 | Information extraction and structuring method for scanned document |
CN111523292B (en) * | 2020-04-23 | 2023-09-15 | 北京百度网讯科技有限公司 | Method and device for acquiring image information |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101645136A (en) * | 2009-08-26 | 2010-02-10 | 福州欣创摩尔电子科技有限公司 | Image identification and detection system |
CN103093218A (en) * | 2013-01-14 | 2013-05-08 | 西南大学 | Automatically recognizing form type method and device |
CN104463124A (en) * | 2014-12-11 | 2015-03-25 | 天津普达软件技术有限公司 | Milk box spray-printed character recognition method |
CN105426856A (en) * | 2015-11-25 | 2016-03-23 | 成都数联铭品科技有限公司 | Image table character identification method |
CN105574513A (en) * | 2015-12-22 | 2016-05-11 | 北京旷视科技有限公司 | Character detection method and device |
US9366455B1 (en) * | 2015-07-14 | 2016-06-14 | Laser Heating Advanced Technologies, Llc | System and method for indirectly heating a liquid with a laser beam immersed within the liquid |
CN106846011A (en) * | 2016-12-30 | 2017-06-13 | 金蝶软件(中国)有限公司 | Business license recognition methods and device |
CN107862303A (en) * | 2017-11-30 | 2018-03-30 | 平安科技(深圳)有限公司 | Information identifying method, electronic installation and the readable storage medium storing program for executing of form class diagram picture |
CN107958201A (en) * | 2017-10-13 | 2018-04-24 | 上海眼控科技股份有限公司 | A kind of intelligent checking system and method for vehicle annual test insurance policy form |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109886974A (en) * | 2019-01-28 | 2019-06-14 | 北京易道博识科技有限公司 | A kind of seal minimizing technology |
CN109902680A (en) * | 2019-03-04 | 2019-06-18 | 四川长虹电器股份有限公司 | The detection of picture rotation angle and bearing calibration based on convolutional neural networks |
CN110047049A (en) * | 2019-04-19 | 2019-07-23 | 西安航天恒星科技实业(集团)有限公司 | The compatible real-time quick radiation correction method of more stars |
WO2021000702A1 (en) * | 2019-06-29 | 2021-01-07 | 华为技术有限公司 | Image detection method, device, and system |
CN110443773A (en) * | 2019-08-20 | 2019-11-12 | 江西博微新技术有限公司 | Document image denoising method based on seal recognition, server, and storage medium |
CN111091532A (en) * | 2019-10-30 | 2020-05-01 | 中国资源卫星应用中心 | Remote sensing image color evaluation method and system based on multilayer perceptron |
US11823358B2 (en) | 2020-01-21 | 2023-11-21 | Hangzhou Dana Technology Inc. | Handwritten content removing method and device and storage medium |
WO2021147631A1 (en) * | 2020-01-21 | 2021-07-29 | 杭州大拿科技股份有限公司 | Handwritten content removing method and device and storage medium |
CN111325214A (en) * | 2020-02-27 | 2020-06-23 | 珠海格力智能装备有限公司 | Jet printing character extraction processing method and device, storage medium and electronic equipment |
CN111325214B (en) * | 2020-02-27 | 2023-02-14 | 珠海格力智能装备有限公司 | Jet printing character extraction processing method and device, storage medium and electronic equipment |
CN111402168A (en) * | 2020-03-19 | 2020-07-10 | 同盾控股有限公司 | Image target correction method and device, terminal and storage medium |
CN111402168B (en) * | 2020-03-19 | 2024-04-05 | 同盾控股有限公司 | Image target correction method and device, terminal and storage medium |
CN111402367A (en) * | 2020-03-27 | 2020-07-10 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN111402367B (en) * | 2020-03-27 | 2023-09-26 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN111445386A (en) * | 2020-04-15 | 2020-07-24 | 深源恒际科技有限公司 | Image correction method based on four-point detection of text content |
CN111523292B (en) * | 2020-04-23 | 2023-09-15 | 北京百度网讯科技有限公司 | Method and device for acquiring image information |
CN111767859A (en) * | 2020-06-30 | 2020-10-13 | 北京百度网讯科技有限公司 | Image correction method and device, electronic equipment and computer-readable storage medium |
CN112927122A (en) * | 2021-04-14 | 2021-06-08 | 北京小米移动软件有限公司 | Watermark removing method, device and storage medium |
CN113033540A (en) * | 2021-04-14 | 2021-06-25 | 易视腾科技股份有限公司 | Contour fitting and correcting method for scene characters, electronic device and storage medium |
CN114299528A (en) * | 2021-12-27 | 2022-04-08 | 万达信息股份有限公司 | Information extraction and structuring method for scanned document |
CN114299528B (en) * | 2021-12-27 | 2024-03-22 | 万达信息股份有限公司 | Information extraction and structuring method for scanned document |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108921158A (en) | Method for correcting image, device and computer readable storage medium | |
US20240135700A1 (en) | Systems and methods for image based content capture and extraction utilizing deep learning neural network and bounding box detection training techniques | |
Piva | An overview on image forensics | |
CN108399405B (en) | Business license identification method and device | |
US20180137321A1 (en) | Method and system for decoding two-dimensional code using weighted average gray-scale algorithm | |
US9495735B2 (en) | Document unbending systems and methods | |
CN111353961B (en) | Document curved surface correction method and device | |
CN108846385B (en) | Image identification and correction method and device based on convolution-deconvolution neural network | |
US11836969B2 (en) | Preprocessing images for OCR using character pixel height estimation and cycle generative adversarial networks for better character recognition | |
CN111783757A (en) | OCR technology-based identification card recognition method in complex scene | |
CN107945111B (en) | Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor | |
CN104536999A (en) | Random fiber code anti-counterfeiting database construction method based on image processing | |
CN113592735A (en) | Text page image restoration method and system, electronic equipment and computer readable medium | |
CN111814576A (en) | Shopping receipt picture identification method based on deep learning | |
CN115660933A (en) | Method, device and equipment for identifying watermark information | |
CN111414905B (en) | Text detection method, text detection device, electronic equipment and storage medium | |
Ovodov | Optical Braille recognition using object detection neural network | |
CN109147002B (en) | Image processing method and device | |
CN111104941A (en) | Image direction correcting method and device and electronic equipment | |
Nachappa et al. | Adaptive dewarping of severely warped camera-captured document images based on document map generation | |
CN110084229A (en) | Seal detection method, device, equipment, and readable storage medium |
WO2022021687A1 (en) | Method for positioning quick response code area, and electronic device and storage medium | |
CN116403226A (en) | Unconstrained fold document image correction method, system, equipment and storage medium | |
CN116110069A (en) | Answer sheet recognition method and device based on coded mark points, and related medium |
CN113591657B (en) | OCR layout recognition method and device, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181130 |