WO2023053102A1 - Method and system for automated proofreading of digitized visual imageries
- Publication number: WO2023053102A1 (application PCT/IB2022/059427)
- Authority: WIPO (PCT)
Classifications
- G06T7/001: Industrial image inspection using an image reference approach
- G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06T2207/30168: Image quality inspection
Abstract
Disclosed herein is a fast and resource-optimized computer-implemented automated methodology for prescriptive comparison of imageries, of which the novelty and ingenuity are identified in allowing user-selection of the area(s) of interest in said imageries, user-electability among approaches for comparing said area(s) of interest, and user-discretion as to the sensitivity, minimum error size, and unique pattern, to accurately report on the comparison so carried out.
Description
Method and system for automated proofreading of digitized visual imageries
Cross references to related applications: This non-provisional patent application claims the benefit of US provisional application no. 63/251639 filed on 03 October 2021, the contents of which are incorporated herein in their entirety by reference.
Statement Regarding Federally Sponsored Research or Development: None applicable
Reference to Sequence Listing, a Table, or a Computer Program Listing Compact Disc Appendix: None
Field of the invention
The invention disclosed herein belongs to the field of automated image analysis. More particularly, this invention encompasses an inventive method and system for algorithmic proof-reading of sampled versus reference visual imageries.
Background of the invention and description of related art
Conventionally, proofreading is associated with detection, communication, and correction of errors or mismatches between sample versus reference sources of data. Proofreading is most often done manually which, besides being time and labor intensive, is highly dependent on the skill and fatigue of the proofreader. Accuracy and pace of proofreading naturally decline when there is more than one reference to tally against. It would therefore be much desirable to have some automated means for prescriptive comparison of sampled versus reference images.
In the niche voiced above, the present inventors propose amalgamating principles of digital image processing techniques to proofreading to thereby enable machine speeds, accuracy and precision in qualitative proofreading of sampled versus reference visual indicia.
Image analysis generally refers to extraction and logical analysis of information determined in image data. Just as visual perception of an image by the human eye involves identification of various concepts and objects in the concerned imagery and creation of associations there-between, digital image processing techniques have evolved today to analogously output higher-level data constructs that can be logically analyzed by a computer in progressive steps constituting a value chain of analytically relevant data that ultimately identifies with the original imagery. It would be highly desirable therefore that such approach finds application in automated qualitative proofreading of sampled versus reference images.
As can be appreciated, there are numerous issues in automating proofreading methodologies using comparative image analysis techniques. Prime among them is that the sample and reference sources of data are often not in the same format or resolution, and/or suffer from optical aberrations when not captured, ideally, by the same image-capture device. In such a framework, aberrations, distortions, noise, and image-capture perspectives, to name a few, majorly affect the generation of the higher-level data constructs required for digital image processing techniques, and therefore dilute accurate detection, communication, and correction of errors or mismatches between sample versus reference source/s of data.
As furthermore foreseen, appropriate area/s of interest need to be accurately identified and precisely mapped while comparing a sample versus reference source/s of data. It would be hence beneficial that the system needed for efficient automated proofreading of digitized visual imageries is capable of identifying and matching attributes between sample versus reference source/s of data in a foolproof manner, and also that it is able to accommodate for any among the aforementioned deleterious aspects affecting generation of output higher-level data constructs in digital image processing techniques.
Prior art, to the limited extent presently surveyed, does not list a single effective solution embracing all considerations mentioned hereinabove, thus preserving an acute necessity-to-invent for the present inventors who, as result of their focused research, have come up with novel solutions for resolving all needs of the art once and for all. Work of the presently named inventors, specifically directed against the technical problems recited hereinabove and currently part of the public domain including earlier filed patent applications, is neither expressly nor impliedly admitted as prior art against the present disclosures.
A better understanding of the objects, advantages, features, properties and relationships of the present invention will be obtained from the underlying specification, which sets forth the best mode contemplated by the inventor of carrying out the present invention.
Objectives of the present invention
The present invention is identified in addressing at least all major deficiencies of art discussed in the foregoing section by effectively addressing the objectives stated under, of which:
It is a primary objective to provide an effective method for comparison of sampled versus reference images.
It is another objective further to the aforesaid objective(s) that the method so provided is able to handle both software-generated as well as scanned images, specifically overcoming issues of skew incidental thereto.
It is another objective further to the aforesaid objective(s) that the method so provided allows multiple user-selectable approaches for comparison of specific area-of-interest in the imageries being compared.
It is another objective further to the aforesaid objective(s) that the method so provided is fully automated via fast and optimized computational logic with low processing time, low demands on processor resources, and effective use of available computer memory stores.
It is another objective further to the aforesaid objective(s) that the method so provided is error-free and lends itself to accurate implementation even at the hands of a user of average skill in the art.
It is another objective further to the aforesaid objective(s) that implementation of the method so provided does not involve any complicated or overtly expensive hardware.
It is another objective further to the aforesaid objective(s) that implementation of the method is possible via a remote server, in a software-as-a-service (SaaS) model.
The manner in which the above objectives are achieved, together with other objects and advantages which will become subsequently apparent, reside in the detailed description set forth below in reference to the accompanying drawings and furthermore specifically outlined in the independent claims. Other advantageous embodiments of the invention are specified in the dependent claims.
Brief description of drawings
The present invention is explained herein under with reference to the following drawings, in which,
FIG. 1 is a block diagram illustrating one execution cycle of the methodology proposed in the present invention.
The above drawings are illustrative of particular examples of the present invention but are not intended to limit the scope thereof. In above drawings, wherever possible, the same references and symbols have been used throughout to refer to the same or similar parts. Though numbering has been introduced to demarcate reference to specific components in relation to such references being made in different sections of this specification, all components are not shown or numbered in each drawing to avoid obscuring the invention proposed.
Attention of the reader is now requested to the detailed description to follow which narrates a preferred embodiment of the present invention and such other ways in which principles of the invention may be employed without parting from the essence of the invention claimed herein.
Summary of the invention
The present invention propounds a fast and resource-optimized computer-implemented automated methodology for prescriptive comparison of imageries of which the novelty and ingenuity are identified in allowing user-selection of the area(s) of interest in said imageries, and user-electability among approaches for comparing said area(s) of interest to accurately report on the comparison so carried out.
Detailed Description
Principally, general purpose of the present invention is to assess disabilities and shortcomings inherent to known systems comprising state of the art and develop new systems incorporating all available advantages of known art and none of its disadvantages. Accordingly, the disclosures herein are directed towards an inventive method and system for automated algorithmic proof-reading of sampled versus reference visual imageries.
In the embodiment recited herein, the reader shall presume that the images referred to are ones obtained from a digital imaging system such as artwork imaging software or a flatbed scanner (if the reference is a physical printout). As will be realised further, the resolution of the present invention is correlated with the resolution of the scanner used and not the computing system involved. Operative association of the scanner to a computer, and general operation of these components, requires no particular skill or collateral knowledge beyond that expected of a person of average skill in the art. Hence, the present invention is free of constraints otherwise entailed by capital, operation and maintenance costs, besides negating the requirement of trained skilled operators for its implementation.
Referring to the accompanying FIG. 1, it can be seen that a typical run cycle of the methodology proposed herein initializes with the user selecting, in step 001, a reference for comparison. If not already had, said reference is converted to an image format in subsequent step 002. The user then may, in a subsequent step 003, proceed with selection of one or more area/s of interest (hereinafter, the "AOI") for comparative analysis with the sample source. At the discretion of the user, the one or more AOI is/are selected in alternative steps 004, 005, 006, or 007 using approaches selected alternatively among a dieline, or background AOI (largest object detection), or box type AOI (automatic detection from cut lines), or by manually cropping with the mouse pointer, respectively.
With continued reference to FIG. 1, it can be seen that once the AOI in the reference image is selected, the user may proceed, in step 008, to select a sample for comparison. As may be readily appreciated, said sample may be selected among those scanned from their printed versions, via step 009, to a digitized image format, or alternatively chosen, via step 010, among those prearranged to be available in digitized image format, or further alternatively converted, in step 011, from their respective preexisting native formats.
With yet continued reference to FIG. 1, it can be seen that once the reference and sample images, that is the digitized visual imageries, are obtained, they are subjected to two successive subroutines, that is, a pattern matching subroutine in step 013 and a deviation detection subroutine in step 014. User-selectable parameters for the aforesaid subroutines, including the sensitivity, minimum error size, and Unique Pattern (hereinafter, the "UP") count, are set in a precedent step 012. The collective output of these subroutines, which is nothing but an assessment of the proofreading undertaken on the sample and reference imageries, is ultimately generated at step 015, thereby completing the typical run cycle of the methodology proposed herein.
Among the subroutines mentioned in the foregoing narration, the pattern matching subroutine of step 013 is a specialized algorithm intended to compare the sample and reference imageries by finding matching constituent objects in said imageries. For this, edge objects in both sample and reference imageries are identified first, followed by mapping of relative object positions by finding relative edge objects in the reference AOI and sample image, and then matching of patterns to therefore reach an assessment of the degree of matching between the selected sample and reference imageries.
According to one aspect of the present invention embodied in the subroutine of step 013, for identification of edge objects, especially if their sources are of very high resolution or large size, both sample and reference images are resampled to half of their original specifications, thus resulting in image size reduction which, in turn, allows fast processing and memory optimization. Edges of objects are then detected and labeled, and the identified objects are finally parsed to filter very small and very large objects.
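The resample-detect-label-filter sequence of step 013 described above can be sketched as follows. This is a minimal illustrative Python rendering, not the patented implementation: the gradient threshold, the nearest-neighbour downsampling, the 4-connected labelling, and the size limits are all assumed values chosen for clarity.

```python
import numpy as np

def downsample_half(img):
    """Resample the image to half its original width and height
    (a cheap nearest-neighbour reduction, for speed and memory)."""
    return img[::2, ::2]

def edge_map(img, thresh=30):
    """Mark a pixel as an edge where the local intensity gradient is large.
    The threshold value is an illustrative assumption."""
    gy = np.abs(np.diff(img.astype(int), axis=0, prepend=img[:1].astype(int)))
    gx = np.abs(np.diff(img.astype(int), axis=1, prepend=img[:, :1].astype(int)))
    return (gx + gy) > thresh

def label_objects(edges):
    """4-connected component labelling of the edge map (plain BFS)."""
    labels = np.zeros(edges.shape, dtype=int)
    current = 0
    h, w = edges.shape
    for sy in range(h):
        for sx in range(w):
            if edges[sy, sx] and labels[sy, sx] == 0:
                current += 1
                stack = [(sy, sx)]
                labels[sy, sx] = current
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and edges[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            stack.append((ny, nx))
    return labels, current

def filter_objects(labels, n, min_px=4, max_px=10_000):
    """Drop very small and very large edge objects, as the subroutine does;
    the size bounds are hypothetical defaults."""
    kept = {}
    for i in range(1, n + 1):
        px = np.argwhere(labels == i)
        if min_px <= len(px) <= max_px:
            kept[i] = px
    return kept
```

A production version would more likely use a library edge detector and labeller; the sketch only shows the order of operations the paragraph describes.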
According to another aspect of the present invention further embodied in the subroutine of step 013, matching of patterns between the selected sample and reference imageries is achieved by matching all identified reference edge objects against possible rotations of the UP in the sample (0°, 90°, 180° and 270°). Here, similar edge objects are first identified on the basis of matching length, width and edge pixel count. Next, similar edge objects are mapped by locating all objects in the sample at the same relative positions as in the reference image. Matches observed are enumerated as an absolute count, and saved in memory as the percentage of matches observed together with the relative AOI position mapped.
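The rotation-aware comparison on length, width and edge pixel count might be sketched as below. This is a hedged illustration, assuming each edge object is held as an array of (y, x) pixel coordinates; the tolerance value and the object representation are assumptions, not part of the disclosure.

```python
import numpy as np

def signature(obj_pixels):
    """Length, width and edge-pixel count of one edge object
    (obj_pixels is an (N, 2) array of y, x coordinates)."""
    ys, xs = obj_pixels[:, 0], obj_pixels[:, 1]
    return (ys.max() - ys.min() + 1, xs.max() - xs.min() + 1, len(obj_pixels))

def similar(sig_a, sig_b, tol=2):
    """Two objects are candidate matches when length, width and
    pixel count agree within a small (assumed) tolerance."""
    return all(abs(a - b) <= tol for a, b in zip(sig_a, sig_b))

def match_with_rotations(ref_objs, sample_objs):
    """For each rotation of the sample (0, 90, 180, 270 degrees),
    count how many reference objects find a similar sample object,
    and report the best-scoring rotation."""
    best_rot, best_count = 0, -1
    for k, rot in enumerate((0, 90, 180, 270)):
        # Under an odd multiple of 90 degrees, length and width swap.
        rotated = [(s[1], s[0], s[2]) if k % 2 else s
                   for s in map(signature, sample_objs)]
        count = sum(any(similar(signature(r), s) for s in rotated)
                    for r in ref_objs)
        if count > best_count:
            best_rot, best_count = rot, count
    return best_rot, best_count
```

The relative-position mapping and percentage bookkeeping described in the paragraph would sit on top of this candidate matching.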
According to another aspect of the present invention further embodied in the subroutine of step 013, UPs are identified and counted from the matching information output, on the basis of the maximum match and related AOI match in the sample image. If the UP count matches the user-defined value set at step 012, the process is halted. Otherwise, the process continues by adding the UP position to memory and filtering neighboring UP positions from the matching memory, to verify whether the UP count matches the user-defined value set at step 012.
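The UP-counting loop (take the position of maximum match, record it, filter neighbouring positions from the matching memory, repeat until the user-defined count is met) can be sketched as follows; the suppression radius is a hypothetical parameter not stated in the disclosure.

```python
def find_unique_patterns(match_scores, up_count, radius=10):
    """match_scores maps a (y, x) position to its match percentage.
    Greedily take the best-matching position, suppress its neighbours,
    and repeat until the user-defined UP count is reached or the
    candidates run out."""
    remaining = dict(match_scores)
    ups = []
    while remaining and len(ups) < up_count:
        pos = max(remaining, key=remaining.get)   # maximum match
        ups.append(pos)
        # Filter neighbouring positions (within `radius` in both axes,
        # including pos itself) from the matching memory.
        remaining = {p: s for p, s in remaining.items()
                     if abs(p[0] - pos[0]) > radius or abs(p[1] - pos[1]) > radius}
    return ups
```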
Next among the subroutines mentioned in the foregoing narration, the deviation detection subroutine of step 014 is a specialized algorithm intended to detect deviations in the sample images versus the AOI in the reference images. For this, the AOI and UP coordinates of both sample and reference images are doubled and the images are cropped based on these coordinates; pixel-to-pixel differences are then detected, and thus an assessment is reached on the deviations present, which is reported to the user.
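The coordinate doubling and cropping might look like the following sketch, assuming (consistently with step 013 above) that matching was carried out at half resolution, so coordinates are doubled to map back to the full-resolution images; the (y0, x0, y1, x1) box convention is an assumption for illustration.

```python
import numpy as np

def crop_at_full_res(full_img, half_res_box):
    """Pattern matching ran on half-resolution images, so the AOI / UP
    coordinates it produced are doubled to map them back onto the
    full-resolution image, which is then cropped to that region.
    half_res_box is (y0, x0, y1, x1) in half-resolution coordinates."""
    y0, x0, y1, x1 = (2 * c for c in half_res_box)
    return full_img[y0:y1, x0:x1]

def pixel_differences(sample_crop, ref_crop):
    """Plain pixel-to-pixel absolute intensity differences between the
    two crops (assumed to be the same shape)."""
    return np.abs(sample_crop.astype(int) - ref_crop.astype(int))
```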
While comparing digitized images, or one readymade / scanned PDF to another, a skew factor differential is observed due to different sensors / non-linear sensor response in the scanning processes responsible for generation of said imageries (sample images are prone to skews, particularly if prepared using a roller scanner, due to motion of the sample while scanning). The subroutine of step 014 addresses this issue by provisioning for pixel-to-pixel difference mapping, which allows detection of deviation pixels that are reportable to the user.
The objective mentioned in the preceding paragraph is achieved by the subroutine of step 014 by allowing accurate mapping of pixel-to-pixel differences when comparing either digitized images or one readymade / scanned PDF to another. It is relatively easier to compare PDFs prepared by artwork design software, as there is no issue of skew in images so prepared. In such a case, the sample and reference imageries are smoothened to remove noise and edge irregularities, and then pixel-to-pixel differences at the same positions are computed to determine whether the corresponding pixels match. A sample pixel is marked deviant if the computed difference is more than the user-defined sensitivity value set at step 012.
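For this skew-free case, smoothing followed by thresholded pixel-to-pixel comparison can be sketched as below. The box filter is a stand-in for whatever smoothing the actual implementation uses, and the default sensitivity is an illustrative value.

```python
import numpy as np

def box_smooth(img, k=3):
    """Simple k-by-k box-filter smoothing to suppress noise and edge
    irregularities before comparison (an assumed smoothing choice)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def deviant_pixels(sample, reference, sensitivity=25):
    """Pixel-to-pixel comparison at the same positions: a sample pixel
    is marked deviant when the smoothed intensity difference exceeds
    the user-defined sensitivity value (step 012)."""
    diff = np.abs(box_smooth(sample) - box_smooth(reference))
    return diff > sensitivity
```

Note that smoothing spreads an isolated defect over the filter footprint, so a single deviant source pixel is reported as a small neighbourhood of deviant pixels.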
On the other hand, skew is usually present in scanned images, due to which the positions of text or content in the entire image appear changed. To avoid wrong computation of pixel-to-pixel differences, therefore, the subroutine of step 014 provides that the objects, that is, text or any content, are first identified and mapped in the reference and sample images, and then the corresponding objects are used for computation of pixel-to-pixel differences.
The objective mentioned in the preceding paragraph is achieved by the subroutine of step 014 by arranging that the sample and reference imageries are first smoothened to remove noise and edge irregularities before the edges of objects are detected and labeled, and the identified objects are finally parsed to filter very small and very large edge objects. For comparative analysis, a similar sample edge object at a nearby position is located for every reference edge object. If none is found, shifts from nearby sample and reference edge object pairs are applied for identification of the corresponding sample and reference edge object pairs. Finally, for each mapped object pair, one-to-one pixel intensity differences at the same positions are computed to determine whether the corresponding pixels match. A sample pixel is marked deviant if the computed difference is more than the user-defined sensitivity parameter value set at step 012.
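The nearby-search-with-shift-fallback pairing described above might be sketched as follows, with objects reduced to centroid positions for brevity; the search radius and the use of an average shift across already-matched pairs are illustrative assumptions.

```python
def pair_objects(ref_objs, sample_objs, search_radius=5):
    """ref_objs / sample_objs map an object id to its centroid (y, x).
    Each reference object looks for a sample object at a nearby
    position; objects left unmatched borrow the average shift of the
    pairs already found (the skew-compensation step)."""
    pairs, unmatched = {}, []
    for rid, (ry, rx) in ref_objs.items():
        hit = next((sid for sid, (sy, sx) in sample_objs.items()
                    if abs(sy - ry) <= search_radius
                    and abs(sx - rx) <= search_radius), None)
        if hit is not None:
            pairs[rid] = hit
        else:
            unmatched.append(rid)
    if pairs and unmatched:
        # Average shift observed on the matched pairs.
        dy = sum(sample_objs[s][0] - ref_objs[r][0] for r, s in pairs.items()) / len(pairs)
        dx = sum(sample_objs[s][1] - ref_objs[r][1] for r, s in pairs.items()) / len(pairs)
        for rid in unmatched:
            ry, rx = ref_objs[rid]
            hit = next((sid for sid, (sy, sx) in sample_objs.items()
                        if abs(sy - (ry + dy)) <= search_radius
                        and abs(sx - (rx + dx)) <= search_radius), None)
            if hit is not None:
                pairs[rid] = hit
    return pairs
```

Once objects are paired, the per-pair intensity comparison proceeds exactly as in the skew-free case, but within each mapped pair rather than at absolute image positions.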
According to another aspect of the present invention further embodied in the subroutine of step 014, objects identified from deviant pixels are marked as deviation objects and filtered on size as per the user-defined minimum error parameter value set at step 012. Closer objects are grouped as a single deviation, and thus an assessment is reached on deviations present, which is reported to the user.
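The size filtering and proximity grouping of deviation objects can be sketched as below; bounding boxes, the single-pass merge, and the gap parameter are simplifying assumptions for illustration.

```python
def group_deviations(boxes, min_size=2, gap=5):
    """boxes: list of (y0, x0, y1, x1) bounding boxes of deviation
    objects. Boxes smaller than the user-defined minimum error size
    (step 012) in both dimensions are dropped; boxes closer than `gap`
    pixels are merged and reported as a single deviation."""
    boxes = [b for b in boxes
             if (b[2] - b[0]) >= min_size or (b[3] - b[1]) >= min_size]
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            # Boxes whose gap-expanded extents overlap are grouped.
            close = (box[0] <= m[2] + gap and m[0] <= box[2] + gap and
                     box[1] <= m[3] + gap and m[1] <= box[3] + gap)
            if close:
                merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(box)
    return merged
```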
The present invention has been reduced to practice and found to be operative, as conceptualized, for proofreading of digital and/or printed content against a reference digital document. Upon implementation, the present invention is more particularly identified by the following demonstrable salient features-
1) Ability to handle both software-generated as well as scanned images, therefore overcoming issues of skew factor encountered therein;
2) Deviations between sample and reference images are positionally identifiable by pixel to pixel comparison and further sorted as per unique patterns;
3) Multiple user-selectable approaches for selection of area/s of interest in reference images for comparative matching with sample images;
4) Fast and optimized algorithms which reduce processing time and demands on processor resources;
5) Multiple sampling levels for sample and reference images to therefore use available computer memory stores effectively;
6) System is customizable for specific user requirements.
As will be realized further, the present invention is capable of various other embodiments and that its several components and related details are capable of various alterations, all without departing from the basic concept of the present invention. Accordingly, the foregoing description will be regarded as illustrative in nature and not as restrictive in any form whatsoever. Modifications and variations of the system and apparatus described herein will be obvious to those skilled in the art. Such modifications and variations are intended to come within ambit of the present invention, which is limited only by the appended claims.
Claims
We claim-
1) A method for automated proofreading of digitized visual imageries, comprising-
a) Constituting an application environment by communicatively associating a scanner to a computer, wherein-
i. the method for automated proofreading of digitized visual imageries is provisioned for execution, as an executable software, on said computer; and
ii. a scanner for scanning printed images and relaying said captured images in real time to said computer for processing by the executable software provisioned on said computer.
b) Selecting a reference for comparison, therein converting the reference into a digitized image format if said reference is not already in the form of a digitized image format, to thus result in a reference image;
c) Detecting an area of interest for comparison within the reference image;
d) Selecting at least one area of interest within said reference image, via at least one user-opted approach;
e) Selecting a stored sample for comparison, therein converting the stored sample into a digitized image format if said stored sample is not already in the form of a digitized image format; and
f) Proofreading of the reference and sample images being compared, by sequential processing using subroutines respectively for pattern matching and deviation detection therein, in accordance with user-configurable parameters, to therefore generate a final report as to the automated proofreading of digitized visual imageries undertaken.
2) The method for automated proofreading of digitized visual imageries according to claim 1, wherein the conversion of the reference and stored sample to a digitized image format, if either of said reference and stored sample is not already in a digitized image format, is done by imaging means chosen among photography, scanning, and their equivalents.
3) The method for automated proofreading of digitized visual imageries according to claim 1, wherein the user-opted approach for selection of the at least one area of interest within the reference image is at least one among-
a) Selection from die lines in multiple layers;
b) Selection from background articles of interest, in particular being the largest object detected;
c) Selection from box-type articles of interest, in particular being automated selection from among cut lines; and
d) Selection via manual cropping of the reference image with the help of a computer mouse pointer.
4) The method for automated proofreading of digitized visual imageries according to claim 1, wherein the user-configurable parameters are selected among-
a) Sensitivity;
b) Minimum error size; and
c) Unique pattern count.
5) The method for automated proofreading of digitized visual imageries according to claim 1, wherein the subroutine for pattern matching between the sample and reference images consists of finding matching constituent objects in the reference and sample images to thereby reach an assessment of the degree of matching between the selected sample and reference images.
6) The method for automated proofreading of digitized visual imageries according to claim 5, wherein the step of finding matching constituent objects is done by-
a) Identifying similar edge objects in both sample and reference images;
b) Mapping of relative object positions by finding relative edge objects in the reference area of interest and the sample image; and
c) Matching of unique patterns to thereby reach an assessment of the degree of matching between the selected sample and reference images, said assessment being quantified via enumeration as an absolute count of matches observed, which is saved in memory as a percentage of matches observed together with the relative area-of-interest positions mapped.
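Purely as an illustrative sketch, and not as part of the claims, the pattern-matching assessment described above — identifying edge objects in both images and quantifying the degree of matching as a percentage — could be approximated as follows. The gradient-based edge detector and the threshold value are assumptions chosen for illustration; the patent does not prescribe a specific edge operator.

```python
import numpy as np

def edge_map(img, thresh=30):
    # Crude edge detection: mark pixels whose local intensity gradient
    # exceeds a threshold (illustrative stand-in for edge-object detection).
    gx = np.abs(np.diff(img.astype(int), axis=1, prepend=img[:, :1].astype(int)))
    gy = np.abs(np.diff(img.astype(int), axis=0, prepend=img[:1, :].astype(int)))
    return (gx + gy) > thresh

def match_percentage(ref, sample, thresh=30):
    # Degree of matching: edge pixels shared by both images,
    # expressed as a percentage of the reference edge pixels.
    ref_edges = edge_map(ref, thresh)
    sample_edges = edge_map(sample, thresh)
    total = ref_edges.sum()
    if total == 0:
        return 100.0  # no edges in the reference: nothing to mismatch
    return 100.0 * np.logical_and(ref_edges, sample_edges).sum() / total
```

An identical sample scores 100%, while a sample missing the reference's edge content scores lower, mirroring the percentage-of-matches assessment saved to memory.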
7) The method for automated proofreading of digitized visual imageries according to claim 6, wherein, in the step of identification of similar edge objects, if their sources are of very high resolution or large size, both the sample and reference images are resampled to half of their original specifications to thus result in image size reduction which, in turn, allows fast processing and memory optimization.
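The half-resolution resampling step above can be sketched as a 2×2 block average. The mean-pooling scheme is an assumption for illustration only; the claim does not prescribe a particular resampling filter.

```python
import numpy as np

def resample_half(img):
    # Reduce a grayscale image to half its width and height by averaging
    # non-overlapping 2x2 blocks (any odd trailing row/column is dropped).
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    blocks = img[:h, :w].astype(float).reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3)).astype(img.dtype)
```

Halving both dimensions cuts pixel count by a factor of four, which is the source of the claimed processing-speed and memory benefit.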
8) The method for automated proofreading of digitized visual imageries according to claim 6, wherein the step of matching of patterns is achieved by matching all identified reference edge objects for possible rotations of unique patterns selected among 0°, 90°, 180° and 270°.
9) The method for automated proofreading of digitized visual imageries according to claims 4 and 6, wherein the matching of unique patterns is halted if the unique pattern count matches that set by the user as part of the user-configurable parameters.
10) The method for automated proofreading of digitized visual imageries according to claims 4 and 6, wherein the matching of unique patterns is continued by adding unique patterns stored in memory and filtering neighboring unique pattern positions from the matching memory, to verify whether the unique pattern count matches that set by the user as part of the user-configurable parameters.
11) The method for automated proofreading of digitized visual imageries according to claim 1, wherein the subroutine for deviation detection between the sample and reference images comprises-
a) Doubling the coordinates of articles of interest and unique patterns in both sample and reference images to thereby crop said images; and
b) Detecting pixel-to-pixel differences between the cropped images to thereby reach an assessment of the deviations present, characterized in that said assessment is unaffected by any skew factor differential present in the images being compared, as the pixel-to-pixel difference mapping performed allows detection of deviation pixels, which are contained in the final report being generated.
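The rotation handling described above — testing each reference pattern at 0°, 90°, 180° and 270° — might be sketched as below. The per-pixel equality score is an illustrative assumption, not the patent's actual similarity measure.

```python
import numpy as np

def best_rotation_match(ref_patch, sample_patch):
    # Try the four axis-aligned rotations of the reference pattern and
    # return (best_angle, best_score), scoring by fraction of equal pixels.
    best_angle, best_score = 0, -1.0
    for k, angle in enumerate((0, 90, 180, 270)):
        rotated = np.rot90(ref_patch, k)
        if rotated.shape != sample_patch.shape:
            continue  # 90/270 rotations of non-square patches cannot align
        score = float(np.mean(rotated == sample_patch))
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score
```

Restricting the search to the four right-angle rotations keeps the match loop cheap while still covering the orientations a printed sample can plausibly take on a scanner bed.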
12) The method for automated proofreading of digitized visual imageries according to claims 4 and 11, wherein, for each mapped object pair, a one-to-one pixel intensity difference is computed at the same position, so as to mark a pixel as deviant if the value of the computed intensity difference is more than that set for sensitivity by the user as part of the user-configurable parameters.
13) The method for automated proofreading of digitized visual imageries according to claim 11, wherein, in the event the reference and sample images are skew-free PDFs prepared by sources chosen between artwork design software and a flatbed scanner, said sample and reference images are smoothened to remove noise and edge irregularities before detecting and labeling the edges of objects, detecting the pixel-to-pixel difference mapping, and parsing the identified objects to filter very small and very large edge objects.
14) The method for automated proofreading of digitized visual imageries according to claim 11, wherein the text or any content is first identified and mapped in the reference and sample images, and the corresponding objects are then used for computation of pixel-to-pixel differences, in the event said sample images have skews, particularly if prepared using a roller scanner, owing to motion of the sample while scanning.
15) The method for automated proofreading of digitized visual imageries according to claims 4 and 13, wherein the edge objects identified from deviant pixels are marked as deviation objects and filtered on size as per the value for minimum error size set by the user as part of the user-configurable parameters.
16) The method for automated proofreading of digitized visual imageries according to claim 1, wherein the executable software is provisioned for execution on the computer either by standalone installation or by online access from a cloud server in a software-as-a-service model.
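As an illustrative, non-authoritative sketch of the deviation-detection steps above, the fragment below marks deviant pixels by intensity difference against the user's sensitivity setting, then drops deviation objects smaller than the minimum error size. The 4-connected grouping and the default parameter values are assumptions made for the example.

```python
import numpy as np

def deviation_mask(ref, sample, sensitivity=25):
    # Mark a pixel as deviant when the one-to-one intensity difference
    # at the same position exceeds the user's sensitivity setting.
    return np.abs(ref.astype(int) - sample.astype(int)) > sensitivity

def filter_small_deviations(mask, min_error_size=4):
    # Group deviant pixels into deviation objects via a naive 4-connected
    # flood fill, and keep only objects of at least min_error_size pixels.
    out = np.zeros_like(mask)
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, group = [(i, j)], []
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    group.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(group) >= min_error_size:
                    for y, x in group:
                        out[y, x] = True
    return out
```

Applying both functions in sequence yields the filtered deviation objects that would populate the final report, with isolated single-pixel noise suppressed by the minimum-error-size filter.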
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163251639P | 2021-10-03 | 2021-10-03 | |
US63/251,639 | 2021-10-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023053102A1 (en) | 2023-04-06 |
Family
ID=85780512
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2022/059427 WO2023053102A1 (en) | 2021-10-03 | 2022-10-03 | Method and system for automated proofreading of digitized visual imageries |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023053102A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117349189A (en) * | 2023-12-05 | 2024-01-05 | Sichuan Caizi Software Information Network Co., Ltd. | APP new version testing method, equipment and medium |
CN117853509A (en) * | 2023-12-29 | 2024-04-09 | Beijing Hangxing Yongzhi Technology Co., Ltd. | File image edge clipping method, device, equipment and medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102947858A (en) * | 2010-04-08 | 2013-02-27 | Pasco Digital LLC (doing business as Omnyx LLC) | Image quality assessment including comparison of overlapped margins |
CN101677351B (en) * | 2008-09-17 | 2013-02-27 | Fuji Xerox Co., Ltd. | Image processing apparatus, image forming apparatus and image processing method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023053102A1 (en) | Method and system for automated proofreading of digitized visual imageries | |
US11797886B2 (en) | Image processing device, image processing method, and image processing program | |
US10366309B2 (en) | Image quality assessment and improvement for performing optical character recognition | |
WO2019117065A1 (en) | Data generation device, data generation method and data generation program | |
US20140301608A1 (en) | Chemical structure recognition tool | |
US10430687B2 (en) | Trademark graph element identification method, apparatus and system, and computer storage medium | |
CN115641332B (en) | Method, device, medium and equipment for detecting product edge appearance defects | |
CN114897868A (en) | Pole piece defect identification and model training method and device and electronic equipment | |
JP2014126445A (en) | Alignment device, defect inspection device, alignment method and control program | |
US11514702B2 (en) | Systems and methods for processing images | |
CN117115823A (en) | Tamper identification method and device, computer equipment and storage medium | |
JP2017521011A (en) | Symbol optical detection method | |
CN111079752A (en) | Method and device for identifying circuit breaker in infrared image and readable storage medium | |
CN115374517A (en) | Testing method and device for wiring software, electronic equipment and storage medium | |
KR102272745B1 (en) | Inspection System and Method for Compact Camera Module Cover | |
US11150849B2 (en) | Device and method for checking the printing of an article | |
CN114255213A (en) | Defect detection method and system based on semantic alignment | |
US20180173989A1 (en) | Process and system of identification of products in motion in a product line | |
CN112861861A (en) | Method and device for identifying nixie tube text and electronic equipment | |
JP5540595B2 (en) | Printed matter inspection apparatus, printed matter inspection system, printed matter inspection method, and printed matter inspection program | |
US11335007B2 (en) | Method to generate neural network training image annotations | |
CN117036267A (en) | Curved surface printing detection method, system and storage medium | |
JP5757299B2 (en) | Form design device, form design method, and form design program | |
Chavda et al. | Detection of defect in pharma-tablets using image processing | |
US20230153939A1 (en) | Identifying location of shreds on an imaged form |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22875312 Country of ref document: EP Kind code of ref document: A1 |